Taking Apache Spark for a Spin

You might have read some of the articles about Apache Spark on the Web and wondered whether you could try it out for yourself. Because Spark and Hadoop are designed for clusters, you might assume you need lots of nodes before you can experiment.

If you wanted to see what you could do with Spark, you could set up a home lab with a few servers from eBay. But there’s no rule saying that you need more than one machine just to learn Spark. Today’s multi-core processors are like having a cluster already on your desk. Even better, with a laptop, you can pick up your cluster and take it with you. Try doing that with your rack-mount servers.

What is Spark?
If you’re looking to try out Apache Spark, it helps to know what it actually is. Spark is a cluster computing framework that grew out of the Hadoop ecosystem; rather than being limited to batch processing, it also supports interactive queries and near-real-time workloads.

Spark is built around Spark Core, which handles dispatching, scheduling, and I/O. Spark’s key feature is the Resilient Distributed Dataset, or RDD. RDDs are the basic data abstraction: a distributed collection of elements. You can perform actions on RDDs, which return values, and transformations, which return new RDDs. It’s similar to functional programming, where functions return outputs and don’t have any side effects.
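Here is a minimal sketch of the difference, assuming a running PySpark shell where the SparkContext is predefined as sc:

nums = sc.parallelize([1, 2, 3, 4])    # create an RDD from a local list
doubled = nums.map(lambda x: x * 2)    # transformation: returns a new RDD
doubled.collect()                      # action: returns the list [2, 4, 6, 8]
nums.count()                           # action: returns 4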

Spark is so fast because it keeps RDDs in memory and because RDDs are lazily evaluated: transformations will not be calculated until an action on the RDD requests some form of output.
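You can see the laziness for yourself in the shell. In this sketch, input.txt is a hypothetical file name used purely for illustration:

lines = sc.textFile("input.txt")                  # nothing is read yet
words = lines.flatMap(lambda line: line.split())  # still nothing computed
words.count()                                     # the action triggers reading and counting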

Spark also gives you access to some powerful tools like the real-time Spark Streaming engine for streaming analytics and the MLlib machine learning library.
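As a taste of the streaming side, here is a minimal Spark Streaming word count sketch. It assumes a PySpark shell with sc defined and a text source listening on localhost port 9999 (for example, nc -lk 9999):

from pyspark.streaming import StreamingContext

ssc = StreamingContext(sc, 1)                     # one-second micro-batches
lines = ssc.socketTextStream("localhost", 9999)   # read text lines from a socket
counts = lines.flatMap(lambda line: line.split()) \
              .map(lambda word: (word, 1)) \
              .reduceByKey(lambda a, b: a + b)
counts.pprint()                                   # print each batch's counts
ssc.start()                                       # start the streaming job
ssc.awaitTermination()                            # blocks; stop with Ctrl-C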

Installing Spark

Installing Spark is easy enough. While the MapR distribution is essential for production use, you can install Spark from the project website on your own machine, whether you’re running Windows or Linux. It’s a good idea to set up a virtual machine for exploring Spark, just to keep it separate and reduce the possible security risk of running a server on your machine. This way, you can just turn it off when you don’t need it. Linux is a good choice because that’s what most servers will be running.

You can also install Spark from your favorite distribution’s package manager. With the package manager, you won’t have to worry about dependencies such as Scala; alternatively, you can install those from their respective websites, or even build Spark from source if you want.
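As a sketch of the tarball route, the steps look something like this; the version number here is just an example, so substitute whichever release you download from spark.apache.org:

wget https://archive.apache.org/dist/spark/spark-2.4.8/spark-2.4.8-bin-hadoop2.7.tgz
tar xzf spark-2.4.8-bin-hadoop2.7.tgz
cd spark-2.4.8-bin-hadoop2.7
./bin/pyspark    # the Python shell; use ./bin/spark-shell for Scala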

Using the REPL

One of Spark’s greatest strengths is its interactive capability. Like most modern languages, Spark offers a REPL: a read-eval-print loop. It’s just like a Unix shell or a Python interactive prompt.

Spark itself is implemented in Scala, and you can use either Scala or Python interactively. Teaching both languages is beyond the scope of this article, but Python tends to be more familiar to people than Scala. In any case, if you’re interested at all in technologies like Spark, you likely have some programming experience, and either Scala or Python shouldn’t be too hard to pick up. Of course, if you have experience in Java, that will serve you well too.
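For example, after launching the Python shell with bin/pyspark, you can type an expression and see the result immediately. A minimal sketch using the predefined SparkContext sc:

>>> sc.parallelize(range(10)).filter(lambda x: x % 2 == 0).count()
5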

When you’ve got Spark up and running, you’ll be able to try out all the actions and transformations on your data.

The Spark equivalent of a “Hello, world!” seems to be a word count.

Here is an example shown in Python:

# In the shell, sc is the predefined SparkContext.
text_file = sc.textFile("hdfs://...")

counts = text_file.flatMap(lambda line: line.split()) \
                  .map(lambda word: (word, 1)) \
                  .reduceByKey(lambda a, b: a + b)
# counts is not computed yet: it is only materialized by an action.
You can see that even in Python, Spark makes use of functional programming concepts such as maps and lambdas. The Spark documentation has an extensive reference of commands for both Python and Scala. The shell lets you quickly and easily experiment with data. Give it a try for yourself to see what Spark can really do.
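For instance, you can keep refining the result interactively. This sketch builds on the counts RDD above, flipping each (word, count) pair so the standard sortByKey transformation can order words by frequency:

top_words = counts.map(lambda pair: (pair[1], pair[0])) \
                  .sortByKey(ascending=False)
top_words.take(5)    # the action finally runs the whole pipeline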

Conclusion

If you’ve been curious about Spark and its ability to offer both batch and stream processing, there’s no need to feel left out just because you don’t have your own cluster. Whether you’re a developer, a student, or a manager, you can get a taste of what Apache Spark has to offer. When you’re ready for production use, opt for the MapR Spark distribution for a complete, reliable version.

To further explore Spark, jump over to Getting Started with Apache Spark: From Inception to Production, a free interactive ebook by James A. Scott.

More Stories By Jim Scott

Jim has held positions running Operations, Engineering, Architecture and QA teams in the Consumer Packaged Goods, Digital Advertising, Digital Mapping, Chemical and Pharmaceutical industries. Jim has built systems that handle more than 50 billion transactions per day and his work with high-throughput computing at Dow Chemical was a precursor to more standardized big data concepts like Hadoop.
