Big Data Needs a Better Network

Earlier this week I had some interesting conversations with @davehusak. The conversation started early in the day with a discussion of overlay networks and which network functions are performed where, and in what context; later in the afternoon it moved to networking solutions (and specifically Plexxi solutions) for big data applications.

It’s easy to jump to Hadoop or similarly structured cluster computing frameworks (Spark, Storm, or a long list of others) as the definition of a big data application. For all the simplicity of how it distributes work, Hadoop is a fairly tough network problem to solve if you want to do anything more than “throw bandwidth at the problem”. And even when you do throw bandwidth at the problem, the extreme burstiness of the traffic will still significantly drag down the performance of the overall solution. For many deployments, CPU cycles to reduce the data are not the biggest challenge; the storage and movement of data throughout the cluster is the biggest pain point. Intel has done a fine job providing compute firepower that far outpaces the evolution of network capacity.

The network plays a significant role in several stages of a Hadoop solution cycle. It starts with chopping the to-be-analyzed data into chunks and distributing them across the DataNodes. Hadoop has a notion of a rack, and it applies some basic intelligence when placing data and the jobs that work on that data. By default the data is replicated three times, across at least two racks if racks have been defined. The data to be distributed easily runs into the hundreds of gigabytes or even terabytes, so three times that volume is moved through the Hadoop cluster to the DataNodes.
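To give a sense of how Hadoop learns about racks in the first place: a cluster typically supplies a topology script (pointed to by net.topology.script.file.name in recent Hadoop versions) that maps host addresses to rack paths. The sketch below is a minimal, hypothetical example; the addresses and rack names are placeholders, not anything a real deployment would ship as-is.

```python
#!/usr/bin/env python
# Minimal sketch of a Hadoop rack-awareness topology script.
# Hadoop invokes the configured script with one or more host names/IPs
# as arguments and expects one rack path per argument on stdout.
# The host-to-rack mapping below is hypothetical.
import sys

HOST_TO_RACK = {            # assumption: your inventory provides this mapping
    "10.0.1.11": "/rack1",
    "10.0.1.12": "/rack1",
    "10.0.2.11": "/rack2",
    "10.0.2.12": "/rack2",
}
DEFAULT_RACK = "/default-rack"

for host in sys.argv[1:]:
    print(HOST_TO_RACK.get(host, DEFAULT_RACK))
```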

Once the data is distributed, the actual map jobs are launched against it. These are the tasks that take the data and perform a first-pass mapping into (in its most basic form) <key, value> tuples. Here again there is an attempt to have jobs work on local data, where local can mean local to the server that holds that chunk of data or local to the rack, to avoid as much cross-rack communication as possible. This is based on the assumption that cross-rack links are more constrained and more heavily aggregated, and therefore more prone to congestion and packet loss.
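That locality preference can be pictured as a simple distance calculation over topology paths like /rack1/node3: node-local beats rack-local, which beats off-rack. The sketch below is an illustrative approximation of that scheduling bias, not Hadoop's actual scheduler code.

```python
# Illustrative sketch of locality-based placement: prefer the candidate
# node "closest" to the block, where distance is derived from topology
# paths such as "/rack1/node3". Not Hadoop's scheduler, just the idea.

def distance(path_a, path_b):
    """Number of topology hops between two nodes (0 = same node)."""
    a, b = path_a.strip("/").split("/"), path_b.strip("/").split("/")
    common = 0
    for x, y in zip(a, b):
        if x != y:
            break
        common += 1
    return (len(a) - common) + (len(b) - common)

def pick_task_node(block_location, free_slots):
    """Choose the free node with the smallest topology distance to the data."""
    return min(free_slots, key=lambda node: distance(block_location, node))

# Example: the block lives on /rack1/node3; a free slot on the same rack wins.
print(pick_task_node("/rack1/node3", ["/rack2/node7", "/rack1/node4"]))
# -> /rack1/node4
```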

Once the mapper jobs complete their task, the results of the mapping exercise are sent to reducers. Reducers take the <key, value> information and essentially tally the results. This transfer is the most taxing part of a Hadoop cycle on the network. Since most Hadoop mapping jobs run the same function on similarly sized datasets, that first set of mappers will all complete their task at about the same time and will all start sending their results to the same set of reducers, creating 1) a lot of traffic and 2) a lot of traffic to the same set of destinations. Depending on the amount of data and the number of mappers and compute nodes, this cycle repeats (the next set of mapping jobs is fired off), and at the end of each cycle a very significant spike in network traffic appears. At the very end, all results are brought together for one last spike in traffic. Each of these network events is a source of significant congestion.
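The many-to-many burst follows directly from how map output is partitioned: by default each key is assigned to a reducer by hashing it modulo the number of reducers (Hadoop's HashPartitioner does the Java equivalent), so every mapper ends up shipping a slice of its output to every reducer, and all of them start doing so at roughly the same time. A toy illustration:

```python
# Toy illustration of the shuffle fan-out: every mapper partitions its
# <key, value> output the same way, so each mapper sends traffic to
# every reducer, and all mappers do so at roughly the same time.
NUM_REDUCERS = 4

def partition(key, num_reducers=NUM_REDUCERS):
    # Hadoop's default is hash(key) mod numReduceTasks; Python's hash()
    # stands in for the Java hashCode here.
    return hash(key) % num_reducers

mapper_output = [("error", 1), ("warn", 1), ("error", 1), ("info", 1)]
per_reducer = {}
for key, value in mapper_output:
    per_reducer.setdefault(partition(key), []).append((key, value))

# per_reducer now shows which reducer each record must travel to;
# multiply this by thousands of mappers to picture the shuffle spike.
print(per_reducer)
```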

Many variables contribute to the overall performance of a Hadoop solution. What is the relationship between the chunks of data and the number of servers and jobs? How many reducers are used? Where are the reducers in relation to the mappers? Is the map function compute heavy or I/O heavy? How aggressive is the speculative scheduling that allows the same data to be worked on by multiple mappers?
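As a rough orientation, most of those questions correspond to well-known configuration knobs. The property names below are the usual Hadoop 2.x ones; verify them against your distribution's documentation, and note that not every question maps to a single property.

```python
# Rough mapping from the questions above to commonly tuned knobs
# (Hadoop 2.x property names; check your distribution's docs).
hadoop_tuning_knobs = {
    "chunk size vs. number of map tasks": "dfs.blocksize",
    "replication (cost of distributing the data)": "dfs.replication",
    "how many reducers": "mapreduce.job.reduces",
    "shuffle fan-in per reducer": "mapreduce.reduce.shuffle.parallelcopies",
    "map-side sort buffer (I/O vs. CPU)": "mapreduce.task.io.sort.mb",
    "speculative execution of maps": "mapreduce.map.speculative",
    "speculative execution of reduces": "mapreduce.reduce.speculative",
}
```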

With that many variables to tune, and with so many of them differing from one analysis to the next, it is hard to imagine that a single network design or implementation provides the best supporting infrastructure for all of them. The assumptions Hadoop makes when placing data and jobs can easily be altered. The basic concept of a rack can easily be expanded into a multi-layer locality definition. With the right tools in the network, the definition of a rack, or even the locality and closeness of nodes in a cluster or virtual cluster, can be adjusted according to the analysis to be completed.
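As a concrete illustration of a multi-layer locality definition: the same topology-path idea shown earlier extends naturally to deeper hierarchies, so closeness can be expressed per datacenter, pod, and rack rather than by rack alone. The paths below are hypothetical, and whether a scheduler honors every level depends on the topology implementation in use.

```python
# Hypothetical multi-level locality: deeper topology paths let "closeness"
# be defined per datacenter, pod, and rack instead of rack alone.
# Path names here are placeholders.

def shared_levels(path_a, path_b):
    """How many leading topology levels two paths have in common."""
    a, b = path_a.strip("/").split("/"), path_b.strip("/").split("/")
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

reference = "/dc1/pod1/rack1"
for node, path in {
    "worker-01": "/dc1/pod1/rack1",
    "worker-02": "/dc1/pod1/rack2",
    "worker-03": "/dc1/pod2/rack5",
    "worker-04": "/dc2/pod1/rack1",
}.items():
    print(node, "shares", shared_levels(reference, path), "levels with", reference)
# The more levels shared, the "closer" the node; redefining these paths
# redefines locality without changing the job itself.
```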

We tend to give our applications variables to tune their performance based on what the network provides. It is time the network adjusted itself based on the application's needs. In a clustered application like Hadoop there is a lot of knowledge, and even some predictability, about network traffic. Wouldn’t that make for a great opportunity to infuse the network with some of that knowledge and have it morph itself to provide the best possible service? And Hadoop is not unique; cluster compute frameworks almost all carefully track the placement of data and compute jobs, which makes them all great candidates to share some of that information with a smart network.
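What infusing the network with that knowledge could look like in practice is an open design question. Purely as a hypothetical sketch, a scheduler hook might hand a network controller the list of upcoming shuffle flows so it could provision paths ahead of the burst; the endpoint and payload below are invented for illustration and are not an existing Plexxi or Hadoop API.

```python
# Purely hypothetical sketch: hand a network controller the list of
# mapper -> reducer flows that are about to start, so it could set up
# paths before the shuffle burst hits. The endpoint and payload format
# are invented for illustration; no such standard API exists today.
import json
from urllib.request import Request, urlopen

def shuffle_hint(controller_url, flows):
    """Build a (hypothetical) hint request describing upcoming shuffle flows."""
    body = json.dumps({"phase": "shuffle", "flows": flows}).encode()
    return Request(controller_url, data=body,
                   headers={"Content-Type": "application/json"})

# Three mappers on rack1 about to stream results to a reducer on rack3.
req = shuffle_hint("http://network-controller.example/api/hints", [
    {"src": "/rack1/worker-01", "dst": "/rack3/worker-09", "bytes_est": 2_000_000_000},
    {"src": "/rack1/worker-02", "dst": "/rack3/worker-09", "bytes_est": 1_800_000_000},
    {"src": "/rack1/worker-03", "dst": "/rack3/worker-09", "bytes_est": 2_100_000_000},
])
# urlopen(req)  # would send the hint if such a controller actually existed
```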

There are far simpler big data needs and applications that can and should be supported by flexible networks. Storage networks are still often separated from data networks for performance reasons and for fear of interference. If you could actually separate the various types of traffic onto logically or even physically different paths within the same network, would you still keep them apart?

 

[Today's fun fact: An MLB baseball lasts on average 7 pitches. A Google search asking how many baseballs are used in a single MLB season returns several pages' worth of different answers. They must all be correct; it's on the Internet, after all.]

The post Big Data Needs a Better Network appeared first on Plexxi.


More Stories By Marten Terpstra

Marten Terpstra is a Product Management Director at Plexxi Inc. Marten has extensive knowledge of the architecture, design, deployment and management of enterprise and carrier networks.
