Big Data Needs a Better Network

Earlier this week I had some interesting conversations with @davehusak. The conversation started early in the day with a discussion of overlay networks and which network functions are performed where and in what context; later in the afternoon it moved to networking solutions (and specifically Plexxi solutions) for big data applications.

It’s easy to jump to Hadoop or similarly structured cluster computing applications (Spark, Storm, or a long list of others) as the definition of a big data application. For all the simplicity of its overall distribution of work, Hadoop is a fairly tough network problem to solve if you want to do anything more than “throw bandwidth at the problem”. And when you do throw bandwidth at the problem, the extreme burstiness of the traffic will still significantly drag down the performance of the overall solution. For many, CPU cycles to reduce the data are not the biggest challenge; the storage and movement of data throughout a big data cluster is the biggest pain point. Intel has done a fine job providing compute firepower that far outpaces the evolution of network capacity.

The network plays a significant role in several stages of a Hadoop solution cycle. It starts with chopping the to-be-analyzed data into chunks and distributing them across the DataNodes. Hadoop has a notion of a rack, and it applies some basic intelligence when placing data and the jobs that work on that data. By default, data is replicated three times across at least two racks, if racks have been defined. The data to be distributed easily runs into the hundreds of gigabytes or even terabytes, so triple that amount is moved throughout the Hadoop cluster to the DataNodes.
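Hadoop learns that rack layout from an operator-supplied topology script: it runs the executable configured in net.topology.script.file.name with host names or IPs as arguments and expects one rack path per argument on stdout. A minimal sketch, assuming an invented convention in which the third octet of the IP names the rack (a real deployment would encode its own cabling plan):

```python
#!/usr/bin/env python3
"""Minimal Hadoop rack-topology script (a sketch, not production code).

Hadoop invokes this script with one or more host names or IP addresses
as arguments and expects one rack path per argument on stdout.
"""
import sys

DEFAULT_RACK = "/default-rack"

def rack_for(host: str) -> str:
    # Hypothetical convention: the third octet of the IP names the rack.
    parts = host.split(".")
    if len(parts) == 4 and all(p.isdigit() for p in parts):
        return "/rack-" + parts[2]
    return DEFAULT_RACK

if __name__ == "__main__":
    for host in sys.argv[1:]:
        print(rack_for(host))
```

(net.topology.script.file.name is the Hadoop 2.x spelling of the property; older releases called it topology.script.file.name.)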

Once distributed, the actual Map jobs are launched against that data; these are the tasks that take the data and perform a first-pass mapping into (in its most basic form) <key, value> tuples. Again, there is an attempt to have jobs work on local data, where local can mean local to the server that holds that chunk of data or local to the rack, in an attempt to avoid as much cross-rack communication as possible. This is based on the assumption that cross-rack links are much more constrained and aggregated, and therefore more prone to congestion and packet loss.
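To make those tuples concrete, here is a minimal word-count mapper in the Hadoop Streaming style. A sketch only: Streaming is one of several ways to write map tasks, and word count is the stock example rather than anything specific to this post.

```python
#!/usr/bin/env python3
"""Minimal word-count mapper for Hadoop Streaming (illustrative sketch).

Streaming map tasks read raw input lines on stdin and emit
tab-separated <key, value> pairs on stdout; the framework handles
splitting the input and shuffling the output to the reducers.
"""
import sys

for line in sys.stdin:
    for word in line.split():
        # One <key, value> tuple per word; the tallying happens later.
        print(f"{word}\t1")
```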

Once the mapper jobs complete their task, the results of the mapping exercise are sent to the reducers. Reducers take the <key, value> information and essentially tally the results. This transfer, the shuffle, is the most taxing part of a Hadoop cycle on the network. Since most Hadoop mapping jobs run the same function on similarly sized datasets, the first set of mappers will all complete at about the same time and will all start sending their results to the same set of reducers, creating 1) a lot of traffic and 2) a lot of traffic to the same set of destinations. Depending on the amount of data, mappers, and compute nodes, this cycle repeats (the next set of mapping jobs is fired off), and at the end of each cycle a very significant spike in network traffic appears. At the very end, all results are brought together for one last spike in traffic. Each of these network events is a source of significant congestion.
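The matching reducer sketch, under the same assumptions as the mapper above: Streaming sorts the shuffled pairs by key before they reach the reducer, so tallying is a running sum over consecutive identical keys. Every one of these tab-separated lines is shuffle traffic crossing the network, which is exactly the spike described here.

```python
#!/usr/bin/env python3
"""Minimal word-count reducer for Hadoop Streaming (illustrative sketch).

The framework delivers the shuffled <key, value> pairs sorted by key,
so identical keys arrive consecutively and a single running counter
suffices to tally them.
"""
import sys

current_word, current_count = None, 0

for line in sys.stdin:
    word, _, count = line.rstrip("\n").partition("\t")
    if word == current_word:
        current_count += int(count)
    else:
        if current_word is not None:
            print(f"{current_word}\t{current_count}")
        current_word, current_count = word, int(count)

if current_word is not None:
    print(f"{current_word}\t{current_count}")
```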

Many variables contribute to the overall performance of the Hadoop solution. What is the relationship between the chunks of data and the number of servers and jobs? How many reducers are used? Where are the reducers in relation to the mappers? Is the Map function compute heavy or I/O heavy? How aggressive is the speculative scheduling that allows the same data to be worked on by multiple mappers?
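Several of these variables are ordinary per-job configuration rather than cluster constants. As a sketch (property names as in Hadoop 2.x MapReduce; the paths and values are placeholders to tune per job, not recommendations):

```python
#!/usr/bin/env python3
"""Launch a Streaming job with a few tuning knobs set (sketch).

The jar location, input/output paths, and values are placeholders;
the point is only that reducer count and speculative scheduling are
set per job, not fixed by the cluster.
"""
import subprocess

cmd = [
    "hadoop", "jar", "hadoop-streaming.jar",    # jar path varies per install
    "-D", "mapreduce.job.reduces=16",           # how many reducers
    "-D", "mapreduce.map.speculative=true",     # speculative map scheduling
    "-D", "mapreduce.reduce.speculative=false", # ...but not for reducers
    "-files", "mapper.py,reducer.py",
    "-mapper", "mapper.py",
    "-reducer", "reducer.py",
    "-input", "/data/in",
    "-output", "/data/out",
]
subprocess.run(cmd, check=True)
```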

With that many variables to tune, and with so many of them differing from one analysis to the next, it is hard to imagine that a single network design or implementation provides the best supporting infrastructure for all of them. There are assumptions in Hadoop's placement of data and jobs that can easily be altered. The basic concept of a rack can easily be expanded into a multi-layer locality definition. With the right tools in the network, the definition of a rack, or even the locality and closeness of nodes in a cluster or virtual cluster, can be adjusted according to the analysis to be completed.
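Because Hadoop topology paths are plain hierarchical strings, the rack script sketched earlier can express that multi-layer locality with no other moving parts. An assumption-laden illustration (the pod/rack octet convention is invented, and how much of the deeper hierarchy the stock block placement policy exploits depends on the Hadoop version and configured policy):

```python
def locality_for(host: str) -> str:
    """Map a host IP to a two-level locality path (hypothetical scheme).

    Returning /pod-X/rack-Y instead of /rack-Y lets placement distinguish
    "same rack" from "same pod, different rack" from "different pod" --
    a richer notion of closeness than a flat rack.
    """
    parts = host.split(".")
    if len(parts) == 4 and all(p.isdigit() for p in parts):
        return "/pod-{}/rack-{}".format(parts[1], parts[2])
    return "/default-pod/default-rack"

if __name__ == "__main__":
    print(locality_for("10.3.7.21"))  # -> /pod-3/rack-7
```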

We tend to give our applications variables to tune their performance based on what the network provides. It is time for the network to adjust itself based on the application's needs. In a clustered application like Hadoop, there is a lot of knowledge and even some predictability of network traffic. Wouldn't that make for a great opportunity to infuse the network with some of that knowledge and have it morph itself to provide the best possible service? And Hadoop is not unique; cluster compute frameworks almost all carefully track the placement of data and compute jobs, which makes them all great candidates to share some of that information with a smart network.

There are far simpler big data needs and applications that can and should be supported by flexible networks. Storage networks are still often separated from data networks for performance reasons and fear of interference. If you could actually separate the various types of data onto logically or even physically distinct paths within the same network, would you still build separate ones?

 

[Today's fun fact: An MLB baseball lasts on average 7 pitches. A Google search asking how many baseballs are used in a single MLB season returns several pages' worth of different answers. They must all be correct; it's on the Internet, after all.]



More Stories By Marten Terpstra

Marten Terpstra is a Product Management Director at Plexxi Inc. Marten has extensive knowledge of the architecture, design, deployment and management of enterprise and carrier networks.
