Understanding Application Performance on the Network | Part 2

Bandwidth and Congestion

When we think of application performance problems that are network-related, we often immediately suspect bandwidth and congestion as the likely culprits; faster speeds and less traffic will solve everything, right? This is reminiscent of the recent ISP wars: which is better, DSL or cable? Cable modem proponents touted the higher bandwidth, while DSL proponents warned of the dangers of sharing the network with your potentially bandwidth-hogging neighbors. In this blog entry, we'll examine these two closely related constraints, beginning the series of performance analyses using the framework we introduced in Part I. I'll use graphics from Compuware's application-centric protocol analyzer, Transaction Trace, as illustrations.

Bandwidth
We define bandwidth delay as the serialization delay encountered as bits are clocked out onto the network medium. Most important for performance analysis is what we refer to as the "bottleneck bandwidth" - the speed of the link at its slowest point - since this is the primary influence on the rate at which packets arrive at the destination. Each packet incurs the serialization delay dictated by the link speed; for example, at 4Mbps, a 1500-byte packet takes approximately 3 milliseconds to serialize. Extending this calculation to an entire operation is relatively straightforward: observe (on the wire) the number of bytes sent or received, multiply by 8 bits per byte, and divide by the bottleneck link speed, remembering that asymmetric links may have different upstream and downstream speeds.

Bandwidth effect = (bytes sent or received × 8 bits/byte) / bottleneck link speed (bps)

For example, we can calculate the bandwidth effect for an operation that sends 100KB and receives 1024KB on a 2048Kbps link:

  • Upstream effect: (100,000 × 8) / 2,048,000 = 390 milliseconds
  • Downstream effect: (1,024,000 × 8) / 2,048,000 = 4000 milliseconds

For better precision, you should account for frame header size differences between the packet capture medium - likely Ethernet - and the WAN link; this difference might be as much as 8 or 10 bytes per packet.
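If you'd like to script this calculation, here's a minimal sketch in Python; the function name and parameters are mine, not part of Transaction Trace. It reproduces the worked example above and accepts the per-packet header adjustment just described:

    def bandwidth_effect_ms(payload_bytes, link_bps, packets=0, header_delta_bytes=0):
        """Serialization delay for one direction of an operation, in milliseconds.

        packets * header_delta_bytes adjusts for the per-packet frame header
        difference between the capture medium (e.g., Ethernet) and the WAN link.
        """
        total_bits = (payload_bytes + packets * header_delta_bytes) * 8
        return total_bits / link_bps * 1000.0

    # The worked example: 100KB sent and 1024KB received on a 2048Kbps link
    print(bandwidth_effect_ms(100_000, 2_048_000))    # ~390 ms upstream
    print(bandwidth_effect_ms(1_024_000, 2_048_000))  # ~4000 ms downstream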

Bandwidth constraints impact only the data transfer periods within an operation - the request and reply flows. Each flow also incurs (at a minimum) additional delay due to network latency, as the first bit traverses the network from sender to receiver; TCP flow control or other factors may introduce further delays. (As an operation's chattiness increases, its sensitivity to network latency increases and the overall impact of bandwidth tends to decrease, becoming overshadowed by latency.)

Transaction Trace Illustration: Bandwidth
One way to frame the question is "does the operation use all of the available bandwidth?" The simplest way to visualize this is to graph throughput in each direction, comparing uni-directional throughput with the link's measured bandwidth. If the answer is yes, then the operation bottleneck is bandwidth; if the answer is no, then there is some other constraint limiting performance. (This doesn't mean that bandwidth isn't a significant, or even the dominant, constraint; it simply means that there are other factors that prevent the operation from reaching the bandwidth limitation. The formula we used to calculate the impact of bandwidth still applies as a definition of the contribution of bandwidth to the overall operation time.)

[Transaction Trace throughput graph: this FTP transfer is frequently limited by the 10Mbps available bandwidth.]
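Without Transaction Trace, you can approximate the same throughput-versus-bandwidth comparison from any packet capture by bucketing bytes per direction into one-second intervals. A rough sketch, assuming the capture has already been parsed into (timestamp, direction, bytes) tuples:

    from collections import defaultdict

    def throughput_bps(packets, direction):
        """Sum bytes per one-second bucket for one direction; return bits/sec."""
        buckets = defaultdict(int)
        for ts, dir_, nbytes in packets:
            if dir_ == direction:
                buckets[int(ts)] += nbytes
        return {sec: count * 8 for sec, count in sorted(buckets.items())}

    LINK_BPS = 10_000_000  # the 10Mbps link from the illustration above
    packets = [(0.1, "down", 150_000), (0.7, "down", 90_000), (1.2, "down", 40_000)]
    for sec, bps in throughput_bps(packets, "down").items():
        note = " <- at the bandwidth limit" if bps >= LINK_BPS * 0.95 else ""
        print(f"t={sec}s: {bps / 1e6:.2f} Mbps{note}")

If each second's unidirectional throughput hugs the link speed, bandwidth is the bottleneck; if it consistently falls short, something else is constraining the operation.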

Networks are generally shared resources; when there are multiple connections on a link, TCP flow control will prevent a single flow from using all of the available bandwidth as it detects and adjusts for congestion. We will evaluate the impact of congestion next, but fundamentally, the diagnosis is the same; bandwidth constrains throughput.

Congestion
Congestion occurs when data arrives at a network interface faster than the medium can service it; when this happens, packets must be placed in an output queue to wait until earlier packets have been serviced. These queue delays add to the end-to-end network delay, with a potentially significant effect on both chatty and non-chatty operations. (Chatty operations will be impacted by the increase in round-trip delay, while non-chatty operations may be impacted by TCP flow control and congestion avoidance algorithms.)
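To get a feel for how sharply queue delay grows as a link approaches saturation, the simplest queueing model (M/M/1) is a useful approximation; this is textbook math, not something derived from trace data:

    def mm1_queue_delay_ms(link_bps, offered_bps, avg_packet_bytes=1500):
        """Mean queue wait under the M/M/1 model, in milliseconds."""
        service_rate = link_bps / (avg_packet_bytes * 8)   # packets/sec the link can drain
        arrival_rate = offered_bps / (avg_packet_bytes * 8)
        if arrival_rate >= service_rate:
            return float("inf")  # the queue grows until packets are dropped
        utilization = arrival_rate / service_rate
        return utilization / (service_rate - arrival_rate) * 1000.0  # Wq = rho / (mu - lambda)

    for load in (0.5, 0.8, 0.95, 0.99):
        print(f"{load:.0%} load: {mm1_queue_delay_ms(10_000_000, 10_000_000 * load):.1f} ms")

At 50% utilization the added delay is about a millisecond; at 99% it exceeds 100 milliseconds - which is why a nearly full link can feel dramatically slower than a mostly full one.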

For a given flow, congestion initially reduces the rate of TCP slow-start's ramp by slowing increases to the sender's Congestion Window (cwnd); it also adds to the delay component of the Bandwidth Delay Product (BDP), increasing the likelihood of exhausting the receiver's TCP window. (We'll discuss TCP slow-start as well as the BDP later in this series.)
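The BDP arithmetic behind that last point is simple enough to sketch now (we'll treat it properly later in the series); the link speed, RTT values, and 64KB window below are illustrative assumptions:

    def bdp_bytes(link_bps, rtt_ms):
        """Bytes that must be in flight to keep the link full for one round trip."""
        return link_bps / 8 * (rtt_ms / 1000.0)

    # Congestion that adds 30 ms of queue delay to a 40 ms round trip raises the
    # BDP past a default 64KB receive window on a 10Mbps link:
    for rtt_ms in (40, 70):
        bdp = bdp_bytes(10_000_000, rtt_ms)
        verdict = "fits within" if bdp <= 64 * 1024 else "exhausts"
        print(f"RTT {rtt_ms} ms: BDP = {bdp / 1024:.0f}KB, which {verdict} a 64KB window")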

As congestion becomes more severe, the queue in one of the path's routers may fill completely; packets arriving to a full queue must be discarded. Routers employ various algorithms to determine which packets should be dropped, perhaps attempting to distribute congestion's impact among multiple connections, or to more significantly impact lower-priority traffic. When TCP detects these dropped packets (by a triple-duplicate ACK, for example), congestion is the assumed cause. As we will discuss in more depth in an upcoming blog entry, packet loss causes the sending TCP to reduce its Congestion Window by 50%, after which the window begins to ramp up again in a relatively conservative congestion avoidance phase.
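That sawtooth behavior - exponential ramp, a 50% cut on loss, then a cautious linear climb - can be illustrated with a toy model (a deliberately simplified sketch, not a faithful TCP implementation):

    def cwnd_trace(rounds, loss_at, ssthresh=64, cwnd=1):
        """Toy trace of the congestion window (in segments) per round trip."""
        trace = []
        for rtt in range(rounds):
            trace.append(cwnd)
            if rtt in loss_at:                # triple-duplicate ACK detected
                ssthresh = max(cwnd // 2, 2)  # halve the window...
                cwnd = ssthresh               # ...then resume from the reduced size
            elif cwnd < ssthresh:
                cwnd *= 2                     # slow start: exponential growth
            else:
                cwnd += 1                     # congestion avoidance: linear growth
        return trace

    print(cwnd_trace(12, loss_at={6}))
    # [1, 2, 4, 8, 16, 32, 64, 32, 33, 34, 35, 36]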


More Stories By Gary Kaiser

Gary Kaiser is a Subject Matter Expert in Network Performance Analysis at Compuware APM. He has global field enablement responsibilities for performance monitoring and analysis solutions embracing emerging and strategic technologies, including WAN optimization, thin client infrastructures, network forensics, and a unique performance management maturity methodology. He is also a co-inventor of multiple analysis features, and continues to champion the value of software-enabled expert network analysis.
