
The network probe is dead. Long live the probe!

Dynatrace blog

I came across a good discussion about complexity from one of the industry’s network probe vendors. Applying Metcalfe’s law, the blog reinforced what we’re all acutely aware of: the number of connections – not simply between servers, but between multiple processes on those servers – inside today’s data centers is exploding. Here, I’ve crudely recreated a few diagrams often used to illustrate this.

[Image: recreated Metcalfe’s law connection diagrams – https://dt-cdn.net/wp-content/uploads/2018/01/Image-1-1600x568.png]
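
To put rough numbers behind those diagrams: the count of potential pairwise connections grows quadratically with the number of communicating nodes. Here’s a minimal back-of-the-envelope sketch in Python (my illustration, not from the original discussion):

```python
# Metcalfe-style growth: potential pairwise connections among n nodes
# is n * (n - 1) / 2, i.e. quadratic in n.
def potential_connections(n: int) -> int:
    return n * (n - 1) // 2

for n in (4, 16, 64, 256):
    print(f"{n:>4} nodes -> {potential_connections(n):>6} potential connections")
# 4 -> 6, 16 -> 120, 64 -> 2016, 256 -> 32640
```

And that counts each server as a single node; treat every process as its own endpoint and the matrix explodes further.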

This rapidly increasing complexity drove corresponding data center network architectural shifts, from hierarchical layouts (where north-south traffic dominated) to flattened and micro-segmented leaf/spine layouts that can more effectively support this server-to-server or east-west traffic.

[Image: hierarchical vs. flattened leaf/spine data center network architectures – https://dt-cdn.net/wp-content/uploads/2018/01/Image-2.png]

It’s no coincidence that the flattened network architecture shown here resembles the most complex of the Metcalfe diagrams I drew; direct switched network connections between communicating hosts are inherently more efficient than multi-hop hierarchies.

At the same time, increases in virtualization density frequently introduced scenarios where inter-VM traffic never had to hit the (physical) network.

Existential angst

Slowly but surely, these shifts resulted in the loss of the traditional traffic aggregation points that had become an enabling foundation of a network probe’s value. This loss presented an existential problem, and probe vendors found themselves in a battle for territorial survival inside the modern data center.

Unspoken panic can be a great motivator, and vendors entered the fray from multiple flanks; after all, if all you have is a probe, then every problem must be visible on the network. Early skirmishes saw sometimes convoluted network configurations intended to route communications onto physical links or up to artificial aggregation points. Some took the approach of installing virtual probes on each host. More recent and sophisticated battles look to deploy virtual network port mirroring and virtual taps. Even the unbiased treaty negotiators – analysts and services companies – have touted fundamental revisions to data center architectures to include “visibility planes” through distributed network packet brokers (NPBs).

Battles continue to rage in the trenches. But even through the thick fog of war, the outcome is taking clear shape. Data center architectures are designed to provide agile application services. Providing access to network packets takes a much lower priority, to some degree in anticipation of alternate monitoring approaches.

Tough questions

How many probes are you willing to deploy in your data center? How many taps – real and virtual – would you need for full Metcalfe-like visibility? What kind of supporting network will you need to route this traffic to your probes? How will these stopgap solutions respond to the ever-increasing dynamics of data center traffic, of connections, of the services themselves? And the most important question: to what end? To monitor application performance?

It seems reasonable, then, to conclude that network probes are not going to maintain their value for intra-data center monitoring. Take the current trajectory to its logical conclusion, where every host, every VM, every container sits on its own segment with a direct virtual path to dependent peers. Should every node have its own virtual probe? And if not – how will you measure the quality of node-to-node communication in the virtualized environment?

[Image: every node on its own segment with its own virtual probe – https://dt-cdn.net/wp-content/uploads/2018/01/Image-3.png]

Straight answers

Now replace the words “virtual probe” in the above paragraph with “software agent.” Suddenly, the problem of data collection isn’t so daunting; the agents do all the work, offering access not only to host network interface statistics, but also to process-level network communications – including access to the network packets themselves – along with compelling host and app performance data. The challenge shifts quickly to the high-value opportunity of data analysis. And that’s where automation, full-stack visibility and AI come into play.
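
For a concrete, if simplified, picture of what such an agent can see, here’s a minimal sketch using the open-source psutil library. This is my illustration of the idea – not any vendor’s agent code – and enumerating other processes’ connections typically requires elevated privileges:

```python
import psutil

# Host-level NIC statistics: bytes, packets, errors, drops per interface.
for nic, io in psutil.net_io_counters(pernic=True).items():
    print(f"{nic}: {io.bytes_sent} B out, {io.bytes_recv} B in, "
          f"{io.dropin} inbound drops")

# Process-level view: which process is talking to which remote peer.
for conn in psutil.net_connections(kind="tcp"):
    if not (conn.raddr and conn.pid):
        continue  # skip listening sockets and sockets without an owner
    try:
        name = psutil.Process(conn.pid).name()
    except psutil.NoSuchProcess:
        continue  # the process exited between the two calls
    print(f"{name} (pid {conn.pid}): "
          f"{conn.laddr.ip}:{conn.laddr.port} -> "
          f"{conn.raddr.ip}:{conn.raddr.port} [{conn.status}]")
```

Even this toy collector surfaces the process-to-peer communication map that a network probe would otherwise have to reconstruct from packets.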

[Image: software agents as the data collection layer – https://dt-cdn.net/wp-content/uploads/2018/01/Image-4.png]

Is the network probe dead?

Not by a long shot.

If we consider the data center probe in terms of its traditional form factor, it’s clear it doesn’t fit well in today’s dynamic and cloud-like data center architectures. But the value of network analysis endures; in fact, some argue it may gain importance as containerization, micro-segmentation, and dynamic provisioning gain firmer footing, relying more heavily on Metcalfe-like network meshes. Inside the modern data center, these insights will be derived from software agents rather than physical – or even virtual – network probes.

In fact, attempting to fulfill an outsized APM-like role may have contributed to the probe’s struggle for relevance. As Digital Performance Management (DPM) and in-depth transaction tracing become the exclusive realm of agents and APIs, there exists a natural opportunity to use these same agents to also deliver fundamental network insights. This leaves NPM – along with user visibility – for the probe, leading to this rhetorical question: Is probe-sourced wire data really the best way to understand the performance of a data center’s core network? Or can agents do the job?

Of course, packet-based TCP flow analysis will also remain important, offering the most objective source of network performance insights; these insights, however, have always been more valuable on the WAN. And that is where we’ll see traditional network probes settle – rather comfortably, actually – into the role they were designed to play, in the place they have always belonged: the edge of the data center, where they can continue to deliver clear value.
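
As a sketch of what that packet-based analysis yields, the snippet below estimates network round-trip time by timing TCP handshakes in a capture file, using the third-party scapy library. The capture file name is hypothetical, and SYN-to-SYN/ACK timing is just one common RTT estimator:

```python
from scapy.all import rdpcap, IP, TCP

packets = rdpcap("wan-edge.pcap")  # hypothetical capture from an edge tap
syn_seen = {}  # (src, dst, sport, dport) -> timestamp of the client's SYN

for pkt in packets:
    if IP in pkt and TCP in pkt:
        ip, tcp = pkt[IP], pkt[TCP]
        if tcp.flags & 0x02 and not tcp.flags & 0x10:     # SYN only
            syn_seen[(ip.src, ip.dst, tcp.sport, tcp.dport)] = float(pkt.time)
        elif tcp.flags & 0x02 and tcp.flags & 0x10:       # SYN+ACK reply
            key = (ip.dst, ip.src, tcp.dport, tcp.sport)  # reverse direction
            if key in syn_seen:
                rtt_ms = (float(pkt.time) - syn_seen.pop(key)) * 1000.0
                print(f"{key[0]} -> {key[1]}:{key[3]}: ~{rtt_ms:.1f} ms RTT")
```

Measured from a tap at the data center edge, this interval approximates the WAN leg of the round trip – exactly the vantage point where such wire data is most telling.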

Monitoring at the edge of the data center

What makes the data center edge so well-suited for a network probe?

  • First and foremost, the WAN – traditional, hybrid, optimized, software-defined – is where network characteristics and TCP flow control behaviors are most likely to have an impact on application performance and availability.
  • Second, today’s WANs incorporate many devices and appliances that influence performance and availability through factors that go beyond such traditional NPM metrics as bandwidth, latency, loss, and routing. These appliances include WAN optimization controllers (WOCs), application delivery controllers (ADCs), load balancers, firewalls, even thin client solutions. They not only control traffic flows through TCP manipulation; they also often perform server-like functions, independently delivering some application content directly to users. Yet they’re still considered part of the network.
  • Third, the number of WAN access points to your data center – and therefore the number of probe points – is relatively small, avoiding the Metcalfe matrix problem.
  • Fourth, these access points often already have wire data access solutions in place for intrusion detection (IDS), which is important to security teams; these network packet broker (NPB) solutions can easily prune and share raw packet streams with a network monitoring probe.
  • Last – but not least – end-user experience is arguably the most important metric by which to measure service delivery quality. The probe’s vantage point at the data center edge provides the best perspective to deliver this value, as it can see all user interactions with your data center apps.

Equipping the network probe with the intelligence to understand application-specific transactions and automatically analyze performance degradation makes it an invaluable triage point for BizDevOps teams responsible for application delivery and performance. User experience remains the common actionable metric important to all three constituencies – business, development, and operations.

It’s worth noting that, just a few short years ago, some pundits were announcing the death of the network probe; the inability to receive packets in PaaS and IaaS clouds presumably foretold a rapid demise. But the speed with which the leading NPB vendors (such as Ixia and Gigamon) solved this problem proves the resilience and value of the probe at the data center’s edge. We’ve started to see this trend first-hand, working with our customers to achieve monitoring continuity as they move their complete application infrastructures – including their DC RUM network probes – from on-premises to IaaS clouds.
