
Irrelevance of Hardware to Network Service Provisioning Speed
September 10, 2014

Inarguably, the pressure is on "the network" to get in gear, so to speak, and address how fast its services can be up and running. Software-defined architectures like cloud and SDN have arisen in response to this pressure, attempting to provide the means by which critical network services can be provisioned in hours instead of days.

Much of the blame for the time it takes to provision network services lands squarely on the fact that much of the network is composed of hardware. Not just any hardware, mind you, but special hardware. Such devices take time to procure, time to unbox, time to rack and time to cable. It's a manually intensive process that, when not anticipated, can add weeks before the hardware is even in place.


Enter virtualization, cloud, containers and any other solution that holds, at its core, abstraction as a key characteristic. Abstraction all but eliminates the time it takes to procure hardware by enabling software to be deployed on any hardware, making the procurement process as simple as finding an empty server in the data center. After all, the majority of networking functions are just very specialized software running on very specific hardware.  Decouple the two and voila! Virtualized, containerized or cloud(erized) networking. Instantaneous! No more waiting for the network. Just push a button and you're done.
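And to be fair, the button-push part really is that easy now. As a purely illustrative sketch (using the Docker SDK for Python; the image, port mapping and container name are stand-ins, not anything prescribed here), standing up a software reverse proxy - a classic network function - takes seconds:

```python
# Illustrative only: spin up a software network function (a reverse proxy)
# the way you'd "push a button." Assumes a running Docker daemon and the
# Docker SDK for Python (pip install docker). Names and ports are stand-ins.
import docker

client = docker.from_env()

# "Procurement" is now just finding capacity - no racking, no cabling.
proxy = client.containers.run(
    "nginx:latest",            # stand-in for any containerized network function
    detach=True,
    ports={"80/tcp": 8080},    # expose the proxy on the host
    name="edge-proxy-01",
)
print(f"{proxy.name} deployed")  # seconds, not weeks - and you're done
```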

Only you aren't.

See, that's not counting the time it takes to actually provision and configure the desired services.

Most of the lamentable time it takes to provision network services has absolutely nothing to do with the underlying hardware. Whether it's commoditized, off-the-shelf hardware or custom-designed silicon makes no difference whatsoever in the actual time required to provision network services. Both proprietary and commoditized hardware support a layer of abstraction - of virtualization - that enables them to be sliced and diced into discrete, consumable chunks of computing power. Within that "container" live the actual network services that must be deployed to keep today's applications scalable, secure and fast enough to satisfy both consumers and business constituents alike.

[Figure: hardware versus hardware]

To point to "hardware" as the primary impediment in rapidly provisioning these services is ludicrous. The hardware has nothing to do with the configuration of the minute and complex details associated with any given network service today. The slowdown is in the configuration of the services and the complexity of the topologies into which such services must be deployed.

This is the nature of application-focused networking. Each service - in addition to the nuts and bolts of IP addresses and VLANs and DNS entries - requires specific settings to ensure the network is able to provide the services upon which businesses rely to deliver applications. An optimized TCP stack for one application can mean disastrous performance for another. The security policy that protects one application may leave gaping holes in a second and completely break the functionality of a third. The route that delivers excellent performance for one application may introduce unacceptable latency for another.
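To make that concrete, here's a minimal sketch of what "application-driven" means in practice - every name and value in it is invented for illustration - showing the same categories of settings with per-application values that cannot simply be copied from one application to the next:

```python
# Illustrative sketch (all names and values invented): the same service
# categories need different, application-specific settings.
APP_SERVICE_PROFILES = {
    "trading-api": {                                   # latency-sensitive
        "tcp": {"nagle": False, "idle_timeout_s": 5},
        "security": {"waf_policy": "strict-json-api"},
        "route": "low-latency-spine",
    },
    "file-share": {                                    # throughput-oriented
        "tcp": {"nagle": True, "idle_timeout_s": 300},
        "security": {"waf_policy": "file-upload"},
        "route": "bulk-transfer",
    },
}

def settings_for(app: str) -> dict:
    """Each app gets its own profile; copying another app's would hurt it."""
    return APP_SERVICE_PROFILES[app]

print(settings_for("trading-api")["tcp"])  # {'nagle': False, 'idle_timeout_s': 5}
```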

It is this reality with which network service configuration must contend, and it is why services absolutely must be application-driven in their configuration. One size does not fit all when it comes to applications.

And thus it is these configurations - not the underlying hardware model - that impede service provisioning in the network and slow down application deployments. Manually flipping a bit here and a byte there, and writing rules that deny access to one device but allow it from another, is time-consuming, error-prone and terribly inefficient.
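The usual antidote is to stop hand-editing and express those rules as data that code can validate and push. A hedged sketch of the idea (the rule schema and the apply step are hypothetical stand-ins for whatever device or controller API is actually in play):

```python
# Illustrative sketch: declarative access rules instead of manual edits.
# The rule schema and apply_rule() target are hypothetical stand-ins.
ACCESS_RULES = [
    {"src": "10.1.0.0/16", "dst": "app-db-01", "action": "deny"},
    {"src": "10.2.0.0/16", "dst": "app-db-01", "action": "allow"},
]

def validate(rule: dict) -> None:
    # Catches the typos a bit-flipping manual process would miss.
    assert rule["action"] in ("allow", "deny"), f"bad action: {rule}"
    assert "/" in rule["src"], f"src must be a CIDR block: {rule}"

def apply_rule(rule: dict) -> None:
    # Stand-in for a device or controller API call; printed for illustration.
    print(f"{rule['action'].upper()} {rule['src']} -> {rule['dst']}")

for rule in ACCESS_RULES:
    validate(rule)
    apply_rule(rule)
```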

Virtualization of network functions a la NFV is a panacea only when one is deploying services that can be configured exactly the same way, every time. That happens to be a model that works for service providers, who are concerned with scaling out specific functions in the network and not necessarily with supporting new application deployments. In the enterprise, where the focus is on delivering individual applications with their own unique performance, security and reliability profiles, virtualization is nothing more than a means of squeezing greater economies of scale out of existing hardware resources - whether commoditized or not.
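A toy sketch of that contrast, with invented names: the service-provider model stamps out N identical instances from one template, while the enterprise model needs a distinct profile per application, so templating alone buys little:

```python
# Illustrative contrast (all names invented). Service providers scale out
# one function with one config; enterprises need per-application variants.
BASE_TEMPLATE = {"function": "vFirewall", "ruleset": "carrier-default"}

# Service-provider NFV: N identical copies - templating solves the problem.
sp_fleet = [dict(BASE_TEMPLATE, instance=i) for i in range(4)]

# Enterprise: every instance diverges, so the template is only a start.
enterprise_fleet = [
    dict(BASE_TEMPLATE, instance=0, ruleset="trading-api-strict"),
    dict(BASE_TEMPLATE, instance=1, ruleset="file-share-upload"),
]

print(len({f["ruleset"] for f in sp_fleet}))          # 1 - one config fits all
print(len({f["ruleset"] for f in enterprise_fleet}))  # 2 - one per application
```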

Enterprises whose continued success relies on the fickle and highly volatile demands of consumer-facing applications are not so fortunate. Each network service must not only support the basic needs of an application but provide value in terms of improving performance, ensuring security or maintaining availability. To do that, each service must be tailored to the application - and sometimes to each client device - in question.

That takes time, and whether that service is deployed on a piece of commodity or custom hardware is irrelevant. The configuration is accomplished in software, which is the same whether running in a container, a virtual machine, or in plain old software daemon form.

That's why operationalization of the network is so critical to improving the alacrity with which application deployments are concluded. Going "virtual" isn't going to change the requirement for provisioning and configuring the services; it only addresses the underlying process of acquiring and provisioning the appropriate resources.
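In code, operationalization tends to look like desired-state reconciliation: declare what the service should be, diff that against what it is, and apply only the difference, repeatably. A minimal sketch, with all values invented:

```python
# Illustrative sketch of operationalized service configuration: compare
# desired state to actual state and apply only the difference, so
# provisioning is repeatable rather than manual. All data is invented.
desired = {"vip": "203.0.113.10", "pool": ["10.0.0.5", "10.0.0.6"], "tcp_profile": "low-latency"}
actual  = {"vip": "203.0.113.10", "pool": ["10.0.0.5"],             "tcp_profile": "default"}

def diff(desired: dict, actual: dict) -> dict:
    """Return only the settings that need to change."""
    return {k: v for k, v in desired.items() if actual.get(k) != v}

for key, value in diff(desired, actual).items():
    # Stand-in for an idempotent API call against the service.
    print(f"set {key} = {value}")
```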


More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
