
Irrelevance of Hardware to Network Service Provisioning Speed
September 10, 2014

Inarguably, the pressure is on "the network" to get in gear, so to speak, and address how fast its services can be up and running. Software-defined architectures like cloud and SDN have arisen in response to this pressure, attempting to provide the means by which critical network services can be provisioned in hours instead of days.

Much of the blame for the time it takes to provision network services lands squarely on the fact that much of the network is composed of hardware. Not just any hardware, mind you, but special hardware. Such devices take time to procure, time to unbox, time to rack and time to cable. It's a manually intensive process that, when not anticipated, can add weeks before the gear is acquired and in place.


Enter virtualization, cloud, containers and any other solution that holds, at its core, abstraction as a key characteristic. Abstraction all but eliminates the time it takes to procure hardware by enabling software to be deployed on any hardware, making the procurement process as simple as finding an empty server in the data center. After all, the majority of networking functions are just very specialized software running on very specific hardware.  Decouple the two and voila! Virtualized, containerized or cloud(erized) networking. Instantaneous! No more waiting for the network. Just push a button and you're done.

Only you aren't.

See, that's not counting the time it takes to actually provision and configure the desired services.

Most of the lamentable time it takes to provision network services has absolutely nothing to do with the underlying hardware. Whether it's commoditized off-the-shelf hardware or custom-designed silicon makes no difference whatsoever in the actual time required to provision network services. Both proprietary and commoditized hardware support a layer of abstraction - of virtualization - that enables them to be sliced and diced into discrete, consumable chunks of computing power. Within that "container" live the actual network services that must be deployed to keep today's applications scalable, secure and fast enough to satisfy both consumers and business constituents alike.

[Image: hardware versus hardware]

To point to "hardware" as the primary impediment in rapidly provisioning these services is ludicrous. The hardware has nothing to do with the configuration of the minute and complex details associated with any given network service today. The slowdown is in the configuration of the services and the complexity of the topologies into which such services must be deployed.

This is the nature of application-focused networking. Each service - in addition to the nuts and bolts of IP addresses and VLANs and DNS entries - requires specific settings to ensure the network can provide the services upon which businesses rely to deliver applications. An optimized TCP stack for one application can mean disastrous performance for another. The security details that protect one application may leave gaping holes in a second and completely break the functionality of a third. The route that gives one application excellent performance through the network may introduce unacceptable latency for another.
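
To make that concrete, here is a minimal Python sketch - all application names and settings are hypothetical - of per-application service profiles: the same TCP, security and routing knobs, tuned differently for each application.

    # Hypothetical per-application network service profiles: the same knobs,
    # tuned differently, because one size does not fit all applications.
    APP_PROFILES = {
        # Latency-sensitive storefront: many short-lived browser connections
        "web-storefront": {
            "tcp": {"nagle_off": True, "idle_timeout_s": 30},
            "security": {"waf_ruleset": "ecommerce", "tls_min": "1.2"},
            "routing": {"prefer": "lowest-latency"},
        },
        # Bulk replication: a few long-lived, throughput-hungry connections
        "nightly-replication": {
            "tcp": {"nagle_off": False, "idle_timeout_s": 600},
            "security": {"waf_ruleset": None, "tls_min": "1.2"},
            "routing": {"prefer": "highest-throughput"},
        },
    }

    def service_config(app: str) -> dict:
        """Return the settings a provisioning tool would push for one app."""
        return APP_PROFILES[app]

    if __name__ == "__main__":
        for app in APP_PROFILES:
            print(app, "->", service_config(app))

Swap the two profiles and the storefront's latency balloons while the replication job's throughput craters - exactly the mismatch described above.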

It is this reality with which network service configuration is concerned and why services absolutely must be application-driven with respect to their particular configuration. One size does not fit all when it comes to applications.

And thus it is these configurations - not the underlying hardware model - that impede service provisioning in the network and slow down application deployments. Manually flipping a bit here and a byte there, and writing rules that deny access from one device but allow it from another, is time-consuming, error-prone and terribly inefficient.
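
As a rough illustration (the rule syntax below is invented), generating such rules from a small declarative intent at least validates addresses before anything touches a device - one less chance for a typo'd bit to take down an application:

    # Render "allow this, deny that" rules from declared intent instead of
    # typing them by hand; ip_network() rejects malformed CIDRs up front.
    from ipaddress import ip_network

    INTENT = [
        # (source network, destination "host:port", action)
        ("10.1.0.0/24", "app-db:5432", "allow"),
        ("0.0.0.0/0",   "app-db:5432", "deny"),
    ]

    def render_acl(intent):
        rules = []
        for src, dst, action in intent:
            net = ip_network(src)            # raises early on a typo'd network
            host, port = dst.split(":")
            rules.append(f"{action} tcp from {net} to {host} port {port}")
        return rules

    for line in render_acl(INTENT):
        print(line)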

Virtualization of network functions a la NFV is a panacea only when one is deploying services that can be configured exactly the same way, every time. That happens to be a model that works for service providers, who are concerned with scaling out specific functions in the network, not necessarily with supporting new application deployments. In the enterprise, where the focus is on delivering individual applications with their own unique performance, security and reliability profiles, virtualization is nothing more than a means of squeezing greater economies of scale out of existing hardware resources - whether commoditized or not.

Enterprises whose continued success relies on the fickle and highly volatile demands of consumer-facing applications are not so fortunate. Each network service must not only support the basic needs of an application but provide value in terms of improving performance, ensuring security or maintaining availability. To do that, each service must be tailored to the application - and sometimes to each client device - in question.

That takes time, and whether that service is deployed on a piece of commodity or custom hardware is irrelevant. The configuration is accomplished in software, which is the same whether running in a container, a virtual machine, or in plain old software daemon form.

That's why operationalization of the network is so critical to improving the alacrity with which application deployments are concluded. Going "virtual" isn't going to change the requirement for provisioning and configuring the services; it only addresses the underlying process of acquiring and provisioning the appropriate resources.
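
What that operationalization might look like, sketched against a purely hypothetical provisioning API (the endpoint and payload shape are invented): the provision-and-configure step becomes one repeatable, reviewable piece of code instead of a hands-on-keyboard session.

    # Provision + configure as a single repeatable step against a hypothetical
    # provisioning API; the call is identical whether the target service runs
    # in a container, a VM, or on dedicated hardware.
    import json
    from urllib import request

    PROVISION_URL = "https://provisioner.example.com/api/services"  # invented

    def deploy_service(app: str, profile: dict) -> None:
        body = json.dumps({"app": app, "config": profile}).encode()
        req = request.Request(
            PROVISION_URL, data=body,
            headers={"Content-Type": "application/json"},
        )
        with request.urlopen(req) as resp:   # automation, not hand-editing
            print(app, resp.status)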


More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
