Virtual Apostasy

When all you have is a hypervisor, everything looks like it should be virtualized

Yes, I'm about to say something that's on the order of heresy in the church of virtualization. But it has to be said, and I'm willing to say it because, well, as General Patton said, "If everyone is thinking the same... someone isn't thinking."

The original NFV white paper, cited in the excellent overview of the relationship between SDN and NFV, "NFV and SDN: What’s the Difference?", describes essentially two problems it attempts to solve: rapid provisioning and operational costs.

The reason commodity hardware is always associated with NFV and with SDN is that, even if there existed a rainbow and unicorns industry-wide standard for managing network hardware, there would still exist significant time required to acquire and deploy said hardware. One does not generally have extra firewalls, routers, switches, and application network service hardware lying around idle. One might, however, have commodity (cheap) compute available on which such services could be deployed.

Software, as we've seen, has readily adapted to distribution and deployment in a digital form factor. It wasn't always so, after all. We started with floppies, moved to CD-ROM, then DVD and, finally, to neat little packages served up by application stores and centralized repositories (RPM, NPM, etc.).

Virtualization arrived just as we were moving from the physical to digital methods of distribution and it afforded us the commonality (abstraction) necessary to enable using commodity hardware for systems that might not otherwise be deployable on that hardware due to a lack of support by the operating system or the application itself. With the exposure of APIs and management via centralized platforms, the issue of provisioning speed was quickly addressed. Thus, virtualization is the easy answer to data center problems up and down the network stack.
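
To make that concrete, here is a minimal sketch of what "provisioning through an exposed API" can look like in practice. It uses the libvirt Python bindings against a local KVM host; the domain XML, image path, name, and sizing are hypothetical placeholders for illustration, not anything prescribed by this post.

```python
# Minimal sketch: define and start a VM through a hypervisor's API
# (libvirt against a local KVM host). The image path, name, and sizing
# below are hypothetical placeholders for illustration only.
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>svc-firewall-01</name>
  <memory unit='MiB'>2048</memory>
  <vcpu>2</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/images/firewall-template.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")   # connect to the local hypervisor
try:
    dom = conn.defineXML(DOMAIN_XML)    # register the domain (persistent)
    dom.create()                        # boot it: provisioning in minutes, not weeks
    print(f"{dom.name()} is running: {bool(dom.isActive())}")
finally:
    conn.close()
```

The specifics matter less than the shape: a consistent, targetable layer plus an API is what collapses provisioning time, regardless of whose logo is on the box underneath.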

But it isn't the only answer, and as SDN has shown there are other models that provide the same agility and cost benefits as virtualization without the potential downsides (performance being the most obvious with respect to the network).

ABSTRACT the ABSTRACTION

Let's abstract the abstraction for a moment. What is it virtualization offers that a similar, software-defined solution would not? If you're going to use raw compute, what is it that virtualization provides that makes it so appealing?

Hardware agnosticism comes to mind as a significant characteristic that leads everyone to choose virtualization as nearly a deus ex machina solution. The idea that one can start with bare metal (raw compute) and within minutes have any of a number of very different systems up and running is compelling. Because there are hardware-specific drivers and configuration required at the OS level, however, that vision isn't easily realized. Enter virtualization, which provides a consistent, targetable layer for the operating system and applications.

Sure, it's software, but is standardizing on a hypervisor platform all that different from standardizing on a hardware platform?

We've turned the hypervisor into our common platform. It is what we target, what we've used as the "base" for deployment. It has eliminated the need to be concerned about the five hundred or a thousand different potential board-level components requiring support, and provided us a simple base platform upon which to deploy. But it hasn't eliminated dependencies; you can't deploy a VM packaged for VMware on a KVM system or vice-versa. There's still some virtual diaspora in the market that requires differently targeted packages. But at least we're down to half-a-dozen from the hundreds of possible combinations at the hardware level.
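
As an illustration of that remaining dependency, repackaging an image built for one hypervisor so another can run it is still an explicit conversion step. A rough sketch, assuming the qemu-img tool is installed and the source is a VMware-format disk; the file names are hypothetical:

```python
# Sketch: converting a VMware-format disk image (VMDK) into the qcow2
# format a KVM host expects, by shelling out to qemu-img.
# File names below are hypothetical, for illustration only.
import subprocess

def vmdk_to_qcow2(src: str, dst: str) -> None:
    """Repackage a VMDK disk image as qcow2 so a KVM host can consume it."""
    subprocess.run(
        ["qemu-img", "convert", "-f", "vmdk", "-O", "qcow2", src, dst],
        check=True,
    )

vmdk_to_qcow2("appliance-vmware.vmdk", "appliance-kvm.qcow2")
```

The point isn't the tool; it's that the "common platform" still comes in a handful of dialects, each needing its own package.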

But is it really virtualization that enables this magical deployment paradigm, or is it the ability to deploy on common hardware it offers that's important? I'd say it's the latter. It's the ability to deploy on commodity hardware that makes virtualization appealing. The hardware, however, still must exist. It must be racked and ready, available for that deployment. In terms of compute, we still have traditional roadblocks around ensuring compute capacity availability. The value up the operational process stack, as it were, of virtualization suddenly becomes more about readiness; about the ability to rapidly provision X or Y or Z because it's pre-packaged for the virtualization platform. In other words, it's the readiness factor that's key to rapid deployment. If there is sufficient compute (hardware) available and if the application/service/whatever is pre-packaged for the target virtualization platform, then rapid deployment ensues.

Otherwise, you're sitting in the same place you were before virtualization.

So there's significant planning that goes into being able to take advantage of virtualization's commoditization of compute to enable rapid deployment. And if we abstract what it is that enables virtualization to be the goodness that it is we find that it's about pre-packaging and a very finite targeted platform upon which services and applications can be deployed.
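
Abstracting that readiness factor into code makes the dependency obvious. A purely illustrative sketch of the two preconditions identified above, available capacity and a package targeted at the platform you actually run; every name and number here is made up:

```python
# Purely illustrative: rapid deployment happens only when both preconditions
# hold -- spare, racked-and-ready compute exists AND the service is
# pre-packaged for the platform you run. All names and numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class Capacity:
    free_vcpus: int
    free_mem_gb: int

def can_rapidly_deploy(service_packages: set[str], target_platform: str,
                       capacity: Capacity, need_vcpus: int, need_mem_gb: int) -> bool:
    packaged = target_platform in service_packages           # readiness: pre-packaging
    has_capacity = (capacity.free_vcpus >= need_vcpus and     # readiness: spare compute
                    capacity.free_mem_gb >= need_mem_gb)
    return packaged and has_capacity

# A firewall image built for KVM and ESXi, deployed onto a KVM pool with headroom:
print(can_rapidly_deploy({"kvm", "esxi"}, "kvm", Capacity(32, 128), 4, 8))  # True
# No package for the target platform: back to the pre-virtualization wait.
print(can_rapidly_deploy({"esxi"}, "kvm", Capacity(32, 128), 4, 8))         # False
```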

The question is, is that the only way to enable that capability?

Obviously I don't think so or I wouldn't be writing this post.

COMPLACENCY is the GREAT INHIBITOR of INNOVATION

What if we could remove the layer of virtualization, replacing it instead with a more robust and agile operating system capable of managing a bare metal deployment with the same alacrity as (or even more than) a comparable virtualized system?

It seems that eliminating yet another layer of abstraction between the network function and, well, the network would be a good thing. Network functions at layers 2-3 are I/O bound; they're heavily reliant on fast input and output, and that includes traversing the hardware up through the OS, up through the hypervisor, up through the... The more paths (and thus internal bus and lane traversals) a packet must travel in the system, the higher the latency. Eliminating as many of these paths as possible is one of the keys*** to continued performance improvements on commodity hardware, to the point where it nears the performance of dedicated network hardware.
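
To see why shortening the path matters, here is a toy back-of-the-envelope model; the per-layer costs are invented for illustration, not measurements. Per-packet latency is roughly additive across the layers traversed, so removing the hypervisor hop removes its contribution on every single packet.

```python
# Toy model only -- the per-layer costs below are invented for illustration,
# not measurements. The point: per-packet latency is additive across the
# layers a packet traverses, so removing a layer helps on every packet.
NIC_AND_BUS_US   = 5.0   # hypothetical cost of NIC + bus traversal
HOST_OS_STACK_US = 8.0   # hypothetical cost of the host OS network stack
HYPERVISOR_US    = 6.0   # hypothetical cost of the hypervisor/virtual-switch hop

virtualized = NIC_AND_BUS_US + HOST_OS_STACK_US + HYPERVISOR_US
bare_metal  = NIC_AND_BUS_US + HOST_OS_STACK_US

print(f"virtualized path: {virtualized:.1f} us/packet")
print(f"bare metal path:  {bare_metal:.1f} us/packet")
print(f"savings: {virtualized - bare_metal:.1f} us on every packet")
```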

If one had such a system that met the requirements - pre-packaged, rapid provisioning, able to run on commodity hardware - would you really need the virtual layer?

No.

But when all you have is a hypervisor...

I'm not saying virtualization isn't good technology, or that it doesn't make sense, or that it shouldn't be used. What I am saying is that perhaps we've become too quick to reach for the hammer when confronted with the challenge of rapid provisioning or flexibility. Let's not get complacent. We're far too early in the SDN and NFV game for that.

* Notice I did not say Sisyphean. It's doable, so it's on the order of Herculean. Unfortunately that also implies it's a long, arduous journey.

** That may be a tad hyperbolic, admittedly.

*** The operating system has a lot - a lot - to do with this equation, but that's a treatise for another day.

More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
