Why Intelligent VM Routing Is Critical to Your Private Cloud’s Success

Hosting decisions are far too important to be left to simplistic, best-efforts approaches

Virtualized and private cloud infrastructures are all about sharing resources - compute, storage and network. Optimizing these environments comes down to the ability to properly balance capacity supply and application demand. In practical terms, this means allocating the right amount of resources and putting workloads in the right places. These decisions are critical to ensuring performance, compliance and cost control.

Yet most organizations use antiquated methods such as home-grown spreadsheets and best guesses to determine which infrastructure to host workloads on and how much capacity to allocate. Not only do these approaches hinder operational agility, but as hosting decisions become more and more complex, they are downright dangerous. The typical strategy employed to stave off risk is to over-provision infrastructure, on the theory that keeping excess capacity on hand will ensure that enough resources are available to avoid any performance problems. This is not only expensive, but it does not actually prevent key operational issues, nor many of the performance and compliance problems caused by incorrectly combining workloads.

In essence, this management challenge is the same one faced by hotel operators. Hoteliers must constantly align guest demands with hotel resources and amenities. A hotel could not operate without a reservation system to manage resource availability and match it with guest needs, and yet operating without the equivalent is exactly how companies manage their virtual and internal cloud environments. Imagine if a hotel lacked the operational control provided by its reservation system and was constantly forced to build more rooms than necessary to meet "potential" guest demand, rather than basing decisions on an actual profile of historical and predicted demand. Or if it put guests in rooms without enough beds or the required amenities. This should start to sound familiar to anyone who has managed a production virtual environment.

Hotels have had the luxury of a long history to refine their operations, and by using reservation systems to properly place guests and manage current and future bookings, they have gained a complete picture of available resources at any point in time. In doing so, they have optimized their ability to plan for and leverage available capacity, achieving the right balance between supply and demand.

Why Workload Routing and Reservations are Important
By applying the same principles used to manage a hotel's available capacity to their own operations, IT organizations can significantly reduce risk and cost while ensuring service levels in virtual and cloud infrastructures. There are five reasons why the process of workload routing and capacity reservation must become a core, automated component of IT planning and management:

1. Complexity of the Hosting Decision
Hosting decisions are all about optimally aligning supply with demand. However, this is very complex in modern infrastructures, where capabilities can vary widely and the requirements of the workloads may have a significant impact on what can go where. To make the optimal decision, there are three important questions that must be asked (a short sketch of how they combine follows the list):

  • Do the infrastructure capabilities satisfy the workload requirements? This is commonly referred to as "fit for purpose," and it determines whether the hosting environment is suitable for the kind of workload being hosted. This question has not always been top of mind, as the typical process for deploying a new application was to procure new infrastructure with very detailed specifications. But the increasing use of shared environments is changing this, and understanding the specifications of the currently running hosting environments is now critical. Unfortunately, early virtual environments tended to be one-size-fits-all, and early internal clouds tended to focus on dev/test workloads, so fit-for-purpose decisions rarely extended beyond ensuring the environment had the right CPU architecture.
  • Will the workloads fit? While the fit-for-purpose analysis is concerned with whether a target environment has the right kind of capacity, this question is concerned with whether there is sufficient free capacity to host the workloads. This is a more traditional capacity problem, but with a twist: virtual and cloud environments are by nature shared, and the capacity equation is multi-dimensional. Resources such as CPU, memory, disk I/O, network I/O and storage capacity must all be considered, along with the levels and patterns of activity, to ensure that new workloads "dovetail" with existing ones. Furthermore, any analysis of capacity must ensure that the workload will fit at the point in time it is deployed, and that it will continue to fit beyond that time.
  • What is the relative cost? While fitness and suitability determine where a workload can be hosted, when more than one environment qualifies the tiebreaker becomes relative cost. Many organizations are not yet sophisticated enough to have an accurate chargeback model in place, but even without one, the relative cost of hosting a workload can be modeled as a function of policy and placement.
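To make this concrete, here is a minimal sketch of how the three questions above might compose into a single placement function. The class fields, tier labels and cost attribute are illustrative assumptions, not any particular product's API.

    # Hypothetical models of a hosting environment and a workload to place.
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Environment:
        name: str
        cpu_arch: str         # e.g. "x86_64"
        tier: str             # e.g. "production" or "dev/test"
        free_cpu: float       # spare vCPU capacity
        free_mem_gb: float    # spare memory
        relative_cost: float  # policy-driven cost of hosting here (assumed)

    @dataclass
    class Workload:
        cpu_arch: str
        tier: str
        cpu: float
        mem_gb: float

    def choose_host(w: Workload, envs: List[Environment]) -> Optional[Environment]:
        # Question 1: fit for purpose -- is this the right *kind* of capacity?
        suitable = [e for e in envs
                    if e.cpu_arch == w.cpu_arch and e.tier == w.tier]
        # Question 2: will it fit? -- a multi-dimensional free-capacity check
        fitting = [e for e in suitable
                   if e.free_cpu >= w.cpu and e.free_mem_gb >= w.mem_gb]
        # Question 3: relative cost -- among qualifying hosts, pick the cheapest
        return min(fitting, key=lambda e: e.relative_cost, default=None)

A real placement engine would evaluate far more dimensions (I/O, storage, affinity, compliance policies), but the shape of the decision is the same: filter on suitability, filter on capacity, then rank on cost.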

2. Capacity Supply and Application Demand are Dynamic
Nothing stands still in virtualized IT environments, and any decisions must be made in the context of ever-changing technologies, hardware specs, service catalogs, application requirements and workloads. This is becoming even more prevalent in the age of the software-defined data center.

Because of this, capacity must be viewed as a pipeline, with inbound demands, inbound supply-side capacity, outbound demands and decommissioned capacity all part of the natural flow of activity. Handling this flow is key to achieving agility, a central goal of the current breed of virtual and cloud hosting infrastructure. The ability to react efficiently to changing needs is critical, and the lack of agility in legacy environments is really a reflection of the fact that previous approaches did not operate as a pipeline. If it currently takes two to three months to get capacity, that is a clear indication that there is no pipeline in place.
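One way to make the pipeline notion concrete is to treat every supply-side and demand-side change as a dated event and track the running headroom. A minimal sketch follows; the dates, deltas and single vCPU dimension are illustrative assumptions.

    from datetime import date

    # (event_date, cpu_delta): positive deltas add capacity, negative consume it.
    pipeline = [
        (date(2016, 6, 1),  +64),  # new cluster comes online (inbound supply)
        (date(2016, 6, 15), -24),  # planned deployment lands (inbound demand)
        (date(2016, 7, 1),  -32),  # old hosts decommissioned (outbound supply)
        (date(2016, 7, 15),  +8),  # workload retired (outbound demand)
    ]

    def projected_headroom(current_cpu, events):
        """Yield (date, headroom) as each pipeline event lands."""
        level = current_cpu
        for when, delta in sorted(events):
            level += delta
            yield when, level

    for when, headroom in projected_headroom(100.0, pipeline):
        print(when, headroom)

With this view, "is there capacity?" stops being a question about today's utilization and becomes a question about the projected balance at the date the demand arrives.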

3. Meeting Your Customers' Expectations
Application owners today expect capacity to be available when required, so IT needs a way to hold capacity for planned workload placements so that it is ready on the date of deployment (much like booking a hotel room in advance).

Sometimes the concept of a capacity reservation is equated with the draw-down on a pool of resources or a quota that has been assigned to a consumer or internal group. This is dangerous, as it simply ensures that a specific amount of resources will not be exceeded, and does not guarantee that actual resources will be available. This is analogous to getting a coupon from a store that says "limit 10 per customer" - it in no way guarantees that there will be any product left on the shelf. Organizations should beware of these types of reservations, as they can give a false sense of security.
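The distinction matters in code as well as in policy. Below is a minimal sketch contrasting the two models; the class and method names are illustrative assumptions. A quota only caps what a consumer may take, while a true reservation decrements the capacity that remains bookable.

    class Quota:
        # "Limit 10 per customer": caps what you may take, guarantees nothing.
        def __init__(self, limit):
            self.limit, self.used = limit, 0.0

        def request(self, amount):
            if self.used + amount > self.limit:
                return False     # over quota
            self.used += amount  # allowed -- but the shelf may still be empty
            return True

    class Cluster:
        # A reservation is a hold against the cluster's actual capacity.
        def __init__(self, capacity):
            self.capacity, self.reserved = capacity, 0.0

        def reserve(self, amount):
            if self.reserved + amount > self.capacity:
                return False         # no real capacity left to hold
            self.reserved += amount  # capacity is now held for this deployment
            return True

A quota check can succeed even when every cluster is full; only the reservation path consults what is actually on the shelf.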

Capacity reservations are extremely useful to those managing the infrastructure capacity. They provide an accurate model of the pipeline of demand, which allows for much more efficient, accurate and timely purchasing decisions. Simply put, less idle capacity needs to be left on the floor. It also allows infrastructure to be managed as a portfolio, and if a certain mix of resources is needed to satisfy the overall supply and demand balance (such as buying servers with more memory), then procurement can factor this in.

4. Even Self-Service Needs Reservations
Self-service can create a highly volatile demand pipeline. But a bigger issue with self-service models is the way organizations perceive them. Many early cloud implementations focus on dev/test users or more grid-type workloads, and the entire approach to delivering capacity takes on a last-minute, unplanned flavor. But these are not the only kinds of workloads - or even the most common - and for a cloud to become a true "next-generation" hosting platform it must also support enterprise applications and proper release planning processes.

The heart of the issue is a tendency for organizations to equate self-service with instant provisioning. Although instant provisioning is useful for dev/test, grid and other horizontal scaling scenarios, it is not the only approach. For example, an online hotel reservation site provides self-service access to hotel rooms, but those rooms are usually not being booked for that same night. For business trips, conferences and even vacations, you book ahead. The same process must be put in place for hosting workloads.

Rather than narrowly defining self-service as the immediate provisioning of capacity, it is better to focus on the intelligent provisioning of capacity, which may or may not be immediate. For enterprise workloads with proper planning cycles and typical lead times, reservations are far more important than instant provisioning. And deciding where the application should be hosted in the first place is a critical decision that is often overlooked. Unless an organization has only one hosting environment, the importance (and difficulty) of this decision should not be underestimated.
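Booking ahead changes the capacity check from an instantaneous one to a time-aware one: a request must fit alongside every reservation whose dates overlap its own. A minimal sketch, with all names and figures as illustrative assumptions:

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Reservation:
        cpu: float
        start: date
        end: date

    def fits(capacity, existing, new):
        # True if the new hold fits alongside all date-overlapping holds.
        overlapping = [r for r in existing
                       if r.start < new.end and new.start < r.end]
        return sum(r.cpu for r in overlapping) + new.cpu <= capacity

    held = [Reservation(cpu=32, start=date(2016, 7, 1), end=date(2016, 9, 1))]
    ask  = Reservation(cpu=40, start=date(2016, 8, 1), end=date(2016, 10, 1))
    print(fits(64.0, held, ask))  # False: August would be over-committed

This is exactly the hotel model: the room can be free tonight and still be unavailable for the week you actually need it.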

5. Demand Is Global
There is a huge benefit to thinking big when it comes to making hosting decisions. The long-term trend will undoubtedly be to start thinking beyond the four walls of an organization and make broader hosting decisions that include external cloud providers, outsourcing models and other potential avenues of efficiency. But the use of external capacity is still a distant roadmap item in many IT organizations, and the current focus tends to be on making the best use of existing capacity and purchasing dollars.

Operating in scale also allows certain assumptions to be challenged, such as the requirement for an application to be hosted at a specific geographical location. Geographical constraints should be fully understood and properly identified, and not simply assumed based on past activity or server-hugging paranoia. Some workloads do have specific jurisdictional constraints, compliance requirements or latency sensitivities, but many have a significant amount of leeway in this regard, and to constrain them unnecessarily ties up expensive data center resources.

Unfortunately, the manual processes and spreadsheet-based approaches in use in many organizations are simply not capable of operating at the necessary scale, and cannot properly model the true requirements and constraints of a workload. This not only means that decisions are made in an overly narrow context, but that the decisions that are made are likely wrong.

Moving Past Your "Gut"
Hosting decisions are far too important to be left to simplistic, best-efforts approaches. Where a workload is placed and how resources are assigned to it is likely the most important factor in operational efficiency and safety, and is even more critical as organizations consider cloud hosting models. These decisions must be driven by the true requirements of the applications, the capabilities of the infrastructure, the policies in force and the pipeline of activity. They should be made in the context of the global picture, where all supply and demand can be considered and all hosting assumptions challenged. And they should be made in software, not brains, so they are repeatable, accurate and can drive automation.

More Stories By Andrew Hillier

Andrew Hillier is CTO and co-founder of CiRBA, Inc., a data center intelligence analytics software provider that determines optimal workload placements and resource allocations required to safely maximize the efficiency of Cloud, virtual and physical infrastructure. Reach Andrew at [email protected]
