
Cloud Expo: Article

Storms in the Cloud

No technology negates the need for proper planning, and the cloud is no different

There are things we take for granted in our everyday lives. We have certain expectations that don't even have to be spoken; they're just a given. If you walk into a room and flip the light switch, the lights go on. If you turn on the water faucet, water comes out; if you pick up the telephone, there is a dial tone. The possibility of any of those things not happening doesn't enter the conversation. These services are ubiquitous; we don't even think about them - they are just there.

In recent years people have seen the impact Mother Nature can have on those core services such as electricity, water and phone. Storms, hurricanes, floods and blizzards have taken our expectations of these services and turned them on their head.

Cloud Computing, the New Light Switch
Cloud computing has become pervasive in both our personal and business lives; you cannot have a conversation about technology without the word "cloud" in it.

On a personal level, our music players are streaming from the cloud, our tablets and eReaders are getting books from the cloud, our TVs are streaming video from the cloud and our smart phones and PCs are being backed up to the cloud. Google has glasses that connect you to the cloud and Samsung just came out with a watch that connects you to the cloud. Like the electricity and water in your home, the cloud is always there - at least that has become the perception and expectation.

On a business level, our expectations are influenced by our personal exposure and experiences with technology. There is an assumption that by going to the cloud, the services provided will always be there, like the light switch.

Recent Heavy Weather in the Cloud
Cloud services and service providers do reinforce those expectations. By dispersing applications across multiple servers and multiple data centers, these technology implementations allow for higher levels of fault tolerance. The risk is that the higher levels of complexity needed to implement these infrastructures introduce new potential 'technology storms' that can expose a business to unexpected failures and outages.

One need only read the headlines of public cloud outages over the last year - NASDAQ, Amazon, Google, and numerous other providers - to understand that going to the cloud does not come with 100% availability, and that outages come with a cost.

  • In January of this year, DropBox experienced an outage due to a 'routine maintenance episode' on a Friday evening. Customers lost access to services for two to five hours, with some outages lasting into the weekend.
  • In August of last year, NASDAQ was shut down for 3 hours. The root cause was determined to be a 'data flood': requests peaked at 26,000 per second (26 times normal volume), exposing a software flaw that prevented the fail-safes designed to keep operations running from being triggered.
  • In that same month, Google experienced an outage of its services that lasted only 4 minutes. In that short period of time, Internet traffic dropped by 40%. (The fact that the outage lasted only 4 minutes speaks well of Google's recovery plans and services.)
  • On January 31, 2013, Amazon had an outage that lasted only 49 minutes. The cost to Amazon in lost sales for those 49 minutes is estimated at $4-5 million. (Several other companies that rely on Amazon's services, such as Netflix, also felt the impact of this outage.)
  • As far back as two years ago, a large portion of the State of Maryland's public-facing IT services were down for days due to a double failure in the storage sub-systems and their failover systems. No system is immune.
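Outage figures like Amazon's can be put in perspective with a back-of-the-envelope downtime cost calculation. The sketch below uses a simple linear approximation and an illustrative revenue figure (the hourly rate is an assumption chosen so the numbers roughly match the reported $4-5 million loss, not a published Amazon figure):

```python
def downtime_cost(revenue_per_hour, outage_minutes):
    """Estimate revenue lost during an outage, assuming revenue
    accrues linearly over time (a rough first approximation)."""
    return revenue_per_hour * (outage_minutes / 60.0)

# Illustrative: at an assumed ~$5.5M/hour in sales,
# a 49-minute outage costs roughly $4.5M.
cost = downtime_cost(5_500_000, 49)
print(f"Estimated loss: ${cost:,.0f}")
```

Even this crude model makes the business case concrete: knowing your own revenue-per-hour turns an abstract availability discussion into dollars.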

Planning for Availability and Recoverability
Going to the cloud does not in and of itself provide high availability and resiliency. Like any technology architecture, these capabilities need to be designed in and come with a cost. Higher availability has always required more effort and associated costs, and going to the cloud alone does not necessarily provide what your business is expecting from that light switch.

When moving to cloud architectures, whether they are public or private, business needs and expectations around availability and resiliency must be defined and understood. You cannot take for granted that by being in the cloud the needs will be met. Due diligence must still be performed.

  • When going to the public clouds, you need to make sure the availability requirements from the business are included in the SLAs with the cloud vendor.
  • When building a private cloud network, it is incumbent on the IT organization to ensure the needs and requirements are baked into the design and implementation of that infrastructure, and that expectations with the business are properly set and understood.
  • Risk mitigation plans need to be developed and in place before outages occur, as even the best infrastructure may still have a failure (such as the State of Maryland). Going to the cloud does not negate the need to develop and have a business continuity plan.
  • If working with a public cloud provider, this is a joint effort, not solely the vendor's responsibility or yours. Vendors will have their own set of plans, and you must dovetail yours with theirs. Make sure you understand what they have in place before signing on the dotted line.
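When reviewing those SLAs, it helps to translate an availability percentage into the downtime it actually permits; "three nines" sounds close to perfect until you see the minutes. A minimal sketch of that conversion:

```python
def allowed_downtime_minutes_per_year(availability_pct):
    """Translate an SLA availability percentage into the
    downtime per year that the SLA still permits."""
    minutes_per_year = 365 * 24 * 60  # 525,600 minutes in a non-leap year
    return minutes_per_year * (1 - availability_pct / 100.0)

for sla in (99.0, 99.9, 99.99):
    print(f"{sla}% availability -> "
          f"{allowed_downtime_minutes_per_year(sla):.1f} min/year of downtime")
```

A 99.9% SLA still allows roughly 525 minutes (almost nine hours) of downtime a year; make sure that number, not the percentage, is what the business signs off on.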

No technology negates the need for proper planning, and the cloud is no different. Ultimately, weathering the technological storms of the cloud is accomplished just as we weather those of Mother Nature: prepare a plan, so that when the storm does hit, you can make it out the other side.

More Stories By Ed Featherston

Ed Featherston is a senior enterprise architect and director at Collaborative Consulting. He brings more than 34 years of information technology experience designing, building, and implementing large, complex solutions, with significant expertise in systems integration, Internet/intranet, client/server, middleware, and cloud technologies. Ed has designed and delivered projects for a variety of industries, including financial services, pharmacy, government, and retail.
