By Kevin Benedict
April 1, 2014 05:14 PM EDT
Cognizant Mobility Expert
I believe the trend away from mainframes to the Cloud will have a large impact on enterprise mobility architecture. If we believe that, going forward, enterprise mobility architectures will be closely tied to the Cloud, then we need to take a serious look at architectural design. I have written about MBaaS (Mobile Back End as a Service), a new form of Cloud offering, but today I want to share my opinions on best practices.
I have been working on an MBaaS project recently, and we ran into some interesting challenges when it came to submitting the app to the Apple App Store. In the middle of the night, UK time, there was server maintenance going on, which was considered safely out of hours. The point I reminded everyone of was that Apple's app review happens in California, which is GMT-7, so what counts as out of hours in the UK is the middle of the working day where Apple does its testing.
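The time-zone arithmetic is easy to get wrong, so here is a quick sanity check (the maintenance time is illustrative):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# A 02:00 maintenance window in the UK...
uk_time = datetime(2014, 4, 1, 2, 0, tzinfo=ZoneInfo("Europe/London"))

# ...is still the previous working day in California (GMT-7 in summer)
ca_time = uk_time.astimezone(ZoneInfo("America/Los_Angeles"))
print(ca_time.strftime("%H:%M on %d %B"))
```

In other words, an "out of hours" window in London lands squarely in the Californian afternoon.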
We then got onto some interesting questions:
- “Do we have availability monitoring?”
- “How do you get the Node service working again if it falls over?”
- “Do we have High Availability (HA) and Disaster Recovery (DR)?”
One of the best articles I found on architectural design for Cloud-native applications on AWS was written back in 2011 (but is still referenced today) and genuinely changed my architectural philosophy on the matter (http://it20.info/2011/04/tcp-clouds-udp-clouds-design-for-fail-and-aws/).
In a nutshell, Amazon Web Services uses a UDP-cloud model because it doesn’t guarantee reliability at the infrastructure level.
This is a very interesting concept, so I want to use the rest of this blog post to explain it, starting with a quick reminder of TCP and UDP.
- TCP is a reliable, connection-oriented protocol with segment sequencing and acknowledgments
- UDP is an unreliable, connectionless protocol with no sequencing or acknowledgments
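The difference is easy to see with a few lines of sockets code; this is a minimal sketch (port 1 is chosen because nothing is normally listening there):

```python
import socket

# UDP: fire-and-forget. sendto() succeeds even though nothing is listening;
# there is no acknowledgment, so the sender never learns the datagram was lost.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"hello", ("127.0.0.1", 1))
udp.close()

# TCP: connection-oriented. connect() fails loudly when no server is there,
# because the protocol itself tracks delivery.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    tcp.connect(("127.0.0.1", 1))
except ConnectionRefusedError:
    print("TCP told us immediately that nobody is listening")
finally:
    tcp.close()
```

UDP pushes the reliability problem up to the application; that is exactly the analogy being drawn with clouds below.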
During a few large AWS outages, a number of bloggers (such as George Reese) outlined the differences between the "design for fail" model and the "traditional" model. The traditional model, among other things, has high-availability (HA) and disaster-recovery (DR) characteristics built right into the infrastructure, and these features are typically application-agnostic. An alternative way to frame "design for fail" versus "traditional" is therefore UDP-clouds versus TCP-clouds.
- A TCP Cloud has the application in the consumer space and the HA / DR policies and Cloud Compute in the provider space.
- A UDP Cloud has the application and the HA/DR policies in the consumer space and only the Cloud Compute in the provider space.
This is obviously a vast oversimplification, and AWS offers far more than just cloud compute, but these are the key components to focus on. AWS does not have high availability built into the EC2 service; instead, they suggest deploying in multiple "Availability Zones" simply to avoid concurrent failures. In other words, if you deploy your application in a given "Availability Zone," there is nothing that will "fail it over" to another "Availability Zone."
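Since nothing in the infrastructure fails over for you, the failover logic has to live in your application. Here is a minimal client-side sketch (the endpoint names and the fetch function are hypothetical):

```python
# In a UDP-cloud, failover is the application's job. This sketch tries each
# Availability Zone endpoint in turn until one answers.
def fetch_with_failover(fetch, endpoints):
    """Return the first successful result, trying each endpoint in order."""
    last_error = None
    for endpoint in endpoints:
        try:
            return fetch(endpoint)
        except ConnectionError as exc:
            last_error = exc  # this zone is down -- try the next one
    raise RuntimeError("all Availability Zones failed") from last_error

# Simulated outage: eu-west-1a is down, eu-west-1b answers
def fake_fetch(endpoint):
    if "eu-west-1a" in endpoint:
        raise ConnectionError("zone outage")
    return f"200 OK from {endpoint}"

print(fetch_with_failover(
    fake_fetch,
    ["app.eu-west-1a.example.com", "app.eu-west-1b.example.com"],
))
```

In a real deployment a load balancer or DNS health checks would usually do this for you, but someone still has to put them there; the provider will not.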
Some AWS customers, therefore, developed tools to test the resiliency of their applications, such as Netflix's Chaos Monkey (http://readwrite.com/2010/12/20/chaos-monkey-how-netflix-uses). These are software programs designed to break things randomly. In a TCP-cloud it would be the cloud provider running traditional tests to make sure the infrastructure can self-recover. In a UDP-cloud it is the developer who must run a Chaos Monkey to test whether the application can self-recover, since it has been designed for failure.
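As a toy illustration of the idea (this is my sketch, not Netflix's actual tool), a chaos routine randomly removes an instance and the application's own healing logic must replace it:

```python
import random

# A toy "chaos monkey": randomly kill one instance, then let application-level
# healing replace it. In a UDP-cloud, this healing logic is your code, not the
# provider's.
def chaos_strike(instances, rng):
    """Remove one randomly chosen instance from the pool."""
    victim = rng.choice(sorted(instances))
    instances.discard(victim)
    return victim

def heal(instances, desired_count, next_id):
    """Replace missing instances until the pool is back to size."""
    while len(instances) < desired_count:
        next_id += 1
        instances.add(f"vm{next_id:04d}")
    return next_id

rng = random.Random(42)  # seeded so the run is repeatable
pool = {"vm0001", "vm0002", "vm0003"}
next_id = 3
killed = chaos_strike(pool, rng)
next_id = heal(pool, 3, next_id)
print(f"killed {killed}; pool healed to {sorted(pool)}")
```

If the healing step is missing or broken, the chaos run tells you so before a real outage does.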
A different view on this is cattle and pets (http://thinkacloud.wordpress.com/2014/01/27/is-openstack-and-vmware-like-cattle-and-pets/).
vSphere servers are likened to pets:
- They are given names (such as pussinboots.cern.ch)
- They are uniquely hand-raised and cared for
- They are nursed back to health when sick
OpenStack servers are likened to cattle:
- They get random identification numbers (such as vm0042.cern.ch)
- They are almost identical to each other
- They are cared for as a group
- They are basically just replaced when ill
The conclusion is that "Future application architectures should use Cattle, but Pets with strong configuration management are viable and still needed". If you haven't made the connection yet: Cattle are UDP Clouds and Pets are TCP Clouds.
To my colleagues, I have always classed MBaaS as somewhere between Cloud PaaS and Cloud SaaS, but I have been quite wrong in this regard. I want to update that definition to the following:
“MBaaS is the combination of Cloud SaaS and EITHER Cloud PaaS or Cloud IaaS, which depends on both the underlying Cloud provider and the supporting service model”.
That means if your underlying Cloud provider is AWS, and your MBaaS vendor isn't giving you additional support for HA/DR, availability monitoring or Chaos Monkey tools, then you are basically sitting on a Cloud IaaS that is acting as a UDP Cloud. That is important to be aware of in terms of what you need to bring to the party, and it is the potential danger of not really understanding the underlying Cloud model you are working with.
That brings us back to the question of how to get a Node service working again when it falls over (http://devo.ps/blog/2013/06/26/goodbye-node-forever-hello-pm2.html). Node-Forever is a popular option for bringing Node services back to life (keep-alive) and also supports CoffeeScript. PM2 adds the following: log aggregation; an API; terminal monitoring (including CPU usage and memory consumption per cluster); native clustering; and JSON configuration.
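As a sketch of that JSON configuration (the app name, script path, and limits here are hypothetical, and PM2 option names may vary between versions):

```json
{
  "apps": [{
    "name": "mbaas-api",
    "script": "./server.js",
    "instances": 4,
    "exec_mode": "cluster",
    "max_memory_restart": "200M",
    "error_file": "./logs/err.log",
    "out_file": "./logs/out.log"
  }]
}
```

You would then start and daemonize the service with something like `pm2 start processes.json`, and PM2 restarts any worker that crashes or exceeds the memory limit.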
There are also plenty of ways to monitor the availability of your Cloud instance. You could subscribe to a status feed for your particular Cloud (http://status.aws.amazon.com/). There are quite a few services that offer a ping service to check availability (https://www.statuscake.com/paid-website-monitoring/). If you are using Appcelerator Cloud Services, then there is a great tool called Relic available on their Marketplace (https://marketplace.appcelerator.com/apps/1140?restoreSearch=true#!features/Availability_Monitoring).
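If you want to roll your own, a basic availability probe is only a few lines; this is a minimal sketch (the health-check URL is a placeholder):

```python
from urllib.request import urlopen
from urllib.error import URLError

def is_up(url, timeout=5, opener=urlopen):
    """Return True if the endpoint answers with an HTTP 2xx within the timeout."""
    try:
        with opener(url, timeout=timeout) as response:
            return 200 <= response.status < 300
    except (URLError, OSError):
        return False

# e.g. run from cron every minute and alert when this flips to False:
# is_up("https://api.example.com/health")
```

The commercial services essentially do the same thing from multiple geographic locations and handle the alerting for you.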
In terms of HA, you need to look into deploying a high-availability proxy. HAProxy (High Availability Proxy) is an open-source load balancer that can load-balance any TCP service. It is particularly suited to HTTP load balancing as it supports session persistence and layer-7 processing. I am not sure how many Cloud developers actually use Chaos Monkey tools to test DR, but the option is certainly there. You should certainly be designing your applications to be as stateless as possible and looking into NoSQL databases.
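A minimal HAProxy backend for a pair of Node servers might look like this (the addresses, ports, and health-check path are hypothetical); the `cookie` lines provide the session persistence mentioned above:

```
frontend www
    bind *:80
    default_backend node_servers

backend node_servers
    balance roundrobin
    option httpchk GET /health
    cookie SRV insert indirect nocache
    server node1 10.0.1.10:3000 check cookie n1
    server node2 10.0.2.10:3000 check cookie n2
```

The `check` keyword makes HAProxy probe each server and stop sending traffic to one that fails its health check, which is exactly the application-level failover a UDP Cloud leaves to you.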
I hope this article has helped you understand that you cannot just assume your MBaaS vendor is providing a full Cloud PaaS and that all of this comes out of the box. I hope you will also consider designing your Cloud services with the underlying infrastructure in mind. You should have this discussion early in the project and work out which tools you need to provide and which enterprise architectural principles need to be applied.
Of course, there is nothing to stop you from having two or three different underlying Cloud providers, or from running just the mission-critical features on a private local Cloud. It is important to remember, though, that Amazon EC2 and other Cloud providers can go down for 48 hours. It is very rare, but it is not unheard of in the history of the Cloud.
"Design for failure and you won't ever be surprised."
I would like to thank Massimo and Douglas Lin for their excellent blogs, which I have referenced throughout this article.