
@DevOpsSummit: Article

Continuous Infrastructure

How IT Went from the Back of the Class to the Front of the Board Room

Businesses have IT because apps deliver value to their customers. This can be direct value - a user-facing app. Or it can be indirect - an internal app for managing sales, marketing, or even HR transactions. Either way, the idea is that the app will make the business more efficient and responsive.

The purpose of IT used to be to develop, purchase and operate these apps. But today, every team in the business is writing apps, and every part of the business depends on the apps that they run. These apps have gotten bigger. They have become more complex. There are more apps everywhere, and they are more mission critical than before.

In fact, we are long past the point where any modern application is a single piece of software. Most apps these days run on top of a collection of common services - web servers, database servers, queue servers - and even storage space and network connections are increasingly consumed as-a-service by apps.
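To make the idea concrete, here's a minimal sketch in Python - the service names and endpoints are invented for illustration - of an app that is really just a declaration of the shared services it consumes:

```python
# Hypothetical sketch: a modern "app" is mostly wiring between shared services.
# Every service name and endpoint below is made up for illustration.

SERVICES = {
    "web":   "http://web.internal:80",
    "db":    "postgres://db.internal:5432/app",
    "queue": "amqp://queue.internal:5672",
    "blob":  "s3://storage.internal/app-bucket",
}

def missing_dependencies(required, available=SERVICES):
    """Return the required services the platform does not yet provide."""
    return [name for name in required if name not in available]

# An app declares what it consumes; the platform supplies the rest.
app_needs = ["web", "db", "queue", "cache"]
print(missing_dependencies(app_needs))  # ['cache'] - the platform lacks a cache
```

The point of the sketch: the app's own code shrinks to the business logic, while everything else is a dependency on a service someone in IT has to run.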

So the purpose of IT has been transformed from operating apps to operating the services that power them. What's different about being a provider of IT services, rather than an operator of applications?

  1. It's about scale.
  2. It's about efficiency.
  3. It's about speed.

Explaining the difference between scale-up and scale-out is beyond the scope of this article, but I encourage you to read Maged Michael's excellent summary of the topic[1]. Whether we're scaling the number of users, the amount of data, or the complexity of the calculations required (or all three) - the era of scale-up is drawing to a close.

So if we can't scale UP, we have to scale OUT - which means building multi-server distributed systems. These are notoriously tricky. (In fact, Google famously considers its expertise in this area to be a key competitive advantage.) So when we've invested the work in developing a system for delivering a particular service, we're likely to try to get as many different apps as possible to share that service. This is the same type of thinking that encouraged object-oriented programming and object reuse - if we're all sharing the same system, we can invest more in making it as reliable and efficient as possible. I like to call this the "Warren Buffett" approach to enterprise architecture - put all your eggs in one basket, and then watch that damn basket!
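As a toy illustration of scale-out thinking (the hashing scheme here is deliberately naive), here's how one shared service might spread its keys across a fleet of servers in Python:

```python
import hashlib

def shard_for(key, n_servers):
    """Deterministically map a key to one of n_servers."""
    digest = hashlib.sha256(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % n_servers

# Scaling OUT: the same keys spread across however many machines we run.
users = ["alice", "bob", "carol", "dave"]
placement = {user: shard_for(user, 4) for user in users}
```

Note the catch: naive modulo placement reshuffles almost every key when n_servers changes, which is one reason real distributed systems reach for consistent hashing - and one reason this stuff is "notoriously tricky."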

So we can see that, as our businesses have become more and more dependent on IT apps, those apps have in turn become more and more dependent on scalable, efficient services. And as applications have gotten both larger and more complex, IT has had to focus more on the services that can power these applications - rather than just the applications themselves.

Now, technology doesn't necessarily make life better. Nor does it reduce the number of mistakes we make. But it *does* help us go faster, which means we can make mistakes at a faster and faster rate. And this is what it's done in business, too - IT has sped everything up.

The pursuit of speed in IT has been transforming everything about how we develop software over the past ten years. It started with our project management methodologies, switching from traditional SDLC approaches to Agile ones such as Scrum or Kanban. As we sped development up, we needed to integrate and test our systems earlier, and the Continuous Integration ecosystem was born, with tools such as Jenkins and Buildbot at the heart of it. Eventually, the need for speed broke through from development and testing into the production operations environment, and the DevOps movement was born. We're starting to see a raft of tools and technologies to support Continuous Deployment now - the most mature form of DevOps.
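The heart of any CI tool can be sketched in a few lines - the stages below are invented stand-ins for a real checkout/build/test pipeline, not anyone's actual API:

```python
def checkout():
    """Fetch the latest commit (stubbed out for illustration)."""
    return True

def build():
    """Compile and package the app (stubbed)."""
    return True

def run_tests():
    """Run the test suite (stubbed)."""
    return True

def run_pipeline(stages):
    """Run each stage in order; stop at the first failure, CI-style."""
    for stage in stages:
        if not stage():
            return False
    return True

print(run_pipeline([checkout, build, run_tests]))  # True when every stage passes
```

Everything a CI server adds - triggers on every commit, parallel workers, dashboards - is elaboration on that fail-fast loop.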

The final step in this metamorphosis is a revolution in the delivery of the underlying infrastructure services - the push to API-driven, or software-defined, ...well, everything. Only when Continuous Infrastructure supports the Continuous Deployment of Continuously Integrated and Continuously Prioritized apps and features, will our work be complete.
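A hedged sketch of what "API-driven infrastructure" means in practice: declare the state you want, and let software compute the calls needed to get there. Every role name and instance count below is invented for illustration:

```python
# Declarative infrastructure in miniature: desired state in, actions out.

desired = {"web": 3, "worker": 5}              # instance counts we want
actual  = {"web": 3, "worker": 2, "cron": 1}   # instance counts we have

def plan(desired, actual):
    """Return the create/destroy actions that converge actual onto desired."""
    actions = []
    for role, want in desired.items():
        have = actual.get(role, 0)
        if want > have:
            actions.append(("create", role, want - have))
        elif want < have:
            actions.append(("destroy", role, have - want))
    for role, have in actual.items():
        if role not in desired:
            actions.append(("destroy", role, have))
    return actions

print(plan(desired, actual))  # [('create', 'worker', 3), ('destroy', 'cron', 1)]
```

Run that reconciliation loop continuously against a real provisioning API and you have the skeleton of Continuous Infrastructure.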

For more of my thoughts on apps and services, open source software and how to manage your IT infrastructure like 400 head of Holsteins, check out my blog at http://www.pistoncloud.com/blog.

1. Maged M. Michael et al., "Scale-up x Scale-out: A Case Study using Nutch/Lucene" - http://www.cecs.uci.edu/~papers/ipdps07/pdfs/SMTPS-201-paper-1.pdf

More Stories By Joshua McKenty

Prior to co-founding Piston Cloud Computing, Joshua McKenty was the Technical Architect of NASA's Nebula Cloud Computing Platform and the OpenStack compute components. As a board member of the OpenStack Foundation, Joshua plays an instrumental role in the OpenStack community. Joshua has over two decades of experience in entrepreneurship, management, and software engineering and architecture. He was the team lead for the development of the Netscape Browser (v. 8) as well as AOL's IE AIM toolbar, and a senior engineer at Flock.com. He also led the successful first release of OpenQuake, an open source software application allowing users to compute seismic hazard, seismic risk and the socio-economic impact of earthquakes. In his spare time, Joshua has crafted a handmade violin and banjo, fathered two children, and invented his own juggling trick, the McKenty Madness.
