PubSub and Other Disruptive Emerging Network Models

What's new with software-defined architectures is not just the logical separation but the physical decoupling

When people write about software-defined architectures being "disruptive" to the network, they do a bit of a disservice to just how much change is occurring under the hood of the engine that drives today's businesses. The notion of separating control and data planes is superficial in that it describes a general concept, and it isn't really all that radical a change, if you think about it.

The control and data planes have always been separate. Since the need for web-scale networks arose, we have implemented separate topological (and usually physical) networks specifically to segregate control traffic from the data path. The reasons for this are many: to keep management (control) traffic from interfering with the delivery of applications (and vice versa), to enable a model in which control over the critical path for applications can be secured, and to ensure access to necessary control functions in the face of failure or attack.

What's new with software-defined architectures is not just the logical separation but the physical decoupling and change in component responsibility. In traditional networks there is a logically separate control plane, but it is distributed; it resides on each physical component. In software-defined networks it is physically separate but it is centralized; control responsibility resides in a single component, the "controller".

Now, OpenStack and emerging models for scaling the systems that will be responsible for managing communication with and for the Internet of Things (like MQTT) use a similar control model, but it's not as active as the software-defined architectural model; it's a more passive one. That's the nature of PubSub imposing itself on the network.

PubSub Control Model

PubSub (publish / subscribe) is a familiar model to application developers. It's a middleware staple that has been used for a very long time to distribute messages to a variable set of systems. In a nutshell, PubSub is based on the notion of a "queue" to which authorized components can publish events or messages of interest and to which interested components can subscribe. Events or messages have a lifetime (like a TTL) and eventually expire. In the interim, subscribed components are expected to check the queue periodically; they poll for events or messages.

The queue itself is much like a switch or router's queue, except messages in the queue are duplicated until the TTL runs out and the message expires.
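
To make that concrete, here's a minimal sketch of such a TTL-based queue in Python. It's illustrative only; the class and method names (PubSubQueue, publish, poll) are invented for this sketch, not any particular middleware's API:

```python
import time
from collections import defaultdict

# A minimal, illustrative pub/sub queue: messages carry a TTL and remain
# available for polling until they expire. A sketch of the concept, not
# any particular middleware's API.
class PubSubQueue:
    def __init__(self):
        self._topics = defaultdict(list)  # topic -> [(expires_at, message)]

    def publish(self, topic, message, ttl_seconds=60.0):
        self._topics[topic].append((time.time() + ttl_seconds, message))

    def poll(self, topic):
        """Return all unexpired messages on a topic; expired ones are dropped."""
        now = time.time()
        live = [(exp, msg) for exp, msg in self._topics[topic] if exp > now]
        self._topics[topic] = live
        return [msg for _, msg in live]

# Subscribers poll periodically rather than being pushed to:
queue = PubSubQueue()
queue.publish("app-events", {"event": "instance-launched", "ip": "10.0.0.5"})
for msg in queue.poll("app-events"):
    print(msg)
```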

PubSub is passive; that is, it does not actively distribute messages. It merely serves as a kind of centralized repository, giving the components that need it access to relevant information about the state of applications and/or the network.

Centralized Control Model

OpenFlow-based SDN, by contrast, is active. That is, it not only serves as a centralized repository for the state of applications and/or the network, it also actively distributes messages to components based on events. For example, a controller might receive a message from another system indicating the launch of a new application instance. That event triggers a series of actions on the controller, including informing the affected network components of configuration changes. In a passive, PubSub model, the network components themselves might be polling for such an event and, upon receiving one, would initiate the appropriate configuration changes themselves.
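
A rough sketch of that active, push-based flow, again with invented names (Controller, Component, apply_config) rather than any real controller's API:

```python
# Sketch of the active, push-based controller model: on an event, the
# controller itself decides what changes and pushes the change to every
# component it manages. All names here are illustrative.
class Component:
    def __init__(self, name):
        self.name = name

    def apply_config(self, change):
        # In a real network this would be an OpenFlow (or other protocol) message.
        print(f"{self.name}: applying {change}")

class Controller:
    def __init__(self):
        self.components = []

    def register(self, component):
        self.components.append(component)

    def on_event(self, event):
        # The controller decides what changes, then tells each component directly.
        change = {"action": "forward", "ip": event["ip"], "port": 80}
        for c in self.components:
            c.apply_config(change)

controller = Controller()
controller.register(Component("switch-1"))
controller.register(Component("switch-2"))
controller.on_event({"event": "instance-launched", "ip": "10.0.0.5"})
```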

We can simplify the description of the differences even more: a controller-based architecture uses a push model, while a pubsub-based architecture uses a pull model. What this doesn't illustrate well is that in a push model, the centralized controller must know how to communicate the desired changes to each and every component it is controlling. That's one of the reasons the original models standardized on OpenFlow and were tightly focused on L2-4 stateless networking: it could be easily standardized down to a common forwarding table. While different components might internalize that differently, the basic information was always the same: IP addresses, ports, and actions.
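
That common forwarding-table shape is simple enough to capture in a few lines. The fields below are a plausible simplification for illustration, not the actual OpenFlow match structure:

```python
from dataclasses import dataclass

# Illustrative only: the kind of L2-4 forwarding-table entry that made early
# OpenFlow easy to standardize -- match fields plus an action, nothing stateful.
@dataclass
class FlowEntry:
    src_ip: str
    dst_ip: str
    dst_port: int
    action: str  # e.g. "forward:port2" or "drop"

rule = FlowEntry("10.0.0.5", "192.168.1.10", 443, "forward:port2")
```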

As we move up the stack into L4-7 stateful networking, however, this model becomes more burdensome because of the complexity of rule sets and differences in policy models across such a broad set of networking domains. Hence the plug-in support in controllers like OpenDaylight for "other" control protocols. But the basic premise of the model remains the same, regardless of the control protocol: the centralized controller dictates the changes to all components. It pushes those changes to the network. Both control and execution are centralized. The controller tells components to change their configuration.
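
Sketched generically (and not modeled on OpenDaylight's actual plug-in interfaces), that pattern looks something like this: the controller still dictates every change, but delegates the wire protocol to per-protocol adapters. All class and method names are illustrative:

```python
# A generic plug-in pattern for a centralized controller: execution stays
# centralized, but the control protocol used per device is pluggable.
class ProtocolAdapter:
    def send(self, device, change):
        raise NotImplementedError

class OpenFlowAdapter(ProtocolAdapter):
    def send(self, device, change):
        print(f"[openflow] {device}: flow-mod {change}")

class NetconfAdapter(ProtocolAdapter):
    def send(self, device, change):
        print(f"[netconf] {device}: edit-config {change}")

class PluggableController:
    def __init__(self):
        self.adapters = {}   # protocol name -> adapter
        self.devices = {}    # device name -> protocol name

    def register(self, device, protocol, adapter=None):
        if adapter:
            self.adapters[protocol] = adapter
        self.devices[device] = protocol

    def push(self, change):
        # Centralized execution: the controller pushes to every device,
        # choosing the right control protocol for each.
        for device, protocol in self.devices.items():
            self.adapters[protocol].send(device, change)

ctrl = PluggableController()
ctrl.register("switch-1", "openflow", OpenFlowAdapter())
ctrl.register("adc-1", "netconf", NetconfAdapter())
ctrl.push({"action": "forward", "ip": "10.0.0.5", "port": 443})
```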

PubSub centralizes control but decentralizes execution. The control plane is still centralized; there is one authoritative system responsible for disseminating change across the network, but each individual component (or domain controller, but we'll get to that in a minute) is responsible for executing the appropriate changes based on its configured policies and services. A pubsub controller never tells a component "change this now"; that's up to the individual component (or domain controller).
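
Reusing the PubSubQueue sketch from above, decentralized execution might look like this: each component carries its own event-to-action policy and decides for itself how (and whether) to reconfigure. The names here are again illustrative:

```python
# Centralized control, decentralized execution: the controller only publishes
# the event; each component polls and applies its own configured policy.
class PollingComponent:
    def __init__(self, name, queue, policy):
        self.name = name
        self.queue = queue
        self.policy = policy  # component-local mapping of event -> action

    def check_for_work(self):
        for event in self.queue.poll("network-events"):
            action = self.policy.get(event["event"])
            if action:
                print(f"{self.name}: executing local action '{action}' for {event}")

# The controller never says "change this now" -- it just publishes:
queue = PubSubQueue()  # from the earlier sketch
queue.publish("network-events", {"event": "instance-launched", "ip": "10.0.0.5"})

lb = PollingComponent("load-balancer", queue, {"instance-launched": "add-pool-member"})
fw = PollingComponent("firewall", queue, {"instance-launched": "permit-ingress"})
lb.check_for_work()
fw.check_for_work()
```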

The Integrated Control Model

To make things even more confusing (and disruptive), these models may be used simultaneously. A software-defined architecture might be based on a centralized control model with domain controllers for specific networking functions (like security and application delivery) integrated via a pubsub-based model.


This is where we start seeing models that combine emerging technologies like OpenStack and SDN architectures. OpenStack manages at the data center level, and at its heart is a pubsub model that domain controllers (stateless L2-4 SDN, stateful L4-7 SDN, etc.) can use to receive notification of changes in the network and subsequently push those changes, using the appropriate control protocols, to the components they manage.
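
Combining the earlier sketches (PubSubQueue and Component), a domain controller in this integrated model pulls from the central bus and pushes to its own domain. As before, this is a conceptual sketch with invented names, not OpenStack's actual notification API:

```python
# Integrated model: pull events from a central pub/sub bus, then push the
# resulting changes to managed components via the domain's control protocol.
class DomainController:
    def __init__(self, name, queue, components):
        self.name = name
        self.queue = queue
        self.components = components

    def sync(self):
        # Pull side: poll the shared queue for relevant events.
        for event in self.queue.poll("datacenter-events"):
            change = self.translate(event)
            # Push side: dictate the change to the components in this domain.
            for c in self.components:
                c.apply_config(change)

    def translate(self, event):
        # Domain-specific policy: turn a generic event into a concrete change.
        return {"action": "update", "target": event["ip"]}

queue = PubSubQueue()  # the passive, central bus
queue.publish("datacenter-events", {"event": "instance-launched", "ip": "10.0.0.5"})
adc_controller = DomainController("l4-7-adc", queue, [Component("adc-1")])
adc_controller.sync()
```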

Needless to say, the term "disruptive" is really inadequate to describe the level of change in the network required to support either model (or both). Both require significant changes not just to the network itself but to the way the network is fundamentally provisioned and managed. It's not just a new CLI or management console; these models dramatically change the design and management of networks.


More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
