
PubSub and Other Disruptive Emerging Network Models

#IoT #OpenStack #SDN

When people write about software-defined architectures being "disruptive" to the network, they're doing a bit of a disservice to just how much change is occurring under the hood in the engine that drives today's businesses. The notion of separating control and data planes is superficial in that it describes a general concept, and it isn't really all that radical a change if you think about it.

The control and data planes have always been separate. We have, since the need for web-scale networks came about, implemented separate topological (and usually physical) networks specifically for the purpose of segregating control traffic from the data path. The reasons for this are many: to keep management (control) traffic from interfering with the delivery of applications (and vice versa), to enable a model in which control over the critical path for applications could be secured, and to ensure access to necessary control functions in the face of failure or attack.

What's new with software-defined architectures is not just the logical separation but the physical decoupling and change in component responsibility. In traditional networks there is a logically separate control plane, but it is distributed; it resides on each physical component. In software-defined networks it is physically separate but it is centralized; control responsibility resides in a single component, the "controller".

Now, OpenStack and emerging models for scaling systems that will be responsible for managing communication with and for the Internet of Things (like MQTT) use a similar control model, but it's not as active as a software-defined architectural model; it's a more passive one. That's the nature of PubSub imposing itself on the network.

PubSub Control Model

PubSub (publish/subscribe) is a familiar model to application developers. It's a middleware staple that's been used for a very long time to distribute messages to a variable set of systems. In a nutshell, PubSub is based on the notion of a "queue" to which authorized components can publish events or messages of interest and to which interested components can subscribe. Events or messages have a life (like a TTL) and eventually expire. In the interim, subscribed components are expected to check the queue for messages periodically. They poll for events or messages.

The queue itself is much like a switch or router's queue, except messages in the queue are duplicated until the TTL runs out and the message expires.

PubSub is passive; that is, it does not actively distribute messages. It merely serves as a kind of centralized repository, making relevant information about the state of applications and/or the network available to the components that need it.
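
To make the passive model concrete, here's a minimal Python sketch (all names, including PassiveBroker and the topic strings, are hypothetical): publishers drop messages onto the queue with a TTL, the queue never pushes anything, and every interested subscriber gets the message by polling until it expires.

import time
from dataclasses import dataclass

@dataclass
class Message:
    topic: str
    payload: dict
    expires_at: float  # absolute time after which the message drops out of the queue

class PassiveBroker:
    """Hypothetical pubsub queue: publishers add messages, subscribers poll."""
    def __init__(self):
        self._messages = []

    def publish(self, topic, payload, ttl=30.0):
        self._messages.append(Message(topic, payload, time.time() + ttl))

    def poll(self, topic):
        # Expired messages fall out; live ones are returned to every poller,
        # which is the "duplicated until the TTL expires" behavior described above.
        now = time.time()
        self._messages = [m for m in self._messages if m.expires_at > now]
        return [m for m in self._messages if m.topic == topic]

# A publisher announces an event; subscribers poll for it on their own schedule.
broker = PassiveBroker()
broker.publish("app.instance.launched", {"ip": "10.0.0.7", "port": 443}, ttl=60)
for msg in broker.poll("app.instance.launched"):
    print(msg.payload)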

Centralized Control Model

OpenFlow-based SDN, by contrast, is active. That is, it not only serves as a centralized repository for the state of applications and/or the network, but it actively distributes messages to components based on events. For example, a controller might receive a message from another system indicating the launch of a new application instance. That event triggers a series of actions on the controller that includes informing the affected network components of configuration changes. In a passive, PubSub model, the network components themselves might be polling for such an event and, upon receiving one, would initiate the appropriate configuration changes themselves.

We can simplify the description of the differences even more: a controller-based architecture uses a push model, while a pubsub-based architecture uses a pull model. What this doesn't illustrate well is that in a push model, the centralized controller must know how to communicate the desired changes to each and every component it is controlling. That's one of the reasons the original models standardized on OpenFlow and were tightly focused on L2-4 stateless networking: it could easily be standardized down to a common forwarding table. While different components might internalize that differently, the basic information was always the same: IP addresses, ports and actions.

As we move up the stack into L4-7 stateful networking, however, this model becomes more burdensome because of the complexity of rule sets and differences in policy models across such a broad set of networking domains. Hence the plug-in support in controllers like OpenDaylight for "other" control protocols. But the basic premise of the model remains the same, regardless of the control protocol: the centralized controller dictates the changes to all components. It pushes those changes to the network. Both control and execution are centralized. The controller tells components to change their configuration.
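
For contrast, here is a rough Python sketch of the push model just described. It is not OpenFlow itself, and the Controller, Switch and FlowRule names are invented for illustration: the controller holds the authoritative state and, on an event, pushes a common L2-4-style rule (addresses, port, action) to every component it controls, which is exactly why it has to know how to speak to each of them.

from dataclasses import dataclass

@dataclass
class FlowRule:
    # Roughly the "IP addresses, ports and actions" of a stateless L2-4 rule
    src_ip: str
    dst_ip: str
    dst_port: int
    action: str  # e.g. "forward", "drop"

class Switch:
    """Hypothetical managed component; the controller drives its configuration."""
    def __init__(self, name):
        self.name = name
        self.flow_table = []

    def install_rule(self, rule):
        self.flow_table.append(rule)
        print(f"{self.name}: installed {rule}")

class Controller:
    """Centralized control and execution: it decides, and it tells every component."""
    def __init__(self, switches):
        self.switches = switches

    def on_instance_launched(self, ip, port):
        rule = FlowRule(src_ip="0.0.0.0/0", dst_ip=ip, dst_port=port, action="forward")
        for sw in self.switches:  # push model: the controller initiates the change
            sw.install_rule(rule)

controller = Controller([Switch("leaf-1"), Switch("leaf-2")])
controller.on_instance_launched("10.0.0.7", 443)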

PubSub centralizes control but decentralizes execution. The control plane is still centralized; there is one authoritative system responsible for disseminating change across the network, but each individual component (or domain controller, but we'll get to that in a minute) is responsible for executing the appropriate changes based on its configured policies and services. A pubsub controller never tells a component "change this now"; that's up to the individual component (or domain controller).
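
Reusing the hypothetical PassiveBroker from the earlier sketch, the component side of that pull model might look like this: the domain controller polls the queue on its own schedule and decides for itself how to realize the change; the queue never issues a "change this now" instruction.

class ADCDomainController:
    """Hypothetical L4-7 domain controller: execution stays local to this component."""
    def __init__(self, broker, topic="app.instance.launched"):
        self.broker = broker
        self.topic = topic
        self.seen = set()

    def poll_and_apply(self):
        for msg in self.broker.poll(self.topic):  # pull model: the component initiates
            key = (msg.payload["ip"], msg.payload["port"])
            if key not in self.seen:
                self.seen.add(key)
                self.apply_local_policy(*key)

    def apply_local_policy(self, ip, port):
        # How the change is realized is this component's business, not the queue's.
        print(f"adding {ip}:{port} to the load-balancing pool per local policy")

adc = ADCDomainController(broker)  # 'broker' is the PassiveBroker from the earlier sketch
adc.poll_and_apply()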

The Integrated Control Model

To make things even more confusing (and disruptive), these models may be used simultaneously. A software-defined architecture might be based on a centralized control model with domain controllers for specific networking functions (like security and application delivery) integrated via a pubsub-based model.

[Figure: the integrated control model]

This is where we start seeing models that combine emerging technologies like OpenStack and SDN architectures. OpenStack manages at the data center level, and at its heart is a pubsub model that domain controllers (stateless L2-4 SDN, stateful L4-7 SDN, etc.) can use to receive notification of changes in the network and subsequently push those changes, using the appropriate control protocols, to the components they manage.
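
A rough sketch of that integration, reusing the hypothetical PassiveBroker, FlowRule and Switch from the sketches above: the domain controller pulls notifications from the data-center-level queue, then pushes the resulting configuration to the components it manages using whatever control protocol they speak.

class IntegratedDomainController:
    """Hypothetical bridge: pulls from the pubsub side, pushes toward managed devices."""
    def __init__(self, broker, switches):
        self.broker = broker
        self.switches = switches

    def reconcile(self):
        # Pull: learn about changes from the data-center-level queue.
        for msg in self.broker.poll("app.instance.launched"):
            rule = FlowRule(src_ip="0.0.0.0/0",
                            dst_ip=msg.payload["ip"],
                            dst_port=msg.payload["port"],
                            action="forward")
            # Push: dictate the change to the components this controller owns,
            # using whatever control protocol those components speak.
            for sw in self.switches:
                sw.install_rule(rule)

dc = IntegratedDomainController(broker, [Switch("edge-1")])
dc.reconcile()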

Needless to say, the term "disruptive" is really inadequate to describe the level of change in the network required to support either model (or both). Both require significant changes not just to the network itself but to the way in which the network is fundamentally provisioned and managed. It's not just a new CLI or management console; these models dramatically change the design and management of networks.


