Semantic Interoperability: The Pot of Gold Under the Rainbow

The problem with semantic interoperability is that human communication is inherently vague, ambiguous, and relative

Our ZapThink 2020 poster lays out our complex web of predictions for enterprise IT in the year 2020. You might think that semantic interoperability is an important part of this story; after all, several groups have been heads down for years now, working on the problem of how to teach computers to agree on the meaning of the information they exchange. But look again: we relegate semantics to the lower right corner, where we point out that we don’t believe there will be much progress in this area by 2020. Eventually, maybe, but even though semantic interoperability appears to be within our grasp, it behaves more like the pot of gold under the rainbow. The closer you get to the rainbow, the farther away it appears.

What gives? ZapThink usually takes an optimistic perspective about the future of technology, but we’re decidedly pessimistic about the prospects for semantic interoperability. The problem as we see it comes down to the human understanding of language. All efforts to standardize meanings in order to facilitate semantic interoperability strip out vagueness and ambiguity from data, and presume a single, universal underlying grammar. After all, isn’t the goal to foster precise, unambiguous, and consistent communication between systems? The problem is, human communication is inherently vague, ambiguous, and relative. The way humans understand the world, the way we think, and the way we put our thoughts into language require both vagueness and ambiguity. Without them, we lose important aspects of meaning. Furthermore, how we structure our language is culturally and linguistically relative. As a result, current semantic interoperability efforts will be able to address a certain class of problems, but in the grand scheme of things, that class of problems is a relatively small subset of the types of communication we would prefer to automate between systems.

The Importance of Vagueness
Ironically, to discuss semantics we must first define our terms. A term is vague when it’s impossible to say whether the term applies in certain circumstances, for example, “my face is red.” Just how red does it have to be before we’re sure it’s red? In contrast, a term is ambiguous when it’s possible to interpret it in more than one way. For example, “I’m going to a bank” might mean that I’m going to a financial institution or to the side of a river.

Vagueness leads to knotty problems in philosophy that impact our ability to provide semantic interoperability. So, let’s go back to philosophy class, and study the sorites paradox. If you have a heap of sand and you take away a single grain, do you still have a heap of sand? Certainly. OK, repeat the process. Clearly, when you get down to a single grain of sand remaining, you no longer have a heap. So, when did the heap cease to be a heap?

Philosophers and linguists have been arguing over how to solve the sorites paradox for over a century now (yes, I know, they should find something more useful to do with their time). One answer: put your foot down and establish a precise boundary. 1,000 or more grains of sand are a heap, but 999 or fewer are not. Our computers will have no problem with such a resolution to the paradox, but it doesn’t accurately represent what we really mean by a heap. After all, if 1,000 grains constitute a heap, wouldn’t 999? Central to the meaning of the term “heap” is its inherent vagueness.
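
To see just how unnatural that resolution is, here’s a minimal sketch of the put-your-foot-down approach (the 1,000-grain threshold is the arbitrary cutoff from the example above):

```python
def is_heap(grains: int) -> bool:
    """Crisp resolution of the sorites paradox: a hard, arbitrary cutoff."""
    return grains >= 1000

print(is_heap(1000))  # True
print(is_heap(999))   # False -- removing one grain flips the answer outright
```

A computer is perfectly happy with this rule; the trouble is that no human uses the word “heap” this way.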

Another solution: yes, there is some number of grains of sand where a heap ceases to be a heap, but we can’t know what it is. This resolution might satisfy some philosophers, but it doesn’t help our computers make sense out of our language. A third approach: instead of considering “is a heap” and “isn’t a heap” as the only two possible values, define a spectrum of intermediary values, or perhaps a continuum of values. The computer scientists are likely to be happy with this answer, as it lends itself to fuzzy logic: the statement “this pile of sand is a heap” might be, say, 40% true. Yes, we can do our fuzzy logic math now, but we’ve still lost some fundamental elements of meaning.
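
Here’s a minimal sketch of that fuzzy approach, assuming a simple linear membership function (the breakpoints of 100 and 10,000 grains are made up purely for illustration):

```python
def heap_membership(grains: int) -> float:
    """Fuzzy resolution: the degree (0.0 to 1.0) to which a pile is a heap.

    The breakpoints below are illustrative assumptions, not established values.
    """
    if grains <= 100:
        return 0.0
    if grains >= 10_000:
        return 1.0
    return (grains - 100) / (10_000 - 100)

print(f"{heap_membership(4_000):.0%} true")  # roughly 40% true, as in the example above
```

The math now works, but notice that we’ve merely quantified the vagueness rather than captured it: the breakpoints are every bit as arbitrary as the crisp cutoff was.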

To bring back our natural language-based understanding of the sorites paradox, let’s step away from an overly analytical approach to the problem and try to look at the paradox from a human perspective. How, for example, would a seven-year-old describe the heap of sand as you take away a grain of sand at a time? They might answer, “well, it’s a smaller heap” or “it’s kinda a heap” or “it’s a little heap” or “it’s not really a heap,” etc. Such expressions are clearly not precise. Our computers wouldn’t be able to make much sense out of them. But these simple, even childish expressions are how people really speak and how people truly understand vagueness.

The important takeaway here is that vagueness isn’t a property relegated to heaps and blushing faces. It’s a ubiquitous property of virtually all human communication, even within the business context. Take, for example, an insurance policy. Insurance policies have a number of properties (policy holder, underwriter, insured property, deductible, etc.) and relationships to other business entities (policy application, underwriting documentation, claims forms, etc.). Now let’s add or take away individual properties and relationships from our canonical understanding of an insurance policy one at a time. Is it still a policy? Clearly, if we take away everything that makes a policy a policy then it’s no longer a policy. But if we take away a single property, we’re likely to say it’s still a policy. So where do we draw the line? If philosophers and linguists haven’t solved this problem in over a century, don’t expect your semantic interoperability tool to make much headway either.
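
The sharp-line problem appears the moment you encode “policy” in a schema. Here’s a hedged sketch, with hypothetical field names, of the kind of crisp validation rule such tools force on us:

```python
# Hypothetical required properties of a policy; the field names and the rule
# itself are illustrative assumptions, not any real insurance standard.
REQUIRED_PROPERTIES = {"policy_holder", "underwriter", "insured_property", "deductible"}

def is_policy(record: dict) -> bool:
    """Crisp rule: a record is a policy only if every required property is present.

    Remove any one property and the record abruptly stops being a policy --
    the schema draws a sharp line that human usage of the term never does.
    """
    return REQUIRED_PROPERTIES <= record.keys()

record = {"policy_holder": "Acme Corp", "underwriter": "Example Re",
          "insured_property": "warehouse", "deductible": 5000}
print(is_policy(record))   # True
del record["deductible"]
print(is_policy(record))   # False, though most humans would still call it a policy
```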

The Problem of Linguistic Relativity
Another century-long battle in the world of linguistics is the fray over linguistic relativity vs. linguistic universality. Linguistic relativity is the position that language affects how speakers see their world, and by extension, how they think. In the other corner is Noam Chomsky’s universal grammar, the linguistic theory that grammar is hardwired into the brain, and hence universal across all peoples regardless of their language or their culture. Theoretical work on a universal grammar has led to dramatic advances in natural language translation, and we all get to use and appreciate Google Translate and its brethren as a result. But while Google Translate is a miraculous tool indeed (especially for us Star Trek fans who marveled at the Universal Translator), it doesn’t take a polyglot to realize that the state of the art for such technology still leaves much to be desired.

Linguistic relativity, however, goes to the heart of the semantic interoperability challenge. Take, for example, one of today’s most useful semantic standards: the Resource Description Framework (RDF). RDF is a metadata data model intended for making statements about resources (in particular, Web-based resources) in the form of subject-predicate-object expressions. For example, you might express the statement “ZapThink wrote this ZapFlash” as the triplet consisting of “ZapThink” (the subject), “wrote” (the predicate), and “this ZapFlash” (the object). Take this basic triplet building block and you can build semantic webs of arbitrary complexity, with the eventual goal of describing the relationships among all business entities within a particular business context.
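
For the curious, here’s what that triplet looks like in code, sketched with Python’s rdflib library (the namespace URI and the resource names are placeholders invented for this example):

```python
from rdflib import Graph, Namespace

# Placeholder namespace for illustration; not a real published vocabulary.
ZT = Namespace("http://example.com/zapthink/")

g = Graph()
g.bind("zt", ZT)

# Subject-predicate-object: "ZapThink wrote this ZapFlash."
g.add((ZT.ZapThink, ZT.wrote, ZT.thisZapFlash))

print(g.serialize(format="turtle"))
# Prints roughly:
#   @prefix zt: <http://example.com/zapthink/> .
#   zt:ZapThink zt:wrote zt:thisZapFlash .
```

Note how the standard bakes the subject-predicate-object structure directly into its core data model, which is precisely what the next paragraph takes issue with.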

The problem with the approach RDF takes, however, is that the subject-predicate-object structure is Eurocentric. Non-European languages (and hence, non-European speakers) don’t necessarily think in sentences that follow this structure. Furthermore, this problem isn’t new. In fact, the research into this phenomenon dates back to the 1940s, with the work of linguist Benjamin Lee Whorf. Whorf conducted linguistic research among the Hopi and other Native American peoples, and thus established an empirical basis for linguistic relativity. The illustration below comes from one of his seminal papers on the subject:

[Figure: Whorf’s comparison of the English sentence “I clean it with a ramrod” and its Shawnee equivalent.]

In the graphic above, Whorf compares a simple sentence, “I clean it with a ramrod,” where “it” refers to a gun, in English and Shawnee. The English sentence predictably follows the subject-predicate-object format that RDF leverages. The Shawnee version, however, translates literally as “dry space/interior of hole/by motion of tool or instrument.” Not only is there no one-to-one correspondence between parts of speech across the two sentences, but the entire context of the expression is different. If you were in the unenviable position of establishing RDF-based semantic interoperability between, say, a British business and a Shawnee business, you’d find RDF far too culturally specific to rise to the challenge.

The ZapThink Take
We have tools for semantic interoperability today, of course – but all such tools require the human step of configuring or training the tool to understand the properties and relationships among entities. Once you’ve trained the tool, it’s possible to automate many semantic interactions. But to get this process started, we must get together in a room with the people we want to communicate with and hammer out the meanings of the terms we’d like to use.
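
In practice, the artifact that comes out of that room is often nothing more exotic than an agreed mapping of terms, which the tool then applies mechanically. A minimal sketch, with hypothetical field names standing in for whatever two trading partners actually negotiate:

```python
# Hypothetical term mapping hammered out by humans up front; every entry
# here represents a meeting, not an algorithm.
AGREED_TERMS = {
    "cust_no": "customer_id",
    "polref": "policy_number",
    "excess": "deductible",  # one partner's term mapped to the other's
}

def translate(record: dict) -> dict:
    """Mechanically rename fields per the human-negotiated mapping.

    The automation here is trivial; all the semantics live in AGREED_TERMS.
    """
    return {AGREED_TERMS.get(key, key): value for key, value in record.items()}

print(translate({"cust_no": 42, "excess": 500}))
# {'customer_id': 42, 'deductible': 500}
```

Once the mapping exists, automating the exchange is easy; producing the mapping is the part no tool has figured out how to skip.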

This human component to semantic interoperability actually dates to the Stone Age. How did we do business in the Stone Age? Say your tribe was on the coast, so you had fish. You were getting tired of fish, so you and your tribemates decided to pack up some fish and bring the bundle to the next village where they had fruit. You showed up at the village market, only you had no common language. So what did you do? You held up some fish, pointed to some fruit, grunted, and waved your hands. If you established a basis of communication, you conducted business, and went home with some fruit. If not, then you went home empty-handed (or you pulled out your clubs and attacked, but that’s another story). Cut to the 21st century, and little has changed. People still have to get together and establish a basis of communication as human beings in order to facilitate semantic interoperability. But fully automating such interoperability is as close as the next rainbow.

More Stories By Jason Bloomberg

Jason Bloomberg is a leading IT industry analyst, Forbes contributor, keynote speaker, and globally recognized expert on multiple disruptive trends in enterprise technology and digital transformation. He is ranked #5 on Onalytica’s list of top Digital Transformation influencers for 2018 and #15 on Jax’s list of top DevOps influencers for 2017, the only person to appear on both lists.

As founder and president of Agile Digital Transformation analyst firm Intellyx, he advises, writes, and speaks on a diverse set of topics, including digital transformation, artificial intelligence, cloud computing, devops, big data/analytics, cybersecurity, blockchain/bitcoin/cryptocurrency, no-code/low-code platforms and tools, organizational transformation, internet of things, enterprise architecture, SD-WAN/SDX, mainframes, hybrid IT, and legacy transformation, among other topics.

Mr. Bloomberg’s articles in Forbes are often viewed by more than 100,000 readers. During his career, he has published over 1,200 articles (over 200 for Forbes alone), spoken at over 400 conferences and webinars, and he has been quoted in the press and blogosphere over 2,000 times.

Mr. Bloomberg is the author or coauthor of four books: The Agile Architecture Revolution (Wiley, 2013), Service Orient or Be Doomed! How Service Orientation Will Change Your Business (Wiley, 2006), XML and Web Services Unleashed (SAMS Publishing, 2002), and Web Page Scripting Techniques (Hayden Books, 1996). His next book, Agile Digital Transformation, is due within the next year.

At SOA-focused industry analyst firm ZapThink from 2001 to 2013, Mr. Bloomberg created and delivered the Licensed ZapThink Architect (LZA) Service-Oriented Architecture (SOA) course and associated credential, certifying over 1,700 professionals worldwide. He is one of the original Managing Partners of ZapThink LLC, which was acquired by Dovel Technologies in 2011.

Prior to ZapThink, Mr. Bloomberg built a diverse background in eBusiness technology management and industry analysis, including serving as a senior analyst in IDC’s eBusiness Advisory group, as well as holding eBusiness management positions at USWeb/CKS (later marchFIRST) and WaveBend Solutions (now Hitachi Consulting), and several software and web development positions.
