
Big Data Governance for Good or Evil

Lessons of the NSA PRISM Initiative

In the days since the news of the NSA’s secret PRISM spying – oops, surveillance – initiative broke, there has been no end of consternation among the media and the Twitterverse. And regardless of where you fall on the political spectrum or what you think of the morality of the NSA’s efforts to collect information about our phone calls or social media interactions, one clear fact shines through: Big Data are real. They are here to stay. But they are also increasingly dangerous. As I explain in my book The Agile Architecture Revolution, the more powerful the technology, the greater the importance we must place on governance. So too with Big Data.

PRISM’s Big Data Governance Lessons
Most people would agree that finding terrorists and stopping them before they can wreak havoc is a good thing. It is also safe to assume that most people would allow that the US Government should be in the intelligence-gathering business, if only to stop the aforesaid terrorists. Countries have been gathering intelligence for millennia, after all, and victories frequently go to the adversary with the better intelligence. Why, then, are people livid about the NSA this time around?

The answer, of course, is that we’re not angry that the NSA is gathering intelligence on terrorists. We’re upset that the NSA is gathering intelligence on everybody else, including ourselves. We’re not talking about some James Bond-style spy mission here. We’re talking about Big Data.

Here, then, is PRISM Big Data lesson number one: It’s not just the data you want that are important; you also have to worry about the data you don’t want. Traditional data governance generally focuses on the data you want: let’s make sure our data are clean, correct, and properly secured. When we have a limited quantity of data and they all have value, then issues like data quality are relatively straightforward (although achieving data quality in practice may still be a major headache).

In the Big Data scenario, however, we’re miners looking for that nugget of gold hidden in vast quantities of dross. Yes, we must govern that nugget of value, but that’s the easy task, relatively speaking. The lesson from PRISM is that we must also govern the dross: the data we don’t want, because they open up a range of governance challenges like the privacy issues at the core of the PRISM scandal.

Your Big Data governance challenge may not be privacy related, but the fact remains that the more leftover data you have, the harder it is to govern them. After all, just because you don’t find value in Big Data doesn’t mean your competition or a hacker won’t.

The second lesson from PRISM: metadata may be Big Data as well. Data professionals are used to thinking of metadata as having technical value but little worth outside the bowels of the IT organization. In the case of PRISM, however, the NSA went after call detail records (CDRs), not the calls themselves. True, I felt a strangely geeky thrill when President Obama used the word metadata – and used it correctly, by the way – but the recent focus on call metadata only serves to highlight the fact that the metadata themselves may be the most valuable Big Data you own. Ask yourself: how robust is your metadata governance? If it’s not every bit as rock-solid as your everyday data governance, then perhaps you’re not ready for Big Data after all.
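
To make the point concrete, here is a minimal sketch (in Python, using a hypothetical, simplified CDR layout; real carrier schemas are far richer) of how little “mere” metadata it takes to reveal who talks to whom and how often, without a single second of call audio:

from collections import Counter

# A hedged sketch: hypothetical, simplified call detail records (CDRs).
# Even these few metadata fields expose who talks to whom, how often,
# and when, with no call content at all.
cdrs = [
    # (caller, callee, start time, duration in seconds)
    ("555-0101", "555-0202", "2013-06-01T09:15", 320),
    ("555-0101", "555-0202", "2013-06-02T09:10", 290),
    ("555-0101", "555-0303", "2013-06-02T22:45", 35),
    ("555-0404", "555-0101", "2013-06-03T08:05", 610),
]

# Build a simple contact graph from the metadata alone.
contacts = Counter((caller, callee) for caller, callee, _, _ in cdrs)

for (caller, callee), calls in contacts.most_common():
    print(f"{caller} -> {callee}: {calls} call(s)")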

PRISM lesson number three: Big Data analytics apps can be data governance tools themselves, particularly when the central challenge is data quality. Terrorists, after all, aren’t quite stupid enough to send tweets like “buying #plasticexplosives now, meet me at the #Boston #Marathon.” They may be fanatics, but let’s posit that we’ve already taken out the real numbskulls, OK? We can safely assume terrorists are actively seeking to obscure their communications, which, from the enterprise perspective, is an example of (in this case intentionally) poor data quality.

The NSA naturally has sophisticated algorithms for cutting through such obfuscation. As your Big Data sets grow, you’ll need similarly sophisticated tools for cleaning up run-of-the-mill data quality issues. Remember, the bigger the data sets, the more diverse and messy your data quality challenges will become. After all, fixing mailing address formats in your ERP system is dramatically simpler than bringing a vast hodgepodge of structured, semi-structured, and unstructured information into some kind of order.
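
For the structured end of that spectrum, here is a sketch of the kind of rule-based cleanup the mailing-address example implies, using inconsistent phone number formats as a stand-in (the inputs, the ten-digit assumption, and the canonical format are illustrative only; the unstructured end of the spectrum demands far heavier machinery):

import re

# A hedged sketch of rule-based data quality cleanup: normalize a hodgepodge
# of phone number formats to one canonical form. Values and rules are
# illustrative assumptions, not any particular vendor's tooling.
raw_numbers = ["(617) 555-0134", "617.555.0134", "6175550134", "+1 617 555 0134"]

def normalize_phone(value):
    digits = re.sub(r"\D", "", value)       # strip everything but digits
    if len(digits) == 11 and digits.startswith("1"):
        digits = digits[1:]                 # drop a leading US country code
    if len(digits) != 10:
        return None                         # flag for manual review
    return f"{digits[0:3]}-{digits[3:6]}-{digits[6:]}"

print([normalize_phone(n) for n in raw_numbers])
# ['617-555-0134', '617-555-0134', '617-555-0134', '617-555-0134']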

On to PRISM lesson number four: Your Big Data analytics results may be not only valuable but also dangerous. While it’s common to liken Big Data analytics to mining for gold, in reality it may be more like mining for uranium. True, uranium has monetary value, but put too much pure uranium in the same place and you’re asking for Big Trouble – Trouble with a capital T.

For example, US Census data are publicly available, but the Census Bureau is not allowed to release any personally identifiable information. However, if it turns out that there is, say, only one Native American family with two children in a given zip code, then it may be possible to uniquely identify them by crunching the data. As a result, the Census Bureau must be very careful not to publish any data that may lead to such results.
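
The underlying mechanics are easy to sketch. The fragment below (hypothetical households and an arbitrary threshold, in the spirit of k-anonymity rather than the Census Bureau’s actual disclosure rules) flags any combination of published attributes shared by so few households that it could re-identify someone:

from collections import Counter

# A hedged sketch of a disclosure check over hypothetical household records.
# Any attribute combination shared by fewer than K households could
# re-identify someone, so it should not be published as-is.
K = 5
households = [
    {"zip": "02134", "ethnicity": "Native American", "children": 2},
    {"zip": "02134", "ethnicity": "White", "children": 2},
    {"zip": "02134", "ethnicity": "White", "children": 2},
    # ... imagine millions more records here ...
]

groups = Counter(
    (h["zip"], h["ethnicity"], h["children"]) for h in households
)

risky = {group: count for group, count in groups.items() if count < K}
print("Attribute combinations too rare to publish:", risky)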

Similarly, a significant danger in the NSA analysis is the risk of false positives. Mistakenly identifying an innocent citizen as a terrorist is an appalling risk that outweighs ordinary privacy concerns – at least in the opinion of the innocent citizen. And while the more data the NSA crunches, the less likely a false positive may be, the false positives that do occur are all the more dangerous for their rarity.
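
A back-of-the-envelope calculation shows why false positives are such an unforgiving risk at Big Data scale (every number below is invented for illustration; none is an estimate of the NSA’s actual accuracy):

# A hedged sketch of base-rate arithmetic with invented numbers: even a very
# accurate classifier applied to an enormous population flags far more
# innocent people than actual threats.
population = 300_000_000      # people whose records are analyzed (assumed)
true_threats = 3_000          # actual threats in that population (assumed)
true_positive_rate = 0.99     # chance a real threat is flagged (assumed)
false_positive_rate = 0.0001  # chance an innocent person is flagged (assumed)

flagged_threats = true_threats * true_positive_rate
flagged_innocents = (population - true_threats) * false_positive_rate

print(f"Real threats flagged: {flagged_threats:,.0f}")
print(f"Innocents flagged:    {flagged_innocents:,.0f}")
print(f"Odds a flagged person is a real threat: "
      f"{flagged_threats / (flagged_threats + flagged_innocents):.1%}")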

On to the fifth lesson, what ZapThink likes to call the Big Data corollary to Parkinson’s Law. You may recall that Parkinson’s Law states that the amount of work you have will expand to fill the available time. The Big Data corollary states that the amount of data you collect will expand to consume your ability to store and process it. In other words, if it’s possible to collect Big Data, then somebody will. So let’s not worry about whether the NSA should collect the data it does; if they don’t, then someone else will – or already has. Any Big Data governance effort faces the same question: what to do with your data, not whether to collect them in the first place.

Finally, the sixth lesson, which is actually a lesson from something the NSA isn’t doing. Note that in the case of the NSA, current data are more valuable than historical data, even historical data that are one day old. Their paramount concern is to mine current intelligence: what terrorists are doing right now. But your problem area might find value in historical data as well as current data. If your problem deals with historical trends, then your data sets have just ballooned again, as have your data governance challenges.

The ZapThink Take
The NSA was only collecting phone call metadata, because those metadata met their needs. But what about the data themselves – the call audio? Perhaps they are unable to collect such vast quantities of data today. If so, it’s only a matter of time before they can. The question is, once they’re able to collect all call audio, will they? Yes, of course they will. The Big Data corollary to Parkinson’s Law in action, after all.

In fact, we might as well just go ahead and assume that somewhere in the Federal Government, they’re collecting all the data – all the phone calls, all the emails, all the tweets, blog posts, forum comments, log files, everything. Because even if they aren’t quite able to amass the whole shebang yet, it’s just a matter of time until they can. And while this scenario seems like a page out of Orwell’s 1984, the most important lesson here is that data governance is now central. It’s no longer a question of whether we can collect Big Data. The entire question is what we should do with Big Data once we have them.

More Stories By Jason Bloomberg

Jason Bloomberg is a leading IT industry analyst, Forbes contributor, keynote speaker, and globally recognized expert on multiple disruptive trends in enterprise technology and digital transformation. He is ranked #5 on Onalytica’s list of top Digital Transformation influencers for 2018 and #15 on Jax’s list of top DevOps influencers for 2017, the only person to appear on both lists.

As founder and president of Agile Digital Transformation analyst firm Intellyx, he advises, writes, and speaks on a diverse set of topics, including digital transformation, artificial intelligence, cloud computing, devops, big data/analytics, cybersecurity, blockchain/bitcoin/cryptocurrency, no-code/low-code platforms and tools, organizational transformation, internet of things, enterprise architecture, SD-WAN/SDX, mainframes, hybrid IT, and legacy transformation, among other topics.

Mr. Bloomberg’s articles in Forbes are often viewed by more than 100,000 readers. During his career, he has published over 1,200 articles (over 200 for Forbes alone), spoken at over 400 conferences and webinars, and been quoted in the press and blogosphere over 2,000 times.

Mr. Bloomberg is the author or coauthor of four books: The Agile Architecture Revolution (Wiley, 2013), Service Orient or Be Doomed! How Service Orientation Will Change Your Business (Wiley, 2006), XML and Web Services Unleashed (SAMS Publishing, 2002), and Web Page Scripting Techniques (Hayden Books, 1996). His next book, Agile Digital Transformation, is due within the next year.

At SOA-focused industry analyst firm ZapThink from 2001 to 2013, Mr. Bloomberg created and delivered the Licensed ZapThink Architect (LZA) Service-Oriented Architecture (SOA) course and associated credential, certifying over 1,700 professionals worldwide. He is one of the original Managing Partners of ZapThink LLC, which was acquired by Dovel Technologies in 2011.

Prior to ZapThink, Mr. Bloomberg built a diverse background in eBusiness technology management and industry analysis, including serving as a senior analyst in IDC’s eBusiness Advisory group, as well as holding eBusiness management positions at USWeb/CKS (later marchFIRST) and WaveBend Solutions (now Hitachi Consulting), and several software and web development positions.
