Big Data Governance for Good or Evil

Lessons of the NSA PRISM Initiative

In the days since the news of the NSA’s secret PRISM spying – oops, surveillance – initiative broke, there has been no end of consternation among the media and the Twitterverse. And regardless of where you fall on the political spectrum or what you think of the morality of the NSA’s efforts to collect information about our phone calls and social media interactions, one clear fact shines through: Big Data are real. They are here to stay. But they are also increasingly dangerous. As I explain in my book The Agile Architecture Revolution, the more powerful the technology, the more important its governance becomes. So too with Big Data.

PRISM’s Big Data Governance Lessons
Most people would agree that finding terrorists and stopping them before they can wreak havoc is a good thing. It is also safe to assume that most people would allow that the US Government should be in the intelligence-gathering business, if only to stop the aforesaid terrorists. Countries have been gathering intelligence for millennia, after all, and victories frequently go to the adversary with the better intelligence. Why, then, are people livid about the NSA this time around?

The answer, of course, is that we’re not angry that the NSA is gathering intelligence on terrorists. We’re upset that the NSA is gathering intelligence on everybody else, including ourselves. We’re not talking about some James Bond-style spy mission here. We’re talking about Big Data.

Here, then, is PRISM Big Data lesson number one: It’s not just the data you want that are important; you also have to worry about the data you don’t want. Traditional data governance generally focuses on the data you want: let’s make sure our data are clean, correct, and properly secured. When we have a limited quantity of data and they all have value, then issues like data quality are relatively straightforward (although achieving data quality in practice may still be a major headache).

In the Big Data scenario, however, we’re miners looking for that nugget of gold hidden in vast quantities of dross. Yes, we must govern that nugget of value, but that’s the easy task, relatively speaking. The lesson from PRISM is that we must also govern the dross: the data we don’t want, because they open up a range of governance challenges like the privacy issues at the core of the PRISM scandal.

Your Big Data governance challenge may not be privacy related, but the fact remains that the more leftover data you have, the harder it is to govern them. After all, just because you don’t find value in Big Data doesn’t mean your competition or a hacker won’t.
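
To make the point concrete, here is a minimal sketch in Python of what governing the dross might look like. The field names, policy values, and 90-day retention period are hypothetical illustrations, not a prescription:

    from datetime import datetime, timedelta

    # Hypothetical policy: the valuable "nugget" gets quality controls and long retention,
    # while the residual "dross" still gets an owner, restricted access, and an expiry date.
    RESIDUAL_RETENTION = timedelta(days=90)

    def govern(record: dict, is_valuable: bool) -> dict:
        """Attach governance attributes to every ingested record, wanted or not."""
        record["owner"] = "data-governance-team"
        if is_valuable:
            record["classification"] = "curated"
            record["retention_until"] = None  # keep indefinitely, under quality controls
        else:
            record["classification"] = "residual"
            record["access"] = "restricted"   # leftover data are still a liability
            record["retention_until"] = datetime.utcnow() + RESIDUAL_RETENTION
        return record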

The second lesson from PRISM: metadata may be Big Data as well. Data professionals are used to thinking of metadata as having technical value but little worth outside the bowels of the IT organization. In the case of PRISM, however, the NSA went after call detail records (CDRs), not the calls themselves. True, I felt a strangely geeky thrill when President Obama used the word metadata – and used it correctly, by the way – but the recent focus on call metadata only serves to highlight the fact that the metadata themselves may be the most valuable Big Data you own. Ask yourself: how robust is your metadata governance? If it’s not every bit as rock solid as your everyday data governance, then perhaps you’re not ready for Big Data after all.
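
For readers who haven’t handled CDRs, the sketch below uses illustrative fields rather than any carrier’s or agency’s actual schema, but it shows just how much those “mere” metadata reveal, and why they deserve governance every bit as strict as the data they describe:

    from dataclasses import dataclass
    from datetime import datetime

    # Illustrative call detail record (CDR) fields -- not any carrier's actual schema.
    # There is no call audio here, yet these "mere" metadata reveal who talked to whom,
    # when, for how long, and roughly where.
    @dataclass
    class CallDetailRecord:
        caller_number: str
        callee_number: str
        start_time: datetime
        duration_seconds: int
        cell_tower_id: str  # coarse location of the caller
        call_type: str      # e.g., "voice" or "sms"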

PRISM lesson number three: Big Data analytics apps can be data governance tools themselves, particularly when the central challenge is data quality. Terrorists, after all, aren’t quite stupid enough to send tweets like “buying #plasticexplosives now, meet me at the #Boston #Marathon.” They may be fanatics, but let’s posit that we’ve already taken out the real numbskulls, OK? We can safely assume terrorists are actively seeking to obscure their communications, which, from the enterprise perspective, is an example of (in this case intentionally) poor data quality.

The NSA naturally has sophisticated algorithms for cutting through such obfuscation. As your Big Data sets grow, you’ll need similarly sophisticated tools for cleaning up run-of-the-mill data quality issues. Remember, the bigger the data sets, the more diverse and messy your data quality challenges will become. After all, fixing mailing address formats in your ERP system is dramatically simpler than bringing a vast hodgepodge of structured, semi-structured, and unstructured information into some kind of order.
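
As a deliberately tiny illustration of that “simpler” end of the spectrum, here is a Python sketch of rule-based address cleanup. The abbreviation list is hypothetical, and a real Big Data quality pipeline would need far richer, often statistical, matching to cut through intentional obfuscation:

    import re

    # Hypothetical abbreviation rules; real pipelines need far richer dictionaries.
    ABBREVIATIONS = {r"\bst\b": "street", r"\bave\b": "avenue", r"\brd\b": "road"}

    def normalize_address(raw: str) -> str:
        """Lowercase, strip punctuation, collapse whitespace, expand common abbreviations."""
        text = re.sub(r"[.,]", "", raw.strip().lower())
        text = re.sub(r"\s+", " ", text)
        for pattern, replacement in ABBREVIATIONS.items():
            text = re.sub(pattern, replacement, text)
        return text

    # normalize_address("123  Main St.") returns "123 main street"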

On to PRISM lesson number four: Your Big Data analytics results may not only be valuable, they may also be dangerous. While it’s common to liken Big Data analytics to mining for gold, in reality it may be more like mining for uranium. True, uranium has monetary value, but put too much pure uranium in the same place and you’re asking for Big Trouble – Trouble with a capital T.

For example, US Census data are publicly available, but the Census Bureau is prohibited from releasing any personally identifiable information. However, if it turns out that there is, say, only one Native American family with two children in a given zip code, then it may be possible to uniquely identify that family by crunching the data. As a result, the Census Bureau must be very careful not to publish any data that could lead to such results.
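
Statistical agencies guard against exactly this re-identification risk with small-cell suppression, a cousin of k-anonymity: any group too small to hide an individual is withheld from publication. A minimal sketch, assuming a hypothetical minimum group size of five and hypothetical field names:

    from collections import Counter

    K_THRESHOLD = 5  # hypothetical minimum group size before publication

    def publishable_counts(records, group_key):
        """Count records per group, suppressing any group small enough to identify people."""
        counts = Counter(group_key(r) for r in records)
        return {group: n for group, n in counts.items() if n >= K_THRESHOLD}

    # Usage sketch:
    # publishable_counts(census_rows, lambda r: (r["zip"], r["ethnicity"], r["children"]))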

Similarly, a significant danger in the NSA analysis is the risk of false positives. Mistakenly identifying an innocent citizen as a terrorist is an appalling risk that outweighs ordinary privacy concerns – at least in the opinion of that innocent citizen. And while the more data the NSA crunches, the less likely a false positive may be, it also follows that the false positives that do occur are all the more dangerous for their rarity.
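
The danger is easy to quantify with a little base-rate arithmetic. The numbers below are made up purely for illustration, but they show how a screening system that is wrong only 0.1% of the time still buries its true hits under false positives when the target population is tiny:

    # Made-up numbers, purely to illustrate the base-rate problem.
    population = 300_000_000     # people screened
    true_terrorists = 3_000      # actual bad actors in that population
    sensitivity = 0.999          # P(flagged | terrorist)
    false_positive_rate = 0.001  # P(flagged | innocent)

    true_positives = true_terrorists * sensitivity
    false_positives = (population - true_terrorists) * false_positive_rate

    # P(terrorist | flagged) = true_positives / (true_positives + false_positives)
    precision = true_positives / (true_positives + false_positives)
    print(f"Flagged individuals who are actually terrorists: {precision:.1%}")  # roughly 1%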

On to the fifth lesson: what ZapThink likes to call the Big Data corollary to Parkinson’s Law. You may recall that Parkinson’s Law states that the amount of work you have will expand to fill the available time. The Big Data corollary states that the amount of data you collect will expand to consume your ability to store and process it. In other words, if it’s possible to collect Big Data, then somebody will. So let’s not worry about whether the NSA should collect the data it does. If they don’t, then someone else will – or already has. Any Big Data governance effort faces the same challenge: the question is what to do with your data, not whether to collect it in the first place.

Finally, the sixth lesson, which actually comes from something the NSA isn’t doing. Note that in the case of the NSA, current data are more valuable than historical data, even historical data that are only a day old. The agency’s paramount concern is to mine current intelligence: what terrorists are doing right now. But your problem domain might find value in historical data as well as current data. If your problem deals with historical trends, then your data sets have just ballooned again, as have your data governance challenges.

The ZapThink Take
The NSA was only collecting phone call metadata because those metadata met their needs. But what about the data themselves, the call audio? Perhaps the agency is not yet able to collect such vast quantities of data; if so, it’s only a matter of time. The question is, once they’re able to collect all call audio, will they? Yes, of course they will. The Big Data corollary to Parkinson’s Law in action, after all.

In fact, we might as well just go ahead and assume that somewhere in the Federal Government, they’re collecting all the data – all the phone calls, all the emails, all the tweets, blog posts, forum comments, log files, everything. Because even if they aren’t quite able to amass the whole shebang yet, it’s just a matter of time till they can. And while this scenario seems like a page out of Orwell’s 1984, the most important lesson here is that data governance now takes center stage. It’s no longer a question of whether we can collect Big Data. The entire question is what we should do with Big Data once we have them.

More Stories By Jason Bloomberg

Jason Bloomberg is a leading IT industry analyst, Forbes contributor, keynote speaker, and globally recognized expert on multiple disruptive trends in enterprise technology and digital transformation. He is ranked #5 on Onalytica’s list of top Digital Transformation influencers for 2018 and #15 on Jax’s list of top DevOps influencers for 2017, the only person to appear on both lists.

As founder and president of Agile Digital Transformation analyst firm Intellyx, he advises, writes, and speaks on a diverse set of topics, including digital transformation, artificial intelligence, cloud computing, devops, big data/analytics, cybersecurity, blockchain/bitcoin/cryptocurrency, no-code/low-code platforms and tools, organizational transformation, internet of things, enterprise architecture, SD-WAN/SDX, mainframes, hybrid IT, and legacy transformation, among other topics.

Mr. Bloomberg’s articles in Forbes are often viewed by more than 100,000 readers. During his career, he has published over 1,200 articles (over 200 for Forbes alone), spoken at over 400 conferences and webinars, and been quoted in the press and blogosphere over 2,000 times.

Mr. Bloomberg is the author or coauthor of four books: The Agile Architecture Revolution (Wiley, 2013), Service Orient or Be Doomed! How Service Orientation Will Change Your Business (Wiley, 2006), XML and Web Services Unleashed (SAMS Publishing, 2002), and Web Page Scripting Techniques (Hayden Books, 1996). His next book, Agile Digital Transformation, is due within the next year.

At SOA-focused industry analyst firm ZapThink from 2001 to 2013, Mr. Bloomberg created and delivered the Licensed ZapThink Architect (LZA) Service-Oriented Architecture (SOA) course and associated credential, certifying over 1,700 professionals worldwide. He is one of the original Managing Partners of ZapThink LLC, which was acquired by Dovel Technologies in 2011.

Prior to ZapThink, Mr. Bloomberg built a diverse background in eBusiness technology management and industry analysis, including serving as a senior analyst in IDC’s eBusiness Advisory group, as well as holding eBusiness management positions at USWeb/CKS (later marchFIRST) and WaveBend Solutions (now Hitachi Consulting), and several software and web development positions.
