Big Data Governance for Good or Evil: Lessons of the NSA PRISM Initiative


In the days since the news of the NSA’s secret PRISM spying – oops, surveillance – initiative broke, there has been no end of consternation among the media and the Twitterverse. And regardless of where you fall on the political spectrum or what you think of the morality of the NSA’s efforts to collect information about our phone calls or social media interactions, one clear fact shines through: Big Data are real. They are here to stay. But they are also increasingly dangerous. As I explain in my book The Agile Architecture Revolution, the more powerful the technology, the greater the importance we must place on governance. So too with Big Data.

PRISM’s Big Data Governance Lessons

Most people would agree that finding terrorists and stopping them before they can wreak havoc is a good thing. It is also safe to assume that most people would allow that the US Government should be in the intelligence-gathering business, if only to stop the aforesaid terrorists. Countries have been gathering intelligence for millennia, after all, and victories frequently go to the adversary with the better intelligence. Why, then, are people livid about the NSA this time around?

The answer, of course, is that we’re not angry that the NSA is gathering intelligence on terrorists. We’re upset that the NSA is gathering intelligence on everybody else, including ourselves. We’re not talking about some James Bond-style spy mission here. We’re talking about Big Data.

Here, then, is PRISM Big Data lesson number one: It’s not just the data you want that are important, you also have to worry about the data you don’t want. Traditional data governance generally focuses on the data you want: let’s make sure our data are clean, correct, and properly secured. When we have a limited quantity of data and they all have value, then issues like data quality are relatively straightforward (although achieving data quality in practice may still be a major headache).

In the Big Data scenario, however, we’re miners looking for that nugget of gold hidden in vast quantities of dross. Yes, we must govern that nugget of value, but that’s the easy task, relatively speaking. The lesson from PRISM is that we must also govern the dross: the data we don’t want, because they open up a range of governance challenges like the privacy issues at the core of the PRISM scandal.

Your Big Data governance challenge may not be privacy related, but the fact remains that the more leftover data you have, the harder it is to govern them. After all, just because you don’t find value in Big Data doesn’t mean your competition or a hacker won’t.

The second lesson from PRISM: metadata may be Big Data as well. Data professionals are used to thinking of metadata as having technical value but little worth outside the bowels of the IT organization. In the case of PRISM, however, the NSA went after call detail records (CDRs), not the calls themselves. True, I felt a strangely geeky thrill when President Obama used the word metadata – and used it correctly, by the way – but the recent focus on call metadata only serves to highlight the fact that the metadata themselves may be the most valuable Big Data you own. Ask yourself: how robust is your metadata governance? If it’s not every bit as rock solid as your everyday data governance, then perhaps you’re not ready for Big Data after all.
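
To make that lesson concrete, here is a minimal sketch in Python of why call detail records are so revealing even without a word of call audio: a handful of metadata fields is enough to reconstruct who talks to whom, how often, and from where. The record layout below is entirely illustrative, not any carrier’s or the NSA’s actual schema.

```python
# Minimal sketch: call *metadata* alone supports analysis, no audio required.
# Field names and values are illustrative, not a real carrier schema.
from collections import Counter
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CallDetailRecord:
    caller: str          # originating number
    callee: str          # terminating number
    start: datetime      # when the call began
    duration_s: int      # call length in seconds
    cell_id: str         # tower that handled the call

def contact_graph(cdrs):
    """Count how often each pair of numbers talks -- the skeleton of a social graph."""
    return Counter((c.caller, c.callee) for c in cdrs)

sample = [
    CallDetailRecord("555-0101", "555-0199", datetime(2013, 6, 1, 9, 0), 120, "tower-17"),
    CallDetailRecord("555-0101", "555-0199", datetime(2013, 6, 2, 9, 5), 300, "tower-17"),
]
print(contact_graph(sample).most_common(1))  # [(('555-0101', '555-0199'), 2)]
```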

PRISM lesson number three: Big Data analytics apps can be data governance tools themselves, particularly when the central challenge is data quality. Terrorists, after all, aren’t quite stupid enough to send tweets like “buying #plasticexplosives now, meet me at the #Boston #Marathon.” They may be fanatics, but let’s posit that we’ve already taken out the real numbskulls, OK? We can safely assume terrorists are actively seeking to obscure their communications, which, from the enterprise perspective, is an example of (in this case intentionally) poor data quality.

The NSA naturally has sophisticated algorithms for cutting through such obfuscation. As your Big Data sets grow, you’ll need similarly sophisticated tools for cleaning up run-of-the-mill data quality issues. Remember, the bigger the data sets, the more diverse and messy your data quality challenges will become. After all, fixing mailing address formats in your ERP system is dramatically simpler than bringing a vast hodgepodge of structured, semi-structured, and unstructured information into some kind of order.
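
To ground that point, here is a toy Python sketch of the sort of run-of-the-mill cleanup involved in normalizing mailing addresses. The abbreviation table and rules are assumptions for illustration only; real pipelines lean on much richer reference data and fuzzy matching.

```python
# Toy example of routine data-quality cleanup: normalizing mailing addresses.
# The abbreviation table and rules are illustrative only.
ABBREVIATIONS = {"street": "St", "avenue": "Ave", "boulevard": "Blvd", "road": "Rd"}

def normalize_address(raw: str) -> str:
    addr = " ".join(raw.split())        # collapse stray whitespace
    addr = addr.rstrip(".,")            # drop trailing punctuation
    words = []
    for word in addr.split(" "):
        key = word.lower().rstrip(".")  # match "Street", "street", "Street."
        words.append(ABBREVIATIONS.get(key, word))
    return " ".join(words)

print(normalize_address("742   Evergreen Street."))  # "742 Evergreen St"
```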

On to PRISM lesson number four: Your Big Data analytics results may not only be valuable, they may also be dangerous. While it’s common to liken Big Data analytics to mining for gold, in reality it may be more like mining for uranium. True, uranium has monetary value, but put too much pure uranium in the same place and you’re asking for Big Trouble – Trouble with a capital T.

For example, US Census data are publicly available, but the Census Bureau is not allowed to release any personally identifiable information. However, if it turns out that there is, say, only one Native American family with two children in a given ZIP code, then it may be possible to identify that family uniquely by crunching the data. As a result, the Census Bureau must be very careful not to publish any data that could lead to such results.
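
One common safeguard against exactly this kind of re-identification is small-cell suppression: never publish an aggregate that describes fewer than some minimum number of people. The Python sketch below shows the idea; the threshold, field names, and records are assumptions chosen for illustration, not actual Census Bureau policy.

```python
# Sketch of small-cell suppression: drop any aggregate covering too few people.
# Threshold, fields, and records are illustrative, not Census Bureau practice.
from collections import Counter

K = 5  # assumed minimum group size that is safe to publish

def publishable_counts(records, keys):
    """Aggregate by quasi-identifiers, suppressing groups smaller than K."""
    counts = Counter(tuple(r[k] for k in keys) for r in records)
    return {group: n for group, n in counts.items() if n >= K}

records = [
    {"zip": "02139", "ethnicity": "Native American", "children": 2},
    {"zip": "02139", "ethnicity": "White", "children": 1},
]
# Every cell here has size 1 < K, so nothing is publishable -- including the
# lone Native American family that could otherwise be singled out.
print(publishable_counts(records, ["zip", "ethnicity", "children"]))  # {}
```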

Similarly, a significant danger in the NSA’s analysis is the risk of false positives. Mistakenly identifying an innocent citizen as a terrorist is an appalling risk that outweighs ordinary privacy concerns – at least in the opinion of that innocent citizen. And while the more data the NSA crunches, the less likely a false positive may be, it also follows that the false positives that do occur are all the more dangerous for their rarity.
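
A quick back-of-the-envelope calculation shows why rarity makes false positives so treacherous; every number below is an assumption chosen purely for illustration.

```python
# Worked base-rate example: even an accurate classifier flags mostly innocents
# when the target class is vanishingly rare. All numbers are assumed.
population = 300_000_000        # people screened
actual_terrorists = 3_000       # assumed true positives in that population
sensitivity = 0.99              # P(flagged | terrorist)
false_positive_rate = 0.001     # P(flagged | innocent)

true_alarms = actual_terrorists * sensitivity
false_alarms = (population - actual_terrorists) * false_positive_rate
precision = true_alarms / (true_alarms + false_alarms)

print(f"Share of flagged people who are actually terrorists: {precision:.1%}")
# Roughly 1% -- about 99 of every 100 people flagged would be innocent.
```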

On to the fifth lesson, what ZapThink likes to call the Big Data corollary to Parkinson’s Law. You may recall that Parkinson’s Law states that the amount of work you have will expand to fill the available time. The Big Data corollary states that the amount of data you collect will expand to consume your ability to store and process it. In other words, if it’s possible to collect Big Data, then somebody will. So let’s not worry about whether the NSA should collect the data it does; if they don’t, then someone else will – or already has. Any Big Data governance effort faces the same challenge: the question is what to do with your data, not whether to collect them in the first place.

Finally, the sixth lesson, which is actually a lesson drawn from something the NSA isn’t doing. Note that in the case of the NSA, current data are more valuable than historical data, even historical data that are one day old. Their paramount concern is to mine current intelligence: what terrorists are doing right now. But you might find value in historical data as well as current data. If your problem deals with historical trends, then your data sets have just ballooned again, as have your data governance challenges.

The ZapThink Take

The NSA was only collecting phone call metadata because those metadata met their needs. But what about the data themselves – the call audio? Perhaps the agency is unable to collect such vast quantities of data today, but if so, it’s only a matter of time before it can. The question is, once they’re able to collect all call audio, will they? Yes, of course they will. The Big Data corollary to Parkinson’s Law in action, after all.

In fact, we might as well just go ahead and assume that somewhere in the Federal Government, they’re collecting all the data – all the phone calls, all the emails, all the tweets, blog posts, forum comments, log files, everything. Because even if they aren’t quite able to amass the whole shebang yet, it’s just a matter of time till they can. And while this scenario seems like a page out of Orwell’s 1984, the most important lesson here is that data governance is now of central importance. It’s no longer a question of whether we can collect Big Data. The entire question is what we should do with Big Data once we have them.
