Big Data – Big Help or Big Risk?

By Andy Thurai (Twitter: @AndyThurai)

(Original version of this blog appeared on ProgrammableWeb)

As promised in my last blog, "Big Data, API, and IoT …..Newer technologies protected by older security," here is a deep dive into Big Data security and how to secure Big Data effectively.

Like many other open source projects, Hadoop has followed a path that hasn't focused much on security. To use Big Data effectively, it needs to be secured properly. However, if you try to force-fit it into an older security model, you may end up compromising more than you think; and if you lock it down too tightly, security can get in the way of performance.

To secure Big Data effectively, you must mitigate the following security risks, which aren't addressed by older security models.

Issue #1: Are the keys to the kingdom with you?
In a hosted environment, the provider holds the keys to your secure data. If a government agency legally demands access, the provider is obligated to hand over your data. Even where such access is legally required, the onus should be on you to control when, what, and how much access you grant, and to keep track of the information released so it can feed your internal auditing processes.

Keep the keys to the kingdom with you. An encryption proxy can provide tighter control.
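
To make that concrete, here is a minimal sketch of the encryption-proxy idea, assuming the third-party Python cryptography package and a key that lives only in your own key store; the function names are illustrative, not part of any product.

```python
# Minimal sketch of "keep the keys with you": encrypt records locally before
# they ever reach the hosted cluster, so the provider only stores ciphertext.
# Assumes the third-party "cryptography" package; the key stays on your side
# (e.g., in your own KMS/HSM), never with the provider.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice, load this from your own key store
proxy_cipher = Fernet(key)

def encrypt_before_upload(record: bytes) -> bytes:
    """What an encryption proxy would do on the way out to the provider."""
    return proxy_cipher.encrypt(record)

def decrypt_after_download(blob: bytes) -> bytes:
    """And on the way back in; only you can decrypt, because only you hold the key."""
    return proxy_cipher.decrypt(blob)

ciphertext = encrypt_before_upload(b"ssn=123-45-6789, name=Jane Doe")
assert decrypt_after_download(ciphertext) == b"ssn=123-45-6789, name=Jane Doe"
```

Because only you hold the key, a legal demand served on the provider yields nothing but ciphertext.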

Issue #2: Encryption slows things down
If you encrypt all of the data, it can slow performance down significantly. To avoid that, some Big Data, BI, and analytics programs choose to encrypt only the sensitive portions of the data. It is therefore important to use a Big Data ecosystem that is intelligent enough to encrypt data selectively.
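
As a rough illustration of selective encryption, here is a sketch that encrypts only the fields marked sensitive and leaves the rest in the clear; the field names, the classification set, and the cryptography dependency are my own assumptions for the example.

```python
# Minimal sketch of selective encryption: only the fields marked sensitive are
# encrypted, so the bulk of the record stays cheap to process.
from cryptography.fernet import Fernet

cipher = Fernet(Fernet.generate_key())
SENSITIVE_FIELDS = {"ssn", "credit_card"}   # driven by your data classification

def encrypt_selectively(record: dict) -> dict:
    return {
        field: cipher.encrypt(value.encode()).decode() if field in SENSITIVE_FIELDS else value
        for field, value in record.items()
    }

row = {"user_id": "u-1001", "region": "us-east", "ssn": "123-45-6789"}
print(encrypt_selectively(row))   # only "ssn" comes back as ciphertext
```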

A separate and more desirable option is to run faster encryption/decryption. Solutions such as the Intel Hadoop security gateway use Intel chip-based encryption acceleration (the Intel AES-NI instruction set as well as the SSE 4.2 instruction set), which is several orders of magnitude faster than software-based encryption solutions. It is not only faster but also more secure, as the data never leaves the processor for an on- or off-board crypto processor.

Issue #3: Identifiable, sensitive data is a big risk
Sensitive data can be classified into two groups: risk-related or compliance-related. Safeguarding that data might involve one of the following approaches:

  1. Completely redact the information so the original can never be recovered. While this is the most secure method, it is of no use if you later need to get the original data back.
  2. Tokenize the sensitive data using a proxy tokenization solution. You can generate a completely random token that is made to look like the original data's format so it won't break backend systems. The sensitive data is stored in a secure vault and only the associated tokens are distributed (see the sketch after this list).
  3. Encrypt the sensitive data using mechanisms such as Format Preserving Encryption (FPE) so the encrypted output fits the format of the original data. Care should be exercised when selecting a solution to make sure it has strong key management and strong encryption capabilities.
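
Here is a minimal sketch of option 2, proxy tokenization; the in-memory dictionary stands in for a hardened token vault, and the card-number format is just an example, so treat both as assumptions rather than a reference implementation.

```python
# Minimal sketch of proxy tokenization: the real value goes into a vault and a
# random token with the same shape goes downstream.
import secrets

token_vault = {}   # token -> original value; a real deployment uses a secure vault

def tokenize(card_number: str) -> str:
    # Same length, all digits, so downstream systems that expect a card-shaped
    # value keep working; the token itself reveals nothing.
    token = "".join(secrets.choice("0123456789") for _ in range(len(card_number)))
    token_vault[token] = card_number
    return token

def detokenize(token: str) -> str:
    return token_vault[token]   # only callers with access to the vault can do this

t = tokenize("4111111111111111")
print(t, detokenize(t) == "4111111111111111")
```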

Issue #4: Keep data and access control properties together
Letting applications and services access the raw data directly could be disastrous. Instead, enforce the data access controls as close to the data as possible. You need to distribute the data together with its associated properties and classification levels, and enforce them where the data lives. One way to do this is to expose the data through an API that controls exposure locally, based on the data's attributes.
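
A minimal sketch of that idea might look like the following, with the classification labels and clearance levels invented for the example.

```python
# Enforce access control next to the data: the API layer filters each record
# by its classification labels against the caller's clearance before anything
# leaves the store.
CLASSIFICATION = {"name": "internal", "email": "confidential", "ssn": "restricted"}
CLEARANCE_RANK = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}

def expose(record: dict, caller_clearance: str) -> dict:
    rank = CLEARANCE_RANK[caller_clearance]
    return {
        field: value
        for field, value in record.items()
        if CLEARANCE_RANK[CLASSIFICATION.get(field, "restricted")] <= rank
    }

row = {"name": "Jane Doe", "email": "jane@example.com", "ssn": "123-45-6789"}
print(expose(row, "internal"))       # only the name
print(expose(row, "confidential"))   # name and email, but never the SSN
```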

Issue #5: Protect the exposure APIs
Many of the Big Data components communicate via APIs (e.g., HDFS, HBase, and HCatalog). Allowing such powerful APIs to be exposed with little or no protection can lead to disastrous results. The most effective way to protect your Big Data goldmine is to introduce a touchless API security gateway in front of the Hadoop clusters. The clusters can be configured to trust calls only from the secure gateway. By choosing a hardened Big Data security gateway, you can enforce all of the above using very rich authentication and authorization schemes.
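
As a rough sketch of what such a gateway does, assume a simple token-to-principal map and a WebHDFS endpoint reachable only from the gateway host; the token store, URL, and port are placeholders, not a recommendation for how to run this in production.

```python
# Minimal sketch of a gateway in front of WebHDFS: authenticate the caller,
# authorize the specific REST operation, and only then forward the request to
# the cluster (which firewalls everything except the gateway).
import requests

API_TOKENS = {"s3cr3t-token": {"user": "analyst1", "allowed_ops": {"OPEN", "LISTSTATUS"}}}
WEBHDFS_BASE = "http://namenode.internal:9870/webhdfs/v1"   # reachable only from the gateway

def gateway_forward(token: str, path: str, op: str) -> requests.Response:
    principal = API_TOKENS.get(token)
    if principal is None:
        raise PermissionError("unknown caller")
    if op not in principal["allowed_ops"]:
        raise PermissionError(f"{principal['user']} may not perform {op}")
    # Forward on behalf of the authenticated user; the cluster trusts only this host.
    return requests.get(f"{WEBHDFS_BASE}{path}", params={"op": op, "user.name": principal["user"]})
```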

Issue #6: NameNode protection
This issue is important enough for me to call it out separately. It arises from the architecture: if proper resource protection is not enforced, the NameNode becomes a single point of failure, rendering the entire Hadoop cluster useless. It is as easy as someone launching a DoS attack against WebHDFS by generating excessive activity that brings WebHDFS down.
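
One common mitigation is to throttle callers before their traffic ever reaches the NameNode. Below is a minimal sliding-window rate limiter you might place in the gateway; the 100-requests-per-minute budget is an arbitrary assumption.

```python
# Per-caller sliding-window rate limit applied at the gateway, so a single
# noisy client cannot flood WebHDFS and take the NameNode down with it.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_REQUESTS = 100
_recent = defaultdict(deque)   # caller -> timestamps of recent requests

def allow_request(caller, now=None):
    now = time.time() if now is None else now
    window = _recent[caller]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_REQUESTS:
        return False               # throttle instead of letting the NameNode drown
    window.append(now)
    return True
```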

Issue #7: Identify, authenticate, authorize, and control the data access
You need an effective identity management and access control system in place to make this happen. You also need to identify the user base and control access to the data consistently, based on access control policies, without relying on additional identity silos. Ideally, authentication and authorization for Hadoop should leverage existing identity management investments. The enforcement should also take time-based restrictions into account (for example, certain users can access certain data only during specific periods).
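
A minimal sketch of such a policy check, with a dictionary standing in for an existing LDAP/AD group lookup and an invented 08:00-18:00 access window, might look like this:

```python
# Reuse an existing identity store for group membership and layer a
# time-of-day restriction on top; groups, datasets, and hours are examples.
from datetime import datetime, time as clock

DIRECTORY_GROUPS = {"jdoe": {"fraud-analysts"}}          # stand-in for LDAP/AD lookups
POLICY = {"fraud-analysts": {"dataset": "transactions",
                             "window": (clock(8, 0), clock(18, 0))}}

def is_authorized(user: str, dataset: str, when: datetime) -> bool:
    for group in DIRECTORY_GROUPS.get(user, set()):
        rule = POLICY.get(group)
        if rule and rule["dataset"] == dataset:
            start, end = rule["window"]
            if start <= when.time() <= end:
                return True
    return False

print(is_authorized("jdoe", "transactions", datetime(2014, 6, 2, 10, 30)))   # True
print(is_authorized("jdoe", "transactions", datetime(2014, 6, 2, 23, 0)))    # False, outside hours
```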

Issue #8: Monitor, log, and analyze the usage patterns
Once you have implemented effective, classification-based data access controls, you also need to monitor and log the usage patterns. Constantly analyze those patterns to make sure there is no unusual activity. It is crucial to catch unusual activity and access patterns early enough that you can stop dumps of data from making it out of your repository to a hacker.
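
As an illustration, here is a sketch that flags callers whose read volume in the current period is far above their own historical baseline; the 5x threshold and the log format are assumptions for the example, not a tuned detection rule.

```python
# Flag callers whose current read volume dwarfs their historical baseline;
# these become candidates for alerting or cutting off access.
from collections import Counter

def flag_unusual_readers(history: Counter, today: Counter, factor: float = 5.0) -> list:
    suspicious = []
    for user, reads_today in today.items():
        baseline = history.get(user, 0) or 1   # avoid dividing by zero for new users
        if reads_today > factor * baseline:
            suspicious.append(user)
    return suspicious

history = Counter({"analyst1": 200, "etl-job": 5000})
today = Counter({"analyst1": 180, "etl-job": 4800, "contractor7": 90000})
print(flag_unusual_readers(history, today))   # ['contractor7']
```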

Conclusion
As more and more organizations rush to implement and harness the power of Big Data, care should be exercised to secure it. Extending existing security models to fit Big Data may not solve the problem; in fact, it might introduce additional performance issues, as discussed above. A solid security framework needs to be thought through before organizations can adopt enterprise-grade Big Data.

More Stories By Andy Thurai

Andy Thurai is Program Director for API, IoT and Connected Cloud with IBM, where he is responsible for solutionizing, strategizing, evangelizing, and providing thought leadership for those technologies. Prior to this role, he held technology, architecture leadership, and executive positions with Intel, Nortel, BMC, CSC, and L-1 Identity Solutions.

You can find more of his thoughts at www.thurai.net/blog or follow him on Twitter @AndyThurai.
