Putting Analytics into the Decision-Making Workflow with Apache Spark

Data-driven businesses use analytics to inform and support their decisions. In many companies, the marketing, sales, finance, and operations departments are the earliest adopters of data analytics, with the rest of the business lagging behind. The goal for many organizations now is to make analytics a natural part of the daily workflow for most, if not all, employees. Achieving that objective typically requires a shift in corporate culture and ready access to user-friendly data analytics tools.

Big Data Shouldn't Be Restricted to Data Scientists
When discussing how to integrate data analysis into workflows across an enterprise, Big Data experts often talk blithely about how easily users can leverage their SQL skills to query data. The problem is that not everyone has SQL skills, or even knows what SQL is.

Companies that plan to transform themselves into data-driven, lean businesses should recognize that not every employee needs to be a data scientist. Focus the majority of training efforts (including how to run basic SQL queries, if necessary) on the employees whose jobs involve fact-based decision-making.

Making employees wait for IT to manage schemas and set up ETL tasks is counterproductive. In a busy company, by the time data is prepped for analysis, it may have lost some of its actionable relevance. Instead, provide robust self-service data analysis tools, such as Apache Drill, that let users extract the most value possible from data stored in Hadoop. This frees employees to work with data in its native formats (schema-less data, nested data, and data with rapidly evolving schemas) with little to no IT involvement.
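
As a rough sketch of what that self-service access can look like, the snippet below submits an ANSI SQL query to Apache Drill's REST API from Python, querying raw JSON files in place with no prior schema definition or ETL step. The host, port, file path, and field names here are assumptions for illustration, not a prescribed setup.

```python
# Hypothetical example: query raw, nested JSON files in place via
# Apache Drill's REST API. Drill infers the schema at read time,
# so no ETL or schema setup is required first.
import requests

DRILL_URL = "http://localhost:8047/query.json"  # Drill's default web port

# The file path and nested field names below are illustrative.
sql = """
SELECT t.geo.country AS country, COUNT(*) AS events
FROM dfs.`/data/clickstream/*.json` t
GROUP BY t.geo.country
ORDER BY events DESC
"""

resp = requests.post(DRILL_URL, json={"queryType": "SQL", "query": sql})
resp.raise_for_status()

for row in resp.json()["rows"]:
    print(row["country"], row["events"])
```

Because the query runs against the files as they sit in Hadoop, a business user can change the fields or add a filter and rerun it immediately, with no IT involvement.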

Self-service data tools also enable exploratory queries. Users can explore the data directly and extend their analysis effortlessly, with no need to wait for IT to prep additional data sets. Analysis can then extend past known, structured data to semi-structured and unstructured data, such as call center logs, videos, spreadsheets, social media data, clickstream data, web log files, and external data (such as publicly available industry data), allowing a business to gain big-picture, actionable insights on the fly.

Apache Spark: Bringing New Efficiencies to Big Data Analysis
Agile companies that rely on near-real-time and real-time data analysis also need solutions that can rapidly process large data sets. Apache Spark, an in-memory data processing framework, is increasingly the solution of choice.

Spark is a framework for parallel, distributed data processing. It can be deployed on Apache Hadoop via YARN, on Apache Mesos, or with its own standalone cluster manager. It can serve as a foundation for other data processing frameworks, and it supports programming languages including Scala, Java, and Python. Data can be accessed from HDFS, Cassandra, HBase, Hive, Tachyon, and any Hadoop data source.
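
As a minimal sketch (not a production job), here is a Spark word count in Python, one of the supported languages, reading a file from HDFS. The HDFS path is hypothetical.

```python
# Minimal PySpark sketch: read a log file from HDFS and aggregate
# word counts in parallel across the cluster. The path is hypothetical.
from pyspark import SparkContext

sc = SparkContext(appName="WordCount")

counts = (
    sc.textFile("hdfs:///data/weblogs/access.log")  # distributed read
      .flatMap(lambda line: line.split())           # one record per word
      .map(lambda word: (word, 1))
      .reduceByKey(lambda a, b: a + b)              # parallel aggregation
)

for word, count in counts.take(10):
    print(word, count)

sc.stop()
```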

With Spark, data sets can be pinned in memory, which noticeably boosts application performance. Spark also speeds up applications that run on disk, and it extends the MapReduce model to support interactive queries and stream processing far more efficiently.
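
Pinning a data set in memory is a one-line call. In this sketch (the HDFS path is again hypothetical), the first action materializes the cache and later actions reuse it:

```python
# Sketch of in-memory caching with Spark. The first action materializes
# the RDD and pins it in memory; later actions reuse the cached copy
# instead of rescanning disk. The HDFS path is hypothetical.
from pyspark import SparkContext, StorageLevel

sc = SparkContext(appName="CacheDemo")

events = sc.textFile("hdfs:///data/events/*.log")
events.persist(StorageLevel.MEMORY_ONLY)  # equivalent to events.cache()

total = events.count()                                  # materializes the cache
errors = events.filter(lambda l: "ERROR" in l).count()  # served from memory

print(total, errors)
sc.stop()
```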

Spark also eliminates the need for separate distributed systems to handle batch applications, interactive queries, iterative algorithms, and streaming. All of these processing types are supported by the same engine, which reduces management chores and makes them easier to combine.
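
To illustrate, the batch word-count logic sketched earlier can be pointed at a live stream with Spark Streaming and run on the same engine. The host, port, and batch interval below are assumptions for illustration:

```python
# Sketch: the batch word-count logic above, reused against a live
# socket stream via Spark Streaming. Only the input source changes;
# the engine and the transformation code stay the same.
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext(appName="UnifiedDemo")
ssc = StreamingContext(sc, batchDuration=5)  # 5-second micro-batches

lines = ssc.socketTextStream("localhost", 9999)  # hypothetical source
counts = (
    lines.flatMap(lambda line: line.split())
         .map(lambda word: (word, 1))
         .reduceByKey(lambda a, b: a + b)
)
counts.pprint()  # print each micro-batch's counts

ssc.start()
ssc.awaitTermination()
```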

Businesses can count on Spark's benefits over the long term. Initially conceived as a project at UC Berkeley, Spark moved to the Apache Software Foundation in 2013 and became a top-level project in 2014. Top-level status, a designation shared by Hadoop, Spark, and httpd, indicates that a project has strong community backing from developers and users and has proved its worth. More than 50 companies currently list themselves on Spark's "Powered By" page.

Putting Data-Driven Intelligence to Work
Big Data encompasses multiple processes: collection, cleansing, integration, management, governance, security, analysis, and decision-making, all of which need to be in place before a company can consider itself data-driven. Oddly, the decision-making process itself tends to get the least attention.

Gaining real ROI from a Big Data project requires more than fast tools; it requires a solid plan for incorporating analysis-driven decision-making into users' workflows. Quickly discovering new insights in data has no benefit if the company lacks a process for responding to that intelligence just as quickly and effectively. When devising (or revising) your Big Data project, build in an implementation process that turns analysis into action.

And finally, a word of warning about real-time analysis: It's easy to lose sight of long-range goals when you're immersed in the moment. Ensure that business goals are aligned with data analysis activities, and establish KPIs to monitor the success of data-driven initiatives. Big Data should provide a company with a sustainable competitive edge.

To explore more of what Spark has to offer, jump over to Getting Started with Apache Spark: From Inception to Production, a free interactive ebook by James A. Scott.

More Stories By Jim Scott

Jim has held positions running Operations, Engineering, Architecture and QA teams in the Consumer Packaged Goods, Digital Advertising, Digital Mapping, Chemical and Pharmaceutical industries. Jim has built systems that handle more than 50 billion transactions per day and his work with high-throughput computing at Dow Chemical was a precursor to more standardized big data concepts like Hadoop.
