Data Science: Identifying Variables That Might Be Better Predictors

I love the simplicity of the data science concepts as taught by the book “Moneyball.” Everyone wants to jump right into the real meaty, highly technical data science books, but I recommend that my students start with “Moneyball.” The book does a great job of making the power of data science come to life (the movie doesn’t count, as my wife saw it and “Brad Pitt is so cute!” was her only takeaway…ugh). One of my favorite lessons from the book is this definition of data science:

Data Science is about identifying those variables and metrics that might be better predictors of performance

This straightforward definition sets the stage for defining the roles and responsibilities of the business stakeholders and the data science team:

  • Business stakeholders are responsible for identifying (brainstorming) those variables and metrics that might be better predictors of performance, and
  • The Data Science team is responsible for quantifying which variables and metrics actually are better predictors of performance

This approach takes advantage of what the business stakeholders know best – the business – and of what the data science team knows best – data transformation, data enrichment, data exploration and analytic modeling. The perfect data science team!
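To make the “quantifying” half of that division of labor concrete, here is a minimal sketch of how a data science team might rank brainstormed metrics by univariate predictive power. All customer data and metric names below are invented for illustration; AUC is used as a direction-agnostic measure of how well each metric alone separates attritors from non-attritors:

```python
def auc(scores, labels):
    """Area under the ROC curve via rank statistics: the probability that a
    randomly chosen attritor (label 1) scores higher than a randomly chosen
    non-attritor (label 0)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical brainstormed metrics for six customers;
# "attrited" = 1 means the customer eventually left.
customers = [
    {"logins_per_month": 2,  "support_calls": 5, "tenure_years": 1, "attrited": 1},
    {"logins_per_month": 12, "support_calls": 0, "tenure_years": 7, "attrited": 0},
    {"logins_per_month": 1,  "support_calls": 4, "tenure_years": 2, "attrited": 1},
    {"logins_per_month": 9,  "support_calls": 2, "tenure_years": 5, "attrited": 0},
    {"logins_per_month": 3,  "support_calls": 1, "tenure_years": 1, "attrited": 1},
    {"logins_per_month": 8,  "support_calls": 1, "tenure_years": 6, "attrited": 0},
]

labels = [c["attrited"] for c in customers]
metrics = ["logins_per_month", "support_calls", "tenure_years"]

# Score each candidate metric on its own: 0.5 = no signal, 1.0 = perfect.
# max(a, 1 - a) makes the ranking indifferent to direction (a metric where
# LOW values predict attrition is just as useful as one where HIGH values do).
ranking = []
for m in metrics:
    a = auc([c[m] for c in customers], labels)
    ranking.append((m, max(a, 1 - a)))

ranking.sort(key=lambda t: -t[1])
for name, score in ranking:
    print(f"{name}: {score:.2f}")
```

On a real engagement this univariate screen is only a first pass – it tells the team which brainstormed metrics carry any signal before investing in combinations and real models.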

Note: the word “might” is probably the most important word in this definition of data science. Business stakeholders must feel comfortable brainstorming different variables and metrics that might be better predictors of performance without feeling that their ideas will be judged. Some of our best ideas come from people whose voices typically don’t get heard. Our Big Data Vision Workshop process considers all ideas worthy of consideration. If you do not embrace that concept, you risk constraining the creative thinking of the business stakeholders or, worse, missing out on surfacing potentially valuable data insights.

This blog expands on the approach the data science team uses to identify (and quantify) which variables and metrics are better predictors of performance. Let me walk through an example.

We recently had an engagement with a financial services organization where we were asked to predict customer attrition; that is, to identify which customers were at risk of ending their relationship with the organization. As we typically do in a Big Data Vision Workshop, we held facilitated brainstorming sessions with the business stakeholders to identify those variables and metrics that might be better predictors of performance (see Figure 1).

Figure 1: Brainstorming the Variables and Metrics that Might Be Better Predictors

Note: I had to blur the exact metrics we identified, for reasons of client competitive advantage. Yea, I like that!

From this list of variables and metrics, the data science team sought to create an “Attrition Score” that could be used to identify (or score) at-risk customers. The data science team embraced the iterative, “fail fast / learn faster” process, testing different data enrichment and transformation techniques and different analytic algorithms with different combinations of the variables and metrics to see which combinations yielded the best results (see Figure 2).

Figure 2: Exploring Different Combinations of Variables and Metrics
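Under the hood, that “fail fast / learn faster” loop can be as simple as an exhaustive search over metric combinations. Here is a hedged sketch – toy data, made-up metric names, and a deliberately simple composite z-score standing in for the real enrichment and modeling work:

```python
import itertools
import statistics

def auc(scores, labels):
    """Probability that a random attritor outscores a random non-attritor."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def zscores(values):
    """Standardize a metric column so columns are comparable."""
    mu, sd = statistics.mean(values), statistics.pstdev(values) or 1.0
    return [(v - mu) / sd for v in values]

# Hypothetical metric columns for six customers; label 1 = attrited.
data = {
    "logins_per_month": [2, 12, 1, 9, 3, 8],
    "support_calls":    [5, 0, 4, 2, 1, 1],
    "tenure_years":     [1, 7, 2, 5, 1, 6],
}
labels = [1, 0, 1, 0, 1, 0]

best_auc, best_combo = 0.0, None
for r in range(1, len(data) + 1):
    for combo in itertools.combinations(data, r):
        # Orient each metric so higher means riskier (flip if its
        # univariate AUC is below 0.5), then sum the z-scores as a
        # stand-in "model" for this combination of variables.
        cols = []
        for m in combo:
            z = zscores(data[m])
            if auc(z, labels) < 0.5:
                z = [-v for v in z]
            cols.append(z)
        composite = [sum(vals) for vals in zip(*cols)]
        score = auc(composite, labels)
        if score > best_auc:
            best_auc, best_combo = score, combo

print(best_combo, round(best_auc, 3))
```

On real engagements the “model” would be an actual algorithm (logistic regression, gradient boosting, and so on) with proper train/test splits; the point here is the shape of the search – test a combination, score it, fail, move on – not the modeling choice.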

The challenge for the data science team is to not settle on the first model that “works.” The data science team needs to constantly push the envelope and, as a result, fail enough times in testing different combinations of variables to feel personally confident in the results of the final model.

After much testing and failing – and testing and failing – and testing and failing, the data science team came up with an “Attrition Score” model that had failed enough times for them to feel confident about its results (see Figure 3).

Figure 3: Identifying Variables and Metrics that ARE Better Predictors
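Whatever model wins, the deliverable the business sees is the score itself. A minimal sketch of that last step – customer IDs, raw model outputs, and the 70-point threshold below are all invented – is to rescale the model’s raw risk values to a 0–100 “Attrition Score” and flag the customers the retention team should contact first:

```python
# Raw risk values from the winning model (hypothetical outputs).
raw_risk = {"C001": 2.1, "C002": -0.7, "C003": 1.4, "C004": -1.9, "C005": 0.3}

# Min-max rescale to a 0-100 score the business can rank and act on.
lo, hi = min(raw_risk.values()), max(raw_risk.values())
attrition_score = {cid: round(100 * (v - lo) / (hi - lo))
                   for cid, v in raw_risk.items()}

# Flag customers above an assumed retention-team capacity threshold,
# highest-risk first.
THRESHOLD = 70
at_risk = sorted((cid for cid, s in attrition_score.items() if s >= THRESHOLD),
                 key=lambda cid: -attrition_score[cid])
print(at_risk)
```

The rescaling is purely cosmetic – it changes nothing about the model’s ranking – but a 0–100 score is something business stakeholders can immediately read, compare and build retention plays around.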

We needed an approach that got the best out of everyone on the project – the business stakeholders brainstorming variables and metrics, and the data science team creatively testing different combinations. The final results in this engagement were quite impressive (see Figure 4):

Figure 4: Final Attrition Model Results

The creative data science process of combining different variables and metrics is highly dependent upon the success of the business stakeholder brainstorming exercises. If the business stakeholders are not brought into this process early and allowed to think creatively about what variables and metrics might be better predictors of performance, then the collection of variables and metrics that the data science team will seek to test will be limited. Put another way, the success of the data science process and the creation of the actionable score is highly dependent upon the creative involvement of the business stakeholders at the beginning of the process.

And that’s the power of our Big Data Vision Workshop process.

——————–

If you are interested in learning more about the Dell EMC Big Data Vision Workshop, check out the blogs below:

The post Data Science: Identifying Variables That Might Be Better Predictors appeared first on InFocus Blog | Dell EMC Services.

More Stories By William Schmarzo

Bill Schmarzo, author of “Big Data: Understanding How Data Powers Big Business” and “Big Data MBA: Driving Business Strategies with Data Science”, is responsible for setting strategy and defining the Big Data service offerings for Hitachi Vantara as CTO, IoT and Analytics.

Previously, as a CTO within Dell EMC’s 2,000+ person consulting organization, he worked with organizations to identify where and how to start their big data journeys. He has written white papers, is an avid blogger and is a frequent speaker on the use of Big Data and data science to power an organization’s key business initiatives. He is a University of San Francisco School of Management (SOM) Executive Fellow, where he teaches the “Big Data MBA” course. Bill also recently completed a research paper on “Determining The Economic Value of Data.” Onalytica recently ranked Bill as the #4 Big Data influencer worldwide.

Bill has over three decades of experience in data warehousing, BI and analytics. Bill authored the Vision Workshop methodology that links an organization’s strategic business initiatives with their supporting data and analytic requirements. Bill serves on the City of San Jose’s Technology Innovation Board, and on the faculties of The Data Warehouse Institute and Strata.

Previously, Bill was vice president of Analytics at Yahoo where he was responsible for the development of Yahoo’s Advertiser and Website analytics products, including the delivery of “actionable insights” through a holistic user experience. Before that, Bill oversaw the Analytic Applications business unit at Business Objects, including the development, marketing and sales of their industry-defining analytic applications.

Bill holds a Master of Business Administration from the University of Iowa and a Bachelor of Science degree in Mathematics, Computer Science and Business Administration from Coe College.
