KDD – Two Themes

This blog post is authored by Jacob Spoelstra, Director of Data Science in the Information Management & Machine Learning (IMML) team at Microsoft.

The recently concluded KDD conference reaffirmed its status as the premier conference for Data Science, in both theory and practice, as evidenced by the sold-out crowd of over 2,000 that packed the halls of the New York Sheraton. Premier sponsorship by Bloomberg and record-setting attendance (almost double last year's) indicate this remains a white-hot field.

Every year brings a mix of new algorithms and applications. In line with this year's theme of Data Mining for Social Good, two key aspects came to the fore: operationalization and interpretability. From the opening remarks onward, we repeatedly heard the need to get predictive models out of the lab and into real-world systems, where they can drive real actions. Appropriately, the winner of the Best Social Good paper award, “Targeting Direct Cash Transfers to the Extremely Poor” by Brian Abelson, Kush Varshney and Joy Sun, describes applying image recognition to locate villages with extreme poverty in Kenya and Uganda, guiding the deployment of aid and staff.

In his keynote, “Data, Predictions and Decisions in Support of People and Society”, Eric Horvitz challenged the community to build systems that change the world. Deployment of predictive models remains a tough problem: data scientists are familiar with the long and painful process of going from good performance on training data to a production system. It typically involves documenting all the data transformations and model details, then handing them over to engineers to implement. Foster Provost and Tom Fawcett, in their excellent book “Data Science for Business”, remind us that the solution you deliver is not the model your data scientist developed; it is the algorithm your IT department implemented.

Business sponsors want to comprehend the model and understand the drivers of outcomes. The interpretability issue is well known to those who work in regulated industries such as consumer credit. The Fair Credit Reporting Act requires that consumers be given actionable reasons when declined for credit, and credit reports generally come with the reasons behind the score. Explanations are a common requirement in customer-facing scenarios such as credit card fraud prevention (why are you blocking my card?) and online merchant recommendations (why is this being recommended to me?). In his keynote talk, “A Data Driven Approach to Diagnosing and Treating Disease”, Eric Schadt of the Icahn School of Medicine explained that medical professionals need to understand why a model produced a specific diagnosis. Interpretability is often bought at the price of accuracy: one resorts to a relatively simple model, such as linear or logistic regression or a basic decision tree, whose behavior can be understood by examining its parameters and structure. In many cases, higher accuracy can be achieved with more complex non-linear methods such as neural networks or boosted decision trees, but at the cost of comprehensibility. Such models are best understood through their behavior rather than by inspecting a formula.
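
To make that contrast concrete, here is a minimal sketch in Python using scikit-learn; the data and feature names are illustrative assumptions, not from this post. The coefficients of a logistic regression read directly as signed statements about individual inputs, which is exactly the property that credit reason codes rely on.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical credit features: utilization, late payments, years of history.
X = rng.normal(size=(500, 3))
y = (0.8 * X[:, 0] + 1.2 * X[:, 1] - 0.5 * X[:, 2]
     + rng.normal(scale=0.5, size=500)) > 0

model = LogisticRegression().fit(X, y)

# Each coefficient is a direct, signed statement about one input,
# so the model can be explained by inspection.
for name, coef in zip(["utilization", "late_payments", "history_years"],
                      model.coef_[0]):
    print(f"{name:>15}: {coef:+.2f}")
```

A boosted tree or neural network fit to the same data offers no such direct reading; its behavior has to be probed through its predictions, for example with the what-if queries discussed below.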

Interpretability is more than being able to comprehend the relation between model inputs and outputs. A point Eric Horvitz emphasized is the importance of translating analytical results into business terms: simulate the system, expose the costs and the assumptions about the efficacy of treatments, and present the true net benefit under realistic business scenarios.

Both themes play to the strengths of the product we just launched, Azure ML. Deploying a model to a cloud-hosted web service takes just a few clicks, and from there it integrates easily into production systems where real-life decisions are made. Easy deployment also aids interpretability, in the sense that a deployed model can be queried as part of a what-if simulation.
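
To show how little glue code such a query takes, here is a minimal Python sketch of calling a published Azure ML request-response service. The URL, API key, and column names below are placeholders, and the payload shape is representative; the exact request schema for a given service appears on its API help page.

```python
import requests

URL = "https://<your-region>.services.azureml.net/.../score"  # placeholder
API_KEY = "<your-api-key>"                                    # placeholder

# Representative request-response payload: one row of named input columns.
payload = {
    "Inputs": {
        "input1": {
            "ColumnNames": ["age", "income", "inflation_rate"],
            "Values": [[42, 55000, 0.021]],
        }
    },
    "GlobalParameters": {},
}

resp = requests.post(URL, json=payload,
                     headers={"Authorization": f"Bearer {API_KEY}"})
resp.raise_for_status()
print(resp.json())  # scored labels / probabilities returned as JSON
```

Because the deployed model is just an authenticated HTTP endpoint, the same call can be issued from Excel, a web application, or a simulation loop.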

Historically this has been hard to accomplish, because translating a lab model into a production system that can score new data was complex and time-consuming, and was typically undertaken only once the model had been finalized. With deployment this cheap, we can instead observe the behavior of the system directly by manipulating the input data in interesting ways.

As a proof of concept, we developed an Excel plug-in that can call a published request-response service, using data in Excel tables as input. This let us use Excel's GUI tools both to manipulate data and to graph results. Here are two examples:

  1. Direct “what if” scenarios: Using GUI controls, a user can manipulate inputs to define a specific case and observe the outcome. This can be used to explore the effect of perturbations around a specific case: e.g., what would the prediction be if the inflation rate were 1% higher? (See the sketch after this list.)

  2. Monte Carlo simulation: The user specifies ranges (probability distributions) for the inputs; the system then samples possible scenarios, scores them, and plots the distribution of outcomes. This is useful for estimating best-case, worst-case, and most likely outcomes.
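
Here is a minimal Python sketch of both styles of exploration. The `score` function is a local stand-in for a call to the published web service (as in the earlier snippet), and all inputs and distributions are illustrative assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt

def score(inflation, unemployment):
    # Stand-in for a call to the published scoring service.
    return 100 - 8.0 * inflation - 3.5 * unemployment

# 1. Direct what-if: perturb one input around a base case.
base_inflation, base_unemployment = 2.0, 5.0
for bump in (0.0, 0.5, 1.0):
    result = score(base_inflation + bump, base_unemployment)
    print(f"inflation +{bump}%: {result:.1f}")

# 2. Monte Carlo: sample inputs from assumed distributions and
#    examine the distribution of predicted outcomes.
rng = np.random.default_rng(42)
inflation = rng.normal(2.0, 0.5, size=10_000)
unemployment = rng.normal(5.0, 1.0, size=10_000)
outcomes = score(inflation, unemployment)

print("5th/50th/95th percentiles:", np.percentile(outcomes, [5, 50, 95]))
plt.hist(outcomes, bins=50)
plt.xlabel("predicted outcome")
plt.show()
```

The percentiles and the histogram give exactly the best-case, worst-case, and most likely outcomes that the second example asks for, expressed in the business terms Horvitz called for.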

As data scientists, we have our work cut out for us in getting our models integrated into applications. While new tools do lower the technology barriers, consumers and business owners still need systems they can trust and relate to. For a walk-through of how to build, score and evaluate a predictive model in Azure ML, you can get started by watching this step-by-step video.

Jacob Spoelstra