
Algorithmic Accountability: The Need for Explanations


By Tom Slee

If a business uses machine learning software to assist with hiring and promotion decisions, “explanation” can quickly become important. Sometimes, an individual may want an explanation of a specific decision:

Manager: I’m sorry, we decided to give the new job to Denise.

Alice: Why? What qualifications does she have that I don’t?

Manager: Our software said she’s a better fit. We ran you both through the matching algorithm and she scored significantly higher.

Sometimes, the explanation required may be more about a group: if a firm consistently hires from one demographic group and leaves another out in the cold, the frozen-out group may demand an explanation.

But what kind of explanation are we talking about? “Because the algorithm said so” is an explanation, just not a very good one. One influential definition comes from the EU General Data Protection Regulation (GDPR): explanation is “meaningful information about the logic involved” in automated decision-making.

The concern over explanations has grown in recent years because the latest generation of neural network Deep Learning algorithms (mainly used on unstructured data such as images and text) is particularly opaque. Deep Learning has captured the industry’s imagination because of dramatic achievements such as AlphaGo defeating the world’s leading Go player, but that doesn’t make it the best technique for every problem. When data are structured (as so much business data is) and when data sets are not massive (the ImageNet data set that has played a prominent role in some Deep Learning achievements consists of over 14 million images, far larger than most enterprise data sets), classic machine learning is often the better choice: its models typically have a number of variables better suited to the problem and better-understood behavior.


What we mean by explanation

Some observers claim that the complexity of Deep Learning algorithms makes them inherently opaque. Here is The New Yorker, from a 2016 portrait of Sam Altman, head of early-stage investor Y Combinator.

Y Combinator has even begun using an A.I. bot, Hal9000, to help it sift admission applications: the bot’s neural net trains itself by assessing previous applications and those companies’ outcomes. “What’s it looking for?” I asked Altman. “I have no idea,” he replied. “That’s the unsettling thing about neural networks—you have no idea what they’re doing, and they can’t tell you.”

Fortunately, there is good reason to believe Altman is too pessimistic about neural network explanations. It is helpful to start by thinking a bit more about what we mean by “explanation”. In a paper called “Slave to the Algorithm”, legal scholars Lilian Edwards and Michael Veale distinguish two flavors of explanation.

  • Model-centric explanations seek to justify the suitability of the algorithm for the decision, and don’t address any one specific complaint. They may include the statistical techniques used, the data set used in training, and performance metrics for the model: a general or global defense of the claim that the model is a suitable one to use for the task at hand. In general, model-centric explanations are “white-box” explanations that reveal the internal workings of an algorithm: they make the algorithm transparent.
  • Subject-centric explanations are local explanations of how a specific individual is treated by the algorithm. For example, a sensitivity-based explanation identifies the key variables that made your decision turn out the way it did, a case-based explanation says which other people you have been grouped with, a demographic explanation lists characteristics of individuals who have received similar treatment, and a performance-based explanation focuses on confidence and the success rate of the model for people like you. Subject-centric explanations are “black-box” explanations in that they don’t concern the details of the algorithm, just its results (a minimal sketch of a sensitivity-based explanation follows this list).
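
To make the subject-centric idea concrete, here is a minimal sketch of a sensitivity-based explanation. Everything in it is a hypothetical stand-in: the feature names, the training data, and the small gradient-boosted “matching model” merely play the part of a firm’s black-box scorer, and the only thing the probe relies on is a predict_proba-style scoring interface.

    # A minimal sketch of a sensitivity-based, subject-centric explanation.
    # The feature names, data, and model are hypothetical stand-ins for a
    # firm's black-box scoring model.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(0)
    feature_names = ["years_experience", "certifications", "interview_score", "projects_led"]

    # Stand-in training data and "matching model".
    X = rng.normal(size=(500, 4))
    y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)
    model = GradientBoostingClassifier().fit(X, y)

    def sensitivity_explanation(model, x, names, delta=0.5):
        """For one individual x, nudge each feature on its own and report how
        the model's score moves; the largest movers are the variables that
        made the decision turn out the way it did."""
        base = model.predict_proba(x.reshape(1, -1))[0, 1]
        effects = {}
        for i, name in enumerate(names):
            x_nudged = x.copy()
            x_nudged[i] += delta
            effects[name] = model.predict_proba(x_nudged.reshape(1, -1))[0, 1] - base
        return base, effects

    alice = X[0]  # one hypothetical applicant
    base, effects = sensitivity_explanation(model, alice, feature_names)
    print(f"Alice's score: {base:.2f}")
    for name, change in sorted(effects.items(), key=lambda kv: -abs(kv[1])):
        print(f"  {name}: {change:+.2f}")

Note that a probe of this kind answers Alice’s question (“what would have made a difference for me?”) from the outside, without revealing anything about how the model works internally.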

Edwards and Veale emphasize that much of the discussion around algorithmic accountability focuses on model-centric explanations (transparency), but the best explanations of complex systems are often “exploratory” subject-centric explanations.


Implementing explanations

Researchers are working on how to implement explanation techniques. Here are a few thumbnail sketches of recent ideas.

  • “Attentive Explanations: Justifying Decisions and Pointing to the Evidence” investigates text explanations for a Deep Learning algorithm that answers questions about images. For example, presented with an image, the algorithm seeks to answer “What sport is this?” The explanation algorithm then provides a justification for the decision, such as “[because] The player is swinging a bat.”
  • “Why Should I Trust You?” Explaining the Predictions of Any Classifier presents a subject-centric method that “explains the predictions of any classifier in an interpretable and faithful manner.” The basic idea is to build a local (subject-centric) model that is much less complex than the complete algorithm but does explain the results around an individual observation, and then to repeat this for many observations to “explain” the underlying algorithm over a broader range of data (a rough sketch of this idea follows this list).
  • FairML, based on “Iterative Orthogonal Feature Projection for Diagnosing Bias in Black-Box Models”, also takes the approach of probing an algorithm by removing variables one at a time from the input, in such a way that the other variables remain unchanged. In this case the researchers are interested in testing for bias, particularly around protected categories such as gender: if the outcomes are different when the target variable is removed in this manner, that raises a flag about possible bias in the algorithm.
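
The local-surrogate idea in the second paper can also be sketched in a few lines. The version below makes simplifying assumptions that the authors’ actual LIME implementation does not: purely numeric tabular features, Gaussian perturbations around the instance, and a weighted ridge regression as the interpretable local model. As before, the black box being explained is a made-up stand-in.

    # A rough sketch of the local-surrogate ("LIME") idea, under simplifying
    # assumptions: numeric tabular features, Gaussian perturbations, and a
    # weighted ridge regression as the simple local model. The black box
    # being explained is a hypothetical stand-in.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(1)
    feature_names = ["years_experience", "certifications", "interview_score", "projects_led"]
    X = rng.normal(size=(500, 4))
    y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)
    model = GradientBoostingClassifier().fit(X, y)        # the "black box"

    def local_surrogate(model, x, names, n_samples=2000, scale=0.5, kernel_width=1.0):
        """Fit a simple weighted linear model that mimics the black box in the
        neighbourhood of one observation x; its coefficients are the explanation."""
        # 1. Perturb the instance to generate a neighbourhood of samples.
        Z = x + rng.normal(scale=scale, size=(n_samples, x.size))
        # 2. Ask the black box what it predicts for each perturbed sample.
        p = model.predict_proba(Z)[:, 1]
        # 3. Weight samples by proximity to x, so the surrogate stays local.
        dist = np.linalg.norm(Z - x, axis=1)
        w = np.exp(-(dist ** 2) / kernel_width ** 2)
        # 4. Fit the interpretable surrogate to the black box's local behaviour.
        surrogate = Ridge(alpha=1.0).fit(Z, p, sample_weight=w)
        return dict(zip(names, surrogate.coef_))

    alice = X[0]
    print(local_surrogate(model, alice, feature_names))

Repeating the procedure for many observations, as the paper proposes, builds up a picture of the underlying algorithm over a broader range of data.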

These methods of probing models with modified inputs are now being used not only to provide explanations of machine learning algorithms and to test for bias, but also to improve the algorithms themselves. This interplay, where concerns from legal studies or social sciences prompt the development of techniques that can then be used to improve algorithms, is reason for optimism for the future, and shows the benefits of a constructive, if at times uncomfortable, dialog between professionals in different fields.
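
As a concrete, and deliberately naive, illustration of this kind of probing, the sketch below “removes” one input variable at a time by replacing it with its mean and measures how much the model’s scores shift; a noticeable shift on a protected attribute such as gender is the kind of signal that would prompt further investigation. The data, feature names, and model are again hypothetical, and the sketch omits the iterative orthogonal feature projection that FairML itself uses to cope with correlated features.

    # A deliberately naive sketch of probing a black-box model for bias by
    # neutralizing one input variable at a time. Unlike FairML proper, it
    # omits the iterative orthogonal feature projection that accounts for
    # correlations between features. All names and data are hypothetical.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    feature_names = ["years_experience", "interview_score", "gender"]
    X = rng.normal(size=(1000, 3))
    X[:, 2] = (X[:, 2] > 0).astype(float)                # binary protected attribute
    y = (X[:, 0] + 0.8 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 0).astype(int)
    model = LogisticRegression().fit(X, y)               # the model under audit

    baseline = model.predict_proba(X)[:, 1]
    for i, name in enumerate(feature_names):
        X_probe = X.copy()
        X_probe[:, i] = X[:, i].mean()                   # "remove" this variable
        shift = np.abs(model.predict_proba(X_probe)[:, 1] - baseline).mean()
        # A large shift on a protected attribute is a flag worth investigating.
        print(f"{name}: mean score shift when removed = {shift:.3f}")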

    About the author

    Tom Slee, Ph.D.

    Senior Product Manager, SAP HANA

    Tom Slee is a senior product manager for the SAP HANA in-memory database system, where he specializes in programming language interfaces and UI tools. The product management team helps to set priorities for HANA and communicate product capabilities and directions to customers.