Explaining Explainable AI

I had the privilege of speaking to the Precision Agriculture Club at NDSU on Thursday evening. Precision Agriculture is a new major at NDSU and the club is also new. I chose to discuss Explainable AI (XAI), a topic that is both important and challenging in the field of data science.

Before explaining XAI, I should start with my misgivings about the name. A better name would be “explainable machine learning,” since ML is the focus of XAI. But XAI has started to catch on in the industry, especially with significant research programs like this one at DARPA. So I’ll roll with XAI.

I first started learning and thinking about ‘transparent’ ML models early in my data science training initiative. I wanted the model to be something more than a black box between a set of inputs and an output. In most cases, I believe that ML is a tool that will make humans more effective and efficient. A black box limits that possibility. The data-informed expert needs the detail that a transparent model provides.
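
To make that concrete, here’s a minimal sketch in Python (scikit-learn) contrasting the two kinds of model. The crop-yield features and their relationship below are entirely made up for illustration; the point is only that the transparent model’s fitted coefficients can be read and questioned by a domain expert, while the black box just emits predictions.

```python
# Minimal sketch: a black-box model vs. a transparent one on synthetic crop-yield data.
# The feature names and the relationship below are hypothetical, for illustration only.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
features = ["rainfall_mm", "nitrogen_kg_per_ha", "soil_ph"]
X = rng.normal(size=(200, 3))
# Assumed relationship: yield driven mostly by rainfall and nitrogen, plus noise.
y = 3.0 * X[:, 0] + 1.5 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(scale=0.5, size=200)

# Black box: predicts well, but its hundreds of trees are hard to narrate to a farmer.
black_box = GradientBoostingRegressor().fit(X, y)

# Transparent: the fitted coefficients are themselves the explanation.
transparent = LinearRegression().fit(X, y)
for name, coef in zip(features, transparent.coef_):
    print(f"{name}: {coef:+.2f} change in predicted yield per unit increase")
```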

In my Data Ethics class (DS 760), I learned about legal and ethical issues that XAI can help resolve. Black box algorithms can hide issues like racial bias in prison sentencing tools or gender bias in hiring tools. Legal issues abound if a black box algorithm leads to the wrong decision. Who is responsible – the data scientist, the company the data scientist works for, or the user of the algorithm? For example, how can a doctor explain a diagnosis or a treatment plan driven by a black box? What if that diagnosis or treatment plan is wrong? XAI provides the necessary information so that the doctor and patient can decide on the best course of action.
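
As a hedged illustration of how an explainable model can surface bias like the hiring example above (purely synthetic data, not any real tool): when the model is inspectable, a large learned weight on a sensitive attribute is visible right away, whereas a score-only black box would keep it hidden.

```python
# Sketch on synthetic hiring data: an inspectable model surfaces reliance on a
# sensitive attribute. All names and numbers are made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
features = ["years_experience", "skills_score", "gender"]  # gender encoded 0/1
skills = rng.normal(size=(500, 2))
gender = rng.integers(0, 2, size=500)
# Deliberately biased labels: the hiring outcome partly depends on gender.
logits = 1.0 * skills[:, 0] + 0.8 * skills[:, 1] + 1.5 * gender
y = (logits + rng.normal(scale=0.5, size=500)) > 1.0

X = np.column_stack([skills, gender])
model = LogisticRegression().fit(X, y)
for name, weight in zip(features, model.coef_[0]):
    print(f"{name}: learned weight {weight:+.2f}")
# A large weight on "gender" is a red flag that a score-only black box would hide.
```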

To summarize, XAI helps to:

  • Empower the domain expert to make better decisions.
  • Expose bias in algorithms.
  • Clarify decision responsibility when ML is involved.

ML use in precision agriculture faces all three of these challenges. The club meeting on Thursday was kicked off by a very interesting video titled Farm Forward with John Deere. The goal of the video is to visualize what farming might look like in a few years with the support of AI, automation, and connectivity. Examples of XAI show up in the video. The inputs, which are likely algorithm-based, empower the farmer to make better decisions. And it’s pretty clear that the farmer is making the decision. Bias in the algorithm is a bit trickier to consider. Bias usually comes from the data used to train the algorithm. Could an algorithm steer a farmer to a particular brand of seed or herbicide? Perhaps… it would take more than a 4-minute video to explore that.

Hopefully it’s clear that XAI is an important consideration in ML development. It’s also challenging to deliver on. Some algorithms are easier to explain than others, and there’s extensive research going into meeting that requirement. Those are topics for a future post.

Picture details: U of M Landscape Arboretum (Chanhassen), 2/22/2019, Canon PowerShot SD4000 IS, 1/250 s, f/4, ISO 160