Building trustworthy AI webinar

When I was first asked to do a webinar for Data Science DoJo, several topics came to mind. The topic I kept coming back to was building trust in AI solutions. Trust aligns very well with my point of view on business relationships and AI capabilities, and it’s an underappreciated aspect of building AI solutions. I will follow up this post with links and context for the resources used in the webinar; here I want to provide some background on the webinar itself.

A fundamental goal of AI solutions is to amplify human intelligence. Domain knowledge (knowledge of a specific, specialized discipline or field) can and should be leveraged in decision making. We have a long way to go before the capacity and versatility of the human brain can be simulated by an AI solution, so replacing the knowledge of domain experts is not imminent. For most scenarios today, an AI solution excels at targeted tasks. That can be very useful: an AI solution can either a) eliminate the mundane portions of the domain expert’s job so more time can be spent adding business value, or b) provide additional information that improves decision making. When properly built and integrated into a domain expert’s workflow, an AI solution enables a “better together” situation.

I wrote more about this topic in The rise of the data informed expert – Lake Data Insights. It takes some personal investment to transform from a domain expert to a data informed expert, most notably increasing data fluency and being open to the value that analytics can provide. But we also have to build solutions that earn and maintain the trust of the domain expert. We can’t simply build a black box model and expect an expert in his/her domain to just trust it. We have to build solutions that have trustworthiness as an intentional goal.

When I first envisioned the webinar, I anticipated that it would be more or less an introduction to trustworthy solutions, followed by a deeper dive into model interpretability / explainability (Explaining Explainable AI – Lake Data Insights). I figured that’s where the meat of the discussion would lie. As I did more research into building trust, I realized that model interpretability is an important part of the solution but by no means sufficient for building trust. I was pointed to an interesting research paper that concluded that a confidence indicator was more valuable than explainability when interpreting a prediction. This result resonates with a personal project of mine (see the webinar for details). As the content for the webinar evolved, the relative importance of model interpretability diminished while the fundamental concept of trust took center stage.
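
To make the notion of a confidence indicator concrete, here is a minimal sketch (my own illustration, not taken from the webinar or the paper) of surfacing a prediction together with the model’s confidence in it, assuming a scikit-learn style classifier that exposes predict_proba:

```python
# Illustrative sketch only: show each prediction with the model's confidence,
# using a scikit-learn classifier on a built-in dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Report the predicted class alongside the probability the model assigns to it,
# so the reader sees how sure the model is, not just the label.
probabilities = model.predict_proba(X_test[:5])
for probs in probabilities:
    label = probs.argmax()
    confidence = probs[label]
    print(f"predicted class {label} with confidence {confidence:.2f}")
```

Presenting the confidence alongside the label gives the domain expert a cue about when to accept a prediction and when to scrutinize it more closely.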

Put simply, a solution that isn’t trustworthy won’t get used. And trust is a fickle thing that takes a long time to grow and a short time to lose. It’s essential to factor trustworthiness into your AI solution. Check out the webinar and look for an upcoming post about resources to help you achieve AI trust.

Picture details: 8/13/2020, Canon PowerShot G3 X, f/5, 1/20 s, ISO-1600