Conversational AI and Explainability
by Adrian Bowles, PhD
In his seminal paper* on ELIZA (an early conversational system that mimicked the behavior of a Rogerian therapist), Joseph Weizenbaum noted that “…to explain is to explain away.” His point was that once the algorithms and models behind an opaque AI or heuristic-programming system (what he called “the inner workings”) are revealed, the mystery is gone, and once understood, the system is no longer considered intelligent.
Weizenbaum was right, but a lot has changed since 1966, and it is time to get serious about systems that can explain themselves.
Our Definition of Explainability
A system is explainable if it can demonstrate, to the satisfaction of a user or an independent authority, that the process used to arrive at a conclusion is both valid and verifiable. The process (the algorithm or algorithms) should return the right results (validation), and the authority should be able to determine that the process meets its specifications (verification).
Do you remember when your math teacher told you to show your work? That’s explainability. If you appear to get the right answer but can’t explain the process, the independent authority (teacher) has no confidence that your process will always return the right answer.
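To pin down the two terms, here is a minimal sketch of my own (the toy process, the known answers, and the “specification” are all invented for illustration): validation asks whether the process returns the right results, while verification asks whether the process satisfies its specification.

```python
# A toy illustration (not from the article) of the validation/verification
# split. The process, test answers, and specification are all hypothetical.

def solve(x):
    """The 'process' under review: a trivial rule that doubles its input."""
    return 2 * x

def validate(process, known_answers):
    """Validation: does the process return the right results?"""
    return all(process(x) == expected for x, expected in known_answers)

def verify(process, specification):
    """Verification: does the process satisfy its specification
    (here, a property checked over a range of sample inputs)?"""
    return all(specification(process, x) for x in range(-10, 11))

if __name__ == "__main__":
    answers = [(1, 2), (3, 6), (10, 20)]          # cases with known results
    spec = lambda p, x: p(x) == p(x - 1) + 2      # spec: each step adds two
    print("validated:", validate(solve, answers))  # True
    print("verified:", verify(solve, spec))        # True
```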
It’s important to note that many AI applications deal with uncertainty and non-determinism. There may be no single “right” answer, only a set of possible answers with varying levels of confidence, so for practical purposes it is important to be able to explain conclusions when they appear to conflict with expert opinions. In medical diagnostics, for example, if an AI-powered system identifies a novel treatment plan or asks for the results of a new test, physicians will need to see the process (the reasoning) behind the conclusion or request.
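As a rough illustration of what a ranked set of conclusions might look like (the candidate labels, scores, and rationales below are entirely hypothetical), a system could return each candidate with a confidence level and a recorded rationale rather than a single answer:

```python
# A minimal sketch (illustrative only) of returning ranked candidate
# conclusions with confidence scores and a recorded rationale for each,
# instead of a single "right" answer.

from dataclasses import dataclass

@dataclass
class Candidate:
    label: str          # e.g., a candidate diagnosis or treatment plan
    confidence: float   # system's confidence in [0, 1]
    rationale: str      # human-readable reason recorded with the conclusion

def rank_candidates(candidates):
    """Return candidates sorted by confidence, highest first."""
    return sorted(candidates, key=lambda c: c.confidence, reverse=True)

if __name__ == "__main__":
    results = rank_candidates([
        Candidate("Plan A", 0.62, "closest match to prior cases"),
        Candidate("Plan B", 0.27, "partial match; one test result missing"),
        Candidate("Plan C", 0.11, "weak correlation with patient history"),
    ])
    for c in results:
        print(f"{c.label}: {c.confidence:.0%} confidence ({c.rationale})")
```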
What Has Changed?
When most AI systems were built using rules and symbolic logic, explainability was a matter of being able to demonstrate which rules were fired, or triggered, by the data. The problem today comes with the use of purely statistical or sub-symbolic methods. Systems that use modern machine learning, particularly deep learning with many layers, may change their behavior based on feedback or anomalies in the data, and those changes will not necessarily be captured in a way that documents the new process.
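For contrast, here is a toy sketch of that older, symbolic style (my own illustration; the rules and facts are invented): the explanation is simply the trace of rules that fired for a given set of inputs.

```python
# A toy rule engine (illustrative only) showing the symbolic style of
# explainability: the explanation is the list of rules that fired.

RULES = [
    ("fever_rule", lambda facts: facts.get("temperature_c", 0) > 38.0,
     "temperature above 38 C suggests fever"),
    ("cough_rule", lambda facts: facts.get("cough", False),
     "patient reports a cough"),
]

def evaluate(facts):
    """Apply each rule to the facts and record every rule that fires."""
    fired = []
    for name, condition, explanation in RULES:
        if condition(facts):
            fired.append((name, explanation))
    return fired

if __name__ == "__main__":
    trace = evaluate({"temperature_c": 38.6, "cough": True})
    for name, explanation in trace:
        print(f"{name} fired: {explanation}")
```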
Today, we are building systems that can’t explain their processes in sufficient detail to satisfy authorities. There are several projects underway to define specific steps developers should follow to ensure explainability (see DARPA’s Explainable AI (XAI) research program), but it will be years before such an approach becomes mainstream, unless it can be automated without adding too much overhead to the development process or the resulting solutions.
Explainability today is about improving confidence in systems, but in the future, it will be about quantifiable assurance of process quality (similar to an ISO 9000 manufacturing certification).
*ELIZA – A Computer Program for the Study of Natural Language Communication Between Man and Machine, Communications of the ACM, Volume 9, Number 1, January 1966.