Thoughts on IBM’s Trust and Transparency Capabilities for AI
by Adrian Bowles
In his recent blog post, It’s Time to Start Breaking Open the Black Box of AI (9/19/2018), Ruchir Puri, Chief Architect of IBM Watson and IBM Fellow, asserts:
“It’s time to start breaking open the black box of AI to give organizations confidence in their ability to manage those systems and explain how decisions are being made.”
Trust, Transparency, and Cloud
Puri’s post introduces IBM’s new trust and transparency capabilities for AI, built on algorithms from IBM Research and available as services in the IBM Cloud. The capabilities include a business-oriented dashboard to help explain AI-powered recommendations or decisions, along with tools to mitigate bias early, in the data collection and management phases.
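To make the bias-mitigation idea concrete, here is a minimal, generic sketch of one widely used fairness check, the "four-fifths rule" disparate-impact ratio. This is purely illustrative and is not IBM's API; the function name and toy data are invented for the example.

```python
# Generic disparate-impact check (illustrative only, not IBM's service).
# Compares favorable-outcome rates between an unprivileged and a
# privileged group; a ratio below ~0.8 is a common flag for bias.

def disparate_impact(outcomes, groups, privileged):
    """Ratio of favorable-outcome rates: unprivileged / privileged."""
    priv = [o for o, g in zip(outcomes, groups) if g == privileged]
    unpriv = [o for o, g in zip(outcomes, groups) if g != privileged]
    rate = lambda xs: sum(xs) / len(xs)
    return rate(unpriv) / rate(priv)

# Toy loan-approval data (1 = approved)
outcomes = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(round(disparate_impact(outcomes, groups, privileged="A"), 2))
```

Running such a check during data collection, before a model is trained, is one way a team can surface skew early rather than discovering it in production.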
Improving visibility, reducing bias, and preserving audit data without placing undue burdens on scarce AI and data science resources are becoming critical needs. Complex solutions, powered by opaque deep learning systems, are increasingly being called upon to augment human decision-making in critical tasks from healthcare to defense.
IBM’s trust and transparency capabilities can provide this level of explainability because individual transactions are logged throughout an AI model’s operational life, not just during its development and testing. The “explainable AI” problem has been a barrier to deep learning adoption in some regulated industries, and in many applications where trust has historically rested on demonstrated human expertise.
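The logging idea described above can be sketched in a few lines: record every scoring transaction (inputs, decision, timestamp) as it happens, so each individual decision can be audited and explained later. This is a minimal illustration of the concept only, with an invented stand-in model; it does not reflect IBM's actual service interface.

```python
# Illustrative per-transaction audit logging (not IBM's API).
# Every call to score_and_log leaves a timestamped record of the
# features seen and the decision made, supporting later audits.
import time

audit_log = []

def score_and_log(model, features):
    decision = model(features)
    audit_log.append({
        "ts": time.time(),       # when the decision was made
        "features": features,    # exact inputs to the model
        "decision": decision,    # what the model decided
    })
    return decision

# Hypothetical stand-in model: approve if income is more than twice debt
toy_model = lambda f: "approve" if f["income"] > 2 * f["debt"] else "deny"

score_and_log(toy_model, {"income": 90_000, "debt": 30_000})
score_and_log(toy_model, {"income": 40_000, "debt": 35_000})
print(len(audit_log))  # one record per decision
```

The point of logging at the transaction level, rather than only during model validation, is that an auditor can later reconstruct exactly what the model saw and decided for any single case in production.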
The new capabilities are a welcome addition to the field and should be evaluated by anyone struggling to deploy deep learning in such applications. They should also be a boon to IBM as a cloud platform provider.