Deep Learning Euphoria: Give Me a Break
A treatise on the dangers of popular press coverage of AI, reduced to blog-size
by Adrian Bowles
Yes, Deep Learning Is Amazing
Deep learning, a subset of machine learning, is an undeniably powerful set of techniques for analyzing large data sets to identify or detect patterns. It works by starting from low-abstraction observable data (e.g., pixels in an image file) and successively extracting more abstract features (e.g., edges, then shapes, then classes of shapes) until the system outputs the object or pattern of interest (e.g., a cat, or a network threat).
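To make that pixels-to-edges-to-shapes-to-classes progression concrete, here is a minimal sketch, assuming PyTorch is available; the layer sizes and the two-class output are illustrative, not a recommended architecture.

    import torch
    import torch.nn as nn

    # Each stage extracts successively more abstract features from raw pixels.
    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1),   # pixels -> edge-like features
        nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1),  # edges -> shape-like features
        nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(32 * 8 * 8, 2),  # shapes -> classes (e.g., "cat" vs. "not cat")
    )

    image = torch.randn(1, 3, 32, 32)  # stand-in for one 32x32 RGB image
    logits = model(image)
    print(logits.shape)                # torch.Size([1, 2])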
Today, with refined techniques, readily available parallel processing power, and large data sets, deep learning is successfully tackling problems ranging from natural language translation to facial recognition with sentiment analysis.
But Deep Learning Has Limits
In a recent research paper, "Relational inductive biases, deep learning, and graph networks," researchers from DeepMind, Google Brain, MIT, and the University of Edinburgh explore the limits of deep learning's ability to generalize beyond a system's direct experience, a defining characteristic of human intelligence. Humans, and AI systems built on symbolic logic and abstract knowledge representations that support reasoning (inductive, deductive, and abductive), work in ways that complement the power of statistical (sub-symbolic) deep learning.
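To see how different the symbolic style is, consider this toy deductive reasoner, a minimal sketch in plain Python with entirely hypothetical facts and rules. The conclusions follow from explicit knowledge by forward chaining, with no training data involved.

    # Known facts and if-then rules (a hypothetical knowledge base).
    facts = {"has_fur", "says_meow"}
    rules = [
        ({"has_fur", "says_meow"}, "is_cat"),  # fur + meowing => cat
        ({"is_cat"}, "is_mammal"),             # cat => mammal
    ]

    # Forward chaining: apply rules until no new facts can be derived.
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(facts)  # now includes 'is_cat' and 'is_mammal' (set order varies)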
In this figure, we trace the progression (not necessarily the progress) of AI development models from the 1950s to the present. The basic idea is that as deep learning approaches and the infrastructure supporting them matured, the industry migrated to a data-centric approach to ML powered by deep learning.
In the 1980s, I remember telling one of my undergrad AI classes that AI gets relabeled once it works, once it is demystified. With all of the effort being expended to make deep learning-centric systems explainable, we shouldn't lose sight of the value and potential of symbolic systems, especially for problems where relevant large data sets don't exist. The future for many complex systems will require hybrid AI: solutions that leverage both statistical and symbolic approaches, as sketched below.
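As a rough sketch of what such a hybrid might look like, again in plain Python with a hypothetical detector and rule set: a statistical model supplies perceptual facts with confidence scores, and a symbolic layer applies explicit, auditable rules to reach a decision.

    def statistical_detector(image):
        # Stand-in for a trained deep learning model's output.
        return {"person_detected": 0.92, "badge_visible": 0.10}

    def symbolic_policy(detections, threshold=0.5):
        # Explicit rules applied over the model's outputs.
        facts = {label for label, p in detections.items() if p >= threshold}
        if "person_detected" in facts and "badge_visible" not in facts:
            return "alert: unbadged person"
        return "ok"

    print(symbolic_policy(statistical_detector(None)))  # alert: unbadged person

The appeal of this split is that the rules remain inspectable and changeable without retraining, while perception still benefits from statistical learning.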
Caution: Deep Learning Euphoria Is Taking Over
That brings me to the issue I face when talking to clients who are just getting started: the press is generally doing an appalling job of explaining AI. Two recent articles by Karen Hao in the MIT Technology Review illustrate the problem.
In "What is machine learning? We drew you another flowchart," she writes:
Machine-learning algorithms use statistics to find patterns in massive* amounts of data.
[…] *Note: Okay, there are technically ways to perform machine learning on smallish amounts of data, but you typically need huge piles of it to achieve good results.
If one looks only at the flowchart, which has been widely circulated online with the imprimatur of MIT, one might wrongly assume that all machine learning is deep learning, completely ignoring symbolic approaches such as mechanical theorem proving.
Perhaps even more objectionable: in "Is this AI? We drew you a flowchart to work it out," Hao asserts, again on a widely circulated diagram, that unless a system is “looking for patterns in massive amounts of data,” it “doesn’t sound like reasoning to me.”
Conflating reasoning, learning, and understanding—the three pillars of cognitive computing within the discipline of AI—does a disservice to anyone working in the field and dangerously misleads readers. Words matter.
Before we worry about the limits of natural language generation by computers, perhaps we should be more adamant about accuracy from humans writing about AI.