Biological Intelligence and the Limitations of Deep Neural Networks – Intel on AI Season 3, Episode 2
In this episode of Intel on AI, host Amir Khosrowshahi talks with Melanie Mitchell about the paradox of studying human intelligence and the limitations of deep neural networks. Melanie is the Davis Professor of Complexity at the Santa Fe Institute, a former professor of computer science at Portland State University, and the author or editor of six books and numerous scholarly papers in the fields of artificial intelligence, cognitive science, and complex systems, including Complexity: A Guided Tour and Artificial Intelligence: A Guide for Thinking Humans.
In the episode, Melanie and Amir discuss how intelligence emerges from the substrate of neurons and why the ability to perceive abstract similarities between different situations via analogy is at the core of cognition. Melanie goes into detail about deep neural networks' reliance on spurious statistical correlations, the distinction between generative and discriminative approaches in machine learning, and the theory that a fundamental function of the human brain is predicting what will happen next based on prior experience. She also talks about creating the Copycat software, the danger that artificial intelligence (AI) systems are easy to manipulate even in very narrow domains, and the importance of drawing inspiration from biological intelligence.
Academic research discussed in the podcast episode:
- Gödel, Escher, Bach: An Eternal Golden Braid
- Fluid Concepts and Creative Analogies: Computer Models Of The Fundamental Mechanisms Of Thought
- A computational model for solving problems from the Raven’s Progressive Matrices intelligence test using iconic visual representations
- A Framework for Representing Knowledge
- On the Measure of Intelligence
- The Abstraction and Reasoning Corpus (ARC)
- Human-level concept learning through probabilistic program induction
- Why AI is Harder Than We Think
- We Shouldn’t be Scared by ‘Superintelligent A.I.’ (New York Times opinion piece)