Summary of: Rebooting AI: Building Artificial Intelligence We Can Trust
By: Gary F. Marcus


In ‘Rebooting AI: Building Artificial Intelligence We Can Trust’, Gary F. Marcus and Ernest Davis argue that today’s AI systems are neither the great threat nor the savior that many portray them to be. In their view, AI engineers have focused too narrowly on deep learning rooted in statistical models while neglecting cognitive processes. As a result, AI struggles to adapt to real-world scenarios: it cannot make ethical decisions, navigate vehicles reliably, or give robots genuine competence. This summary explores the authors’ case that AI should be modeled on the human mind and must develop general intelligence to cope with an open-ended world.

Human-Like AI

AI experts Gary Marcus and Ernest Davis argue that for AI to make a significant impact on society, it must imitate human cognition rather than rely solely on statistical models like deep learning. They maintain that an AI lacking a real grasp of the world cannot make ethical choices or operate vehicles and machines effectively.

AI: Limited Scope and Unreliable

The book argues that current AI technology is neither a looming threat nor a great boon to humanity. Narrow AI systems work, but only when their programmers have anticipated every situation they will face. To be genuinely useful, AI must exhibit general intelligence that can adapt to an open-ended world, as humans do. People routinely overestimate AI’s capabilities, assuming that a system that solves a task once or twice can perform it flawlessly. In reality, robots require precise programming for every small step, which leaves them unreliable and error-prone.

The Limitations of AI

Artificial intelligence has significant limitations: it cannot grasp causal relationships, and it needs massive amounts of data to function. It also struggles with abstract concepts and partial information. An AI system may be able to read medical papers, but it lacks the real-world experience needed to infer their meaning. Even expert researchers struggle to explain the complex decision-making processes these systems employ. AI’s poor comprehension of language and compositionality makes it of little use in high-stakes situations. And although deep learning can sort immense amounts of data, it cannot understand that data or apply common sense. Ultimately, AI’s shallow understanding of how the world works is its central limitation.

The Myth of Superintelligent Robots

The fear that intelligent robots will take over the world is a myth. Robots can perform specific tasks in controlled environments, but they struggle to adapt to unfamiliar terrain without human assistance. To demonstrate even basic intelligence, a robot needs robust software that continuously cycles through the “OODA loop”: Observe, Orient, Decide, and Act. Getting robots to understand their environment and adapt appropriately remains the hard part: deep learning can identify many objects in a scene, but it fails to comprehend the relationships between them.

Building intelligence that matches humans is difficult because intelligence has no single governing principle. Taking words out of context strips away nuance, and intelligence requires integrating top-down and bottom-up knowledge. Causal inference is needed to comprehend the world, and the mind is not a blank slate: nature and nurture work together rather than competing. In short, while robots can perform specific tasks reliably, their intelligence is far from human-like, and the fear of a robot takeover is unfounded.
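The OODA cycle described above can be sketched as a minimal control loop. Everything in this sketch is illustrative, not from the book: the function names, the toy distance sensor, the 1.0-metre threshold, and the two actions are all assumptions chosen to make the four stages concrete.

```python
def observe(sensor_reading: float) -> dict:
    """Observe: gather raw sensor data into a snapshot of the world."""
    return {"distance": sensor_reading}

def orient(snapshot: dict) -> str:
    """Orient: interpret the snapshot against a (very crude) world model."""
    return "obstacle_near" if snapshot["distance"] < 1.0 else "clear"

def decide(situation: str) -> str:
    """Decide: choose an action appropriate to the assessed situation."""
    return "stop" if situation == "obstacle_near" else "advance"

def act(action: str) -> str:
    """Act: carry out the chosen action (here, just report it)."""
    return action

def ooda_step(sensor_reading: float) -> str:
    """One full Observe -> Orient -> Decide -> Act cycle."""
    return act(decide(orient(observe(sensor_reading))))

print(ooda_step(0.5))  # obstacle within 1.0 m -> "stop"
print(ooda_step(5.0))  # path clear -> "advance"
```

The difficulty the authors point to lives almost entirely in the `orient` step: reducing a rich, open-ended environment to a label like `"obstacle_near"` is trivial here precisely because the toy world model ignores everything a real robot would have to understand.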
