Genius Makers | Cade Metz

Summary of: Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World
By: Cade Metz

Introduction

Step into the world of AI with Genius Makers, a fascinating tale of the mavericks who revolutionized the way we think about and utilize artificial intelligence. From the early days of the Perceptron to the latest advancements in deep learning, Cade Metz takes readers on a journey through the twists and turns in the development of AI. Discover how neural networks evolved, learn about the exciting discoveries made by pioneers like Geoff Hinton, and delve into the ever-growing competition among tech giants like Google, Facebook, and Microsoft to become the AI leader. Understand the potential and pitfalls of AI as it transforms industries, our perception of reality, and ultimately the future of humankind.

Overcoming AI Skepticism

On July 7, 1958, Frank Rosenblatt assembled a group of men deep within the United States Weather Bureau’s office to demonstrate the Perceptron, an early precursor to artificial intelligence (AI). Initially met with skepticism, Rosenblatt’s creation, along with its successor the Mark I, laid the groundwork for neural networks, now a central element of AI. Despite opposition and an “AI winter” in the 1970s and 1980s, the pursuit of connectionism-based AI systems carried on, ultimately bringing us to the current AI revolution.

In 1958, the offices of the United States Weather Bureau in Washington, DC, witnessed a pivotal moment in the history of artificial intelligence. Frank Rosenblatt, a young Cornell University professor, gave a remarkable demonstration: a refrigerator-sized computer, called the Perceptron, that could learn on the fly.

Rosenblatt fed the machine flashcards with a black square printed on one side, which the machine tried to identify as either left or right. Though its initial attempt failed, after 50 tries, the Perceptron achieved nearly perfect identification results. This fascinating machine is considered an early stepping stone to the intricate AI systems we use today. However, at the time, it was brushed aside as a mere novelty.

Nowadays, Rosenblatt’s Perceptron and its successor, the Mark I, are acknowledged as pioneering neural networks. Machine learning serves as the core of modern AI, and these adaptive systems form the backbone of our current technological achievements. Fundamentally, neural networks analyze massive data sets, detecting patterns and refining their algorithms to deliver increasingly accurate insights.

To teach the Mark I, researchers fed it pieces of paper, each printed with a letter from the alphabet. With every attempt to guess the letter it saw, the computer’s calculations improved as humans reviewed and corrected its responses. Rosenblatt dubbed this process “connectionism” and compared it to the human brain, suggesting computers could learn through networks of adaptive neurons.
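
To make the idea concrete, here is a minimal sketch of the perceptron learning rule in Python. The toy data, learning rate, and labels are illustrative assumptions rather than details of the Mark I itself: the machine makes a guess, a “teacher” supplies the correct answer, and the connection weights are nudged whenever the guess is wrong.

```python
# A minimal sketch of the perceptron learning rule (illustrative only, not
# the Mark I's actual implementation). Each sample is a list of pixel-like
# features plus a label of +1 or -1 (e.g. "square on the right" vs. "left").

def train_perceptron(samples, epochs=50, learning_rate=0.1):
    n_features = len(samples[0][0])
    weights = [0.0] * n_features
    bias = 0.0
    for _ in range(epochs):
        for features, label in samples:
            # The machine's guess: a weighted sum pushed through a threshold.
            activation = sum(w * x for w, x in zip(weights, features)) + bias
            guess = 1 if activation >= 0 else -1
            # A "teacher" corrects mistakes by nudging the weights.
            if guess != label:
                weights = [w + learning_rate * label * x
                           for w, x in zip(weights, features)]
                bias += learning_rate * label
    return weights, bias

# Toy data: ([features], label) pairs made up for illustration.
data = [([1.0, 0.0], -1), ([0.0, 1.0], 1), ([0.9, 0.1], -1), ([0.1, 0.8], 1)]
w, b = train_perceptron(data)
```

Repeated over many examples, these small corrections are exactly the kind of adaptive learning Rosenblatt described as connectionism.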

Despite its revolutionary potential, the concept of connectionism faced strong opposition from skeptics like MIT computer scientist Marvin Minsky. In a 1969 book, Minsky argued that machine learning would never be capable of tackling more complex problems. Minsky’s influence led to an “AI winter” throughout the 1970s and early 1980s, with dwindling interest in researching neural networks and limited funding for such pursuits.

Nevertheless, a handful of resilient scientists persevered with connectionism development, and the efforts of these pioneers ultimately helped shape the AI revolution we benefit from today. With continued advances in AI and neural networks, Frank Rosenblatt’s Perceptron stands as a testament to the power of persistence and the potential for achieving the seemingly impossible.

Deep Learning Revolutionizes Tech

Geoff Hinton, a pioneer in the AI field, struggled to gain recognition for his groundbreaking ideas on deep learning during the AI winter. After a chance meeting with Microsoft’s Li Deng in 2008, the two collaborated on speech recognition software using deep learning neural networks. Their astonishing results sparked a technological revolution. Major tech companies like Google, captivated by the potential of neural networks, began investing in AI research firms and startups. This surge of interest laid the foundation for the AI-driven advancements we see in industries today.

Geoff Hinton has always been a unique figure in the world of artificial intelligence. He obtained his PhD from the University of Edinburgh during the AI winter of the 1970s, championing a connectionist approach despite the lack of interest. After years spent refining his theories on machine learning in various academic institutions, Hinton believed that deep learning could unlock the potential of neural networks.

In 2008, Hinton met Li Deng, a Microsoft computer scientist, at an AI conference. The two found a common interest in speech recognition software, and Hinton suggested deep learning neural networks as a way to outperform existing methods. Though initially skeptical, Deng decided to collaborate with Hinton.

In 2009, at Microsoft’s research lab, Hinton and Deng used machine learning models to analyze hundreds of hours of speech. Their program, utilizing GPU processing chips, amazed them with its ability to recognize words with exceptional accuracy. As their success became known, other tech companies started experimenting with deep learning neural networks. Google’s Navdeep Jaitly developed a machine with an error rate of just 18 percent.

The realization that neural networks could be applied to a wide range of problems, from image searches to self-driving car navigation, excited the tech industry. Google, always an innovator, acquired Hinton’s research firm, DNNresearch, and other AI-focused startups like DeepMind. This marked the beginning of an intense race to explore and develop deep learning applications, ultimately reshaping the landscape of modern technology.

AI Gold Rush in Silicon Valley

In November 2013, Clément Farabet, an NYU researcher, received a surprising call from Mark Zuckerberg. This event encapsulates how Silicon Valley’s giants were engaged in a recruitment arms race for AI talent around that time. Fueled by compelling and innovative ideas, top technology companies were eager to explore the frontier of AI applications, even as skeptics warned about the potential risks of the fast-growing technology.

Clément Farabet wasn’t expecting a quiet evening at home to turn into a life-changing moment. He picked up an unexpected call and heard Mark Zuckerberg’s voice on the other end. Facebook employees had been courting Farabet for weeks, but it was Zuckerberg’s personal appeal to join the social media giant’s AI research that truly kindled his interest.

Farabet’s situation wasn’t unique. Many talented engineers and researchers in the newly emerging field of AI were being headhunted by Silicon Valley titans. These companies, such as Facebook, Apple, and Google, believed AI was the future and were willing to invest millions in efforts to get ahead in this swiftly evolving field.

During the early 2010s, deep learning and neural networks were gaining popularity, although the applications and profit-generating potential of AI remained uncertain. Nevertheless, these tech behemoths fought for prominence in AI research. Google took an early lead by acquiring DeepMind, but Facebook and Microsoft trailed closely behind, hiring top AI researchers.

Facebook was particularly interested in AI’s potential to make sense of the enormous amounts of data hosted on its platform. Cutting-edge neural networks could identify faces, translate languages, and predict user behavior for targeted advertising. Eventually, AI might drive sophisticated bots that could communicate and interact on the platform, creating a dynamic and almost alive online environment.

Google also had ambitious plans. Researchers like Anelia Angelova and Alex Krizhevsky saw possibilities in using AI to navigate self-driving cars through complex city streets. DeepMind co-founder Demis Hassabis, meanwhile, aimed to use neural networks to improve energy efficiency across Google’s massive server network.

The press celebrated these projects as visionary marvels that could change the world. However, there were skeptics, such as philosopher Nick Bostrom from Oxford University. Bostrom warned that the advancement of AI could lead to unpredictable and potentially dangerous consequences, with superintelligent machines making decisions that might put humanity at risk. Despite these cautionary voices, the Silicon Valley gold rush into AI research continued unabated.

Revolutionizing Healthcare with AI

Go, a seemingly simple game, had its world turned on its head in 2015 and 2016, when Google’s AI program AlphaGo defeated top professional players, signifying the rise of neural networks’ capabilities. These networks can now potentially outperform humans in various fields. Advances in neural networks were fueled by faster, cheaper computer processors and the availability of enormous data resources. This progress has led to creative applications of machine learning, such as the diagnosis of diabetic retinopathy, a disease that causes blindness if left untreated. Neural networks can be trained to detect early signs of diabetic retinopathy, and similar networks could revolutionize the future of healthcare. By analyzing extensive medical data like X-rays, CAT scans, and MRIs, these networks may efficiently detect and diagnose diseases and abnormalities, paving the way for a smarter, more effective medical future.

The realm of artificial intelligence (AI) witnessed a monumental shift when Google’s AI program, AlphaGo, defeated European champion Fan Hui in 2015 and then, in 2016, beat Lee Sedol, one of the world’s strongest players. This remarkable feat was made possible by neural networks’ growing capacity to match and surpass humans in specific domains.

Two significant trends fuel the growth of neural networks: rapid advancements in computer processors leading to faster and cheaper chips, allowing for more complex calculations; and the abundance of data enabling extensive training for neural networks. These developments present opportunities to creatively apply machine learning principles for solving real-world issues.

One example of a critical issue tackled with machine learning is the diagnosis of diabetic retinopathy, a leading cause of blindness that requires early detection by skilled medical practitioners. A shortage of such specialists in countries like India makes it difficult to screen patients effectively. Google engineer Varun Gulshan and physician Lily Peng took it upon themselves to address this challenge with technology. Using 130,000 digital eye scans from India’s Aravind Eye Hospital, Gulshan and Peng trained a neural network to recognize the early signs of diabetic retinopathy. Their program could analyze a patient’s eye scan within seconds and achieved roughly 90 percent accuracy, on par with a trained physician.
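
A modern version of this kind of screening tool would typically be built as an image classifier trained on labeled scans. The sketch below, in PyTorch, is a hedged illustration only: the folder layout, pretrained model, and hyperparameters are assumptions for the sake of example, not the actual Gulshan and Peng system.

```python
# Hedged sketch: a small "healthy vs. retinopathy" image classifier, in the
# spirit of the Aravind project. Paths, model choice, and settings are
# illustrative assumptions, not details from the book.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Assumes a directory like scans/train/healthy/*.jpg and
# scans/train/retinopathy/*.jpg (hypothetical layout).
train_data = datasets.ImageFolder("scans/train", transform=transform)
loader = DataLoader(train_data, batch_size=32, shuffle=True)

# Start from a pretrained network and replace the final layer with a
# two-class head (healthy vs. signs of diabetic retinopathy).
model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```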

Projects like these can be instrumental in transforming healthcare for the better. By effectively using neural networks, medical professionals could analyze X-rays, CAT scans, MRIs, and other data types to detect diseases and abnormalities, enhancing the overall efficiency of healthcare. As these networks learn and evolve, they may even identify patterns and markers currently considered too subtle for human detection, cementing AI’s role in the future of medicine.

The Reality-Altering Power of AI

Can you truly trust what you see online? As artificial intelligence (AI) becomes more sophisticated, distinguishing between real and manipulated images and videos has become increasingly difficult. This new reality challenges our perception and raises ethical concerns about how AI-generated content can be misused, from deepfakes that can tarnish a public figure’s reputation to AI biases perpetuating unequal treatment.

Imagine scrolling through social media when you come across a video of Donald Trump speaking fluent Mandarin. It looks incredibly convincing, with mouth movements in perfect sync and appropriate body language. However, upon closer inspection, you notice minor glitches, and suddenly, it dawns on you – it’s a fake. While you weren’t fooled this time, rapidly evolving AI technology might make it harder to be so sure in the future.

As AI developed in the early 2010s, researchers focused on teaching computers to identify patterns in information, which led to AI programs that could efficiently identify and sort images by content. Everything changed in 2014, when Ian Goodfellow, a researcher who would soon join Google, pioneered generative adversarial networks (GANs). A GAN consists of two neural networks that train each other: one generates images while the other judges whether they look real. As the process unfolds, the generated images become increasingly lifelike.
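
The adversarial idea can be captured in a short sketch. The toy PyTorch loop below uses made-up data and network sizes (assumptions for illustration, not Goodfellow’s code): a generator turns random noise into fake samples, a discriminator scores real versus fake, and each network is updated against the other.

```python
# Minimal GAN training loop (illustrative sketch with toy data).
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64  # assumed toy sizes

generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def real_batch(batch_size=32):
    # Stand-in for real images; in practice this would load actual data.
    return torch.randn(batch_size, data_dim)

for step in range(1000):
    real = real_batch()
    noise = torch.randn(real.size(0), latent_dim)
    fake = generator(noise)

    # Discriminator: label real samples 1, generated samples 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(real.size(0), 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(real.size(0), 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator: try to make the discriminator call its fakes real.
    g_loss = loss_fn(discriminator(fake), torch.ones(real.size(0), 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

As the two networks push against each other, the generator’s output drifts toward samples the discriminator can no longer reliably reject, which is the same dynamic that makes GAN-generated images so lifelike.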

This groundbreaking GAN technology opened the floodgates to a new era of fake visuals. Early adopters wasted no time, creating eerily realistic fake videos, known as “deepfakes,” featuring politicians, celebrities, and other public figures. While some deepfakes are all in good fun, others have more sinister consequences, such as placing people in pornographic scenes without their consent.

Beyond deepfakes, AI research faces growing concerns about the exacerbation of societal inequalities. In 2018, computer scientist Joy Buolamwini demonstrated that commercial facial-recognition programs from leading tech companies struggled to accurately identify faces that were not white and male. The inaccuracy stemmed from unbalanced training data and raised crucial questions about AI’s potentially oppressive nature, a topic worth delving deeper into.

