
Summarized using AI

Dreaming of Intelligent Machines

Paolo "Nusco" Perrotta • March 30, 2020 • Earth

In the presentation titled "Dreaming of Intelligent Machines," given by Paolo "Nusco" Perrotta at the Paris.rb Conf 2020, the evolution of artificial intelligence (AI) is explored, particularly contrasting the Symbolist and Connectionist approaches. The discussion begins with modern technologies like self-driving cars and digital assistants, tracing their roots back to the mid-20th century.

  • The Symbolist Approach: Symbolists, led by Marvin Minsky, believed that thinking is about manipulating symbols. They focused on programming machines to handle simple shapes and concepts, evolving towards intelligent behavior through cumulative learning and better programming languages like Lisp.

  • The Connectionist Approach: In contrast, the Connectionists aimed to replicate the human brain's neural architecture, using models like the perceptron. Instead of a ground-up strategy, they explored a network of interconnected inputs and outputs. Over time, the perceptron became significant in image recognition tasks.

  • Training the Perceptron: The perceptron learns through a training phase, classifying examples (like images of squares) by adjusting weights assigned to the input features. The speaker emphasizes the importance of the training algorithm in enhancing the perceptron's predictive abilities.

  • Criticism from Minsky: Minsky criticized Connectionism, demonstrating the perceptron's inability to handle non-linearly separable data; after the publication of his book, interest in and funding for neural networks declined sharply.

  • Revival of Connectionism: Despite the setback, researchers continued working on neural networks, eventually developing the backpropagation algorithm in the 1970s, which allowed effective training of multi-layer networks. Advancements in computer hardware and the explosion of big data further revitalized interest in neural networks.

  • The 2012 Breakthrough: A pivotal moment came during the 2012 ImageNet competition, where a deep neural network named AlexNet outperformed all previous entries, cementing belief in the feasibility of neural networks for complex AI tasks.

  • Conclusion and Ethical Considerations: While the advancements in AI have the potential to transform many industries, they also raise critical ethical concerns regarding privacy, surveillance, and the implications of autonomous technologies. Perrotta concludes by reflecting on the unpredictable future of AI and the journey that led to current capabilities in machine intelligence, emphasizing that the story of AI is ongoing and complex.

Paris.rb Conf 2020

00:00:15 Today, we're discussing topics like self-driving cars, which Ron talked about. In fact, these technologies have become quite prevalent, along with many other developments in recent years. Digital assistants on our phones, for instance, seem to have emerged overnight. However, these advancements did not happen suddenly; they stem from a much longer story, one that traces back to the 1950s.
00:00:29 This intriguing narrative is rooted in a dream, so to speak. That dream is the title of my presentation: the idea that computers can think, perhaps in a way similar to how we think. The philosophical battleground for this idea has been a conflict between two distinct camps, two factions, that have clashed over how to create machines that can think.
00:00:51 One of these factions was called the Symbolists. The Symbolists believed that thinking is fundamentally about manipulating symbols. For example, in language we manipulate words and sentences, while in vision we manipulate shapes. They proposed that we should build systems using software to manipulate these symbols, and eventually, through this process, intelligence would emerge. This is a bottom-up approach to coding intelligence.
00:01:05 To illustrate, when developing computer vision, you start by designing a machine that can recognize very simple shapes, like circles and squares, and then build upon that foundation. The idea is that through continuous and cumulative learning, you eventually end up with something that behaves intelligently. Today this may sound somewhat naïve, but back then, computers were new and full of potential, and no one knew just how capable they could become with the right software.
00:01:35 At that time, it was clear that existing machine languages were insufficient; the Symbolists recognized that they needed better programming languages, and they created them. One of the foremost contributions came from John McCarthy, who wrote a programming language aimed at achieving artificial intelligence. That language was Lisp, one of the earliest high-level programming languages to gain traction.
00:01:57 Lisp became a significant inspiration for the core of Ruby, and I went straight to the source to validate that—thank you, Matsumoto-san! Now, while Lisp was influential, arguably the most important Symbolist was Marvin Minsky. Minsky was a brilliant inventor and a highly respected academic who garnered research funding and had the street cred to match.
00:02:09 He dedicated his life to the pursuit of symbolic AI and was recognized as the thought leader of the Symbolists. Minsky worked at MIT, where he spent a significant portion of his career developing technologies, including a robot that could recognize and stack blocks.
00:02:24 In contrast, there was another camp, the Connectionists, who acted as the underdogs in this intellectual battle. They lacked the financial backing and recognition that Marvin Minsky had but proposed a radically different approach. Instead of coding intelligence from the ground up, they reasoned that we only knew of one truly intelligent machine: the human brain.
00:02:40 Their strategy was to replicate the brain. Although our understanding of the brain was limited, they knew that it is made up of neurons. Each neuron, examined closely, has inputs called dendrites and one output known as an axon. Simplifying wildly, the mechanism works like this: if enough electrochemical signals accumulate from the dendrites, the neuron fires and sends a signal along the axon, which connects to other neurons through synapses.
00:03:09 The Connectionists, particularly one prominent figure in their camp, Frank Rosenblatt, aimed to replicate this mechanism. Rosenblatt was a psychologist, a biologist, and a musician, a man with a wealth of talents. What he designed was known as the perceptron, which looks like a network of inputs converging to one output.
00:03:25 You may wonder, how can this perceptron help recognize images, for instance? Let's say we wanted to recognize squares. The approach would involve taking an image that might or might not contain a square and dissecting it into its constituent pixels, a term that didn’t exist back then.
00:03:43 We would then send the pixel values to the perceptron's inputs. Each pixel is assigned a weight, which is the key element of the perceptron model, and the weighted values are summed together. The weighted sum then goes through a step function, which checks whether it exceeds a certain threshold.
00:04:01 If the weighted sum is below the threshold, the output is zero; if it's above, the output is one. This sets up a binary decision. Whether the perceptron recognizes a square or not depends entirely on the specific weights assigned to each pixel.
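To make this concrete, here is a minimal sketch of a perceptron's forward pass in Ruby (my own illustration with made-up values, not the speaker's code):

    # A perceptron's forward pass: a weighted sum of the inputs,
    # followed by a step function against a threshold.
    def perceptron(inputs, weights, threshold)
      weighted_sum = inputs.zip(weights).sum { |x, w| x * w }
      weighted_sum > threshold ? 1 : 0 # 1 = "square", 0 = "not a square"
    end

    pixels  = [0.0, 1.0, 1.0, 0.0]  # pixel values from a (tiny) image
    weights = [0.2, 0.9, 0.7, -0.4] # one weight per pixel
    puts perceptron(pixels, weights, 1.0) # => 1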
00:04:16 The challenge then becomes how to find the right weights, the ones that classify inputs correctly. The Connectionists aimed to develop this ability through example-based learning. Essentially, they would gather numerous examples of squares and non-squares, labeling them accordingly.
00:04:31 So, we would take a vast collection of examples—thousands, if possible—labeling each image as either a square (1) or not a square (0). With these labeled examples, they trained the perceptron.
00:04:45 Initially, the perceptron starts with random weights, making it effectively function like a random number generator. Processing the examples, it would produce a mix of correct and incorrect outcomes. However, there is a training algorithm that adjusts the perceptron's weights, example after example, to make its predictions gradually more accurate.
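That training algorithm, the classic perceptron learning rule, takes only a few lines. Here is a hedged sketch in Ruby (an illustration, not the speaker's implementation):

    # Perceptron learning rule: after each example, nudge every weight in
    # proportion to the error (label minus prediction) and the input value.
    LEARNING_RATE = 0.1

    def train(examples, weights, bias, epochs = 100)
      epochs.times do
        examples.each do |inputs, label|
          sum = inputs.zip(weights).sum { |x, w| x * w } + bias
          prediction = sum > 0 ? 1 : 0
          error = label - prediction # 0 if correct, +1 or -1 if wrong
          weights = weights.zip(inputs).map { |w, x| w + LEARNING_RATE * error * x }
          bias += LEARNING_RATE * error
        end
      end
      [weights, bias]
    end

    # examples is a list of [pixel_array, label] pairs, squares labeled 1;
    # weights start out random, e.g. Array.new(num_pixels) { rand(-1.0..1.0) }.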
00:05:14 This phase of tuning the perceptron on input examples is called the training phase. You might think this model is rudimentary: it's just a weighted sum, and it doesn't even know what a square is. Surprisingly, though, it can achieve reliable performance.
00:05:32 You can test this functionality yourself by utilizing the MNIST database, a well-known collection of handwritten digits containing 60,000 samples. You can download a perceptron implementation in any programming language to evaluate how it performs.
00:05:48 The code I developed is written in Ruby. The emphasis isn't necessarily on readability here, but it demonstrates that building a perceptron isn't overly complex; even with a few modern enhancements to speed up training, it largely mirrors the original concept.
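That code isn't reproduced here, but as a hypothetical sketch of the idea: MNIST has ten digits rather than two classes, so a common approach is to train one perceptron per digit and pick the digit whose perceptron responds the strongest:

    # Hypothetical sketch: one-vs-all digit classification.
    # weight_rows is assumed to hold ten trained weight arrays (one per
    # digit), each with 784 entries - MNIST images are 28x28 = 784 pixels.
    def predict_digit(pixels, weight_rows)
      scores = weight_rows.map { |w| pixels.zip(w).sum { |x, wi| x * wi } }
      scores.index(scores.max) # the digit with the highest weighted sum
    end

    def accuracy(test_set, weight_rows)
      hits = test_set.count { |pixels, label| predict_digit(pixels, weight_rows) == label }
      hits.to_f / test_set.size
    end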
00:06:07 To my surprise, the perceptron achieved over 90% accuracy in recognizing handwritten digits; I was genuinely astonished the first time I saw a number that high.
00:06:24 Unlike me, Rosenblatt, the perceptron's creator, did not have the luxury of programming in a high-level language; he built dedicated hardware to implement the perceptron. His setup may not have been aesthetically pleasing, but it was a formidable piece of machinery that produced real results.
00:06:39 In the 1950s, it was a battle between the Symbolists and the Connectionists. The Symbolists, led by Minsky, had all the academic authority, securing funding from institutions like the US Navy.
00:06:58 Minsky was quite critical of the Connectionist approach, viewing it as a path that wouldn't lead to substantial outcomes. He became particularly incensed over what he believed were extravagant claims made by Rosenblatt; for instance, Rosenblatt suggested that the perceptron could soon demonstrate human-like capabilities.
00:07:15 Minsky viewed such proclamations as reckless, asserting that they squandered research funding on fanciful concepts. In 1969, he decided to take decisive action against the notion of perceptrons, publishing a critical book, Perceptrons, that detailed their limitations in a manner that left no room for misunderstanding.
00:07:46 To illustrate the limitation Minsky described, let me use a modern analogy. Say we want to predict whether a Netflix series will be renewed, based on its viewership numbers and ratings. If we plot this data, we may find it linearly separable, meaning we can draw a straight line separating the successful (blue) series from the unsuccessful (green) ones.
00:08:06 However, Minsky warned that this neatness doesn't reflect reality; real-world data tends to be messy. When the clusters overlap, the best a perceptron can do is find a linear boundary that classifies some points correctly and misclassifies others. That can still beat random chance, but it cannot resolve such non-linear scenarios.
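The canonical example of non-linearly separable data, made famous by Minsky's book, is XOR; the talk uses the Netflix analogy instead, but a small Ruby experiment (my illustration) makes the limitation concrete:

    # A single perceptron can never classify XOR: no straight line separates
    # {(0,1), (1,0)} from {(0,0), (1,1)}, so training never fully succeeds.
    data = [[[0, 0], 0], [[0, 1], 1], [[1, 0], 1], [[1, 1], 0]]
    w = [0.0, 0.0]
    b = 0.0

    1_000.times do
      data.each do |(x, y)|
        prediction = (w[0] * x[0] + w[1] * x[1] + b) > 0 ? 1 : 0
        error = y - prediction
        w[0] += 0.1 * error * x[0]
        w[1] += 0.1 * error * x[1]
        b += 0.1 * error
      end
    end

    hits = data.count { |(x, y)| ((w[0] * x[0] + w[1] * x[1] + b) > 0 ? 1 : 0) == y }
    puts "#{hits}/4 correct" # at most 3 out of 4, no matter how long you train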
00:08:24 Minsky asserted that while the perceptron was nifty, it couldn't go beyond simple tasks. The Connectionists knew that a theoretical way existed to get past this limit: combining multiple perceptrons into a multi-layer network.
00:08:42 They theorized that multi-layer perceptrons could manage non-linear data. The drawback was that there was no established training algorithm for these networks. Minsky doubted that such an algorithm would ever be developed, leading him to disavow the Connectionist model and concentrate on symbolic AI.
00:09:09 These days, we don't commonly call this structure a multi-layer perceptron; it goes by a different name, a neural network. Minsky's influential book effectively quelled interest in Connectionism: funding dried up, and research on perceptrons ground to a halt.
00:09:28 Unfortunately, this period was punctuated by a tragedy when Rosenblatt died in a sailing accident—an event that appeared to signal the conclusion of the connectionist era. However, interest in these ideas never fully died.
00:09:45 There were individuals working in the background, akin to medieval monks preserving knowledge through challenging times, hoping to keep these concepts alive. Almost inevitably, I suspect we will eventually see a Hollywood film dramatizing the efforts of these pioneers.
00:10:03 Overcoming immense skepticism, they gradually solved numerous problems associated with neural networks. By the 1970s, they had developed an algorithm for training multi-layer networks, something Minsky believed could never be achieved.
00:10:21 This breakthrough, known as backpropagation, allowed for effective training of networks. However, to build powerful networks, researchers needed additional layers of perceptrons.
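To give a feel for it, here is a hedged sketch of backpropagation in Ruby (my illustration, not the speaker's code): a tiny two-layer sigmoid network learning XOR, the very task a single perceptron cannot handle.

    # Backpropagation on a 2-input, 3-hidden-unit, 1-output network.
    # Sigmoid replaces the step function so the error is differentiable.
    # With an unlucky random seed it can stall in a local minimum; rerun.
    def sigmoid(z)
      1.0 / (1.0 + Math.exp(-z))
    end

    srand(1)
    hidden = 3
    w1 = Array.new(hidden) { Array.new(2) { rand(-1.0..1.0) } } # input -> hidden
    b1 = Array.new(hidden, 0.0)
    w2 = Array.new(hidden) { rand(-1.0..1.0) }                  # hidden -> output
    b2 = 0.0
    lr = 0.5
    data = [[[0, 0], 0], [[0, 1], 1], [[1, 0], 1], [[1, 1], 0]]

    10_000.times do
      data.each do |(x, y)|
        # Forward pass.
        h = Array.new(hidden) { |j| sigmoid(b1[j] + w1[j][0] * x[0] + w1[j][1] * x[1]) }
        out = sigmoid(b2 + h.zip(w2).sum { |hj, wj| hj * wj })

        # Backward pass: push the output error back through each layer.
        d_out = (out - y) * out * (1 - out)
        hidden.times do |j|
          d_hj = d_out * w2[j] * h[j] * (1 - h[j])
          w2[j] -= lr * d_out * h[j]
          2.times { |i| w1[j][i] -= lr * d_hj * x[i] }
          b1[j] -= lr * d_hj
        end
        b2 -= lr * d_out
      end
    end

    data.each do |(x, y)|
      h = Array.new(hidden) { |j| sigmoid(b1[j] + w1[j][0] * x[0] + w1[j][1] * x[1]) }
      out = sigmoid(b2 + h.zip(w2).sum { |hj, wj| hj * wj })
      puts "#{x.inspect} -> #{out.round(2)} (expected #{y})"
    end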
00:10:38 Though deeper networks were developed, training these networks posed challenges due to their computational intensity. Yet as technology advanced, particularly in terms of numerical methods and mathematical computation, progress continued.
00:10:56 Hardware evolution contributed significantly as well. Neural networks revolve around computations over large matrices, which CPUs handle poorly; the field needed processors engineered for that kind of workload.
00:11:13 GPUs, initially developed to tackle challenges in computer graphics and video games, turned out to be exactly that: their rapid parallel processing soon became a boon for neural network research.
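The reason is that a network layer boils down to a matrix-vector product followed by an elementwise activation, exactly the arithmetic GPUs parallelize well. A toy Ruby illustration with made-up numbers:

    require 'matrix'

    # One layer's forward pass as a matrix-vector product plus activation.
    weights = Matrix[[0.2, -0.5,  0.1],
                     [0.7,  0.3, -0.2]] # 2 neurons, 3 inputs each
    input = Vector[1.0, 0.5, -1.0]
    activations = (weights * input).map { |z| 1.0 / (1.0 + Math.exp(-z)) }
    p activations # a Vector holding the two neuron outputs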
00:11:31 Another significant factor was the explosive growth of data, particularly after the rise of the internet and platforms like Google and Facebook, which began generating enormous amounts of data.
00:11:49 With this influx of data, companies started hiring experts in neural networks. All these developments culminated in a transformational moment during the 2012 ImageNet competition, traditionally dominated by symbolic AI programs.
00:12:06 These programs had struggled considerably, but in 2012, a team introduced a deep neural network called AlexNet, which didn't just participate but utterly dominated, outperforming other entries by a substantial margin.
00:12:24 As a consequence, researchers and industry professionals were compelled to acknowledge the potential of an approach that had once seemed fringe. Many of them had wanted to dissociate themselves from the very term 'neural network,' which carried a history of ridicule.
00:12:44 Had Rosenblatt lived to see this day, I can only imagine his elation. Marvin Minsky, who passed away in 2016, kept his scholarly reputation to the end, but he remained skeptical of neural networks.
00:13:06 In examining this saga, I sought some kind of uplifting lesson—a narrative suggesting that good ideas inevitably prosper.
00:13:25 Yet, the story does not present a clear-cut moral. We don’t know precisely where this journey will lead us. For the first time, we have machines that can outperform us at tasks we consider typically human, like image recognition.
00:13:47 Of course, these developments could lead to significant enhancements in our lives, from self-driving cars to advanced medical diagnostics. However, they also introduce new challenges—ethical dilemmas like mass surveillance, privacy concerns, and the consequences of autonomous weaponry. It is a rather unsettling prospect, which brings me to my conclusion. If we glance down the road, we may find it dizzying. I am uncertain where these technologies will ultimately lead.
00:14:31 What I have shared today is the story of how it all began. Thank you!
00:15:05 If you have questions, feel free to take a picture of this as it is about to be printed.
00:15:15 I'll also send out the code. I am unsure how long this will take, but February 22 is the target.
00:15:25 I didn't leave any questions unanswered, I hope. Thank you very much!