Summarized using AI

Un-Artificial Intelligence

Melinda Seckington • June 20, 2015 • Earth • Talk

In the video titled "Un-Artificial Intelligence," Melinda Seckington explores the parallels between artificial intelligence (AI) and human learning, emphasizing the concept of 'un-artificial' intelligence. She begins by defining intelligence not just as the accumulation of knowledge, but as the ability to acquire knowledge, reason about it, and apply it effectively.

Key Points Discussed:

- Definitions of Intelligence: Seckington refers to a Dungeons and Dragons manual to define intelligence in terms of the ability to learn and reason, suggesting that true intelligence involves practical application of knowledge.

- Artificial Intelligence Overview: AI can refer to both the intelligence exhibited by machines and the field of study focused on creating such intelligence. She introduces the concept of intelligent agents—systems designed to achieve optimal outcomes in given circumstances.

- Human vs. Machine Learning: The speaker compares the behavior of intelligent agents to human learning, employing an analogy based on sensors and effectors. Seckington utilizes Pavlov’s classical conditioning to illustrate how humans and animals alike learn through associations.

- Learning Mechanisms: Seckington presents three types of learning algorithms in AI—supervised learning, unsupervised learning, and reinforcement learning—paralleling them with human learning processes that involve directive, conversational, and assessment-based learning.

- Distinctions Between Humans and Machines: While machines operate within a specific framework, humans learn contextually, develop social skills, and possess emotional understanding, enabling rich and diverse learning experiences.

- Future of AI and Human Collaboration: Seckington posits that as AI develops, there will be a need for collaboration between humans and machines. The future may see educational environments evolving to benefit both parties, blurring the lines between artificial and human intelligence.

Significant Examples:

- Seckington shares an anecdote about training her cats to respond to an alarm sound, illustrating conditioning and the formation of habits in both animals and humans.

- She mentions various learning styles, such as collaborative and assessment-based learning, emphasizing how these mirror AI's learning strategies.

Conclusions:

- A truly intelligent machine must encompass a broad range of learning abilities akin to human cognitive functions. The ongoing development of AI presents opportunities for collaboration rather than competition, inviting a future where both humans and machines learn from each other, with implications for technology and education.

Un-Artificial Intelligence
Melinda Seckington • June 20, 2015 • Earth • Talk

@mseckington

HAL, Skynet, KITT… we've always been intrigued by artificial intelligence, but have you ever stopped to consider the un-artificial? Most developers are familiar with the basics of AI: how do you make a computer, an algorithm, a system learn something? How do you model real-world problems in such a way that an artificial mind can process them? What most don't realize, though, is that the same principles can be applied to people. This talk looks at some of the theories behind how machines learn versus how people learn, and maps them to real-life examples of how, specifically, our users learn their way around interfaces and how designers and developers apply learning methodologies in their day-to-day work.

Talk given at GORUCO 2015: http://goruco.com


00:00:14.160 Hi everyone, I'm Melinda Seckington, and I'll be talking about un-artificial intelligence. Ever since the Industrial Revolution, we've had a fascination with stories about AI. Just think of recent movies; there are various interpretations of the same theme. How does having AI actually change our world?
00:00:19.430 Now, I'm a huge movie geek, but for my day job, I'm a developer at FutureLearn, a London-based startup focused on social learning. We collaborate with universities and cultural institutions to deliver online courses. This means our team is encouraged to learn more about the theories and principles of pedagogy and how to build excellent learning experiences. My background is in AI, which I studied back at university. I realized that how machines learn—how artificial intelligence functions—is very similar to how people learn. That's what I'll be discussing here today: explaining some concepts from AI and linking them to un-artificial intelligence.
00:01:05.789 But before we explore artificial intelligence, we need to define intelligence itself. How do we define what makes something or someone intelligent? I did what every geek would do and consulted the Dungeons and Dragons manual. While the parts about wizards and spells aren't relevant, it does define intelligence in terms of how well your character learns and reasons, and how broad an assortment of skills they have. So here's one proper definition: intelligence is not just about having knowledge or skills; it's about knowing how to obtain them, reason about them, and use them effectively.
00:01:47.750 What do we mean by artificial intelligence? Well, we use the phrase for two different things. On one hand, it refers to the actual intelligence of machines or software; on the other hand, it's a term for the research field focused on creating intelligence within machines. Within that field, we have four main approaches to implementing AI. Today, we’ll focus on one: systems that act rationally. A system is rational if it performs the right action. But how do we define what the right action is? This leads us into the concept of intelligent agents.
00:02:36.300 An intelligent agent is one that acts to achieve the best outcome in any given situation. We can visualize this with a simple diagram of reflex agents. An agent exists within an environment, equipped with sensors that allow it to observe the world and effectors that let it take actions. It creates a representation of the current state of the world and uses a set of if-then rules to determine what action to take next.
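To make that reflex-agent loop concrete, here is a minimal Python sketch. It is an illustration rather than code from the talk (which uses a diagram), and the rule set, state fields, and environment are invented for the example: a sensor builds a representation of the world, the if-then rules pick an action, and an effector carries it out.

```python
# Minimal simple reflex agent: perceive -> match condition-action rules -> act.
# Everything here (rules, state fields, environment) is illustrative only.

RULES = [
    # (condition on the perceived state, action to take)
    (lambda state: state["alarm_ringing"], "go_to_kitchen"),
    (lambda state: state["tests_failing"], "fix_tests"),
]

def sense(environment):
    """Sensor: build the agent's representation of the current world state."""
    return {
        "alarm_ringing": environment.get("alarm", False),
        "tests_failing": environment.get("failing_tests", 0) > 0,
    }

def decide(state):
    """Apply the if-then rules to the state; fall back to doing nothing."""
    for condition, action in RULES:
        if condition(state):
            return action
    return "wait"

def act(action):
    """Effector: in a real agent this would change the environment."""
    print(f"agent performs: {action}")

# One pass through the perceive-decide-act loop.
environment = {"alarm": True, "failing_tests": 0}
act(decide(sense(environment)))  # -> agent performs: go_to_kitchen
```

The two rules anticipate the habit examples that come up later in the talk: an alarm that triggers a trip to the kitchen, and failing tests that trigger a fix.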
00:03:02.630 When we consider human learning, we can apply the same diagram. Our sensors consist of our five senses—taste, touch, sight, etc.—while our effectors would be our voice, hands, and movements. This becomes most evident when examining the research of Ivan Pavlov, a Russian physiologist known for his work in classical conditioning, where he trained dogs to associate a buzzer's sound with food. For example, I have two cats, Casey and Dusty, who become very annoying when they are hungry. To test Pavlov’s principles, I tried training them using the sound of a standard iPhone alarm bell.
00:04:30.300 Initially, my cats would jump up and rush to the kitchen whenever they smelled food. I started feeding them only when the alarm went off, and over time, they began associating that sound with meal time. Even now, several years later, they recognize the sound whenever it plays, even in TV shows or movies, and they rush to the kitchen expecting food. This behavior can become quite annoying given the number of movies using that same iPhone alarm sound.
00:05:22.490 We're not that different from cats; we apply similar principles to form habits. Each morning, when I hear my alarm clock, I know it means I need to wake up and get out of bed. As a developer, I understand that when I see failing tests, I need to fix them. These are simplified loops of learning, but how do we actually learn new rules about what actions to take? It involves a more complex diagram of learning agents.
00:06:00.150 We still have sensors and effectors, but the reflex agent you saw before now becomes the performance element: the part that models the current state of the world and applies the if-then rules. On top of that we add a learning element, which gathers feedback from a critic about past actions and uses it to improve the performance element, so the agent makes better decisions over time.
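A rough Python sketch of that learning-agent structure may help; it is an illustration under assumed details rather than anything shown in the talk (the action names, reward values, and update rule are made up). The performance element chooses actions, the critic scores each outcome, and the learning element folds that score back into the performance element's preferences.

```python
import random

# Sketch of a learning agent: a performance element chooses actions,
# a critic scores the outcome, and a learning element updates the
# performance element so future choices improve. Names and numbers
# are illustrative only.

class PerformanceElement:
    """Chooses an action based on learned preferences (the if-then part)."""
    def __init__(self, actions):
        self.preferences = {action: 0.0 for action in actions}

    def choose(self):
        # Mostly exploit what has worked so far; occasionally explore.
        if random.random() < 0.1:
            return random.choice(list(self.preferences))
        return max(self.preferences, key=self.preferences.get)

class LearningElement:
    """Uses the critic's feedback to adjust the performance element."""
    def __init__(self, learning_rate=0.5):
        self.learning_rate = learning_rate

    def update(self, performance, action, feedback):
        old = performance.preferences[action]
        performance.preferences[action] = old + self.learning_rate * (feedback - old)

def critic(action):
    """Scores how well the last action worked (a toy stand-in for the environment)."""
    return 1.0 if action == "fix_tests" else 0.0

performance = PerformanceElement(["fix_tests", "ignore_tests"])
learner = LearningElement()

for _ in range(20):
    action = performance.choose()
    learner.update(performance, action, critic(action))

print(performance.preferences)  # "fix_tests" ends up clearly preferred
```

The feedback loop is the important part: without the critic and the learning element, the agent is stuck with whatever rules it started with.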
00:06:53.030 How that learning happens depends on the learning algorithm, and three well-known families illustrate the options. The first is supervised learning, where feedback comes from labeled training data: the agent learns rules for matching inputs to the correct labels. Then there is unsupervised learning, where no labels are provided and the algorithm has to find patterns and structure in the input data on its own. Finally, there is reinforcement learning, where the agent makes decisions, receives feedback on whether those decisions were good or bad, and has to build a deeper understanding of its environment.
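As a toy contrast of the three families (again a sketch with made-up data and update rules, not code from the talk): supervised learning gets labeled examples, unsupervised learning gets only the raw inputs, and reinforcement learning gets only a reward signal after acting.

```python
# 1. Supervised learning: inputs come with labels; learn a rule mapping one to the other.
labeled = [(1.0, "cat"), (1.2, "cat"), (5.0, "dog"), (5.5, "dog")]  # (weight_kg, label)
threshold = sum(weight for weight, _ in labeled) / len(labeled)     # crude learned rule

def predict(weight_kg):
    return "cat" if weight_kg < threshold else "dog"

# 2. Unsupervised learning: no labels; find structure (two clusters by nearest centroid).
points = [1.0, 1.2, 5.0, 5.5]
centroids = [min(points), max(points)]
clusters = [min(range(2), key=lambda c: abs(p - centroids[c])) for p in points]

# 3. Reinforcement learning: act, receive a reward, and update an estimate of each action's value.
values = {"press_button": 0.0, "do_nothing": 0.0}

def reward(action):
    # Toy environment: only pressing the button pays off.
    return 1.0 if action == "press_button" else 0.0

for action in values:
    values[action] += 0.5 * (reward(action) - values[action])

print(predict(1.1), clusters, values)
```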
00:07:37.450 In the same way that machines learn through different algorithms, humans learn from various activities. Here’s an overview of 16 different types of learning activities. I’ll highlight a few since we don't have time to cover them all. For instance, one type is directive, where learners receive information directly—a process similar to supervised learning. As developers, we seek out information through reading books, watching videos, and attending conferences.
00:08:06.720 Another type is conversational and collaborative learning, which involves conversing with others to construct a shared understanding—similar to unsupervised learning. We engage in this when we work collaboratively with peers, forming a collective comprehension of tasks. Lastly, there's assessment, which involves receiving constructive feedback, akin to reinforcement learning; we learn from the feedback we get during code review processes and peer assessments.
00:08:56.900 So, what makes us different from machines? For starters, we learn contextually and can understand the situation in which we're learning. Unlike machines, we're not confined to one purpose or domain; we are constantly learning, often unconsciously, processing everything around us, whereas machines are typically built for one specific task. We also have a rich repository of prior knowledge that allows us to form associations between disparate pieces of information. And we learn emotionally, assigning value to skills, information, and experiences, which informs our decision-making.
00:10:56.060 Finally, we learn socially—we learn from others and need social interaction, such as this conference, to enhance our learning. We are advancing towards creating machines endowed with these skills, but they still don't integrate all these abilities as we do. A truly learning machine must generalize across multiple skills before we categorize it as intelligent in the human sense. I believe we will have a form of artificial intelligence within this century, not as a threat, but as machines that can learn and reason like humans. In a world where humans and machines can learn similarly, will they collaborate? Will educational environments evolve to accommodate both? As we consider the future of web development, let's reflect—what we create for humans should also be beneficial for machines.