The Algorithm Ate My Homework

Yechiel Kalmenson • November 08, 2021 • Denver, CO

Summarized using AI

In this talk, given at RubyConf 2021, Yechiel Kalmenson explores the ethical implications and responsibilities surrounding artificial intelligence (AI) and algorithms, particularly in decision-making. Kalmenson opens with the Trolley Problem, a classic ethical dilemma, and highlights that we are approaching a time when algorithms will have to make similar decisions. He emphasizes the lack of frameworks for understanding the responsibilities of tech companies and programmers when algorithms make harmful decisions. Key points of the discussion include:

  • Historical Context: Kalmenson draws connections between Talmudic discussions and modern ethical dilemmas. He explains how ancient Jewish laws have been applied to contemporary issues, suggesting that rabbinical scholarship can inform our understanding of technology today.

  • Question of Responsibility: A central question posed is, "Am I my algorithm's keeper?" This inquiry leads to an examination of the accountability of programmers and corporations when algorithms lead to unethical outcomes.

  • Case Studies: Several examples are cited where algorithms have led to discrimination or unethical practices: for instance, hiring algorithms that inadvertently favor candidates based on background imagery, and court algorithms that disproportionately affect minority defendants. Kalmenson argues that such incidents raise serious questions regarding culpability.

  • The Nature of Algorithms: The talk delves into the nature of algorithms and whether they should be viewed as agents making independent decisions or merely as tools executing commands from programmers. Kalmenson refers to the Talmudic concept of "shaliach" (messenger) to explore this gray area.

  • Morality and Technology: Kalmenson emphasizes the moral implications of deploying technology that can cause harm. He argues that companies have a responsibility to ensure that their technologies do not perpetuate biases or discrimination.

  • Individual Contributions: In concluding, he urges listeners to contemplate their own roles in technology development, highlighting a Talmudic teaching that while one may not be responsible for completing the entire task of ethical oversight, one cannot neglect this moral duty either. Each individual’s contribution is crucial in the broader societal push towards ethical technology.

Kalmenson’s talk serves as a poignant reminder of the evolving landscape of technology and the pressing need to engage in conversations about the ethical responsibilities of artificial intelligence and its impacts on society.


Am I My Algorithm's Keeper?

If I write an AI that then goes on to make unethical decisions, whose responsibility is it?

In this Talmudic-style discussion, we will delve into ancient case law to find legal and ethical parallels to modern-day questions.

We may not leave with all the answers, but hopefully, we will spark a conversation about the questions we should be asking.

RubyConf 2021

00:00:10.480 Thanks for coming, everyone. I'm really excited; this is obviously my first in-person conference in just about two years. I've probably spoken to more people in the last two days than I have in the last two years combined. It's a little overwhelming, but I'm happy to see everyone, and I'm happy to be here.
00:00:28.840 Hello and welcome. The Talmud actually says that you should start off by thanking your hosts, and I would really like to thank Ruby Central for putting together this conference, Valerie for hosting the track, and of course, in a sense, you, the audience, are my hosts. You've all come here to listen to me, so I'd like to thank each and every one of you who is here, as well as those watching at home and those who will watch the recording in the future. It really means a lot to me, and I thank each and every one of you personally.
00:01:09.920 We're all familiar with the trolley problem. We have a trolley barreling down the tracks. If you do nothing, it'll run right over five people who are tied onto the tracks; we won't get into who tied them there or what they're doing there. You, as the operator, can pull a lever that will steer it away from those five people, but doing so will kill another person on a different branch of the track. This problem comes up in many variations in the work of philosophers and ethicists.
00:01:28.720 We're heading toward a time when these questions aren't theoretical, and they won't be decided by humans. Instead, the trolley itself is going to have to make those decisions, along with other ethical and moral ones. We really need to start developing frameworks for thinking about these questions: the responsibility of technology, the responsibility of the algorithms that make these decisions, and of course, that of the programmers who write them and set them up to decide. Before we get into that, a little bit about myself: who am I and why am I giving this talk? My name is Yechiel Kalmenson, and I'm an engineer at VMware. However, I wasn't always a programmer; I used to be a rabbi and a teacher.
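To make concrete what it means for "the trolley" to decide, here is a deliberately naive sketch (mine, not the speaker's): the machine's ethical choice reduces to a rule a programmer wrote in advance, which is exactly why questions of responsibility lead back to people.

```ruby
# A deliberately naive illustration, not from the talk: when "the trolley
# decides," it is really applying a rule someone coded ahead of time.
def choose_track(people_on_main, people_on_branch)
  # Hard-coded utilitarian rule: minimize expected casualties.
  people_on_branch < people_on_main ? :divert : :stay
end

choose_track(5, 1) # => :divert (the ethical choice was made at coding time)
```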
00:02:09.440 As a rabbi, I studied Talmud for decades and have been teaching it for almost as long. Even now that I've moved into the field of tech and programming, Talmud study and Jewish studies are still very much a part of my life. I'm part of organizing a Talmud study group in my community, and I still spend a lot of time on it. That's the perspective that I want to share with you today.
00:02:41.280 Of course, I mentioned the Talmud, so before we go any further, what is the Talmud? Many people have never heard of it, while others may have heard vague references but aren't clear on what it is. The Talmud is essentially the written record of the conversations of the sages, compiled between the second and fifth centuries CE. It serves as the primary repository of the law, legends, and lore of the Jewish people, and it remains the central focus of rabbinical scholarship to this day.
00:03:03.680 What's interesting about the Talmud is that it's not just a text; it's not a book that you read once and then know. The Talmud is a living conversation. Rabbi J. B. Soloveitchik, one of the foremost Jewish thinkers of the previous generation, gave a fascinating talk in which he invoked the imagery of giving a class to his students: the door opens, the Talmudic sages come in, the students ask them questions, they answer, and the students challenge them, arguing and debating back and forth. In a sense, when you learn the Talmud properly, when you really get into it, you're not just learning what the sages said centuries ago; you become part of a conversation that spans the ages.
00:03:38.640 Now, obviously, the Talmud doesn't talk about computers or algorithms, or even trolleys for that matter, but rabbinic Judaism has become very adept at applying principles from the Talmud and its case law to modern problems. For example, discussions in the Talmud about lighting a fire on the Sabbath translate into rulings nowadays about driving cars with combustion engines. Similarly, discussions around charging interest in the Talmud have made their way into bank contracts in Israel.
00:04:02.960 That's what I'll aim to do throughout this talk. I won't find much in the Talmud about computers, but I will look at the different case studies the rabbis discussed and see how they might relate to our modern problems. In true Talmudic fashion, we probably won't leave with too many answers, but hopefully we'll be able to clarify what questions we should be asking.
00:04:34.720 That brings us to the question I will discuss: Am I my algorithm's keeper? What do I mean by that? Ever since the industrial revolution, we've been trying to automate as many menial tasks as possible. This has only accelerated with the advent of computers and the field of robotics. By now, I'm sure everyone here has robots and computers in their homes, automating tasks like washing the dishes, cleaning the floor, and entertaining our kids and ourselves. We're even using them to give us information and secure our homes.
00:05:16.080 It's only a matter of time before we start to automate our decision-making. In fact, that has already started, with various companies marketing machine learning and AI products that aim to take over many of our decision-making processes. In theory, that sounds like a very good idea. Human brains are notoriously buggy (some people call the bugs features), and those bugs produce many of our biases and prejudices. In theory, if we offload decisions to a computer, we can remove that fuzziness and replace it with the sterile logic of zeros, ones, and logical operators. Sounds good, right?
00:05:58.400 The reality, of course, doesn't work that way. By automating decisions, we often end up just automating our biases. For example, just last year a company made news by offering a product designed to help companies with hiring: it used machine learning to analyze a video of a remote interview and assign the candidate a score. That seemed beneficial, because we know that bias in interviewing and hiring is a huge problem and a significant contributor to inequality in tech.
00:06:41.600 However, a research company started to play around with this tool and realized that it was very easy to fool. They found that inconsequential changes, such as altering the background of the candidate's video, could manipulate the score. Simply putting a picture behind the candidate, or changing the background to a bookshelf, automatically yielded a much better score. Obviously, the presence of a bookshelf has nothing to do with how well someone will perform on the job, and rewarding it discriminates against candidates without a traditional home-office setting.
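As a toy illustration of how a learned scorer can latch onto an irrelevant signal, consider the following sketch. The feature names and weights are invented for illustration; they are not taken from the actual product.

```ruby
# Hypothetical linear scorer, invented for illustration. If the training data
# happened to correlate bookshelves with "good" candidates, the model ends up
# rewarding bookshelves rather than competence.
WEIGHTS = {
  years_experience:        0.5,
  relevant_skills:         1.0,
  bookshelf_in_background: 2.0, # spurious signal picked up from training data
}.freeze

def score(candidate)
  WEIGHTS.sum { |feature, weight| weight * candidate.fetch(feature, 0) }
end

candidate = { years_experience: 4, relevant_skills: 3 }
score(candidate)                                   # => 5.0
score(candidate.merge(bookshelf_in_background: 1)) # => 7.0, same person, higher score
```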
00:07:15.840 Amazon famously had to pull a tool it used to analyze resumes after it was discovered to discriminate against female candidates. Flaws like these might seem minor, but they affect people's livelihoods and career progressions. It gets even worse when we consider that some courts have been using algorithms to help judges make decisions about bail, sentencing, and other judicial matters.
00:07:57.680 These algorithms have been shown to discriminate against minority defendants, and the scary thing is that they're still in use. So the question is: who is responsible when things go wrong? Once a company is caught using an algorithm that discriminates or causes harm, the first reaction is often to say, 'Hey, it wasn't us; it was the algorithm.' They might argue that the computer made the decision and they have no insight into why, portraying the algorithm as a black box that can't hold biases.
00:08:36.480 In other words, they're trying to offload their responsibility onto the algorithm, as in 'the dog ate my homework.' The real question we're going to explore is: what is the responsibility of an algorithm? Can an algorithm be held responsible, and who is responsible for its decisions? I will attempt to approach this question from a rabbinic perspective, and I thank you for joining me as I think out loud.
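One aside before continuing (my observation, not the speaker's): the "black box" defense is weaker than it sounds, because a model's outputs can be audited for disparate impact without looking inside it. A minimal sketch, with invented data:

```ruby
# Minimal disparate-impact audit over a model's decisions; the data and group
# labels are invented for illustration.
decisions = [
  { group: :a, approved: true  }, { group: :a, approved: true  },
  { group: :a, approved: true  }, { group: :a, approved: false },
  { group: :b, approved: true  }, { group: :b, approved: false },
  { group: :b, approved: false }, { group: :b, approved: false },
]

# Approval rate per group.
rates = decisions.group_by { |d| d[:group] }.transform_values do |ds|
  ds.count { |d| d[:approved] }.fdiv(ds.size)
end
# => {a: 0.75, b: 0.25}

# The "four-fifths rule" used in US employment law flags a selection rate
# below 80% of the highest group's rate as evidence of adverse impact.
puts "possible adverse impact" if rates.values.min / rates.values.max < 0.8
```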
00:09:15.680 In many Talmud classes, a source sheet is provided. I've compiled a source sheet at this URL; you don't have to look it up right now, since everything will be on the slides, but if you're interested in exploring the topic further later, you can find it there.
00:09:50.960 Let's look at one approach to answering this question by examining the Talmudic concept of the "shaliach," which translates as messenger or agent. In Talmudic law, an agent is more than just someone doing you a favor. The Talmud states that the legal status of an agent is effectively that of the person being represented: if I appoint someone as my agent to execute a sale, it's as if I made the sale myself. Back in the days before Zoom, you could even appoint an agent to marry a woman on your behalf; obviously, she wouldn't become the agent's wife, but yours.
00:10:38.640 The concept of a shaliach is therefore powerful. If I program an algorithm and send it out into the world, the algorithm would seem to be my agent, which suggests that I would be liable for what it does. However, the Talmud also states that you cannot appoint an agent to do something wrong on your behalf.
00:11:17.360 For example, if I pay someone $100 to steal a car, the agent will be held liable for the theft, not me, because the agent is expected to know right from wrong. The Talmud's reasoning is that the agent has free choice and answers to a higher authority. It's like at work: if your manager tells you to focus on one task while a teammate suggests another, you listen to your manager, because the manager is responsible for your paycheck. Likewise, when the sender says 'steal' but God has already said 'do not steal,' the agent is expected to obey the higher authority, so the liability falls on the agent.
00:12:04.240 This mirrors exactly the reasoning companies use when offloading responsibility onto their algorithms: 'I didn't do it; it was the algorithm.' However, this reasoning is flawed when applied to algorithms. An agent cannot transgress on your behalf precisely because they are expected to have the capacity for moral judgment. Algorithms, at least the ones we currently have, before we reach true artificial intelligence, do not comprehend right from wrong. They're programmed to make decisions based solely on data, without grasping the moral implications.
00:12:54.240 This means that while an algorithm cannot be held responsible as an independent agent, it could still be viewed as a tool. It's akin to shooting an arrow at another person: you cannot say it wasn't you, it was the arrow. Even though the arrow operates at a distance, you're responsible for the damage it does.
00:13:30.160 Yet algorithms might seem fuzzier, because even the programmers who design them can't predict their specific decisions. Despite this unpredictability, the Talmud has material relevant to this dilemma too. If someone lights a fire in their own yard and it spreads to a neighbor's yard and causes damage, they can't excuse themselves by claiming they didn't anticipate exactly how it would spread.
00:14:07.360 According to the Talmud, a fire is like an arrow in this context because it is predictable. If you have a fire and wind, it will spread in the direction of the wind. Thus, if someone creates a fire, they are liable for the damage even if the specific outcome (where it spreads) wasn't anticipated. There is, however, an edge case: if someone lights a fire safely but an unexpected, unusually strong wind carries it to cause damage, they will not be held liable because it constitutes something beyond the ordinary, much like an act of God.
00:15:11.680 One might argue the same about an AI I programmed: its outcomes were unpredictable, it made its own decisions beyond my control, so I shouldn't be liable. But as we said, an AI does not possess the moral agency to differentiate right from wrong, even though it does make decisions within its domain that are unpredictable to me. That raises the question of whether there is a category somewhere between full agency and no agency at all.
00:16:13.680 The Talmud states that if someone hands a flame to a child or to someone who is cognitively impaired, and damage results, that person is exempt under human law but still liable under heavenly law. This principle covers scenarios where an act is technically legal but morally wrong: a human court wouldn't convict, because the chain of causation is too indirect, yet the person knows they acted irresponsibly by handing a dangerous object to someone incapable of judgment.
00:17:10.720 This provides a possible middle ground for algorithms. Companies deploying algorithms might satisfy their legal obligations on a technical level and still bear moral responsibility. These companies should therefore do the diligence needed to ensure their technology causes no harm. I may not have the answers yet, but I believe responsibility should lean toward the companies, and these are conversations we need to start having now.
00:18:01.320 Our days of theoretical philosophical discussions are well behind us; these questions are now integral to the real world we live in. When thinking about such thorny questions, you may find yourself asking, what can I do as an individual contributor? You might feel powerless regarding decisions within your organization or on a global scale and wonder why it’s essential to engage in these discussions.
00:19:07.920 This brings me to one of my favorite Talmudic quotes: Rabbi Tarfon once said, 'It is not your duty to finish the work, but you are not at liberty to neglect it.' This work may be bigger than each of us individually, but that doesn't absolve us from doing our part. Consider all the immense projects humanity has accomplished — landing a man on the moon, building space labs, skyscrapers, massive dams — all these were produced not by a singular individual but by the collective efforts of thousands.
00:19:52.960 In the same way, we're all part of a significant collective effort to improve the world and leave it better than we found it. The world is moving in the right direction, but we must each play our part to leave it a little better than it was before. If anyone has questions, it looks like we still have time.
00:21:13.920 If not, I'm around for the rest of the conference, which isn't much longer, and I'm always available online, @yechielk on Twitter, or through email if that's easier for you. I'll briefly mention that if you're interested in questions about the intersection of Talmud, religion, ethics, and technology, my friend Ben Greenberg and I write a weekly newsletter, Torah && Tech, discussing these topics, and we host a small Discord community at torahandtech.dev. Thank you so much.
00:22:37.440 Good question. To repeat for the camera: Are we morally obligated to leave jobs if they engage in things we find morally unacceptable? This is not an easy question, as leaving a job is a privilege that not everyone possesses. Yet, significant atrocities have occurred when individuals simply did their jobs. Ultimately, the decision rests on your assessment of how morally questionable the work is and your privilege to leave.
00:24:01.520 In an ideal world, everyone would be able to make that decision, yet life isn’t black and white.
00:24:17.440 Thank you so much for engaging with me!
00:24:20.000 From a Talmudic perspective, is it better to stay and try to effect change from within, or to leave as a protest? It's not a straightforward answer; it really depends.
00:24:23.920 If you're in a position to effect change from within, that would be ideal. Your ability to influence change and work collectively can hinge on your circumstances.
00:25:00.960 Thank you, everyone!