
Summarized using AI

Breaking the Chains of Oppressive Software

Kinsey Ann Durham • February 22, 2019 • Tegernsee, Germany

In her presentation titled "Breaking the Chains of Oppressive Software," Kinsey Ann Durham discusses the critical issue of biases embedded within algorithms and the broader implications for society. She begins by asserting the necessity of recognizing how these biases manifest in various forms of software that influence significant life decisions. The key points she outlines include:

  • Understanding Bias: Bias is defined broadly as an inclination or prejudice towards particular groups or ideas. Durham emphasizes that the focus should be on biases in algorithms rather than solely on unconscious human biases in tech companies.

  • Impact of Algorithms: Algorithms are increasingly pivotal in operations across sectors such as education, justice, and social services. They can both enhance convenience and exert control over lives, thereby necessitating scrutiny for potential biases.

  • Examples of Bias:

    • Search Engine Bias: Durham shares a video highlighting sexist and racist results from Google search algorithms, revealing how these biases are not incidental but systematic.
    • Image Recognition Flaws: She cites the incident where Google Photos mistakenly tagged people of color as gorillas, a failure rooted in biases in the image recognition algorithms.
    • COMPAS Algorithm: This algorithm predicts recidivism rates for defendants, but its black box nature raises due process concerns as its bias can affect sentencing dramatically.
    • Recruitment Tools: Amazon's scrapped AI recruiting tool, which favored male candidates, illustrates the dangers of algorithmic bias in employment.
  • Root Causes of Bias: Durham argues that biases begin at the data collection stage, and the historical reliance on binary classifications in programming perpetuates these biases into modern AI systems.

  • Call to Action: To tackle these issues, she encourages developers to engage in discussions about ethical algorithm design, advocate for algorithmic accountability, promote diversity in tech, and be aware of personal biases. She suggests practical steps such as taking the implicit bias test and implementing bias reduction techniques in code reviews.

Durham concludes by asserting that developers have the power to create a more equitable tech landscape and emphasizes the importance of addressing the moral implications of their work. The session serves as a powerful reminder of the responsibility developers hold in ensuring ethical use of technology, with an invitation for them to leverage their skills towards creating inclusive and just systems.

Breaking the Chains of Oppressive Software
Kinsey Ann Durham • February 22, 2019 • Tegernsee, Germany

We have the power to stand up against oppression that exists in our software. Data discrimination, biases in algorithms, etc. are becoming an issue. You’ll learn about the biases we build, software that is bettering us and what you can do about it, as a developer, to truly make a difference.

By Kinsey Ann Durham https://twitter.com/@KinseyAnnDurham

Kinsey Ann Durham is an engineer at DigitalOcean working remotely in Denver, CO. She teaches students from around the globe how to write code through a program called Bloc. She co-founded a non-profit called Kubmo in 2013 that teaches and builds technology curriculum for women’s empowerment programs around the world. She also helps run the Scholar and Guide Program for Ruby Central conferences. In her free time, she enjoys fly fishing and adventuring in the Colorado outdoors with her dog, Harleigh.

https://rubyonice.com/speakers/kinsey_ann_durham

Ruby on Ice 2019

00:00:11.940 Come back everyone! Hello and welcome. Now the first speaker of this year is Kinsey. She's an engineer at DigitalOcean.
00:00:19.720 She is also the co-founder of Kubmo, a non-profit that helps women find their way in tech. Today, she is going to tell us about breaking the chains of oppressive software. Please give her a warm welcome!
00:00:48.420 Okay! Can everyone hear me? Awesome! Great! Hi everyone, good to see all of you. I'm Kinsey, and I am based in Denver, Colorado.
00:00:51.120 I spend a lot of time in Vail, Colorado, which I don't know if any of you have heard of. It's a mountain town about two hours outside of Denver, and it was actually modeled after Bavaria.
00:01:05.260 Here is a picture of the land. I have to say it looks pretty similar! I'm really excited to be here. I spend a lot of time in Vail because I really enjoy fly fishing. I don't know if any of you have ever been fly fishing or heard of it, but I really like it.
00:01:30.440 When I'm not coding or writing, I'm often in the mountains. This is my dog, Harleigh, at a lake near Vail. So, if you're ever in the United States, come say hi in Colorado; it's beautiful! But I have to say, it's a lot prettier here, maybe just because the mountains are a lot taller.
00:01:47.200 I work for a company called DigitalOcean. Who here has heard of DigitalOcean? Wow, that's really awesome! Do you guys use it to deploy your software? Even if you haven't used it, at least you've heard of DigitalOcean.
00:02:06.300 Prior to this, I worked at a company called GoSpotCheck, where I was part of a team that used machine learning to build image recognition software. I'll talk a little bit about AI and those kinds of topics, so I wanted you guys to know that I've been in that world before.
00:02:24.700 Now, I'm going to stop talking about myself, and here's what we'll go through for the next 20 to 30 minutes.
00:02:30.069 I’m going to talk about biases. A bias is an inclination or prejudice for or against someone or something. You may think that I'm going to come up here to talk about unconscious biases in tech companies.
00:02:40.959 It's a great topic, but we've definitely heard about it before. I really think it’s more than that; it’s actually the biases in our code, specifically in our algorithms.
00:03:02.100 Algorithms are becoming increasingly important. They're making significant decisions in our societies. It's not just search engines; they impact everything from online review systems to educational evaluations, the operation of markets, political campaigns, and even how social services like welfare and public safety are managed.
00:03:25.350 They are driven by data and continually influence our daily lives. A lot of times, I’ve heard algorithms referred to as black boxes because we put something in and get something out, making it impossible to see what’s going on inside.
00:03:39.349 Algorithms have done a lot of good things. They make a lot of things easier, like Lyft and Uber, and different services like that. However, it’s crucial for us to be aware that these algorithms can make mistakes and exercise power over us.
00:04:03.230 My intent here isn’t to demonize algorithms, as they are beneficial and have done much for us. Yet, we must recognize that they operate with biases, just like the rest of us.
00:04:20.350 There are two schools of thought surrounding algorithms: one is that they are rational and objective— but are they really? Is the data we input into them objective? Are they built with the idea that they need to be objective?
00:04:35.610 The other school of thought is that algorithmic reasoning is fundamentally flawed. I truly believe that they are here to stay, regardless of what anyone thinks, but biased algorithms can be dangerous.
00:04:52.730 Now, I want to get into how algorithms have reflected biases with a few examples to share with you all. First, I'm sure all of you have used Google Search, typed something into Google, and noticed its results.
00:05:07.890 Here is a quick video that I wanted to share from a campaign by UN Women. It highlights how poor Google search algorithms can be.
00:05:22.980 [Video Clip]
00:06:42.060 Yeah, I was an advertising major in college before I learned how to code, and I found that campaign really cool and powerful. It brought to light how problematic Google search algorithms can be.
00:07:07.940 It wasn't just about women; you can search for various things and encounter very sexist and racist results. Google uses a technology called Word2Vec, which is a pretty cool innovation, but obviously has flaws.
00:07:26.630 Back in 2013, researchers at Google trained a neural network on Google News text, producing vectors for roughly 3 million words and phrases. The goal was to identify patterns in how words appear together.
00:07:55.170 The system placed words with similar meanings close together in a 300-dimensional vector space, and that is where the biases became apparent.
00:08:41.920 For example, the model completes the analogy 'man' is to 'computer programmer' as 'woman' is to 'homemaker'. Analogies like this reveal blatant sexism, and similar queries surface racist associations as well.
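To make that concrete, the analogy query can be reproduced in a few lines against the publicly released Google News vectors. The sketch below is illustrative only: it assumes the gensim library and a locally downloaded copy of those vectors (the file path is a placeholder), and it uses the underscore-joined phrase token "computer_programmer" found in that corpus.

```python
# Minimal sketch of an analogy query against pre-trained word2vec vectors.
# Assumes the Google News vectors have been downloaded locally; the path
# below is a placeholder, not something from the talk.
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True
)

# "man" is to "computer_programmer" as "woman" is to ... ?
# Solved as vector("computer_programmer") - vector("man") + vector("woman").
results = vectors.most_similar(
    positive=["computer_programmer", "woman"],
    negative=["man"],
    topn=3,
)
for word, similarity in results:
    print(word, round(similarity, 3))
# Published analyses of these vectors report "homemaker" among the top
# answers, which is the stereotype the talk describes.
```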
00:09:05.330 Another example involves biases in image recognition. A while ago, Twitter exploded when Google Photos tagged people of color as 'gorillas', which was undeniably offensive.
00:09:20.070 Rather than addressing the algorithm's flaws or the underlying data, Google's solution was simply to remove gorillas and other primates from the labels the system could apply.
00:09:39.920 This approach received significant backlash from various media. There are many instances of similar biases against people of color, and addressing these issues will only grow increasingly urgent.
00:09:57.700 As technology such as facial recognition software becomes more prevalent in law enforcement, border security, and even hiring, the stakes are high.
00:10:24.950 Another troubling example was with Beauty AI, an initiative heavily backed by Microsoft. This AI was designed to hold a beauty contest judged by robots using three algorithms trained with deep learning.
00:10:55.960 The system was not supposed to judge entrants on skin color, yet despite receiving over 40,000 submissions, the finalists were predominantly white or light-skinned.
00:11:15.500 The organizers blamed their dataset for the outcome, but the glaring issue lies within the algorithm that favored specific appearances.
00:11:31.120 There's also an example of bias in LinkedIn's search engine, which would often suggest male names when users searched for female ones.
00:12:01.990 If you were to search for 'Andrea Williams', it might ask if you meant 'Andrew Williams', even if you have a connection named Andrea.
00:12:35.580 LinkedIn defended itself by saying that more men use the platform than women, but perhaps that is partly a result of how women experience their interactions on LinkedIn.
00:12:54.170 Additionally, biases exist within courtroom algorithms, which is quite alarming. The COMPAS algorithm is one of these secret systems, designed to predict the likelihood that someone will reoffend and is used during sentencing.
00:13:19.290 The system requires defendants to answer 137 questions, and their risk of committing another crime is assessed through this algorithm.
00:13:42.670 For instance, Eric Loomis was sentenced to six years in prison based on the COMPAS algorithm's assessment of him being at high risk for reoffending. However, the factors the algorithm considered are unknown, as it operates as a black box.
00:14:12.550 He appealed this ruling, claiming it violated due process since the inner workings of the algorithm remained undisclosed. This case reached the Wisconsin Supreme Court, which ruled against him.
00:14:30.150 Although they noted the sentence would have been the same regardless of the algorithm used, the skepticism surrounding COMPAS endures. Should we be using a predictive algorithm like this when we know bias may exist?
00:14:53.620 As more courts adopt software like COMPAS for sentencing, the biases we have already seen suggest this is a dangerous practice, one that threatens the integrity of the judicial system.
00:15:10.010 The director of a criminal law reform project has expressed concern that predictive algorithms could further accentuate existing inequalities in our justice system. He also pointed out that the data used in this sphere is often unreliable.
00:15:31.860 Now, let’s turn our attention to recruiting algorithms. Amazon tested an AI recruiting tool that ended up favoring male engineers. They utilized 500 computer models with 50,000 search terms and used scraped resumes from LinkedIn and other sources.
00:16:04.920 As a result of the biased algorithms, Amazon decided to scrap the entire recruitment project instead of fixing the biases within their system.
00:16:24.160 The situation raises concerns about how many companies might be implementing similar biased algorithms silently.
00:16:49.640 Now, one of the problems exacerbating these issues is the Computer Fraud and Abuse Act, which effectively makes it illegal to investigate algorithms for bias: researchers can face criminal charges for creating multiple test accounts to examine how results vary.
00:17:03.130 As a result, the ACLU, together with researchers and journalists, has filed a lawsuit challenging this law. Moreover, the lack of diversity within tech companies contributes to the perpetuation of biases.
00:17:29.300 Research and engineering teams often lack diverse representation, and those teams' inherent biases are mirrored in the software they create.
00:17:51.780 The datasets we rely on also possess their own biases, indicating that no matter how advanced our algorithms are, they can’t rectify biased data.
00:18:10.840 A question arises: is the binary system itself biased? The binary system represents the strings of ones and zeros foundational to all computer systems. It enables efficient calculations.
00:18:51.470 You might think bias in the tech industry stems from Silicon Valley tech bros, but the issue traces back to ancient philosophers like Aristotle, who posited duality.
00:19:06.350 These binary classifications, as Aristotle defined them (finite/infinite, odd/even, man/woman), continue to dominate AI today. His ranking of dualities is echoed by millions of engineers.
00:19:44.470 Considering the moral implications of the binary classification is essential since it often fails to address ethical dilemmas. We're living in a world of ones and zeros, with little room for nuanced understanding.
00:20:13.740 The German philosopher Leibniz developed the binary system to reach yes/no verdicts faster and to represent large numbers compactly. Rather than establishing a coherent universal system, it entrenched Aristotle's duality.
00:20:38.570 Now, natural language processing (NLP) frameworks and complex machine learning systems rely heavily on binary representations of data.
00:20:58.430 This introduces biases that can obscure the rationale behind human decision-making. Existing frameworks should be able to adapt to remedy these biases, yet engineers often operate strictly within the constraints of that binary logic.
00:21:42.990 Statistics illustrate how widespread the adoption of AI has become. Currently, 33% of enterprises utilize AI, with predictions indicating continuous growth. A staggering 1,400% increase in active startups using AI has been noted since 2000.
00:22:01.660 55% of HR managers state that AI will become routine in five years, which raises alarm bells considering companies like Amazon have faced substantial hurdles with their algorithms.
00:22:24.330 Predictions state that global AI spending could increase to $7.3 billion annually by 2022, highlighting ongoing investments in AI for business differentiation and faster service delivery.
00:22:54.870 However, the chief victims of this technological advancement will be impoverished communities and minorities. Researchers believe this technology may exacerbate societal disparities.
00:23:15.430 As societies continue to evolve with technological advancements, certain groups will undoubtedly suffer. The implications of AI on issues of geopolitical conflict are significant.
00:23:37.800 AI plays a crucial role in many life-altering decisions, such as whom to interview for jobs, who gets parole, and credit approvals. It's embedded in popular technologies, including speech recognition, Google services, Netflix recommendations, and Amazon's algorithms.
00:23:58.590 Imagine a future where algorithms prevent innocent people from boarding flights or escalate wrongful insurance premium hikes—these risks are very real.
00:24:17.200 As we engage in this crucial dialogue, we must recognize that technologists are now shaping political and economic structures through their work.
00:24:40.720 With this understanding, we must ask ourselves how the growing reliance on algorithms might intensify societal prejudice, discrimination, and intolerance of differences.
00:25:00.290 So what can you do about it? One potential solution is designing non-binary categorization systems within AI. However, practicality becomes a concern, as most of us rarely have enough time to tackle such endeavors.
00:25:19.680 We need to emphasize building programs that implement algorithms ethically and responsibly, which includes asking whether morality can even be programmed in.
00:25:34.240 It’s crucial for us to begin discussing and acting on these questions. We can advocate for policies supporting research into AI ethics, emphasizing the importance of guidance when deploying algorithms.
00:25:53.400 We also have the opportunity to get support from policymakers who recognize the importance of keeping AI and science in check and who can influence legislation that promotes the ethical use of algorithms.
00:26:19.930 We can establish a code of conduct for AI, much like those already used in other domains such as open-source projects and conferences, to ensure that testing is done within ethical bounds.
00:26:36.360 Fostering diversity within the engineering field responsible for creating algorithms is equally important. The more inclusivity we have, the more awareness can develop against inherent biases.
00:26:55.380 Moreover, making ourselves aware of our personal biases contributes positively. I highly suggest taking the implicit association test available online. My results surprised me, revealing biases I didn't even realize I had.
00:27:10.720 Incorporating bias reduction processes in our code reviews is another vital avenue. Mozilla now offers a Firefox extension to anonymize pull requests, allowing contributors to focus solely on code quality.
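The idea behind that kind of anonymized review is simple enough to sketch. The snippet below is a hypothetical illustration, not Mozilla's extension: it strips author-identifying fields from a pull request payload (the field names follow the GitHub REST API) so a reviewer sees only the change itself.

```python
# Hypothetical sketch of "blind" code review: hide who wrote a pull request
# so the reviewer judges the code alone. Not Mozilla's actual extension.
def anonymize_pull_request(pr: dict) -> dict:
    """Return a copy of a pull request payload with author identity removed."""
    redacted = {k: v for k, v in pr.items() if k not in {"user", "author_association"}}
    redacted["user"] = {"login": "anonymous"}
    return redacted

pr = {
    "title": "Speed up report generation",
    "body": "Caches the expensive query instead of re-running it per row.",
    "user": {"login": "some_author", "avatar_url": "https://example.com/a.png"},
    "author_association": "MEMBER",
}

print(anonymize_pull_request(pr)["user"])   # {'login': 'anonymous'}
```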
00:27:39.660 We must also demand algorithmic accountability. Leading tech companies like Google, Microsoft, and IBM are beginning to express their commitment to this cause.
00:28:05.720 There are corporations emerging focused solely on auditing algorithms, which is a refreshing development. You can also leverage fairness tools developed by research organizations like Accenture and the Alan Turing Institute to combat biases.
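Those toolkits differ in scope, but many begin with the same simple checks. As an illustration only (not any particular vendor's tool), the sketch below computes one common measure, the demographic parity gap: the difference in positive-outcome rates between groups, using made-up data.

```python
# Illustrative fairness check: demographic parity gap between groups.
# The decisions below are invented data, not from any real system.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

rates = selection_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates)                                     # {'group_a': 0.75, 'group_b': 0.25}
print("demographic parity gap:", round(gap, 2))  # 0.5
```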
00:28:35.510 Be transparent regarding your datasets and algorithms; don’t let them remain black boxes. Your stakeholders should encourage this type of openness to address concerns.
00:28:54.860 How can we ensure that our algorithms are accountable? First, can we trust our own data? Bring these questions to the forefront in meetings, code reviews, and project conversations.
00:29:12.880 Engaging in ethical discussions around AI and integrating these values into our planning and execution stages promotes conscientiousness in our technology.
00:29:37.040 So to summarize, it’s critical to address the ethical and moral implications of our code. Our responsibility as developers carries weight, and we possess the power to invoke change.
00:29:52.400 Let’s use our power wisely, set a precedent for future generations of developers, and strive for a world that is just and equitable.
00:30:15.980 Thank you for listening, and thanks to Ruby on Ice for hosting such an amazing conference! Here are some articles and papers I referenced, and I can send out the implicit bias test if anyone is interested.