Joy Buolamwini: Algorithmic Bias


Joy Buolamwini is a Ghanaian-American computer scientist at MIT who researches algorithmic bias in computer vision systems, focusing on identifying bias in artificial intelligence and practices for accountability. She’s also the founder of the Algorithmic Justice League. Wildly accomplished, Joy has seen her research on this subject covered in over 40 countries.

In this episode, we discuss how the tech industry has a tendency to center the potential of technology while often overlooking existing gaps and outcomes that perpetuate harm. In Joy’s case, she identified racially biased facial recognition software that couldn’t detect her face due to outdated code and skewed training data. Working to rewrite the practices that perpetuate harmful bias, Joy calls herself a “poet of code.” Listen to learn how Joy’s discoveries are evidence for why developing ethical software matters.


Transcript

Dawn Mott:

Welcome to Humanizing Tech. We interview people to dig below the surface of their achievements and challenges, showcasing the story behind the story. We believe that focusing on the person and humanizing their lived experiences will help us shape the future of tech.

Ochuko Akpovbovbo: 

Welcome to Humanizing Tech Innovators. These short segments celebrate individuals from the past who have changed the world through technological innovation and young people today who are shaping the future of tech for all of us. Before we get started, we want to acknowledge the ground we're all on, wherever we're tuning in from. PDXWIT has events all over Portland, Oregon, and beyond, and we want to pause to acknowledge the history of the area and work towards decolonization of the tech industry. Portland rests on traditional village sites of the Multnomah, Cathlamet, Clackamas Chinook, Tualatin, Calapooya, Molalla, and many other tribes who made their homes along the Columbia River. So please join us in a moment of acknowledgement for the land we're on. If you'd like to learn more about PDXWIT’s action related to land acknowledgement, please visit our website. We'll add a link in the show notes as well.

Ochuko: 

Hello everyone. It's Ochuko here, she/her, and we're glad to have you on another episode of Innovators.

Anusha Neelam: 

Hello everybody. This is Anusha here, she/her. I am so excited to talk about our innovator today. I have been a huge fan of hers for a while now. The first time I heard of her was a couple of years ago. I was in a room full of like 200 people at a tech conference, and someone doing a presentation played this video called "AI, Ain't I a Woman?" I think I might've referenced this video in a previous segment as well, but it was such a powerful message. And I think listening to it with that many people in the room made it that much more real and raw. I can still remember looking around the room and watching everybody's faces as we were watching it.

And I've never seen anything capture that many people's attention like that before. We'll talk a little bit more about that video and what it showcases, but it was amazing. It was also incredible to see a message instantly get that many people's attention so quickly. So by now some of you may know who I'm talking about. On today's episode, we're going to be talking about Joy Buolamwini. So why don't we go ahead and just dive right in. Joy Buolamwini is a Ghanaian-American computer scientist at MIT who researches algorithmic bias in computer vision systems. She's also the founder of the Algorithmic Justice League. And by the way, I just want to stop right here to say how cool that is. It's like a superhero group that fights for equality in the algorithms.

Ochuko: I know right? It’s pretty cool. I love it. 

Anusha: Yeah. I just love the name. But anyway, at the Algorithmic Justice League, Joy focuses on identifying bias in artificial intelligence and develops practices for accountability. Her TED Talk on algorithmic bias has been viewed over 1 million times. Her MIT thesis uncovered large racial and gender biases in AI services from companies like Microsoft, IBM, and Amazon. I actually read a little bit about her research on Amazon's recruitment algorithm, which essentially found men to be the ideal candidates. Surprise, surprise. What it was doing was basically sweeping through resumes to find words associated with women, whether it was women's names or women's colleges, things like that, and downgrading those resumes. Joy jokingly says that algorithms can be weapons of math destruction, but joking aside, that's more or less the truth.
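
To make that concrete, here is a minimal, hypothetical sketch of the mechanism, not Amazon's actual system: a model trained on biased historical hiring outcomes learns to penalize words associated with women without anyone ever writing a sexist rule by hand.

```python
# Toy illustration of learned resume-screening bias.
# The resumes, labels, and vocabulary here are all made up.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# "Historical" training data: past hires (label 1) happen to skew male.
resumes = [
    "captain of men's chess club, software engineer",
    "men's soccer team, backend developer",
    "captain of women's chess club, software engineer",
    "women's soccer team, backend developer",
]
hired = [1, 1, 0, 0]  # biased past outcomes, not job-relevant merit

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The model assigns a negative weight to the token "women": it has
# learned the bias baked into the historical labels.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
print(f"weight for 'women': {weights['women']:.2f}")  # negative
print(f"weight for 'men':   {weights['men']:.2f}")    # positive
```

The point is that the bias lives in learned weights inherited from the labels, not in any line of code a reviewer could flag, which is why it can slip through unnoticed.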

When we find that AI is reflecting the same biases as our society, it can have detrimental consequences, and I think that's something a lot of people may not be actively thinking about. And by the way, for those of you wondering how this happens, Joy explains how she dug into where it was stemming from, and we'll cover that a little bit later in the segment. But what engineers do in order to teach machines is feed them large data sets so that the machines can start learning what, say, a face looks like, and the technology gets configured accordingly. So if the data doesn't show variation, the system doesn't learn to detect it; it doesn't learn that there can be variation. I just wanted to explain where that problem is rooted, and the sketch below makes it concrete.
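
Here is a minimal sketch of that rooted problem, using synthetic numbers rather than real faces (this is not any vendor's actual model): a classifier trained on data dominated by one group learns that group's patterns and errs far more often on the group it rarely saw.

```python
# Toy demonstration: skewed training data -> skewed error rates.
# Everything here is synthetic and illustrative, not a real face detector.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """One toy feature per example; `shift` stands in for systematic
    differences (skin tone, lighting) between demographic groups."""
    X = rng.normal(shift, 1.0, size=(n, 1))
    y = (X[:, 0] > shift).astype(int)  # each group's own ground truth
    return X, y

# Skewed training set: 950 examples from group A, only 50 from group B.
X_a, y_a = make_group(950, 0.0)
X_b, y_b = make_group(50, 4.0)
model = LogisticRegression().fit(
    np.vstack([X_a, X_b]), np.concatenate([y_a, y_b])
)

# Fresh, balanced test sets expose the gap the skewed training created.
X_at, y_at = make_group(500, 0.0)
X_bt, y_bt = make_group(500, 4.0)
print(f"group A accuracy: {model.score(X_at, y_at):.0%}")  # high
print(f"group B accuracy: {model.score(X_bt, y_bt):.0%}")  # much lower
```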

Ochuko: Yeah, and you know, for me, being Nigerian, I had a lot of fun researching Joy and having that pride in seeing a woman in tech who has African roots, a Black woman, doing all this amazing stuff.

So I just want to point out that, you know, it's really cool that when people like this have opportunities, they can really make change for people just like them. And so I think Joy is amazing. Her research has actually been covered in over 40 countries, and she has championed the need for algorithmic justice at the World Economic Forum and the United Nations. Joy's research explores the intersection of social impact, technology, and inclusion. Her subsequent New York Times op-ed on the dangers of facial analysis technology really galvanized lawmakers to investigate the risks posed by this technology. And I remember a while ago seeing this all over tech news, and just news in general, and I didn't really know where it was coming from, but I knew it was a very important subject. So researching Joy and going back to see that in many ways it started with her is really amazing and really empowering, like I said, as a Black woman.

And as a college student, Joy discovered that some facial analysis systems couldn't detect her dark-skinned face until she donned a white mask. How wild is that? She says, "I was literally not seen by technology." That is what sparked her MIT graduate thesis, when she found that existing data sets for facial analysis systems contained predominantly pale-skinned and male faces. Joy created a gender-balanced set of over a thousand politicians from Africa and Europe. When determining gender, the error rates of these systems were less than 1% for lighter-skinned male faces, but for darker-skinned female faces, the error rates were as high as 35%. Wow. I'm just going to pause for that to sink in, and for us to acknowledge just how scary that is, especially for a world that's moving towards AI in so many ways, such as education and the criminal justice system. Knowing that there's such a high error rate is really scary.
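
For anyone curious how numbers like that are computed, here is a minimal sketch of a disaggregated audit in the spirit of Joy's work. The rows and field names below are invented placeholders; her actual benchmark used over a thousand real photos scored by commercial APIs.

```python
# Computing per-group error rates instead of one aggregate number.
# Records are placeholders for (benchmark image, model output) pairs.
from collections import defaultdict

results = [
    {"group": "lighter-skinned male",   "true": "male",   "pred": "male"},
    {"group": "lighter-skinned female", "true": "female", "pred": "female"},
    {"group": "darker-skinned male",    "true": "male",   "pred": "male"},
    {"group": "darker-skinned female",  "true": "female", "pred": "male"},
    # ... one row per image in the benchmark
]

totals = defaultdict(int)
errors = defaultdict(int)
for r in results:
    totals[r["group"]] += 1
    errors[r["group"]] += int(r["pred"] != r["true"])

# A single overall accuracy would hide the disparity; breaking errors
# out by subgroup is what surfaced the <1% vs. 35% gap.
for group, n in totals.items():
    print(f"{group}: {errors[group] / n:.1%} error rate")
```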

Anusha: 

And so, yeah. In Joy's TED Talk, she actually shares live video footage of herself testing various facial recognition software. What she does is sit in front of the software, just herself, and it is not able to detect her face. But then she shows how she puts a white mask on and it instantly detects her face. She also shows the same kind of example with her white coworkers and how easily it's able to detect their faces. And one of the things that I love, and just find so interesting about Joy's work, is how she has found this very unique way to share this major fault in AI so clearly. I think sometimes when we talk about the faults in technology, it's not always easy to show where they might be happening or what's occurring; it's not blatantly obvious. But I've had conversations with family members after going through Joy's work, just kind of explaining this finding.

And it's very easy for the general population to understand how severe this is without having to get into the weeds of it. So I just like how she's found these various platforms and the various kinds of talks she's given over time that have really allowed her to showcase this. I think that's really cool.

Ochuko: Yeah. And you know, it's amazing to know that Joy serves on the Global Tech Panel convened by the vice president of the European Commission to advise world leaders and technology executives on ways to reduce the harms of AI. In late 2018, in partnership with the Georgetown Law Center on Privacy and Technology, she launched the Safe Face Pledge, the first agreement of its kind, which prohibits the lethal application of facial analysis and recognition technology.

Anusha: In some cases, errors made by AI can be more of an annoyance, like when you get wrongly labeled in a picture on social media. But with a growing number of fields starting to rely on AI, specifically law enforcement, which has started using it for predictive policing, like Ochuko mentioned earlier, and judges who are using it to determine whether prisoners are likely to re-offend, the opportunities for injustice are incredibly frightening. In one of Joy's podcast appearances, she shared another really frightening statistic: one in two adults in the U.S. have their faces in a facial recognition network. That's 50% of the United States population. So we are gathering all this data and information from people's faces, but we don't even have accurate systems in place, and that is really, really scary. Joy credits her parents as being her biggest mentors. Her mother is an artist and her father is a professor of medicinal chemistry, and she says growing up, she saw art and science as one. She considers herself a poet of code.

As a creative science communicator, Buolamwini has written op-eds on the impact of artificial intelligence for publications like Time magazine and the New York Times. The annual Women's Media Awards recognized Buolamwini as the Carol Jenkins Award recipient for her contributions in dissecting the coded gaze and its impact on gender equality. In her quest to tell stories, she released her poem "AI, Ain't I a Woman?", which shows AI failures on the faces of iconic Black women like Oprah Winfrey, Michelle Obama, and Serena Williams. This poem, as I mentioned in the beginning, gave me all of the feels. I was sad. It was mind-blowing. But at the same time, it was shocking that I hadn't heard about this through any other channel or network before. We will link it in the notes so that all of our listeners can tune into it, and I really urge everybody to listen. It made a huge impact on the way I saw things, so I definitely recommend it.

Ochuko: As we go through this series, one thing I'm realizing is that all these women we profile are amazing in so many ways, from tech to poetry to art, and it's really beautiful to see these amazing people able to do so many things and to communicate their work through so many different mediums. Right? I know on our last episode, the person you talked about used TikTok, which was really cool. And so, you know, seeing Joy use poetry and writing to express her work is really, really powerful, right?

Anusha: Yeah. I feel like we no longer have to feel like we need to be in one box or another. I think what's unique about all of the innovators we've covered so far is that they are using multiple platforms to get their messages across. They're using multiple techniques to get people to understand the flaws within technology, and I think that's really incredible, because I do think there's an intersection between a lot of things and technology. So the fact that people are using that in a cross-functional way is really inspiring.

Ochuko: Yeah. And to add onto that, I love how there's this trend of people bringing humanity into all this, because I think traditionally tech is this very black-and-white, rigid thing: the algorithm will tell you what the data says, and that's that. And it's interesting to see people say, yes, but also, as a person, I can see that this is wrong and this is right.

And this technology evokes feelings in me, and I'm going to communicate that without the fear of being seen as weak or overly emotional or whatever it is. And so that's something I really appreciate about, you know, the women that we talk about here.

Anusha: Yeah. I think the great thing about women finding their passions in technology is that they are allowing those around them to see the power in emotion, which I've always felt has been considered a passive quality in tech. But the more we see these women step up and make their point and, like you said, maintain that humanity and show that they're human, I think it's only making our products in the tech space better.

Ochuko: Yeah, I agree a hundred percent with that. All right, let's get back to talking more about Joy and her amazing accomplishments. As a Rhodes Scholar and a Fulbright Fellow, Joy has been named to notable lists, including the Bloomberg 50, Tech Review's 35 Under 35, the BBC's 100 Women, and Forbes' Top 50 Women in Tech, where she was actually the youngest person on the list. And of course, the Forbes 30 Under 30. Fortune magazine named her "the conscience of the AI revolution." Wow, I love that. And she holds two master's degrees, from Oxford University and MIT, as well as a bachelor's degree in computer science from the Georgia Institute of Technology. How amazing is she? How amazing is this woman?

Anusha: Yeah, she really is. I love that she's making appearances and writing all these pieces for various outlets, because I think it really shines a much-needed light on this particular issue. So that's amazing. So what's next for Joy? Beyond just bringing these biases to light, she hopes to develop practices to prevent them from arising in the first place, for instance, making sure facial recognition systems undergo accuracy tests. Joy makes a great point that a lot of these biases stem from using standard practices or code that was built many years ago. She says that in order to fix these gaps in technology, we need to question those standard practices and challenge the status quo. Creating ethical software is not just about fixing bad code; it's also about being conscious of and responsible for the harm that technology can cause, as we've learned today throughout our segment. One thing that stuck with me that Joy mentioned was how, in the tech industry, we have the tendency to center our conversation around the potential of technology. And while that is a great thing, and there is a lot of potential for technology and what it can do in the future,

it's also important to assess the gaps that are currently present and how we might go about fixing them. Shining a light on these sorts of gaps like she's done is a great place to start.

Ochuko: I completely agree with that. And for me, a takeaway is just how important it is to have people who are not white men in rooms where research like this is going on, where decisions like this are happening. Imagine if Joy wasn't there. Imagine if someone like Joy wasn't there. Would this be a conversation we're having now? Would anyone even be questioning why there seemed to be more recognition of white men by the AI? I mean, I'm sure it has something to do with the kind of data that's inputted and the kind of people working on the project.

And so this is just a warning to us to really invest in diversity in technology, because technology is going to be deciding a lot of our future. I mean, think about it: law enforcement, that's a pretty big deal, right? I think it's really important to have as many different voices as possible in the rooms, on deck, making these decisions.

Anusha: Absolutely, yeah. Another reason to highlight the diversity that needs to be at the table, like you said, Ochuko. Great point. My takeaway from this segment is to identify practices we may be following each and every day that are potentially perpetuating existing biases and norms we don't want to carry forward, just taking an inner look at what those might be: what are we doing every day that we're not challenging? So with that, I just want to thank everybody for tuning in to the segment today. We had a lot of fun talking about Joy, and as I mentioned, we will add a couple of links to the notes that you can check out. I highly recommend them. If any of you listening have any suggestions for innovators we can highlight in the future, please feel free to email podcasts@pdxwit.org. We would love to hear from you.

Ochuko: Thanks for tuning in!

PDXWIT is a 501(c)(3) nonprofit with the purpose of encouraging women, non-binary, and underrepresented people to join tech, and supporting and empowering them so they stay in tech. Find out more about us at www.pdxwit.org. Like this podcast? Subscribe and like us on your favorite podcast platform! Want to give us feedback? Contact us at podcast@pdxwit.org to help us improve and ensure you learn and grow from the stories you hear on Humanizing Tech.