When Emily Reid was an undergraduate at Tufts University more than 15 years ago, she was one of a few women in her class of more than 30 students taking an Introduction to Computer Science course. She was assigned a male lab partner who seemed to have a better understanding of programming than she did. He said to her, “I thought we were getting paired up with partners of similar abilities.” This moment was just one of the myriad points during her education and early career when Reid experienced overt and covert othering. Recognizing that she carries privilege as a white woman, Reid said the experience was a sign of just how deep the systemic issues in technology ran.
“I felt totally like a fish out of water,” Reid told Observer. Today, she’s anything but. As CEO of AI4ALL, a nonprofit working to educate young minorities on A.I. skills, she spearheads initiatives to build the next generation of what her organization calls A.I. changemakers, particularly those who can bring their diverse experiences to the table.
Research suggests that diverse teams are more likely to recognize and address biases in A.I. systems. Such biases can mean a technology fails to serve everyone who uses it, as evidenced by Dr. Joy Buolamwini’s work on facial recognition, which found higher error rates for people with darker skin tones. Yet, according to a 2021 UNESCO report, women account for less than one-third of employees in the tech sector and just 22 percent of A.I. workers. A 2022 McKinsey survey found that racial or ethnic minorities make up, on average, 25 percent of the teams developing A.I. solutions, and 29 percent of respondents said their organizations have no minority employees working on A.I. solutions at all.
An A.I. system may get its input from a data set that does, indeed, accurately reflect the world. “But we also live in a biased world,” said Reid. “It runs the risk of further solidifying some of those systemic biases.” For example, A.I. systems are used in the criminal justice system to predict recidivism risk, or the risk of reoffense, and those predictions may factor into sentencing. If the outputs carry bias, the consequences for defendants can be unfair.
“Everyone should be able to have the opportunity to go into A.I., and we should also be building A.I. that can actually work for everyone,” said Reid. “The A.I. field should be, at a minimum, representative of the world that we live in in terms of diversity, but it should also be focused on what values as a society we want to support.”
AI4ALL was founded in 2015 by three industry leaders: Stanford computer scientist Fei-Fei Li, Princeton computer scientist Olga Russakovsky and Rick Sommer, a mathematics and philosophy lecturer at Stanford. All three sit on the nonprofit’s board. Today, the organization focuses on transitioning students from learning about A.I. for the first time, likely in college, to entering the workforce. “Generative A.I. is exploding, and I fear that if we are using this homogenous model to define it, that next generation will be going in the wrong direction,” said Reid. But rather than dwelling on that fear, she keeps building the tracks for a train that has long since left the station.
For Reid, a few pivotal moments set her on the path of strategically diversifying responsible A.I. research, development and implementation. Born into a family of educators driven to use education as a pathway to solving problems, she comes by that drive honestly. A college mentor, now a friend and industry peer, Dr. Elena Jakubiak (currently leading machine learning efforts at SimpliSafe), shared a book with Reid that changed her understanding of the industry: Unlocking the Clubhouse: Women in Computing by Jane Margolis and Allan Fisher. She learned that women were once on track to become the majority in computer science, but their numbers fell off once men came to see computer science careers as lucrative. Later, her experience as director of education at another nonprofit, Girls Who Code, showed her how hands-on learning can have a real impact on people of all stripes.
Of course, gender is just one layer of the diversity onion, and Reid knows this. AI4ALL aims for at least 60 percent of the students in its semester-long accelerator program, dubbed AI4ALL Ignite, to be Black, Latinx or Indigenous, as well as women or non-binary. The program’s project-based learning leaves students better prepared for internships and job applications, with a tangible portfolio to show for it. One alumna of the accelerator, Maya De Los Santos, completed the program in high school and is now studying computer science at Northeastern University, where she works in the Northeastern Civic A.I. Lab, designing and researching A.I. systems that ensure fair work opportunities for Latina gig workers.
For Reid, fighting homogeneity in the field of A.I. research and development is paramount, and moments like this show that true impact is not just possible but plausible. “There are times when I can get quite dystopian about where A.I. could go,” said Reid. “My hope is that I’ll work for one of our students someday. They are so far and away from where I was in college. It is one of the things that makes me feel more hopeful.”