There are two sides to artificial intelligence (A.I.). Certainly, A.I. enables more efficient and personalized consumer experiences, such as music recommendations on Spotify or automated email filters in Gmail. But A.I. is clearly not just for fun and games. Machine learning technology can also use demographic data to speed decisions about who gets a job or a bank loan. Applying A.I. at scale, however, requires collecting, and protecting, vast amounts of demographic data.
“If your data is restricted, then the machine will give answers that are already biased,” says Tiger Tyagarajan, CEO of global professional services firm Genpact. Tyagarajan tells Susie Gharib of Fortune that given public reliance on A.I. technology, companies have a responsibility to mitigate inherent bias by expanding data sources to include multiple demographics, not just a narrow set.
For example, an A.I. engine shouldn’t make assumptions based primarily on a person’s online connections, race, or even the genres in their public Spotify playlists.
The need for diversity doesn’t end with data feeds. Companies should also hire and cultivate a diverse set of employees, representing different ethnicities, genders, and professional functions, to guide both the accumulation of data and the development of A.I. algorithms.
“We need real governance on A.I. If companies don’t collaborate on topics like bias, then the government will step in to regulate,” Tyagarajan says.
Watch the video above for more from Fortune’s interview with Tyagarajan.