Bias in, bias out: the Stanford scientist out to make AI less white and male
- Chinese-American Fei-Fei Li, an expert in deep learning who helped rewrite Google’s ethics rules, wants more women and minorities in artificial intelligence
- She says how AI is engineered, and by whom, will determine whether it helps all humanity or reinforces the wealth divide and human prejudices
Sometime around 1am on a warm night in June 2018, Fei-Fei Li was sitting in her pyjamas in a Washington, DC hotel room, practising a speech she would give in a few hours. Before going to bed, Li cut a full paragraph from her notes to be sure she could reach her most important points in the short time allotted. When she woke up, the five-foot-three-inch expert in artificial intelligence put on boots and a black and navy knit dress, a departure from her frequent uniform of a T-shirt and jeans. Then she took an Uber to the Rayburn House Office Building, just south of the United States Capitol.
Before entering the chambers of the US House Committee on Science, Space, and Technology, she lifted her phone to snap a photo of the oversized wooden doors. (“As a scientist, I feel special about the committee,” she says.) Then she stepped inside the cavernous room and walked to the witness table.
The hearing that morning, titled “Artificial Intelligence – With Great Power Comes Great Responsibility,” included Timothy Persons, chief scientist of the Government Accountability Office, and Greg Brockman, co-founder and chief technology officer of the non-profit organisation OpenAI. But only Li, the sole woman at the table, could lay claim to a groundbreaking accomplishment in the field of AI. As the researcher who built ImageNet, a database that helps computers recognise images, she’s one of a tiny group of scientists – a group perhaps small enough to fit around a kitchen table – who are responsible for AI’s recent remarkable advances.
That June, Li was serving as the chief artificial intelligence scientist at Google Cloud and was on leave from her position as director of the Stanford Artificial Intelligence Lab. But she was appearing in front of the committee because she was also the co-founder of a non-profit focused on recruiting women and people of colour to become builders of artificial intelligence.
It was no surprise that the legislators sought her expertise that day. What was surprising was the content of her talk: the grave dangers inherent in the field she so loved.
The time between an invention and its impact can be short. With the help of artificial intelligence tools like ImageNet, a computer can be taught to learn a specific task and then act far faster than a person ever could. As this technology becomes more sophisticated, it’s being deputised to filter, sort and analyse data and make decisions of global and social consequence.