US unveils artificial intelligence ‘Bill of Rights’ to safeguard civil rights from technological abuse
- The Blueprint for an AI Bill of Rights does not set out specific enforcement actions, instead serving as guidance for the federal government
- The five core principles set out in the document cite research on real-world harms from AI-powered tools, such as discrimination against Black citizens
The Biden administration unveiled a set of far-reaching goals Tuesday aimed at averting harms caused by the rise of artificial intelligence systems, including guidelines for how to protect people’s personal data and limit surveillance.
The Blueprint for an AI Bill of Rights notably does not set out specific enforcement actions, but instead is intended as a White House call to action for the US government to safeguard digital and civil rights in an AI-fuelled world, officials said.
“This is the Biden-Harris administration really saying that we need to work together, not only just across government, but across all sectors, to really put equity at the centre and civil rights at the centre of the ways that we make and use and govern technologies,” said Alondra Nelson, deputy director for science and society at the White House Office of Science and Technology Policy. “We can and should expect better and demand better from our technologies.”
It puts forward five core principles that the White House says should be built into AI systems to limit the impacts of algorithmic bias, give users control over their data and ensure that automated systems are used safely and transparently.
The non-binding principles cite academic research, agency studies and news reports that have documented real-world harms from AI-powered tools, including facial recognition tools that contributed to wrongful arrests and an automated system that discriminated against loan seekers who attended a Historically Black College or University.
The white paper also said parents and social workers alike could benefit from knowing whether child welfare agencies were using algorithms to help decide when families should be investigated for maltreatment.
Earlier this year, after the publication of an AP review of an algorithmic tool used in a Pennsylvania child welfare system, OSTP staffers reached out to sources quoted in the article to learn more, according to multiple people who participated in the call. AP’s investigation found that, in its first years of operation, the Allegheny County tool showed a pattern of flagging a disproportionate number of Black children for a “mandatory” neglect investigation compared with white children.