ACLU Statement on President Biden’s Executive Order on Artificial Intelligence
WASHINGTON — Today, President Biden issued an Executive Order on safe and equitable artificial intelligence. The order builds on foundational principles laid out in the administration’s “Blueprint for an AI Bill of Rights,” including centering civil rights and civil liberties in national AI policy. While the order makes important strides, such as requiring agencies to protect civil rights and civil liberties in any governmental use of AI, it fails to meaningfully address AI use in national security and offers insufficient protection against law enforcement uses of AI, areas of federal activity that carry some of the greatest risks.
Instead, the order exempts AI systems involved in national security, including those that routinely impact Americans in the context of surveillance, immigration, and border activities, leaving such systems subject to a future policy memorandum. Law enforcement uses of AI are left to an interagency process and a report on current law enforcement applications.
The following reactions are from:
ReNika Moore, director of the ACLU’s Racial Justice Program: “We’re encouraged that the Biden administration recognizes the need for a whole-of-government approach to address discrimination and other real-world harms of artificial intelligence and other automated systems in critical areas of people’s lives, such as in the workplace and in housing. But the administration essentially kicks the can down the road for these tools in national security and law enforcement, areas where the use of AI is widespread and growing and where there are often profound impacts on liberty, equity, and due process.”
Cody Venzke, senior policy counsel in the National Political Advocacy Department at the ACLU: “Artificial intelligence has become integrated into our daily lives in significant but often subtle ways, exacerbating and magnifying discriminatory harms in housing, education, employment, and more. Today’s executive order invokes existing civil rights authority to address discriminatory AI throughout the government. However, the order raises significant red flags as it fails to provide protections from AI in law enforcement, like the role of facial recognition technology in fueling false arrests, and in national security, including in surveillance and immigration. The executive order also does not address a critical foundational question: should AI be used in any particular context or use case at all?”
Critical work remains to be done in other areas, including addressing the government’s purchase of our personal information from data brokers and its currently inadequate assessment of the privacy impacts of its own practices. Similarly, investment in technologies such as watermarking and content provenance could have implications for the future of speech online, including the right to speak anonymously.