Washington lays down standards for ‘safe, secure, trustworthy’ AI

Image sourced from ai.gov

United States President Joe Biden has issued an Executive Order (EO) that establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more.

The EO builds on previous actions Biden has taken, including work that led to voluntary commitments from 15 leading companies to drive safe, secure, and trustworthy development of AI.

With the EO, the US President “directs the most sweeping actions ever taken” to protect Americans from the potential risks of AI systems, according to the White House.

Invoking the Defense Production Act, the EO requires companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety to notify the federal government when training the model and to share the results of all red-team safety tests.

Also, relevant US agencies — such as the National Institute of Standards and Technology, the Department of Homeland Security, and the Department of Energy — are tasked with developing standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy.

Further, US agencies that fund life-science projects will establish screening standards as a condition of federal funding, creating powerful incentives to ensure appropriate screening and manage biological risks potentially made worse by AI.

In addition, the US Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content. 

US federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic — and set an example for the private sector and governments around the world.

The EO also directs the establishment of an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software. 

Meanwhile, the directive orders the development of a “National Security Memorandum” directing further actions on AI and security. The memorandum will ensure that the US military and intelligence community use AI safely, ethically, and effectively in their missions, and will direct actions to counter adversaries’ military use of AI.

To better protect Americans’ privacy, including from the risks posed by AI, Biden called on the US Congress to pass bipartisan data privacy legislation to protect all Americans, especially kids.

To protect consumers while ensuring that AI can make Americans better off, the EO directs the advancement of the responsible use of AI in healthcare and the development of affordable and life-saving drugs. 

The order also directs the shaping of AI’s potential to transform education by creating resources to support educators deploying AI-enabled educational tools, such as personalised tutoring in schools.

To mitigate AI’s risks to workers, support their ability to bargain collectively, and invest in workforce training and development that is accessible to all, the EO directs the development of principles and best practices to mitigate the harms and maximise the benefits of AI for workers by addressing job displacement; labor standards; workplace equity, health, and safety; and data collection.

The EO also mandates the promotion of a fair, open, and competitive AI ecosystem by providing small developers and entrepreneurs access to technical assistance and resources, helping small businesses commercialise AI breakthroughs, and encouraging the Federal Trade Commission to exercise its authorities.

To ensure the responsible government deployment of AI and modernise federal AI infrastructure, the EO directs the issuance of guidance for agencies’ use of AI, including clear standards to protect rights and safety, improve AI procurement, and strengthen AI deployment.

According to the White House, it has already consulted widely on AI governance frameworks over the past several months — engaging with Australia, Brazil, Canada, Chile, the European Union, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the United Arab Emirates, and the United Kingdom.