White House, Tech Leaders Agree on Safe AI Practices

OpenAI, Google, Anthropic, Microsoft, Meta, Amazon and Inflection voluntarily commit to responsible AI practices

Deborah Yao, Editor, AI Business

July 25, 2023

From left: AWS CEO Adam Selipsky, OpenAI President Greg Brockman, Meta Global Affairs President Nick Clegg, Inflection AI CEO Mustafa Suleyman, Anthropic CEO Dario Amodei, Google Global Affairs President Kent Walker and Microsoft President Brad Smith. Credit: Anna Moneymaker/Getty Images

At a Glance

  • OpenAI, Google, Microsoft, Meta, Amazon, Anthropic and Inflection AI executives voluntarily committed to deploying AI safely.
  • They met privately with President Biden at the White House.

Tech companies leading the charge in AI development reached what may be an unprecedented voluntary agreement with the White House on a series of practices to ensure the safe deployment of AI.

Executives from OpenAI, Google, Microsoft, Meta, Amazon, Anthropic and Inflection AI met with President Biden and committed to responsible AI practices.

“These commitments are real and they are concrete,” said Biden in a press briefing before he met with the tech executives privately.

Joining the president were OpenAI President Greg Brockman, Google Global Affairs President Kent Walker, Microsoft President Brad Smith, Meta Global Affairs President Nick Clegg, AWS CEO Adam Selipsky, Anthropic CEO Dario Amodei and Inflection AI CEO Mustafa Suleyman.

“We are joined by leaders of seven American companies who are driving innovation in artificial intelligence, and it is astounding,” Biden said. “Artificial intelligence [carries] enormous promise of both risk to our society and our economy and our security, but also incredible opportunities.”

This agreement is "only the first step" in developing and enforcing rules around AI, according to the White House. The administration will continue to take executive action and pursue legislation to make sure AI systems deployed to the public are safe, secure and trustworthy.

These talks with tech leaders will complement the G-7 Hiroshima Process, a Japan-chaired forum for international talks on developing AI guardrails; the U.K.’s Summit on AI Safety; and the Global Partnership on AI, chaired by India. The White House also said it is “discussing AI” with the U.N.

The administration said it is already working with allies to create a “strong” international framework to govern AI. It has already consulted on the voluntary commitments with Australia, Brazil, Canada, Chile, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE and the U.K.

In May, Vice President Harris met with the CEOs of OpenAI, Google, Microsoft and Anthropic to secure a pledge to let the public vet their AI models before release. Last fall, the administration also released its 'Blueprint for an AI Bill of Rights' to protect society from AI's harms.

“This is a serious responsibility,” Biden said. “We have to get it right.”

But there was a moment of levity as well. When Biden first entered the press briefing room, he said “I’m the AI” – and was met with guffaws.

What AI Leaders Agreed to Do

The tech companies voluntarily agreed to do the following, much of which, one could argue, is in their best interest anyway:

Ensuring product safety before public release. The companies pledge to conduct internal and external security testing of their AI systems before releasing them, with part of that testing carried out by independent experts to guard against risks in areas such as biosecurity and cybersecurity. They also promise to share information on managing AI risks, including safety best practices and technical collaboration, across the industry and with governments, civil society and academia.

Prioritizing security when building systems. They will invest in cybersecurity and insider-threat safeguards to protect proprietary and unreleased model weights. Model weights, which control how much influence an input will have on an output, are the most essential part of an AI system. The companies also agreed to support third-party discovery and reporting of vulnerabilities in their AI systems.

Earning public trust. They will develop “robust” technical mechanisms, such as watermarking, to identify AI-generated content. This would let creativity flourish while mitigating the risks of deepfakes and other deceptive uses of AI. The companies will publicly report their AI systems’ capabilities, limitations and “areas of appropriate and inappropriate use,” covering both security risks and societal risks, such as the impact on fairness and bias.

They will prioritize research on AI systems’ societal risks, including avoiding harmful bias and discrimination and protecting privacy. The companies also promise to develop advanced AI systems that help address society’s greatest challenges, such as cancer prevention and climate change.

About the Author(s)

Deborah Yao

Editor, AI Business

Deborah Yao is an award-winning journalist who has worked at The Associated Press, Amazon and the Wharton School. A graduate of Stanford University, she is a business and tech news veteran with particular expertise in finance. She loves writing stories at the intersection of AI and business.


