AI Companies Pledge to Implement Safety Features Amidst Pressure from Biden
By: Jayden Ho
In a move that could redefine the future of artificial intelligence, seven leading AI companies have pledged voluntary safeguards on the technology's development. The White House announced the commitment on July 21, 2023, following a meeting with President Biden.
The companies, namely Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI, have agreed to uphold new standards for safety, security, and trust. This commitment comes amidst a competitive race to develop increasingly sophisticated AI technologies. These advancements, while promising, have raised concerns about potential risks, including the spread of disinformation.
During the announcement, President Biden emphasized the importance of vigilance. He stressed the need to manage the threats posed by these emerging technologies to our democratic values. "We must be clear-eyed and vigilant," he said, "about the threats that emerging technologies can pose to our democracy and our values."
The companies have agreed to a range of voluntary safeguards. These include testing products for security risks and using watermarks to identify AI-generated material. Specifically, watermarks allow users to distinguish between human-created and AI-generated content, thereby reducing potential misinformation and abuse. Even so, these measures are seen as initial steps. Governments worldwide are grappling with the challenge of establishing comprehensive legal and regulatory frameworks for AI development.
The voluntary nature of these commitments means they will not be enforced by government regulators. But they represent a significant step towards responsible innovation. The companies have agreed to a range of measures, including security testing, bias and privacy research, risk information sharing, and the development of tools to combat societal challenges like climate change.
The Biden administration has emphasized the obligation of these companies to adhere to ethical principles and safety standards. "Companies that are developing these emerging technologies have a responsibility to ensure their products are safe," the administration stated. However, critics argue that more needs to be done.
Paul Barrett, Deputy Director of the Stern Center for Business and Human Rights at New York University, called for legislation. He believes that transparency, privacy protections, and research on AI risks are vital. "The voluntary commitments announced today are not enforceable, which is why it’s vital that Congress, together with the White House, promptly crafts legislation requiring transparency, privacy protections, and stepped-up research on the wide range of risks posed by generative A.I.," Mr. Barrett said in a statement.
The announcement comes at a time when European regulators are poised to adopt AI laws, prompting many companies to encourage similar regulations in the U.S. Several lawmakers have introduced bills proposing measures such as licensing for AI companies, the creation of a federal agency to oversee the industry, and data privacy requirements. Nevertheless, consensus on these rules has yet to be reached.