By: Daniel An
On Friday, July 21, President Biden persuaded seven leading AI companies to agree to voluntary safeguards on the new technology's development, securing a pledge to manage AI's risks even as the firms compete against one another. The seven companies — Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI — formally committed to new standards for safety, security and trust at a meeting with President Biden at the White House.
Artificial intelligence, or AI, has been the target of attention since the rise of ChatGPT last November. The chatbot and similar algorithms pose potential dangers to society, as they can be used to commit fraud and spread misinformation.
The seven companies had committed, prior to the meeting, to creating self-regulated safeguards, but President Biden cautioned that they must remain "cleareyed and vigilant" about the threats that emerging technologies can pose to democracy and values.
Worries about AI being used for fraud and misinformation have grown rapidly. Supporters of Ron DeSantis, a Republican presidential candidate, have used an artificial voice of former President Donald Trump in a new television ad to mislead citizens.
In their pledge, the companies agreed to develop "robust technical mechanisms," such as watermarking systems, to ensure that users know when content is AI-generated rather than created by a human.
A binding executive order would likely invite more resistance and rejection, while voluntary guardrails open the door to greater cooperation. Nick Clegg, the president of global affairs at Meta, the parent company of Facebook, said the company is "pleased to make these voluntary commitments alongside others in the sector."
Officials also said that Biden is working on an executive order on AI safety, one that carries limited powers but does not require congressional approval.