

By: Olivia Fang

On July 21, 2023, after a meeting at the White House with President Joe Biden, seven leading American AI companies (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) voluntarily agreed to implement new safety guidelines for future AI development. These commitments prioritize the safety and security of users while also providing increased transparency.

In his remarks after the meeting, President Biden stated that people “must be clear-eyed and vigilant” regarding the potential threats that these new technologies could pose to “our democracy and our values.” He used social media as an example of such threats, stating that “social media has shown us the harm that powerful technology can do without the right safeguards in place.”

The commitments include “testing their technology both internally and externally,” “avoiding bias and discrimination,” and “using things like a watermarking system so users know when content is AI-generated,” as reported by NPR’s Deepa Shivaram. The seven companies have also agreed to hire professionals to search for potential weaknesses in their respective systems, then share that information with one another, the government, and researchers.

White House officials have reiterated that these measures are only a first step toward enacting security regulations for artificial intelligence. One official stated in a call with NBC reporters, “The White House is actively developing executive action to govern the use of AI for the president’s consideration. This is a high priority for the president.”

The announcement has also raised some public concerns. Emory law professor Ifeoma Ajunwa notes that, while tech leaders should be part of this discussion, “We also want to ensure that we are including other voices that don't have a profit motive.” Others argue that, because the commitments are not legally binding, the government has no enforcement power, and no strict penalties exist if the companies fail to follow through.

AI impacts a wide variety of industries, making proper security measures vital. As Nick Clegg, Meta’s global affairs president, stated, “AI should benefit the whole of society. For that to happen, these powerful new technologies need to be built and deployed responsibly.”
