Biden Forces Big Tech to Agree to AI Rules

By: Josie Liao

Thirty years ago, society imagined that technology would advance so quickly that humans would have flying cars by 2030. Fictional stories were told and cartoons were drawn about technological progress and how it could take over daily chores and tasks. Yet now, in 2023, society might be moving in a very different direction. Before we even get to self-driving vehicles, we first need to worry about a perhaps more advanced and more dangerous invention: artificial intelligence.

Last Friday, July 21st, Biden announced new guidelines for AI development at a White House meeting, where seven leading companies agreed to his safeguards intended to protect Americans' privacy and safety.

As competition grows, each of the seven AI companies–Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI–is incorporating more AI into its business, both to promote itself and to create new, more innovative ways to perform everyday tasks. When properly controlled, AI is a powerful tool and can be very beneficial. However, President Biden fears that, in the future, artificial intelligence will have grown immensely, not only encouraging the spread of misinformation and invading the privacy of American citizens, but also becoming an unstoppable force that could supposedly drive humans almost to the point of “extinction.” Thus, to avoid such outcomes, the agreements include “testing products for security risks and using watermarks to make sure consumers can spot AI-generated material.”

Another main concern is security against other countries. Given how rapidly the technology is evolving, China in particular is seen as a likely culprit when it comes to stealing private information, new programs, or highly confidential code. To combat this problem and ensure that “innovation doesn’t come at the expense of Americans’ rights and safety,” Biden and his administration placed new restrictions on the export of large language models. Much of this software is compact enough to fit, compressed, on a thumb drive, which makes it especially difficult to protect.

While this may seem like a huge commitment for the AI companies, they have said that the agreements will neither restrain their plans nor hinder the development of their technologies.

However, even with the measures put in place by Biden and the commitments from Big Tech companies, lawmakers doubt that meaningful regulation can be achieved. First, because the commitments are voluntary, the government has no way to strictly enforce them. Second, and likely more importantly, the restrictions on other countries may not be enough to stop them from stealing closely guarded information. Just last week, Microsoft had to respond to an intrusion into the private emails of American officials who had been discussing China. China had somehow obtained one of Microsoft’s most closely guarded pieces of code–a private key used to authenticate Microsoft emails.

Many believe that more needs to be done to protect society from the risks of artificial intelligence. Among them is Sam Altman, CEO of OpenAI, who has implored lawmakers to regulate the AI industry, pointing out the new technology's potential to cause undue harm. But that regulation has been slow to get underway in Congress; some lawmakers still struggle to grasp what exactly AI technology is, even as multiple efforts attempt to spur them into action.

Near future or not, artificial intelligence is no doubt both a blessing and a curse. Used correctly and tactically, AI can be a useful complement to human intelligence. But its most valuable qualities are also what make it most dangerous. While it may seem harmless now, government regulation of artificial intelligence could very well be the deciding factor against the technological downfall of our society. So, before robots inevitably demolish the face of the earth, maybe we should get on with those flying cars instead.
