AI COMPANIES ADOPT REGULATIONS FOR DEVELOPING TECHNOLOGY
By: Joshua Dong
During a meeting with representatives of artificial intelligence companies in the White House Roosevelt Room on Friday, July 21st, President Joe Biden announced that those companies had committed to voluntary safeguards intended to ensure that American values and democracy are not threatened by the increasing sophistication of AI technology.
There are currently seven well-known and leading companies in the AI development industry: Amazon, Google, Meta, Microsoft, Anthropic, Inflection, and OpenAI. By the end of Friday afternoon, they had all committed to new and higher standards of security while developing, testing, and releasing artificial intelligence models.
Artificial intelligence is a machine’s ability to perform cognitive functions that are often associated with human minds, such as learning and reasoning. In essence, it is a machine with human-like intelligence.
“We must be clear-eyed and vigilant about the threats emerging technologies can pose — don’t have to but can pose — to our democracy and our values,” Mr. Biden said in brief remarks from the Roosevelt Room at the White House.
The “emerging technologies” Biden referred to hold immense potential benefits for the advancement of society as a whole, if developed responsibly. However, the dark side of this developing industry could be disastrous for civil society, with risks that include privacy and security concerns from the manipulation of data, the use of AI for malicious ends, and overreliance on AI leading to a loss of normal human cognitive functions.
At the same time, many potential benefits of AI are attracting more and more people into the field. Qualities such as the absence of human error, unbiased decision making, and the ability to perform repetitive jobs without getting bored make artificial intelligence very valuable.
Brad Smith, the president of Microsoft and an executive attending the meeting, assured the White House that Microsoft has been and would continue supporting and strengthening security, especially the voluntary safeguards being introduced into the AI industry.
“By moving quickly, the White House’s commitments create a foundation to help ensure the promise of A.I. stays ahead of its risks,” Mr. Smith said.
Part of Microsoft’s eagerness to participate in the safeguards stems from a recent incident in which one of Microsoft’s private keys was somehow disclosed to hackers linked to China. That private key was used to authenticate emails, a core piece of security infrastructure that Microsoft guarded closely.
Nick Clegg, the executive representing Meta in the meeting, also declared that Meta gladly accepts the commitments to safety and security, both in its development of AI and for the users of that technology.
The voluntary safeguards are not at all specific about what companies are required to do in order to maintain strict cybersecurity around their developing AI technology. All of the companies in attendance were already practicing security measures around the data they collect from users. In this sense, the voluntary safeguards add little, since companies would naturally want to protect their data and the secrets it holds.
The safeguards would still matter to the public because they are meant to prevent companies from obtaining and using user data in potentially unsafe ways. With new and not yet fully developed forms of technology, especially artificial intelligence, there is no telling what these systems are capable of, good or bad.
However, the safeguards and restrictions announced Friday afternoon are not enforceable, leading some to question how effective they will actually be in keeping Americans safe. One such skeptic is Paul Barrett, the deputy director of the Stern Center for Business and Human Rights at New York University.
“The voluntary commitments announced today are not enforceable, which is why it’s vital that Congress, together with the White House, promptly crafts legislation requiring transparency, privacy protections, and stepped-up research on the wide range of risks posed by generative A.I.,” Mr. Barrett said in a statement.
Regulators in Europe are poised to adopt AI laws later in 2023, prompting many companies to anticipate similar regulations in the United States. Some American lawmakers have begun introducing such bills; however, there is tension in Congress over these laws.
Many members of Congress lack even the basic understanding of artificial intelligence needed to craft well-defined laws that serve both the growing field of AI and the cybersecurity of the American people. Without lawmakers understanding what the laws they pass actually do, America cannot have effective rules for this rapidly developing industry, much less protect its people from the potential harms of AI.
Apart from laws controlling AI and its security measures, other members of Congress are deeply concerned about falling behind rivals such as China in the race for AI dominance. Members of House and Senate panels have been consulting the AI industry’s top companies and its critics to determine what types of laws they should pass.
All of this is necessary to prevent something harmful or unsafe from emerging and growing unchecked, up to the worst-case fear of artificial intelligence taking over the world.