The US Takes the Lead in Regulating AI: A Critical Analysis

On Monday, US President Joe Biden unveiled an extensive executive order on artificial intelligence (AI), positioning the United States at the forefront of the global conversation on AI regulation and allowing it to leapfrog other nations in the race to govern the technology. While Europe had been leading the way with its AI Act, approved by the European Parliament in June 2023, that act will not come into full effect until 2025. The presidential executive order addresses a wide range of concerns, from immediate issues like AI-generated deepfakes to long-term worries such as the potential existential threat AI poses to humanity.

The US Congress has been slow to pass significant regulation targeting big tech companies, so the executive order can be seen as an attempt to work around an often deadlocked legislature and catalyze action on AI regulation. Indeed, the order calls on Congress to pass bipartisan data privacy legislation, a challenging ask in the current political climate. The order is to be implemented over the next three months to one year and covers eight principal areas: safety and security standards, privacy protections, equity and civil rights, consumer rights, jobs, innovation and competition, international leadership, and AI governance.

On the one hand, the executive order tackles several concerns raised by academics and the public. For example, it directs the issuance of official guidance on watermarking AI-generated content to reduce the risks posed by deepfakes. It also requires companies developing AI models to conduct rigorous safety testing before deploying them for broader use.

However, the executive order fails to address several pressing issues. Notably, it does not specifically outline how to handle the potential dangers posed by killer AI robots, a topic extensively discussed at the recent United Nations General Assembly. Overlooking this concern could have significant consequences: both the Pentagon and Ukraine are already developing AI-powered autonomous drones capable of identifying and attacking targets without human intervention. Moreover, the order merely calls for the ethical use of AI by the military, without defining what ethical deployment actually entails.

The executive order also overlooks the critical issue of protecting elections from AI-powered weapons of mass persuasion. Deepfakes have already been reported to have influenced elections, such as the recent one in Slovakia, and concerns persist about AI misuse in the upcoming US presidential election. Without stringent controls, society risks entering an era in which information found online simply cannot be trusted. The US Republican Party has already released a campaign advertisement generated entirely by AI, underscoring how readily the technology can be turned to political persuasion.

While the executive order features many commendable initiatives, there is an opportunity for other countries, such as Australia, to adopt similar measures. The guidance proposed in the order to prevent discriminatory practices and address algorithmic bias could be replicated elsewhere. Australia, for instance, could issue clear guidelines to landlords, government programs, and government contractors on using AI algorithms without discriminating. The criminal justice system should likewise confront algorithmic discrimination, as AI plays an increasingly prominent role in high-stakes decision-making.

Perhaps the most controversial aspect of the executive order is its attempt to regulate the potential harms of “frontier” AI models. These advanced models, developed by companies such as OpenAI, Google, and Anthropic, have ignited debate about whether they pose an existential threat to humanity. While some experts argue these concerns are overblown and divert attention from more immediate harms like misinformation and inequity, the executive order treats frontier models as a national security issue. It invokes the Defense Production Act of 1950, granting the federal government broad powers to oversee the training and safety testing of such models. Regulating their development will be difficult in practice, however: companies can train these models overseas, beyond the reach of the US government, and the open-source community can build them in a distributed manner, rendering border-based restrictions largely moot.

The executive order’s most profound impact is expected to be on how the government itself uses AI, rather than on businesses. Even so, it is a commendable step towards regulating AI and ensuring its responsible use. By comparison, UK Prime Minister Rishi Sunak’s AI Safety Summit looks more like a diplomatic exercise, and one cannot help envying the presidential authority to effect change by executive order.

The US has taken a significant leap forward in regulating AI with President Biden’s executive order. While the order addresses several important concerns, such as deepfakes and AI model safety, it overlooks crucial issues like killer AI robots and AI-powered election manipulation. Its impact will likely fall primarily on the government’s own use of AI, and regulating frontier AI models effectively will remain a challenge. Nevertheless, it offers an example for other countries to replicate and adapt to their own contexts. The US has positioned itself at the forefront of AI regulation, but continued effort and global collaboration will be needed to address the multifaceted challenges and risks that AI poses.
