Countries around the world are taking different paths to regulate artificial intelligence (AI), ranging from a hands-off approach in the United States to stringent oversight in the European Union. With the Paris AI Summit scheduled for February 10–11, here’s a look at how key regions are tackling AI governance.
United States: A Deregulated Landscape
Under President Donald Trump’s administration, the U.S. has significantly rolled back AI oversight. In January, Trump repealed an executive order that then-President Joe Biden had issued in October 2023, which required AI developers such as OpenAI to disclose safety assessments and share critical information with the government.
Although the previous framework was largely voluntary, it sought to safeguard privacy, civil rights, and national security. Now, with no federal AI-specific laws in place, regulation is minimal beyond existing privacy protections.
Some experts have compared the current U.S. approach to the “Wild West.” Digital law expert Yael Cohen-Hadria remarked, “They’ve put their cowboy hat back on—it’s full-speed ahead with no legal constraints.”
China: Strict Control with Party Interests in Mind
China is still refining its AI regulations, but existing “Interim Measures” impose strict guidelines on AI-generated content. AI models must align with “core socialist values,” protect user privacy, and label AI-generated images and videos.
Foreign companies face tight restrictions, while domestic AI firms must comply with censorship rules. For instance, China’s DeepSeek AI model avoids answering sensitive political questions, including topics related to President Xi Jinping or the Tiananmen Square protests.
While these rules are enforced strictly on businesses, experts believe the Chinese government will likely exempt itself when using AI for surveillance or governance.
European Union: The Most Comprehensive AI Laws
Unlike the U.S. and China, the EU has placed strong emphasis on ethical AI regulation. The AI Act, passed in March 2024, is widely considered the world’s most comprehensive AI framework, with some provisions taking effect this month.
The law bans AI systems that enable predictive policing based on profiling, as well as those that infer personal traits like race, religion, or sexual orientation using biometric data. It follows a risk-based approach—higher-risk AI applications face stricter requirements.
EU officials argue that clear regulations benefit businesses by providing stability and legal certainty. The AI Act also strengthens intellectual property protections while promoting data accessibility, allowing businesses to innovate more efficiently.
India: Cautious but Commercially Driven
India, which is co-hosting the upcoming Paris AI Summit, has yet to introduce a dedicated AI law. Instead, AI-related issues are addressed through existing laws on privacy, defamation, cybercrime, and intellectual property.
While India recognizes AI’s economic potential, the government has been reluctant to impose heavy restrictions. However, concerns arose in March 2024 when the IT ministry issued an advisory requiring firms to obtain government approval before deploying “unreliable” AI models.
This move came shortly after Google’s AI system, Gemini, made controversial remarks about Prime Minister Narendra Modi. Following the backlash, authorities revised the advisory, requiring disclaimers on AI-generated content rather than prior government approval.
United Kingdom: Prioritizing AI for Economic Growth
The UK, home to the world’s third-largest AI sector after the U.S. and China, is taking a pro-business approach. In January, Prime Minister Keir Starmer introduced an AI Opportunities Action Plan, emphasizing innovation before regulation.
The government believes that premature AI restrictions could hinder technological advancement. Instead, it advocates a “tested before regulated” approach, aiming to strike a balance between safety and economic competitiveness.
To protect creative industries, the UK is also reviewing how copyright laws apply to AI-generated content.
The Future of AI Governance
As AI technology evolves, global regulation remains fragmented. The EU’s structured legal framework contrasts sharply with the U.S.’s hands-off approach, while China prioritizes control and censorship. Meanwhile, India and the UK are still refining their policies to balance economic interests with necessary safeguards.
With AI becoming increasingly influential, international cooperation may become essential to prevent regulatory conflicts and ensure responsible AI development worldwide.