Google Commits to EU AI Code, Raises Concerns

- Google will sign the EU’s voluntary AI code of practice to align with new regulations, but worries the rules could slow innovation in Europe.
Google, a subsidiary of Alphabet, has announced that it will sign the European Union’s voluntary code of practice for artificial intelligence as it moves to align with the bloc’s new AI rules. Kent Walker, Google’s president of global affairs and chief legal officer of Alphabet, confirmed the decision in a blog post, writing that the company hopes the code will give European citizens and businesses access to “secure, first-rate AI tools.” The code is tied to the EU’s AI Act, which aims to establish a global benchmark for AI regulation.
The voluntary code was drawn up by 13 independent experts to clarify how companies can comply with the new rules. It requires signatories to publish summaries of the content used to train their general-purpose AI models and mandates compliance with EU copyright law. In return, signatories gain greater legal certainty than companies that demonstrate compliance with the AI Act through other means.
Google’s Reservations and Industry Reactions
Despite its decision to sign, Google expressed several reservations about the code’s potential impact. Walker warned that certain aspects of the AI Act and the code of practice could hinder the development and deployment of AI in Europe. He pointed to potential departures from EU copyright law and to requirements that might expose trade secrets, arguing that these could undermine Europe’s competitiveness in the rapidly evolving AI landscape.
Other major tech companies have split over the EU’s code. Microsoft’s president, Brad Smith, previously told Reuters that his company would likely sign it, while Meta Platforms has declined, citing legal uncertainties for model developers. The divergence underscores how much ambiguity companies still face in navigating the new regulatory environment.
The Goal of the EU’s AI Act
The European Union’s AI Act is a landmark piece of legislation designed to create a framework for the responsible use of artificial intelligence. It seeks to establish guardrails for a technology that has become a staple of modern business and daily life. The EU hopes its regulatory efforts will set a precedent for the global AI market, which is currently dominated by technology giants from the United States and China. Central to the act’s regulatory philosophy is a tiered approach to risk: the most harmful practices are banned outright, high-risk systems face strict obligations, and lower-risk applications carry lighter transparency requirements.
The EU’s proactive approach to AI governance contrasts with the more hands-off stance taken by some other governments. As AI technology continues to advance, the EU’s framework could influence how other nations choose to regulate the industry. The voluntary code of practice, while not legally binding, is a key step toward implementing the principles of the AI Act. This effort reflects a broader global debate about balancing innovation with safety and ethical considerations in the AI sector.
The EU’s Role as a Global Regulator
The European Union has a history of setting global standards for technology. The General Data Protection Regulation (GDPR), which took effect in 2018, is a notable example. Although it is a European law, its strict data privacy and security requirements have influenced companies worldwide, forcing them to adapt their practices to serve EU citizens. This phenomenon, often called the “Brussels effect,” demonstrates the EU’s ability to shape global tech policy through its large, unified market. The AI Act is seen by many as the next major effort to wield this regulatory power on a global scale.