EU AI Code of Practice: Microsoft In, Meta Out

- Microsoft is likely to sign the EU’s voluntary AI code of practice, while Meta has rejected the guidelines, citing legal uncertainties and concerns that they go beyond the scope of the AI Act.
Microsoft is reportedly leaning towards signing the European Union’s voluntary code of practice, a set of guidelines designed to help companies navigate the bloc’s pioneering artificial intelligence regulations. Brad Smith, Microsoft’s President, told Reuters on Friday that while the company still needs to review the full documentation, its intent is to be supportive. He expressed appreciation for the direct engagement between the newly established EU AI Office and the industry.
This code of practice, developed by 13 independent experts, aims to offer legal certainty to its signatories. Companies that adopt the code will be required to publish summaries of the content used to train their general-purpose AI models. Furthermore, they must establish a clear policy for complying with EU copyright law, ensuring responsible and legal data usage.
Industry Divided on Compliance
The voluntary code is an integral part of the broader EU AI Act, which entered into force in August 2024. This landmark legislation will apply to a vast array of companies, including major players like Google’s parent company Alphabet, Meta, OpenAI, Anthropic, and Mistral, as well as thousands of other businesses developing AI solutions. OpenAI and Mistral have already publicly committed to the code, signaling their readiness to align with the EU’s ethical and regulatory framework for AI.
However, not all tech giants are on board. Meta Platforms has reiterated its strong criticism of the code, indicating it will not be signing the guidelines. Joel Kaplan, Meta’s Chief Global Affairs Officer, voiced the company’s concerns in a LinkedIn blog post on Friday. He stated that the code introduces “a number of legal uncertainties for model developers” and includes measures that, in Meta’s view, “go far beyond the scope of the AI Act itself.”
Concerns Over Innovation and Overreach
Kaplan emphasized that Meta shares the apprehensions raised by a group of 45 other European companies. This collective concern centers on the belief that the code’s perceived “over-reach” could hinder the development and deployment of cutting-edge AI models within Europe, and critics worry that such stringent, potentially ambiguous guidelines might stifle innovation and impede European companies looking to build businesses atop these advanced AI technologies.
The debate highlights the ongoing tension between fostering a safe and ethical AI environment and ensuring that regulation doesn’t inadvertently curb technological progress. While some companies welcome the clarity offered by the voluntary code, others fear it could impose undue burdens or create legal ambiguities that complicate AI development and deployment across the continent. This divergence in opinion underscores the complex challenge of regulating rapidly evolving AI technologies effectively.
Broader Context and Previous Discussions
The EU AI Act has been a focal point of global AI regulation discussions for several years. Initial drafts and debates surrounding the Act, which began as early as 2021, highlighted the ambitious scope of the legislation. The concept of a “code of practice” emerged during these discussions as a flexible mechanism to provide more granular guidance without amending the core legal text of the Act itself. This approach was intended to allow for quicker adaptation to rapid technological advancements in AI.
Interestingly, previous reports indicated varying levels of enthusiasm from major tech companies during the drafting phase of the AI Act. Some companies, eager to demonstrate responsible AI development, expressed support for clear guidelines, while others voiced concerns about potential overregulation. Meta, in particular, has been a vocal critic of certain aspects of the EU’s digital policies, including the Digital Markets Act (DMA) and the Digital Services Act (DSA), often arguing that they disproportionately target larger platforms and hinder innovation. Its current stance on the AI code of practice aligns with that pattern of challenging what it perceives as overly broad or restrictive European regulations. Because the code is voluntary, companies can choose not to sign, but declining to do so might invite increased scrutiny from the EU AI Office.