Poland Seeks EU Probe Into TikTok AI Content

  • TikTok profile promoting EU exit for Poland gains traction, then disappears
  • TikTok removed content violating its rules, spokesperson says
  • EU’s Digital Services Act requires platforms to assess AI-related risks
  • Poland urges EU probe into TikTok’s compliance with Digital Services Act

Poland Flags AI‑Generated Disinformation on TikTok

Poland has urged the European Commission to launch a formal investigation into TikTok following the spread of AI‑generated videos calling for the country to leave the European Union. A profile featuring young women in Polish national colors gained traction before disappearing from the platform. Officials described the content as almost certainly Russian disinformation based on linguistic patterns found in the recordings. Deputy Digitalization Minister Dariusz Standerski warned in a letter that such material threatens public order, information security and democratic integrity across the EU.

The letter argued that the nature of the narratives and the use of synthetic audiovisual content suggest TikTok may not be meeting its obligations as a Very Large Online Platform under EU law. A government spokesperson reinforced the claim, stating that the videos contained clear traces of Russian-language syntax. TikTok responded by saying it had communicated with Polish authorities and removed content that violated its rules. Representatives of the Russian embassy in Warsaw did not respond to requests for comment.

Concerns about foreign influence have grown across Europe. Several EU countries have warned of attempts by external actors to manipulate public opinion, particularly ahead of elections. Russia has repeatedly denied involvement in such activities. The incident adds to a broader pattern of scrutiny directed at major social media platforms.

Last year, the European Commission opened proceedings against TikTok over suspected failures to curb election interference, including during Romania’s 2024 presidential vote. That case remains ongoing. Poland’s latest request signals continued pressure on platforms to comply with EU regulations. The Commission confirmed receipt of the letter and noted that risk assessments related to AI are mandatory under the Digital Services Act.

EU Oversight Under the Digital Services Act

The Digital Services Act (DSA) imposes strict requirements on large online platforms operating in the EU. Companies such as TikTok, Facebook and X must identify and mitigate risks related to harmful content, including disinformation, hate speech and synthetic media. Failure to comply can result in fines of up to 6% of global annual turnover. The law aims to increase accountability and transparency in how platforms moderate content.

In March 2024, the Commission requested information from several platforms, including TikTok, regarding their measures to address AI‑related risks. That inquiry focused on how companies detect and manage synthetic content that could mislead users. The latest concerns raised by Poland may prompt further scrutiny of TikTok’s processes. The Commission has not yet indicated whether it will open a new formal investigation.

Poland’s request highlights the growing challenge of moderating AI‑generated media. Synthetic videos can be produced quickly and tailored to specific audiences, making them difficult to detect. Platforms must balance rapid response with accurate identification of harmful content. The DSA requires them to demonstrate that they have effective systems in place.

The case also underscores the geopolitical dimension of online disinformation. EU member states have repeatedly warned of attempts by foreign actors to influence political discourse. Election periods are considered particularly vulnerable. The Commission has encouraged governments and platforms to coordinate more closely to counter these risks.

Broader Implications for Platform Governance

The incident raises questions about how platforms handle emerging forms of AI‑generated content. TikTok’s rapid growth and algorithm‑driven distribution model make it a focal point for regulatory attention. Governments are increasingly concerned about the speed at which misleading narratives can spread. The disappearance of the profile in question does not resolve broader concerns about detection and prevention.

Poland’s call for action reflects a wider trend of member states seeking stronger enforcement of EU digital rules. The DSA gives the Commission significant authority to investigate and penalize non‑compliant platforms. Ongoing cases against major companies indicate that regulators are prepared to use these powers. The outcome of Poland’s request may influence future enforcement priorities.

The situation also highlights the role of AI in shaping online information ecosystems. Synthetic media can amplify political messages, distort public debate and undermine trust. Regulators are working to understand how these technologies interact with platform algorithms. Their findings may lead to additional guidance or regulatory updates.

Platforms face increasing pressure to invest in advanced detection tools. AI‑generated content often requires AI‑based moderation systems capable of identifying subtle patterns. The challenge is compounded by the global nature of social media, where content can cross borders instantly. Effective governance will likely require cooperation between platforms, governments and independent researchers.

AI‑generated political content has become a growing concern worldwide, but the EU is one of the first regions to regulate it explicitly through legislation like the DSA. Some researchers note that synthetic media can be more persuasive when it appears to come from relatable personas, such as the young women featured in the Polish case. Studies in digital communication suggest that users often trust content that feels culturally familiar, even when its origins are artificial. This dynamic makes AI‑driven disinformation campaigns particularly challenging to detect and counter.


 
