EU scrutinizes Big Tech over generative AI for election manipulation


The European Commission has sent requests to Bing, Google Search, Facebook, Instagram, Snapchat, TikTok, YouTube and X under the Digital Services Act (DSA).

The requests ask the companies to provide information on their measures for mitigating risks linked to generative AI, such as “hallucinations,” in which AI provides false information; the viral spread of deepfakes; and automated manipulation of services that can mislead voters. Generative AI is one of the risks identified in the Commission’s draft guidelines on the integrity of electoral processes.

In addition to risk mitigation measures, the Commission is requesting information and internal documents on risk assessments and mitigation measures related to generative AI and the dissemination of illegal content, protection of fundamental rights, gender-based violence, protection of minors, psychological well-being, data protection, consumer protection, and intellectual property.

The questions cover both the distribution and creation of generative AI content. Based on the analysis of the responses, the Commission will consider the next steps.

If the companies concerned do not answer the questions, the Commission may formally require the information by decision. If they supply incorrect, incomplete, or misleading information, the Commission may impose fines.

Companies must provide the requested information by April 5, 2024 for election-related questions and by April 26, 2024 for all other questions.

EU AI law is under fire for allowing surveillance technologies in certain scenarios

The European Parliament recently passed the AI Act, which takes a risk-based approach to regulating AI. The law requires high-risk AI systems, such as those used in medical devices or critical infrastructure, to meet safety requirements.

It prohibits AI applications that threaten citizens’ rights, including biometric categorization based on sensitive characteristics and the untargeted scraping of facial images from the internet or surveillance cameras. It also bans emotion recognition systems in workplaces and schools, as well as social scoring.

In the context of law enforcement, however, the Act leaves room for powerful surveillance technologies such as comprehensive real-time facial recognition and behavioral tracking in public spaces. AlgorithmWatch speaks of surveillance loopholes that member states must now close.

