EU Challenges AI Companies to Choose Their Approach

Since 2 February 2025, EU governments have held sweeping powers under the European AI Act: public surveillance, refugee monitoring and even facial recognition based on political and religious profiles. Investigate Europe highlights the paradox: a law meant to safeguard rights may instead enable their misuse. While the AI Act aims to regulate AI, national security exceptions, pushed by Member States such as France, allow law enforcement to bypass bans on AI use in public spaces. Protests and demonstrations could now face surveillance under “security” pretences. Emotion recognition technology is banned in schools and workplaces, but not for police or border authorities. Biometric tools are prohibited for profiling individuals, yet police enjoy exemptions, raising concerns of overreach.

Initially, “high-risk” AI technologies required strict checks: judicial approval and fundamental rights impact assessments. Now companies can self-certify their systems as “low risk”, removing those obligations. NGOs warn this undermines safeguards for vulnerable populations.
The foundation models behind AI systems such as ChatGPT risk embedding the biases present in societal data. With Big Tech firms like Google, Microsoft and OpenAI dominating development, their near-monopolies and lobbying activities deeply influence regulation. France, Germany and Italy argue for “innovation-friendly” AI rules, opposing strong regulation of foundation models. Big Tech’s influence grows through startups, lobbying summits and direct access to top policymakers.
As the EU elections approach, questions arise: Is Big Tech too big to regulate?
With their lobbying dominance, have corporations derailed efforts for accountability in AI? Advocates call for limits on industry influence, akin to the measures taken against Big Tobacco. The risks ahead are high: a regulatory framework vulnerable to abuse, the undermining of human rights and systemic biases. What precedent will this set for balancing AI innovation with public safety? According to surveys, EU citizens overwhelmingly support stronger AI regulation. The spotlight is now on policymakers to ensure ethical AI does not fall into profit-driven hands.