Prohibited AI Practices and Compliance under the EU AI Act

The EU Artificial Intelligence Act (EU AI Act) establishes a comprehensive legal framework for regulating AI systems in the European Union. At the heart of this framework lies a risk-based approach, which classifies AI systems based on their potential impact on fundamental rights and public interests.

Among the most stringent provisions are the prohibitions on certain AI practices, which are considered fundamentally incompatible with EU values such as human dignity, freedom, democracy, and the rule of law. These AI systems are banned not because they are merely high-risk, but because they present unacceptable risks to individuals and society.

Prohibited AI Practices Under the EU AI Act

The AI Act sets out, in Article 5, specific categories of AI uses that are forbidden within the EU, including:

  • Subliminal or manipulative techniques that influence users without their awareness and may cause harm;

  • Exploitation of the vulnerabilities of persons, such as children or those with disabilities, in a way that materially distorts their behavior and may cause harm;

  • Social scoring systems that evaluate or classify individuals based on their social behavior or personal characteristics, leading to detrimental treatment in contexts unrelated to the one in which the data was collected;

  • Predictive policing tools that assess the risk of a person committing a criminal offence based solely on profiling or personality traits;

  • Untargeted scraping of facial images from online platforms or surveillance footage to build facial recognition databases;

  • Emotion recognition technologies in workplaces and educational settings, except where used for medical or safety reasons;

  • Biometric categorization systems used to classify people based on sensitive traits such as political opinions or ethnicity;

  • Real-time remote biometric identification systems (e.g., live facial recognition) used for law enforcement purposes in publicly accessible spaces, subject to narrowly defined exceptions.

Compliance Considerations

Although these practices are broadly prohibited, the EU AI Act allows for limited exceptions, especially in areas such as law enforcement. The European Commission is tasked with issuing guidance to clarify the scope of these prohibitions and their application (Article 96(1)(b)).

It’s essential to recognize that compliance with the EU AI Act alone is not sufficient. Other EU legal instruments—such as the General Data Protection Regulation (GDPR)—may impose further restrictions. For example, automated decision-making involving personal data may be restricted under Article 22 of the GDPR. Additionally, existing prohibitions on discrimination continue to apply in the AI context.

Legal practitioners and AI developers should therefore ensure that their systems comply with both the EU AI Act and related EU regulations, especially when deploying high-impact or sensitive AI applications.
