Chapters I and II of the AI Regulation have applied since February 2, 2025. In addition to Article 4, which obliges providers and deployers of AI systems to ensure AI literacy, Article 5 of the AI Regulation, which designates and defines prohibited practices in the field of AI, also applies.
A very limited number of particularly harmful AI applications that contravene EU values are considered to pose an unacceptable risk. The prohibited practices are the following:
- exploitation of the vulnerabilities of individuals, manipulation and the use of subliminal techniques;
- evaluation of social behavior (social scoring) for public and private purposes;
- individual predictive policing based solely on the profiling of individuals;
- untargeted scraping of facial images from the internet or surveillance footage to create or expand facial recognition databases;
- emotion recognition in the workplace and in educational institutions, except for medical or safety purposes (e.g. monitoring a pilot's fatigue);
- biometric categorization of natural persons to infer ethnic origin, political opinions, trade union membership, religious or philosophical beliefs or sexual orientation; however, it remains permissible to label or filter datasets and to categorize data in the area of law enforcement;
- real-time remote biometric identification in publicly accessible spaces by law enforcement authorities, subject to narrowly defined exceptions.
The European Commission is developing guidelines on the practical implementation of the AI Regulation to accompany its entry into application. Although these guidelines are not binding, they are an important aid to interpretation and application.
The guidelines on prohibited AI practices within the meaning of the AI Regulation were published on February 4, 2025. The European Commission has approved the draft guidelines but has not yet formally adopted them. The draft guidelines and a short statement by the European Commission are available here.