The European Union has taken a historic step in regulating artificial intelligence (AI) by agreeing on the Artificial Intelligence Act. The member states have not only adopted stricter rules for AI models, including those from industry giants such as OpenAI, but have also banned certain use cases outright.

Division into risk groups

Following extensive internal discussions, the EU Commission has divided AI applications into risk groups: minimal, high-risk and unacceptable. Applications in the 'unacceptable' category are to be banned because they pose a potential threat to civil rights and democracy. This includes the untargeted scraping of photos to build facial recognition databases, as well as systems that categorize people according to their political beliefs or sexual orientation.

The ban also covers AI systems that could manipulate people's behavior and undermine their free will. Systems that monitor and evaluate employees in the workplace, or that score private individuals based on their social behavior, are likewise to be prohibited.

Despite these strict rules, the EU Commission reserves the right to make an exception for biometric identification systems used by law enforcement authorities with judicial authorization. This exception, however, applies only to a strictly defined list of criminal offenses and is limited in both time and location.

Transparency requirements and reporting obligations

Transparency requirements have been defined for AI systems classified as 'high-risk'. Providers must supply technical documentation and disclose information about the training data used. Models that meet the high-risk criteria are additionally subject to obligations to report serious incidents and to disclose information on energy efficiency.

Ursula von der Leyen, President of the European Commission, welcomed the Act as a key contribution to the development of global rules and principles for human-centered AI. She emphasized that the Artificial Intelligence Act will promote responsible innovation by ensuring the safety and fundamental rights of people and businesses.

What does the decision mean for companies and employees?

The decision has far-reaching implications for companies that develop, use or are affected by AI technologies, as well as for their employees. The clearly defined categories and prohibitions will require companies to review and adapt their technologies to comply with the new regulations, which could mean investing in the development of safer, ethically responsible AI models.

The transparency requirements and reporting obligations mean that companies must provide clearer information about their technologies and how they work. This could strengthen trust between companies, consumers and regulatory authorities.

For employees, it can be expected that the use of AI systems in the workplace will be regulated more strictly. Manipulating employees' behavior or evaluating them based on their social behavior is explicitly prohibited. Companies must ensure that their AI applications respect employees' fundamental rights and free will.