Artificial intelligence law comes into force

According to the EU, the world’s first law regulating artificial intelligence (AI) comes into force on August 1st. An overview of the goals, plans and weak points.

The European Union wants to be a pioneer in the regulation of artificial intelligence. The Europe-wide requirements are intended to make the use of the new technology safer. To that end, applications are divided into risk groups.

Which risk groups are there?

The EU Commission divides AI systems into four risk groups: from green through yellow and orange to red. The first level covers systems that pose no risk or only a minimal one. According to the Commission’s assessment, the vast majority of artificial intelligence systems fall into this green area. They will therefore remain unregulated in the future.

The next-highest risk level (yellow) includes, for example, chatbots such as ChatGPT as well as images and videos generated or modified using artificial intelligence. These are primarily subject to new transparency obligations: users must be informed when content is generated by AI, and it must be possible to trace which data the systems were trained with. Serious incidents must be reported to the Commission.

The third risk group (orange) covers applications with a “high risk”. The Commission is fundamentally in favor of the use of such systems, but at the same time sees a high risk to fundamental rights, safety or health. They are therefore to be regulated before they come onto the market.


What is banned?

If AI systems pose an “unacceptable risk”, they fall into the fourth and highest risk level (red) and are banned. This applies, for example, to applications for recognizing emotions at work or at school. Evaluating social behavior with AI, so-called social scoring, as used in China, is also prohibited. The use of artificial intelligence in predictive policing will also be restricted.

Who enforces the AI law?

Both the EU and the member states are responsible for enforcing the rules. If violations occur, companies face fines running into the millions. National authorities are responsible when it comes to general market supervision. There is also a European AI Office in the Commission with experts from all EU countries. The office aims to position Europe as a leader in the ethical and sustainable development of artificial intelligence technologies.

What is the schedule?

The AI law comes into force on August 1st. The bans will apply from February 2, 2025. The majority of all other provisions will apply from August 2, 2026. In the “AI Pact”, around 700 companies have declared that they will apply the regulations earlier.

How do experts assess the AI law?

In principle, many people welcome the law. AI creates new possibilities and with them new ethical questions. In such a situation especially, it is important for society to assert its claim to consciously and actively steer technological development, Claudia Paganini, professor of media ethics at the Munich School of Philosophy, told the Evangelical Press Service (epd).

“The central question must be: What do we gain and what do we lose through AI? Where can we improve the quality of coexistence and what poses a danger?” This is exactly the topic that the EU AI law addresses.

“Although it can be assumed that, given the speed of progress, there will be a need to make adjustments, it is still important to create a legal basis,” emphasizes Paganini. She is particularly positive about the high priority placed on transparency and the fact that a kind of complaint system is being created.

Where are the weak points?

Critics fear loopholes for biometric mass surveillance. The artificial intelligence law prohibits mass surveillance in public spaces but at the same time creates exceptions. Paganini considers this problematic: “Because it has been shown often enough in the past that such exceptions can very quickly be used (abusively) against people who think differently.”

Cornelia Ernst, who sat in the European Parliament for the Left until June and closely followed the negotiations on the law, also expressed criticism. She complains that Parliament’s ban on real-time facial recognition in public spaces has been effectively overturned by a long list of exceptions. Another huge gap in the regulation, she says, is that there are no bans on the use of artificial intelligence systems in the migration and border context. “This turns people fleeing into guinea pigs and the EU’s external borders into a testing laboratory. That is unacceptable,” said Ernst.
