EU AI Act - Artificial Intelligence Act (Gesetz über künstliche Intelligenz)

by S.

The European approach to trustworthy AI

The new rules will be applied directly in the same way across all Member States,
based on a future-proof definition of AI. They follow a risk-based approach:

Minimal risk: The vast majority of AI systems fall into the category of minimal risk.
Minimal-risk applications such as AI-enabled recommender systems or spam filters
will benefit from a free pass and an absence of obligations, as these systems present
only minimal or no risk to citizens' rights or safety.
On a voluntary basis, companies may nevertheless commit to
additional codes of conduct for these AI systems.

High-risk: AI systems identified as high-risk will be required to comply with strict requirements,
including risk-mitigation systems, high quality of data sets, logging of activity, detailed documentation,
clear user information, human oversight, and a high level of robustness, accuracy and cybersecurity.
Regulatory sandboxes will facilitate responsible innovation and the development of compliant AI systems.

Examples of such high-risk AI systems include certain critical infrastructures,
for instance in the fields of water, gas and electricity; medical devices;
systems that determine access to educational institutions or are used for recruiting people;
and certain systems used in the fields of law enforcement, border control,
the administration of justice and democratic processes.
Biometric identification, categorisation and emotion recognition systems are also considered high-risk.

Unacceptable risk: AI systems considered a clear threat to the fundamental rights of people will be banned.
This includes AI systems or applications that manipulate human behaviour to circumvent users' free will,
such as toys using voice assistance to encourage dangerous behaviour in minors, systems that
allow 'social scoring' by governments or companies, and certain applications of predictive policing.
In addition, some uses of biometric systems will be prohibited, for example emotion
recognition systems used in the workplace, some systems for categorising people, and
real-time remote biometric identification for law enforcement purposes
in publicly accessible spaces (with narrow exceptions).

Specific transparency risk: When employing AI systems such as chatbots, users should be aware
that they are interacting with a machine.
Deep fakes and other AI-generated content will have to be labelled as such,
and users need to be informed when biometric categorisation or emotion recognition
systems are being used.
In addition, providers will have to design systems so that synthetic audio, video,
text and image content is marked in a machine-readable format and
detectable as artificially generated or manipulated.
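
To make the marking requirement concrete, the following minimal Python sketch shows one way
synthetic content could carry a machine-readable label, here as a PNG text chunk written with
Pillow. The "ai-generated" key, the function names, and the approach are illustrative assumptions
only; the Act does not prescribe this technique, and real-world marking would rely on robust
provenance standards (such as C2PA) or watermarking rather than an easily stripped metadata field.

# A minimal sketch, assuming Pillow is installed (pip install Pillow).
# The "ai-generated" key is a hypothetical label chosen for illustration,
# not a standardised or legally compliant marking scheme.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_ai_label(img, path):
    """Embed a machine-readable 'AI-generated' label as a PNG text chunk."""
    meta = PngInfo()
    meta.add_text("ai-generated", "true")  # hypothetical key, illustration only
    img.save(path, pnginfo=meta)

def is_labelled_ai_generated(path):
    """Detect the hypothetical label when reading the file back."""
    with Image.open(path) as im:
        return im.text.get("ai-generated") == "true"

if __name__ == "__main__":
    img = Image.new("RGB", (64, 64), "grey")  # stand-in for synthetic content
    save_with_ai_label(img, "synthetic.png")
    print(is_labelled_ai_generated("synthetic.png"))  # True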

12/2023
https://de.wikipedia.org/wiki/Gesetz_%C3%BCber_k%C3%BCnstliche_Intelligenz