
AI Act

03/24/2024

On March 13, 2024, the European Parliament approved the Artificial Intelligence Regulation, known as the "AI Act." As an EU regulation it is directly applicable; it is expected to enter into force in mid-2024, following publication in the Official Journal, and to reach full effect in 2026. The AI Act is key to harmonizing AI regulation at the European level. Concurrently, the European AI Office was established to oversee AI in Europe.


As a European regulation, the AI Act applies directly across the entire European Union and requires no further transposition into national legal systems, which simplifies its implementation and guarantees uniform application.

The AI Act presents a comprehensive framework intended to ensure the safe and ethical use of artificial intelligence throughout Europe. It will apply gradually: some provisions, such as the ban on AI systems posing unacceptable risk, will apply just six months after the act enters into force. These early deadlines underscore the European Union's effort to respond quickly to the risks associated with certain uses of AI.

The regulation defines four levels of risk for AI systems:

  1. minimal,
  2. limited,
  3. high, and
  4. unacceptable risk.

According to the AI Act, all artificial intelligence systems considered a clear threat to people's safety, livelihoods, and rights are to be banned. This includes, for example, social scoring by governments or toys with voice assistants that encourage dangerous behavior.

High risk

High-risk AI systems are to include technologies used in:

  • critical infrastructures (e.g., transportation), which can threaten the life and health of citizens;
  • educational or professional programs that can determine access to education and an individual's career path (e.g., examination grading);
  • safety components of products (e.g., artificial intelligence applications in robot-assisted surgery);
  • employment, workforce management, and access to self-employment (e.g., resume sorting software in recruitment processes);
  • essential private and public services (e.g., creditworthiness assessments that can prevent citizens from obtaining loans);
  • law enforcement, which can interfere with people's fundamental rights (e.g., reliability assessment of evidence);
  • management of migration, asylum, and border control (e.g., automated processing of visa applications);
  • administration of justice and democratic processes (e.g., artificial intelligence solutions for searching court decisions).

High-risk AI systems will be subject to strict regulation before being launched on the market. The following obligations will apply:

  • adequate risk assessment and mitigation systems;
  • high quality of data sets used by the system to minimize risks and discriminatory outcomes;
  • activity logging to ensure traceability of results (illustrated in the sketch after this list);
  • detailed documentation providing all information necessary for authorities to assess the system's compliance;
  • clear and adequate information for operators;
  • appropriate measures of human oversight to minimize risks;
  • high levels of robustness, security, and accuracy.
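To make the logging duty above more concrete, here is a minimal sketch in Python of an append-only decision log for a hypothetical high-risk system. The record schema and the log_decision helper are our own illustrative assumptions; the AI Act requires traceability but does not prescribe any particular format.

```python
# Minimal sketch of an append-only decision log for a hypothetical
# high-risk AI system. The schema below is our own assumption; the
# AI Act requires traceability but prescribes no particular format.
import hashlib
import json
from datetime import datetime, timezone

def log_decision(logfile, model_version, input_data, output, reviewer_id):
    """Append one traceable record per automated decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Store a hash of the input rather than the raw input,
        # to limit the personal data retained in the log.
        "input_sha256": hashlib.sha256(
            json.dumps(input_data, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
        "reviewer_id": reviewer_id,  # supports the human-oversight duty
    }
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: one record for a hypothetical creditworthiness assessment.
log_decision("decisions.jsonl", "credit-scoring-v1.2",
             {"applicant": "anonymised-123"}, {"score": 0.41}, "officer-7")
```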

All systems for remote biometric identification are considered high-risk and will be subject to strict requirements. The use of remote biometric identification in publicly accessible spaces for law enforcement purposes will in principle be prohibited. Exceptions will be narrowly defined and regulated, for instance, when necessary to find a missing child, to prevent a specific and imminent terrorist threat, or to detect, locate, identify, or prosecute a perpetrator or suspect of a serious criminal offense. Such uses will be subject to authorization by a court or other independent body.

Limited risk

Limited risk refers to the risks arising from a lack of transparency in the use of artificial intelligence. The AI Act introduces specific transparency obligations to ensure that people are informed where necessary. For example, when using artificial intelligence systems such as chatbots, people should be told that they are interacting with a machine so they can make an informed decision about whether to continue or step back. Providers will also need to ensure that AI-generated content is identifiable. In addition, AI-generated text published to inform the public about matters of public interest must be labeled as artificially created. The same applies to audio and video content constituting deepfakes.
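Purely as an illustration of this transparency duty, the sketch below shows one way a provider might attach a machine-readable "AI-generated" marker to chatbot output. The envelope format and field names are our own assumptions; the AI Act requires that such content be identifiable but leaves the technical method open.

```python
# Illustrative sketch: wrapping AI-generated text in a machine-readable
# disclosure envelope. The format is our own assumption; the AI Act
# requires identifiability but does not mandate any particular scheme.
import json

def mark_as_ai_generated(text, provider, model):
    """Return the text together with an explicit AI-generation marker."""
    return json.dumps({
        "content": text,
        "ai_generated": True,   # the disclosure itself
        "provider": provider,   # who operates the system
        "model": model,         # which model produced the content
    })

print(mark_as_ai_generated(
    "Here is a summary of today's public-interest news.",
    provider="ExampleCorp", model="example-chat-1"))
```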

Minimal or no risk

The AI Act allows the free use of artificial intelligence with minimal risk. This includes applications such as video games or AI-supported spam filters. According to the European Commission, the vast majority of artificial intelligence systems currently used in the EU fall into this category.

Copyright

Highly controversial is the obligation the AI Act imposes on providers of GPAI – general-purpose AI such as ChatGPT, Gemini, or Midjourney – to put in place a policy respecting EU copyright law. GPAI providers will have to identify and observe, using state-of-the-art technologies, copyright reservations, i.e., prohibitions on compiling works into datasets and training AI models on them without the authors' consent. This activity, known as text and data mining, will be prohibited under the AI Act (except for scientific purposes or where the data in question is publicly accessible) if the authors expressly reserve this use.

In other words, once authors as rights holders make a valid text and data mining "opt-out," no one may extract or use the data in question to train AI, as doing so would violate the exclusive rights of third parties. In practice, however, this prohibition is likely to be nearly impossible to comply with, given the enormous volume of training data and the way it is collected (automated crawling across the internet).
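To illustrate why such opt-outs are hard to honour at scale, the sketch below checks one possible machine-readable signal, a site's robots.txt, before collecting a page as training data. robots.txt is only a proxy chosen for this example; the AI Act does not specify how a copyright reservation must be expressed.

```python
# Illustrative sketch: honouring one possible machine-readable opt-out
# signal (robots.txt) before collecting a page as training data.
# robots.txt is our own example of such a signal; the AI Act does not
# prescribe how a copyright reservation must be expressed.
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser

def may_collect(url, user_agent="example-training-crawler"):
    """Return True only if the site's robots.txt permits fetching url."""
    parts = urlparse(url)
    rp = RobotFileParser()
    rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    rp.read()  # network fetch; a real crawler would add error handling
    return rp.can_fetch(user_agent, url)

if may_collect("https://example.com/article"):
    print("No opt-out detected; collection may proceed, subject to other checks.")
else:
    print("Opt-out detected; skip this source.")
```

Even this simple check hints at the practical difficulty: every source must be fetched and evaluated individually, and many reservations are not machine-readable at all.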

This approach could therefore prove restrictive for EU developers and could push the development and deployment of AI outside EU territory, further and significantly disadvantaging the EU against the USA. Nonetheless, the EU also plans to support AI innovation by giving AI startups and ethical businesses access to advanced computing resources, such as European supercomputers. This initiative is intended to underline the European Union's ambition to become a leader in trustworthy and ethical AI.

Moreover, the creation of "AI factories" is planned; these will provide infrastructure and support services for developing AI applications, including data repositories and access to quality data for startups and small and medium-sized enterprises. Whether this ambition can be fulfilled despite the otherwise restrictive approach, however, remains an open question.

For more information, see the EU website.

Do you have any further questions? Do you need help or advice?

Please do not hesitate to contact us!

Mgr. Beata Sabolová, LL.M., Attorney - contact: sabolova@chslegal.eu

Mgr. David Cigánek, Attorney and Partner - contact: ciganek@chslegal.eu