What’s happening in EU? A gentle introduction to the AI Act
Francesca Lizzi
INFN Pisa
1st AI INFN Hackathon, 27/11/2024, Padova
AI Act
On 1 August 2024, the European Artificial Intelligence Act (AI Act) entered into force. The Act aims to foster responsible artificial intelligence development and deployment in the EU.
Today, I will try to explain some key points that could help you understand the terminology and the logic behind this regulation.
Outline of the presentation:
I am not a lawyer; this talk is meant to discuss our opinions about this kind of regulation.
AI Act - Scope
Improving the functioning of the internal market and promoting the deployment of human-centered and trustworthy artificial intelligence (AI)
Ensuring a high level of protection of health, safety and fundamental rights enshrined in the Charter of Fundamental Rights of the European Union, including democracy, the rule of law and environmental protection, against the harmful effects of AI systems in the Union
Promoting innovation
AI Act - Article 1: Subject Matter
This Regulation lays down:
Prohibitions on certain AI practices
Measures to support innovation, with a particular focus on SMEs, including start-ups
Rules on market monitoring, market surveillance, governance and enforcement
Specific requirements for high-risk AI systems and obligations for operators of such systems
Harmonised transparency rules for certain AI systems
Harmonised rules for the placing on the market, putting into service and use of general-purpose AI systems and AI models in the Union
Chapter I - Article 3: definitions
Just to give you an idea of how complex it is to understand this regulation: there are 68 definitions!
AI system
“…a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”
Provider
“a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge”
Deployer
“a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity”
Chapter I, Article 3: two more definitions
General Purpose AI (GPAI)
“an AI system which is based on a general-purpose AI model and which has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems”
Risk
This is a very important definition: the whole AI Act is based on risk assessment.
“the combination of the probability of an occurrence of harm and the severity of that harm”
Risk-based assessment: taxonomy of risk
The AI Act establishes three risk categories; let’s have a look!
Unacceptable risk (Chapter II)
Low or minimal risk
High risk: the border between the previous two
So, when is an AI system considered high-risk?
The AI system represents a product or safety component of a product governed by the harmonization legislation in Annex I (e.g., EU Reg. 2017/745 and EU Reg. 2017/746 on medical devices)
(+)
The aforementioned product (whether it is the AI system itself or the product of which the AI system is a safety component) is subject to conformity assessment, for the purposes of its placing on the market or putting into service, by a third party, pursuant to the legislation in Annex I
This holds regardless of whether the AI system is placed on the market or put into service independently of the aforementioned products.
Areas included in Annex III:
- biometric identification and categorisation of persons;
- management and operation of critical infrastructure;
- education and vocational training;
- employment, management of workers and access to self-employment;
- access to and use of essential private services and essential public services;
- law enforcement activities;
- migration management, asylum and border control;
…
Exception:
The system is not considered high-risk if it does not present a significant risk of harm to the health, safety or fundamental rights of natural persons, including in the sense of not materially influencing the outcome of the decision-making process.
This is the case if at least one of the following conditions is met:
a) the AI system is intended to perform a limited procedural task;
b) the AI system is intended to improve the outcome of a previously completed human task;
c) the AI system is intended to detect decision-making patterns or deviations from previous decision-making patterns, and is not intended to replace or influence a previously completed human assessment without adequate human review;
d) the AI system is intended to perform a preparatory task for an assessment relevant for the purposes of the use cases listed in Annex III.
GPAI: ongoing consultation
General-purpose AI systems receive special attention in the AI Act but… what is a GPAI?
A GPAI model is classified as a model with systemic risk if:
(a) it has high impact capabilities evaluated on the basis of appropriate technical tools and methodologies, including indicators and benchmarks;
(b) based on a decision of the Commission, ex officio or following a qualified alert from the scientific panel, it has capabilities or an impact equivalent to those set out in point (a), having regard to the criteria set out in Annex XIII.
From Annex XIII:
- the quality and size of the data;
- its level of independence and scalability;
- the model’s performance on various tasks;
- the type of input and output of the model;
- the computational power needed to train the model;
- the model’s complexity;
- its impact on the market.
The Code of Practice and the role of stakeholders and INFN
Providers of GPAI models must be compliant with the Code of Practice, which is currently being drafted.
“One ring to rule them all”?
To rule what?
All the other regulations connected to AI
Thank you for your kind attention!
Comments? Questions?
Penalties
These penalties must be effective, proportionate and dissuasive, and must take into account the interests of small and medium-sized enterprises (SMEs) and start-ups.
- Non-compliance with the prohibited AI practices: fines up to 35 million EUR or 7% of a company’s total worldwide annual turnover, whichever is higher.
- Other violations: fines up to 15 million EUR or 3% of annual turnover.
- Providing incorrect or misleading information: fines up to 7.5 million EUR or 1% of annual turnover.
For SMEs and start-ups, the lower of the two amounts applies. The severity of the fine depends on various factors, including the nature of the violation, the size of the company, and any previous violations. Member States must report annually to the Commission on the fines they have issued.
As I have been appointed as the INFN Point of Contact to the European Commission for the writing of the General Purpose AI (GPAI) Code of Practice (CoP) of the AI Act, I would like to propose a short talk about it. The AI Act is the first law in the world that attempts to regulate the Artificial Intelligence market in the EU, and it is based on a risk-assessment policy. I will illustrate some key points, starting from the definitions up to the taxonomy of risk. Finally, I will discuss how the drafting of the GPAI CoP has been carried out so far: we have arrived at the Third Draft, and the deadline for completing it has been set for May 2025. The discussion takes place in four Working Groups: the first is focused on transparency and copyright, the second on risk assessment for systemic risk, the third on technical risk mitigation, and the last on governance risk mitigation. It is important to discuss these topics inside INFN in order to build a common perspective and a vision of what we and our institute can build in the near future on AI initiatives.