The Artificial Intelligence Act: EU’s Attempt to Regulate AI


On 21 April 2021, the European Commission released the first proposal to regulate artificial intelligence (AI): the Artificial Intelligence Act (“AI Act”). The AI Act is the first major attempt to regulate AI, and it is relevant not just for the European Union but also for many other countries, which will be able to use it as a reference for potential future AI regulations.


At the international level, the act will surely attract controversy among non-European actors who want to bring their AI products and services to European territory. Conflicts may arise similar to the one triggered by the overruling of the Privacy Shield, which concerned the United States’ lack of compliance with the EU’s minimum privacy guarantees when European users’ personal data was transferred to U.S. territory (see the 2020 Court of Justice of the European Union case Data Protection Commissioner v. Facebook Ireland Ltd., also called Schrems II, which invalidated the then-existing Privacy Shield).


As addressed throughout the Artificial Intelligence 101 series, AI has tremendous potential to benefit society; however, it also poses inherent risks that can no longer be overlooked given the current state of AI development. AI applications and systems often create risks that need to be assessed and mitigated as much as possible in order to guarantee a minimum level of security and trustworthiness. When these risks cannot be mitigated or controlled, the AI in question should not be used. First and foremost, fundamental rights and the protection of individuals have to be ensured.


In this regard, the AI Act proposal is the result of extensive consultations with major stakeholders on the contents of the White Paper on Artificial Intelligence, published on 19 February 2020. Among the main conclusions of the public consultation were:


1) A general sense that action is needed, given major gaps in the current legal framework, together with the need for new legislation that nevertheless avoids overregulation.

2) The need for a clear definition of AI and of the terms “risk”, “high-risk”, and “low-risk”.

3) A tendency among stakeholders to favor a combination of ex-ante risk self-assessment and ex-post enforcement for high-risk AI systems.


In light of these findings, the AI Act was drafted to reflect the following objectives:

  • ensure that AI systems used in the EU are safe and respect existing laws on fundamental rights and Union values;

  • ensure legal certainty to facilitate investment and innovation in AI;

  • enhance governance and effective enforcement of existing laws on fundamental rights and safety requirements applicable to AI systems;

  • facilitate the development of a single market for lawful, safe, and trustworthy AI applications and prevent market fragmentation.

Davis, A. (n.d.). Abstract robot face [Illustration].

With the above framework in mind, a cursory look at some of the main provisions of the AI Act is in order:


1) The act provides the main AI rules, which apply across all industries, by introducing a risk framework divided into four categories: unacceptable risk, high risk, limited risk, and minimal risk (a schematic sketch of this tiering appears after this list).


2) In line with this risk assessment, AI products and systems will have to comply with EU safety and compliance standards to obtain the “CE” marking, which certifies compliance with safety and security standards. As a market conformity requirement, AI products and systems will only be allowed to enter the European Economic Area once they have obtained the CE marking.


3) It also creates a new enforcement body at the European level: the European Artificial Intelligence Board (EAIB). This new body will also have a presence at the Member State level through national supervisory authorities, resembling the control mechanism of the GDPR.


4) Also, since regulations are rarely effective without economic consequences, the AI Act foresees fines for violations of its rules of up to 6% of global annual turnover or 30 million euros, whichever is higher, for private entities (the arithmetic is sketched right after this list).
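To make the fine ceiling in point 4 concrete, the short sketch below computes the upper bound as the higher of the two figures named in the proposal. This is a minimal illustration, not legal advice: the function name and the turnover figures are invented for the example.

```python
def max_fine(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of an AI Act fine under the 2021 proposal:
    the higher of EUR 30 million or 6% of worldwide annual turnover."""
    return max(30_000_000, 0.06 * worldwide_annual_turnover_eur)

# A hypothetical company with EUR 2 billion in worldwide annual turnover:
print(f"{max_fine(2_000_000_000):,.0f} EUR")  # -> 120,000,000 EUR

# A smaller provider is still exposed to the flat EUR 30 million ceiling:
print(f"{max_fine(100_000_000):,.0f} EUR")    # -> 30,000,000 EUR
```

The “whichever is higher” rule means the 6% turnover cap only bites for companies whose worldwide turnover exceeds 500 million euros; below that, the flat 30-million-euro ceiling applies.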

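The risk-assessment pyramid below depicts the tiering described in point 1. As a compact summary of the same logic, here is a minimal sketch: the four tier names come from the proposal, while the consequence strings are a loose paraphrase for illustration, not the Act’s wording.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk categories introduced by the AI Act proposal."""
    UNACCEPTABLE = "unacceptable risk"  # prohibited practices
    HIGH = "high risk"                  # strict ex-ante requirements
    LIMITED = "limited risk"            # transparency obligations
    MINIMAL = "minimal risk"            # no new obligations

# Illustrative mapping of each tier to its broad regulatory consequence.
CONSEQUENCE = {
    RiskTier.UNACCEPTABLE: "banned from the EU market",
    RiskTier.HIGH: "conformity assessment and CE marking before market entry",
    RiskTier.LIMITED: "users must be told they are interacting with an AI",
    RiskTier.MINIMAL: "voluntary codes of conduct only",
}

for tier in RiskTier:
    print(f"{tier.value}: {CONSEQUENCE[tier]}")
```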
Unknown. (2021). Risk-assessment pyramid of the AI Act [Graphic].

Although the AI Act seems to be a fair approach to regulating AI and has been well received by most stakeholders and organizations, it is not devoid of criticism regarding missed points and a lack of definition in certain matters. To highlight some of the most criticized areas, the Ada Lovelace Institute published a report by Lilian Edwards, a leading academic in the field of Internet law: Expert opinion: Regulating AI in Europe – Four problems and four solutions. Some of the problems it points out are explained below:


1) AI is more than just a product, and it cannot be adequately tackled under the European product framework. This also means that the much-needed allocation and distribution of responsibility throughout the AI lifecycle is left insufficiently addressed: “AI is not a product nor a ‘one-off’ service, but a system delivered dynamically through multiple hands (‘the AI lifecycle’) in different contexts with different impacts on various individuals and groups.” (Edwards, 2022, p. 5).


In line with the discussions in the article Artificial Intelligence 101: A Special Mention to Accountability, the report identifies what the Act leaves unresolved: “The Act fails to take on the work, which is admittedly difficult, of determining what the distribution of sole and joint responsibility should be contextually throughout the AI lifecycle, to protect the fundamental rights of end users most practically and completely.” (Edwards, 2022, p. 7).


2) Individuals who suffer the consequences of AI have no say or rights under the AI Act.


“Those impacted by AI systems – sometimes thought of as end-users, data subjects or consumers – have no rights, and almost no role in the AI Act. This is incompatible with an instrument whose function is to safeguard fundamental rights.” (Edwards, 2022, p. 5).


3) The risk table is not enough to fully determine the potential consequences of an AI system, and there is no specific assessment of fundamental-rights violations.


“The alleged ‘risk-based’ nature of the Act is illusory and arbitrary. A genuine assessment of risk based on reviewable criteria is necessary. […] The Act lacks a general fundamental rights risk assessment, for all AI systems in scope of the Act, not just ‘high-risk’ AI.” (Edwards, 2022, p. 11).


Davis, A. (n.d.). Why an insecure internet is actually in tech companies’ best interest [Illustration].

No regulation is perfect. Indeed, AI standards that force humans to adapt rapidly to new circumstances would have been thought utopian ten years ago. The technological revolution is unfolding at a much faster pace than, for example, the Industrial Revolution did. Relevant technological changes now appear and impact our lives within a margin of 5 to 10 years, leaving little room for humans to sit back and observe without taking proactive measures. In light of the current times, the AI Act may well not be perfect, but it represents a strong international commitment to protecting people’s rights and interests.


To achieve the best possible standards, attention also has to be paid to expert opinions. For example, Professor Mauritz Kop, from Stanford Law School, argues:


Responsible, trustworthy AI requires awareness from all parties involved, from the first line of code. The way we design our technology is shaping the future of our society. In this vision, democratic values and fundamental rights play a key role. Indispensable tools to facilitate this awareness process are AI impact and conformity assessments, best practices, technology roadmaps, and codes of conduct. These tools are executed by inclusive, multidisciplinary teams, that use them to monitor, validate and benchmark AI systems. It will all come down to ex-ante and life-cycle auditing. (Kop, 2021, p. 1).

Considering the current state of AI creation, development, and implementation, which is evolving at a rapid pace, the timing of the AI Act could not be better: it works toward preventing harmful discrimination and provides, at the very least, a minimum level of harmonized standards to which AI actors must conform. With time, the criticisms and recommendations of scholars and other experts will hopefully be implemented and the AI Act adjusted accordingly. For now, it is encouraging to see the EU’s attempt to launch a first enforceable global regulatory model that balances innovation, development, and implementation in the AI sector with the protection of the rights and freedoms of all citizens, so that everyone can benefit from this technology.


Bibliographical references:

Court of Justice of the European Union. (2020). Judgment of 16 July 2020, Schrems II, C-311/18, ECLI:EU:C:2020:559.


Edwards, L. (2022). Regulating AI in Europe: four problems and four solutions. Ada Lovelace Institute. https://www.adalovelaceinstitute.org/report/regulating-ai-in-europe/


European Commission. (2020). White Paper on Artificial Intelligence - A European approach to excellence and trust.


European Commission. (2021). Proposal for a regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts.


European Institute of Public Administration. (2021). The Artificial Intelligence Act Proposal and its Implications for Member States. EIPA.


Kop, M. (2021). EU Artificial Intelligence Act: The European Approach to AI. Transatlantic Antitrust and IPR Developments. Stanford Law School.


Lomas, N. (2022). Europe's AI Act contains powers to order AI models destroyed or retained, says legal expert. TechCrunch.


Visual sources

Davis, A. (n.d.). Abstract robot face. [Illustration]. Retrieved from: https://arielrdavis.com/Wired-1


Davis, A. (n.d.). Why an insecure internet is actually in tech companies' best interest. [Illustration]. Retrieved from: https://arielrdavis.com/TED


Unknown. (2021). Risk-assessment pyramid of the AI Act [Graphic]. Retrieved from: https://law.stanford.edu/publications/eu-artificial-intelligence-act-the-european-approach-to-ai/

Mar Estrach