Why is it necessary to consider regulating artificial intelligence?

When we think of artificial intelligence, the first thing that usually comes to mind is ChatGPT, or the generation of images, videos, music and so on. Then we might think of machine learning, neural networks or robots. But there is one issue that is often overlooked, yet is crucial if this technological revolution is to continue on solid ground: its regulation.

The legal framework governing AI is not a trivial matter, as we are talking about issues that increasingly affect us: encroachment on fundamental rights, possible cases of discrimination, misinformation or opacity, the use of deepfakes or cloned voices… That is why we have decided to create a series of articles examining how different jurisdictions are dealing with it.

On the global stage, different players are opting for different approaches, as each region has its own legal tradition and regulatory framework. Today we will focus on how the European Union is approaching the issue.

The European approach: the AI Act

The EU is taking an approach based on risk, fundamental rights and the digital single market, in contrast to the options chosen elsewhere, such as laissez-faire or state-controlled models. The AI Act, the law intended to serve as a legislative umbrella for artificial intelligence, entered into force in August 2024 and is now being applied in stages.


But getting to the AI Act has been a long process, and not without controversy.

First steps

In August this year, we passed the halfway point of the rollout: the deadline by which Member States had to designate the national competent authorities responsible for this area and notify the Commission of those designations.

That month also marked the deadline for Member States to lay down rules on penalties and fines, notify the Commission of them, and ensure they are properly enforced. In addition, the obligations for providers of general-purpose AI models, such as those behind ChatGPT, became applicable.

In other words, the groundwork has been laid for the unified European rules on AI not only to be adopted but also to be fully implemented, a process expected to be completed by 2027.

And what happens between now and 2027?

Although the implementation of this regulation is not expected to be complete until 2027, some of its provisions already apply.

This is the case of Article 5 of the AI Act, which, since February of this year, sets out the practices that are prohibited in relation to artificial intelligence:

  1. Manipulation and exploitation of vulnerabilities: AI systems that manipulate people in a subliminal or deceptive manner are prohibited, as are systems that exploit people’s vulnerabilities (for example, due to age, disability or social or economic situation) in a way that causes or is likely to cause them significant harm.
  2. Social scoring: The EU prohibits systems that score individuals or groups based on different personal characteristics (direct or inferred) or their behaviour and use that score to treat them in an unjustified or disproportionate manner.
  3. Predictive policing: That is, it is prohibited to use AI to predict who is most at risk of committing a crime based solely on their profile, traits, or personal characteristics. However, AI is permitted to support an assessment when there are already objective and verifiable facts of criminal activity.
  4. Facial recognition databases: Creating or expanding facial recognition databases through the mass, untargeted scraping of facial images (for example, from the internet or CCTV footage) is prohibited.
  5. Reading emotions at work and in education: Inferring people’s emotions in workplace or educational settings is considered too invasive and unreliable and is therefore prohibited, although exceptions exist for medical or safety reasons.
  6. Biometric categorisation of particularly sensitive characteristics: Race, political opinions, religious beliefs, sex life, etc. As a caveat, the labelling or filtering of lawfully acquired biometric datasets for law enforcement purposes is excluded.
  7. Real-time remote biometric identification in public spaces for policing purposes, except in specific cases such as searching for specific victims (of kidnapping, human trafficking, missing persons, etc.), preventing serious, imminent and specific threats of terrorist attacks, or locating or identifying suspects of serious crimes.
  8. If a use of AI was already unlawful under the GDPR or any other law, it remains unlawful regardless of the AI Act.

The foundations for comprehensive regulation across European Union Member States are clear, but this is only the beginning of our series of articles on AI regulation. We can already begin to glimpse how complicated it is to regulate a technology that delivers new advances practically every day.
