Article 50 of the AI Act

Continuing with our series of articles on Artificial Intelligence legislation, today we are dedicating a special section to Article 50 of the AI Act.

Article 50 of the AI Act answers a very simple question: when do I have to clearly state that AI is involved? And the answer is: much more often than we might think.

The starting point is the idea of transparency: people have the right to know whether they are interacting with an AI system rather than a human, or whether the content they are viewing has been generated or manipulated by AI.

Article 50 turns this idea into specific legal obligations for providers and for the companies that deploy AI systems.

The article also regulates deepfakes, i.e. AI-generated or AI-manipulated video, audio or images that depict people, objects, places or events in a way that appears real and could lead the audience to believe that it is. Such content must be clearly labelled so that it does not mislead the people consuming it.

There are particularly sensitive cases, such as content that gives the impression of depicting real events or that can shape opinion on matters of public interest. At the other end, where there is editorial responsibility and someone who assumes authorship of the content, or where the deepfake serves a clearly artistic, satirical or fictional purpose, these cases may be treated as exceptions to the general labelling rule.

In the case of companies, it is important to cover the use of generative AI and its scope in the contractual relationship, to explain how authorship and rights to the results will be managed, and to set out what reasonable guarantees exist that third-party rights will not be infringed.

Getting down to practicalities: AI regulation in the day-to-day running of a media agency

And in my day-to-day life, in my agency, in my business, how does it affect me?

To begin with, think about the content you already generate with AI: copy, claims, headlines, video scripts, images, synthetic voiceovers, videos, chatbots… Every time a piece comes out, you’ll have to ask yourself: does the audience need to know that AI is involved here? Could this content confuse someone about whether what they are seeing is real or a recreation? The famous Article 50 requires transparency and, in certain cases, clear labelling of content generated or manipulated by AI, especially if it appears real (deepfakes) or deals with topics of public interest. Do you already have internal criteria for deciding what to label, how and where?
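To make that last question more concrete, here is a minimal, purely illustrative sketch of what such internal criteria could look like once written down as a checklist. The field names and the decision logic are hypothetical assumptions based on the points above, not an official reading of Article 50, and none of this is legal advice.

```python
# Purely illustrative sketch of an internal labelling checklist; not legal advice.
# The fields and rules paraphrase the criteria discussed above and are assumptions,
# not the text of Article 50.
from dataclasses import dataclass

@dataclass
class ContentPiece:
    ai_generated: bool            # was AI used to generate or manipulate the piece?
    appears_real: bool            # could the audience mistake it for a real person or event?
    public_interest_topic: bool   # does it touch on matters of public interest?
    artistic_or_satirical: bool   # clearly artistic, satirical or fictional context?

def needs_ai_label(piece: ContentPiece) -> bool:
    """Return True if this piece should carry a visible 'AI-generated' disclosure."""
    if not piece.ai_generated:
        return False
    # Content that looks real or touches public-interest topics is the core case.
    if piece.appears_real or piece.public_interest_topic:
        # Artistic, satirical or fictional uses may fall under an exception,
        # but a discreet disclosure is usually still the prudent choice.
        return not piece.artistic_or_satirical
    return False

# Example: a synthetic voiceover imitating a real spokesperson on a news topic.
print(needs_ai_label(ContentPiece(True, True, True, False)))  # -> True
```

In practice these criteria would live in your editorial workflow or asset-management tooling rather than in a script, but even a toy version like this forces the team to answer the same questions for every piece.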

The second issue is audience segmentation and profiling. Agencies work with their own data, customer data, and data provided by large platforms. Here, the GDPR and the DSA set the rules: what data you can use, on what legal basis, to what extent you can combine sources, and where the red line is for sensitive targeting (health, politics, religion, etc.) or minors. If you are building lookalike audiences, attributing conversions with advanced models, or using AI to score leads, you are in fact processing personal data with a direct impact on real people. Are you clear about what you would explain to a user who asked you, ‘Why am I seeing this ad?’ Most likely, yes: most of the platforms we use every day already build these rules into their tools so that campaigns remain both practical and lawful.
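On the data side, the same exercise can be sketched as a hypothetical pre-flight check for an audience segment. The category names, parameters and helper below are invented for this example and deliberately simplify the GDPR and DSA points above; they are not a compliance tool.

```python
# Purely illustrative sketch; not legal advice. Categories and checks are
# hypothetical simplifications of the GDPR/DSA points discussed above.
SENSITIVE_CATEGORIES = {"health", "politics", "religion", "sexual_orientation", "ethnicity"}

def segment_preflight(targeting_criteria: set[str],
                      audience_includes_minors: bool,
                      legal_basis_documented: bool) -> list[str]:
    """Return a list of issues to resolve before activating the segment."""
    issues = []
    if not legal_basis_documented:
        issues.append("No documented legal basis for processing this data.")
    if targeting_criteria & SENSITIVE_CATEGORIES:
        issues.append("Segment relies on sensitive categories: a red line for ad targeting.")
    if audience_includes_minors:
        issues.append("Audience includes minors: profiling-based ads are off limits.")
    return issues

# Example: a lookalike audience built on interest and location data, minors excluded.
print(segment_preflight({"interests", "geo"},
                        audience_includes_minors=False,
                        legal_basis_documented=True))  # -> []
```

Again, the point is not the code itself but having an explicit, repeatable check before a segment goes live.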

The regulation also applies to internal agency processes. If you use AI for recruitment, filtering CVs, prioritising candidates or evaluating performance, you are approaching the realm of high-risk systems under the AI Act, which requires human oversight, bias analysis, documentation and impact assessments.

Finally, we must not forget the role of contracts: introducing clauses on the use of AI and copyright in agreements with clients. It is best to make it clear from the outset whether generative AI will be used in processes that have creative outputs, who owns the rights to the outputs, what guarantees the agency offers in terms of originality and respect for third-party rights, and how content labelling will be managed when necessary.

On a day-to-day basis, it is important to know: What am I doing with AI? What tools am I using? What guarantees do I offer my stakeholders? And what value? Using AI with legal guarantees means you can work with peace of mind and legitimacy while delivering genuine value efficiently.

What will happen in the coming years?

Now that we know the current situation, we must be aware of what is coming in the next few years. Between 2026 and 2027, AI regulation in the EU will cease to be “what is coming” and become part of everyday operations.

On the one hand, there is the full implementation of obligations for high-risk systems. From 2 August 2026, providers and deployers of these solutions (e.g. AI for selecting personnel, assigning credit or deciding on access to essential services) will have to comply in full with requirements for risk management, data quality, human oversight, documentation and systematic logging.

At the same time, labelling AI-generated content is becoming an almost universal standard. From 2 August 2026, the transparency obligations of Article 50 will begin to apply in full, and the Commission is developing a specific Code of Practice on AI-generated content based on these obligations.

Not everything, however, is settled. In 2025, the Commission withdrew the proposal for an AI Liability Directive, which sought to harmonise civil liability for damage caused by AI systems. This leaves a gap: the AI Act sets out what constitutes a “lawful” AI system, but the answer to the question “who pays when something goes wrong?” still depends on national law (product liability, fault-based liability, insurance, etc.).

Part of the challenge in 2026–2027 will be to see whether standards, codes of practice, automated compliance tools and support from public authorities manage to make the regulation workable… We will need to make a collective effort to adapt to the new regulatory framework and not only survive it, but actually use it as a springboard to shine even more in the process.
