Enforcement status: Article 50 enforceable from 2 August 2026

The EU AI Act's transparency obligations under Article 50 become enforceable on 2 August 2026. If your organisation provides or deploys AI systems that generate synthetic content accessible in the EU - images, video, audio, or text - you will be required to label that content in a machine-readable format. Non-compliance can result in fines of up to €15 million or 3% of global annual turnover, whichever is higher.

What does the AI Act actually require?

The EU AI Act is the world's most comprehensive regulation of artificial intelligence. It covers everything from high-risk AI systems in healthcare and law enforcement to general-purpose AI models. But for content provenance, the critical section is Article 50: Transparency Obligations.

Article 50 requires two things related to synthetic content. First, providers of AI systems that generate synthetic images, video, audio, or text must ensure their outputs are marked in a machine-readable format as artificially generated or manipulated. Second, deployers of AI systems that generate or manipulate content constituting a "deep fake" must disclose that the content has been artificially generated or manipulated.

The regulation is technology-neutral - it doesn't name specific tools or standards. But the requirement for "machine-readable" labelling that is "detectable, interoperable, robust, and reliable" narrows the field significantly. In practice, the leading mechanism for satisfying these requirements is C2PA Content Credentials.

Who does it apply to?

The scope is broader than many organisations initially expected. The AI Act applies to:

Providers - any organisation that develops or places an AI system on the EU market, regardless of where they are headquartered. If your AI generates synthetic content that's accessible to EU users, you're in scope. This includes American, Chinese, and other non-EU companies that serve EU customers.

Deployers - any organisation that uses an AI system under its authority within the EU. If you use an AI image generator as part of your marketing, customer service, or content pipeline, you have transparency obligations even if you didn't build the AI yourself.

Importers and distributors - organisations that bring AI systems into the EU market or make them available within the EU.

Key distinction: provider vs deployer

A provider builds the AI system. OpenAI is a provider of DALL·E. Adobe is a provider of Firefly. Their obligation is to ensure their system marks outputs as AI-generated.

A deployer uses the AI system. A marketing agency using DALL·E to create campaign imagery is a deployer. Their obligation is to disclose that the content is AI-generated when publishing it, particularly for deep fakes.

If you are both a provider and a deployer (you built AI tools and use them internally), both sets of obligations apply.

Timeline: what's enforceable and when

August 1, 2024 - AI Act enters into force. The regulation is officially law. No obligations are yet enforceable, but the clock starts ticking.
February 2, 2025 - Prohibited AI practices enforceable. Banned uses of AI (social scoring, real-time biometric surveillance, etc.) become enforceable, along with the AI literacy requirements.
August 2, 2025 - General-purpose AI obligations enforceable. Obligations for providers of general-purpose AI models apply, together with the governance and penalties provisions.
August 2, 2026 - General applicability, including Article 50. This is the critical date for content labelling: AI systems generating synthetic content must mark their outputs, and deep fake disclosures are required. Most remaining provisions, including those for high-risk AI systems under Annex III, also apply from this date.
August 2, 2027 - High-risk systems in Annex I. Obligations for high-risk AI in regulated products (medical devices, machinery, etc.) become enforceable.

The critical takeaway: Article 50 enforcement is months away, not years. If your AI systems generate synthetic content accessible in the EU and you are not yet labelling that content, you will be non-compliant on 2 August 2026 - and integrating machine-readable labelling into a production content pipeline is not a last-minute exercise.

Article 50 in detail

Article 50 sets out specific transparency obligations for different categories of AI systems. Here's what matters for content labelling:

Article 50(2) addresses providers of AI systems that generate synthetic content - images, audio, video, or text. Providers must ensure that the outputs are marked in a machine-readable format and are detectable as artificially generated or manipulated. The marking must be effective, interoperable, robust, and reliable, taking into account the type of content, implementation costs, and the generally acknowledged state of the art.

Article 50(4) addresses deployers who use AI to generate or manipulate deep fakes. They must disclose that the content has been artificially generated or manipulated. This disclosure must be made clearly and in a way that's accessible to the affected persons.

Article 50(4) also provides a limited exemption. Where content forms part of an evidently artistic, creative, satirical, fictional, or analogous work, the disclosure may be made in a way that doesn't hamper the display or enjoyment of the work. Even in these cases, the provider's machine-readable marking under Article 50(2) is still required - only the user-facing disclosure may be adapted.

An important nuance: the text doesn't say "watermark" or "metadata tag" or "C2PA." It says "machine-readable format" that is "detectable, interoperable, robust, and reliable." This technology-neutral language is deliberate - the Commission didn't want to enshrine a specific standard into law. But it also means that organisations must choose a technical mechanism that meets these criteria.

What counts as "machine-readable" labelling?

The AI Act's requirement for "machine-readable" labelling that is "detectable, interoperable, robust, and reliable" effectively rules out several approaches while favouring others:

| Approach | Machine-readable? | Interoperable? | Robust? | Likely compliant? |
|---|---|---|---|---|
| C2PA Content Credentials | Yes | Yes - open standard | Cryptographic, tamper-evident | Strong case |
| Invisible watermarking (SynthID, etc.) | Yes | Varies by implementation | Survives re-encoding | Likely, if interoperable |
| EXIF/IPTC metadata tags | Yes | Yes - established standards | Easily stripped | Weak - not robust |
| Visible "AI generated" label only | No - human-readable, not machine-readable | N/A | Can be cropped | Insufficient alone |
| Terms of service disclosure only | No | N/A | N/A | Insufficient |

The European Commission has signalled through guidance and working papers that C2PA Content Credentials satisfy the Article 50 requirements. The standard is machine-readable (structured CBOR data), interoperable (open specification, multiple implementations), robust (cryptographic signatures, tamper-evident), and reliable (backed by a trust list and conformance programme). No formal harmonised standard has been designated yet - that process is underway through CEN/CENELEC - but C2PA is the leading candidate.

In practical terms: if you implement C2PA Content Credentials on your AI-generated content today, you have the strongest available compliance position. If a harmonised standard is eventually designated and it differs from C2PA (which is unlikely given the current landscape), you would need to adapt - but C2PA's extensible design makes this straightforward.

How C2PA satisfies the requirements

Here's how C2PA Content Credentials map to each element of Article 50:

"Marked in a machine-readable format" - Content Credentials are structured data (CBOR-encoded manifests) embedded in or alongside the file. Any tool implementing the open C2PA specification can read and parse them programmatically.

"Detectable as artificially generated or manipulated" - Content Credentials include assertions that explicitly identify content as AI-generated. When an AI tool like DALL·E or Firefly signs an image, the manifest includes a c2pa.actions assertion whose creation action carries the IPTC digitalSourceType value trainedAlgorithmicMedia. This is an unambiguous, machine-readable declaration of AI generation.
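The detection step can be sketched in code. The dict below mirrors, in heavily simplified form, the JSON reports that C2PA verification tools emit for a manifest's assertions; real reports are richer, and the exact labels should be confirmed against the current C2PA specification before relying on them:

```python
# Illustrative check for an AI-generation marker in a parsed C2PA manifest.
# The digitalSourceType convention below is the one generators such as
# DALL·E use in practice; the manifest shape is simplified for clarity.

TRAINED_ALGORITHMIC_MEDIA = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def is_ai_generated(manifest: dict) -> bool:
    """Return True if a c2pa.actions assertion declares AI generation."""
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") != "c2pa.actions":
            continue
        for action in assertion.get("data", {}).get("actions", []):
            if action.get("digitalSourceType") == TRAINED_ALGORITHMIC_MEDIA:
                return True
    return False

# A manifest fragment as an AI image generator might emit it:
manifest = {
    "claim_generator": "ExampleImageGen/1.0",  # hypothetical generator name
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {
                        "action": "c2pa.created",
                        "digitalSourceType": TRAINED_ALGORITHMIC_MEDIA,
                    }
                ]
            },
        }
    ],
}

print(is_ai_generated(manifest))  # True
```

Because the check is pure structured-data traversal, any system in the pipeline - a CMS, a social platform, a browser extension - can make the same determination programmatically, which is exactly what "machine-readable" demands.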

"Interoperable" - C2PA is an open standard published by a non-profit under the Linux Foundation. The specification is freely available. Multiple independent implementations exist in Rust, Python, JavaScript, C, and other languages. Content signed by one implementation can be verified by any other.

"Robust and reliable" - Content Credentials are cryptographically signed using X.509 certificates. Modifying the content or the metadata after signing invalidates the hash and signature, making tampering detectable. The C2PA Conformance Programme ensures that signing implementations meet security requirements. Time stamping provides independent proof of when content was signed.
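The tamper-evidence property rests on binding the signed manifest to a cryptographic hash of the asset. A minimal illustration of the principle (real C2PA uses hard bindings over specified byte ranges plus COSE signatures; plain SHA-256 stands in here):

```python
import hashlib

def content_hash(data: bytes) -> str:
    # C2PA binds a manifest to the asset via cryptographic hashes of the
    # asset's bytes ("hard bindings"); SHA-256 illustrates the idea.
    return hashlib.sha256(data).hexdigest()

original = b"...image bytes as signed by the generator..."
recorded = content_hash(original)  # value stored inside the signed manifest

# Any later modification changes the hash, so the recorded value no
# longer matches and the tampering is detectable.
tampered = original + b"one extra byte"
print(content_hash(original) == recorded)   # True  - content unchanged
print(content_hash(tampered) == recorded)   # False - tamper detected
```

The signature over the manifest plays the complementary role: it prevents an attacker from simply recomputing the hash and rewriting the recorded value, since that would invalidate the signature.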


Penalties for non-compliance

The AI Act establishes a tiered penalty structure. For transparency obligation violations (including Article 50), the penalties are:

Up to €15 million or 3% of global annual turnover (whichever is higher) for non-compliance with transparency obligations. For a company with €1 billion in annual revenue, this means a potential fine of up to €30 million.
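The "whichever is higher" rule means the effective cap scales with company size. A one-line sketch of the arithmetic:

```python
def max_article50_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound of the AI Act fine for transparency violations:
    EUR 15 million or 3% of worldwide annual turnover, whichever is higher."""
    return max(15_000_000.0, 0.03 * global_annual_turnover_eur)

print(max_article50_fine_eur(1_000_000_000))  # 30000000.0 - EUR 30m at EUR 1bn turnover
print(max_article50_fine_eur(100_000_000))    # 15000000.0 - the EUR 15m floor applies
```

The turnover-based figure overtakes the fixed floor once global annual turnover exceeds €500 million.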

For SMEs and startups, the regulation provides for proportionate penalties, with fines capped at the lower of the two thresholds. But even "proportionate" fines for transparency violations can be significant for smaller organisations.

Enforcement is handled by national competent authorities in each EU member state. The European AI Office coordinates cross-border enforcement and handles obligations related to general-purpose AI models. As of early 2026, national authorities in most major EU member states are operational or in the process of being established.

Beyond direct fines, non-compliance creates secondary risks: reputational damage, loss of customer trust (particularly in B2B relationships where EU compliance is a procurement requirement), and potential civil liability if AI-generated content causes harm and was not properly labelled.

Compliance checklist

Article 50 compliance - practical steps
1. Audit your AI systems. Identify every AI system in your organisation that generates or manipulates images, video, audio, or text. Include third-party tools (DALL·E, Midjourney, Firefly) as well as internal systems. Map which outputs reach EU users.
2. Classify your role. For each AI system, determine whether you are a provider (you built or customised it), a deployer (you use it), or both. Each role carries different obligations under Article 50.
3. Implement machine-readable labelling. For AI-generated content, attach C2PA Content Credentials identifying the content as AI-generated. Use the official SDKs (c2pa-rs, c2pa-node, c2pa-python) to integrate signing into your content pipeline.
4. Obtain a signing certificate. For production use, go through the C2PA Conformance Programme to receive a trusted signing certificate. For initial implementation and testing, self-signed certificates work but won't be trusted by verification tools.
5. Implement user-facing disclosure. For deep fakes and content that could be mistaken for depicting real events, add clear disclosure that the content is AI-generated. This is in addition to the machine-readable marking - both are required.
6. Document your compliance. Maintain records of your AI systems, the labelling mechanisms you've implemented, and your compliance rationale. If a national authority requests evidence of compliance, you need to demonstrate your approach.
7. Monitor and update. The regulatory landscape is evolving and harmonised standards are being developed. Monitor for updates from the European AI Office, CEN/CENELEC, and your national competent authority. Review your implementation annually at minimum.
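Step 3 can also be driven from a script around c2patool, the C2PA reference CLI. The sketch below builds a manifest definition and constructs (without executing) the signing command; the JSON fields follow c2patool's manifest-definition format in simplified form, the product name is hypothetical, and field names should be verified against the tool's documentation:

```python
import json

# Sketch: attach a Content Credential to generated media with c2patool.
manifest_definition = {
    "claim_generator": "AcmePipeline/1.0",  # hypothetical product identifier
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {
                        "action": "c2pa.created",
                        "digitalSourceType": "http://cv.iptc.org/newscodes/"
                                             "digitalsourcetype/trainedAlgorithmicMedia",
                    }
                ]
            },
        }
    ],
}

with open("manifest.json", "w") as f:
    json.dump(manifest_definition, f, indent=2)

# Invocation sketch (not executed here): sign generated.jpg and write the
# credentialed copy to labelled.jpg using the manifest definition above.
command = ["c2patool", "generated.jpg", "-m", "manifest.json", "-o", "labelled.jpg"]
print(" ".join(command))
```

In a real pipeline this would run once per asset at the point of creation, using the trusted certificate from step 4 rather than c2patool's test credentials.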

Common questions

We use AI tools but only internally - does Article 50 apply?

If AI-generated content remains entirely internal and never reaches EU citizens, Article 50's transparency obligations are less directly applicable. However, if internal AI-generated content eventually enters any external communication - marketing materials, reports, presentations shared externally - the obligations apply to that content. The safest approach is to label all AI-generated content at the point of creation, regardless of intended use.

We're based outside the EU - does this apply to us?

Yes, if your AI systems or their outputs are accessible to people in the EU. The AI Act has extraterritorial reach, similar to GDPR. A US company whose AI-generated images appear on websites accessible to EU users is in scope.

Our AI vendor already labels content - is that enough?

If you use a provider like OpenAI or Adobe that already attaches Content Credentials to AI outputs, the provider is fulfilling its obligation under Article 50(2). However, as a deployer, you may have additional obligations under Article 50(4) - particularly for deep fakes and content that could be mistaken for real. You should also verify that the provider's labelling is preserved through your content pipeline. If your CMS, CDN, or publishing workflow strips metadata, the labelling may not survive to the end user.
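A quick pipeline smoke test is to compare an asset's bytes before and after publishing. Embedded C2PA manifests travel in JUMBF boxes, so a crude byte scan can flag obvious stripping; this is only a heuristic (a real check should use a C2PA verifier), and the byte strings below are illustrative stand-ins for actual files:

```python
def may_contain_c2pa(data: bytes) -> bool:
    # Heuristic: embedded C2PA manifests live in JUMBF boxes, which carry
    # "jumb" box types and "c2pa"-labelled content. A byte scan can only
    # suggest presence or absence - use a real C2PA verifier for certainty.
    return b"jumb" in data or b"c2pa" in data

# Illustrative stand-ins for the signed upload and the published output:
signed = b"\xff\xd8...APP11...jumb...c2pa manifest bytes..."
stripped = b"\xff\xd8...re-encoded by CDN, metadata removed..."

print(may_contain_c2pa(signed))    # True
print(may_contain_c2pa(stripped))  # False - pipeline stripped the credential
```

If the check fails for real assets, the usual culprits are image re-encoding, thumbnail generation, or metadata-scrubbing steps in the CMS or CDN, any of which can discard the provider's labelling before it reaches the end user.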

What about AI-generated text?

Article 50 covers text as well as images, video, and audio. However, the machine-readable labelling requirement for text is less technically mature than for visual media. C2PA supports document formats (PDF), but text-specific provenance is an active area of development. For now, the strongest approach for text is to maintain records of which content was AI-generated and to disclose this to users where the text could be mistaken for human-written journalistic or informational content.

How does this interact with GDPR?

Content Credentials can include identity information (who created the content), which may constitute personal data under GDPR. The C2PA standard is designed with privacy in mind - all identity information is optional. When implementing Content Credentials for AI Act compliance, ensure that any personal data included in the credentials is processed in accordance with GDPR principles. In most cases, the signing entity is the organisation (not an individual), which simplifies the GDPR analysis.

When will harmonised standards be designated?

CEN/CENELEC are developing harmonised standards for the AI Act, including standards related to Article 50 transparency obligations. These are expected to be finalised by late 2026 or 2027. In the meantime, organisations should implement the best available technical approaches - which currently means C2PA. Implementing C2PA now establishes a strong compliance baseline that can be adjusted if the eventual harmonised standard differs in any material respect.

The EU AI Act's content labelling requirements are no longer a distant concern - Article 50 becomes enforceable on 2 August 2026, and building a compliant labelling pipeline takes time. Organisations that delay implementation are accumulating compliance risk. The good news is that the technical mechanism is mature, the tools are available, and the path to compliance is clear. The organisations that act now will be the ones with the strongest compliance position when enforcement begins.

This guide is maintained by the C2PA.ai editorial team and updated as the regulatory landscape evolves. It is intended as educational content and does not constitute legal advice. For specific compliance questions, consult qualified legal counsel in your jurisdiction. Last updated March 2026. Contact us with updates.

Related: The Global Policy Landscape · What Is C2PA? · Developer Implementation Guide · Implementation Services