The EU AI Act Just Changed. Here’s What It Means for Your Company

EU AI Act Update & Simplification Deal: More Time for High-Risk AI, Less Time for Transparency

On 7 May 2026, the EU Council and European Parliament reached a provisional agreement on a package of amendments intended to streamline and simplify the EU AI Act. The changes form part of the broader Omnibus VII legislative package, the EU’s ongoing effort to reduce regulatory complexity and ease compliance burdens for businesses.

The agreement does not water down the AI Act. The core risk-based framework remains intact. However, the amendments do give companies more time in some areas, while accelerating obligations in others. For organisations developing, deploying or procuring AI systems in Europe, the message is clear: the AI Act may become more workable, but it is not becoming optional.

Official press release ->

Key takeaways

The agreement introduces several practical changes that companies should immediately factor into their AI governance and compliance planning.

  • High-risk AI deadlines move later: stand-alone high-risk AI systems are expected to face full obligations from 2 December 2027.
  • Product-embedded high-risk AI gets more time: high-risk AI in regulated products, such as medical devices or machinery, moves to 2 August 2028.
  • Transparency moves faster: watermarking and provenance-labelling obligations for AI-generated content are expected from 2 December 2026.
  • Bias detection is clarified: special-category personal data may be processed for bias detection and correction, but only where strictly necessary.
  • Registration obligations remain important: providers may need to register AI systems even where they consider them exempt from high-risk classification.

New deadlines: more runway, but the clock is running

The most immediate practical impact for many companies is the revised application schedule for high-risk AI rules. Stand-alone high-risk AI systems are expected to become subject to the full set of high-risk requirements from 2 December 2027. High-risk AI systems embedded in regulated products, such as medical devices, industrial machinery or vehicles, are expected to face a later deadline of 2 August 2028.

For companies that have been preparing against a tighter internal timeline, this is welcome breathing room. But it should not be mistaken for a pause. The infrastructure for AI Act compliance takes time to build. Governance frameworks, documentation practices, conformity assessments, human oversight measures and post-market monitoring processes cannot be created overnight.

Organisations that use this additional time strategically will be far better positioned than those that treat the delay as permission to wait.

AI Act simplification timeline

  • 2 December 2026: Transparency for AI-generated content. Companies deploying generative AI should be ready for watermarking, provenance-labelling and related technical transparency measures.
  • 2 August 2027: Regulatory sandboxes. National competent authorities are expected to establish AI regulatory sandboxes by this date.
  • 2 December 2027: Stand-alone high-risk AI systems. Full high-risk requirements are expected to apply to stand-alone high-risk AI systems.
  • 2 August 2028: High-risk AI embedded in products. High-risk AI embedded in regulated products, such as medical devices, vehicles or machinery, receives a later compliance deadline.

Watermarking and AI-generated content: act faster than expected

One area where the agreement accelerates the clock is transparency for AI-generated content. The grace period for implementing technical transparency solutions, including watermarking and provenance-labelling of AI-generated outputs, has been shortened, with the new deadline expected to be 2 December 2026.

For companies deploying generative AI tools, this is no longer a medium-term concern. Customer-facing chatbots, content generation platforms, synthetic media tools and other AI systems capable of producing text, images, audio or video should now be reviewed from a transparency perspective.

The key question is no longer whether AI-generated content should be traceable. The question is whether the organisation has the technical, contractual and operational measures to make that traceability work in practice.
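The AI Act does not prescribe a specific labelling format, but the idea of machine-readable provenance can be sketched in a few lines. The record below is purely illustrative: the field names, model name and organisation are assumptions for the example, not taken from the Act or from any provenance standard such as C2PA.

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_label(content: bytes, model_name: str, deployer: str) -> dict:
    """Build a minimal machine-readable provenance record for an
    AI-generated output. All field names are illustrative."""
    return {
        "ai_generated": True,  # explicit disclosure flag
        # Hash binds the label to one specific output
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": model_name,   # which model produced the content
        "deployer": deployer,      # organisation responsible for deployment
        "created_at": datetime.now(timezone.utc).isoformat(),
    }

label = provenance_label(b"Example AI-generated text.", "example-model-v1", "ExampleCo")
print(json.dumps(label, indent=2))
```

In practice such a record would travel with the content (for example as embedded metadata), so that downstream systems can verify that an output was AI-generated and who deployed the generating system.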

Deadline alert
2 December 2026
Companies using generative AI should prioritise watermarking, provenance-labelling and technical transparency measures for AI-generated outputs.

Bias detection: a narrow but important permission for data processing

One of the more technically significant provisions concerns the use of special categories of personal data for bias detection and correction.

The agreement clarifies that such data may be processed for this purpose, but only where strictly necessary. This matters for companies developing or deploying high-risk AI systems, because meaningful bias testing may sometimes require access to sensitive data, such as health data, biometric data or data revealing racial or ethnic origin.

However, this is not a general permission to collect, retain or reuse sensitive data throughout the AI lifecycle. The processing must be limited, justified and documented. Companies will need to show why the data is necessary, how the use is minimised, how access is controlled, and how the processing aligns with GDPR Article 9 requirements, which continue to apply in parallel.

Registration obligations: no quiet exemptions

The agreement also clarifies registration obligations for AI providers. Providers may need to register their systems in the EU AI database even where they consider the system to be exempt from high-risk classification.

This is practically important for companies operating in or near high-risk categories. A self-assessed exemption should not be treated as the end of the compliance analysis. Legal, compliance and product teams should review AI inventories and determine whether registration obligations apply despite an exemption position.

In practice, this means that internal AI inventories should not only classify systems as high-risk or not high-risk. They should also record the reasoning behind that classification, the exemption analysis, the responsible owner and any related registration decision.
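One minimal way to capture those fields is a structured inventory record rather than free-text spreadsheet cells. The sketch below assumes nothing beyond the points listed above; the class name, field names and example values are hypothetical.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in an internal AI inventory. Fields mirror the
    documentation points discussed above; names are illustrative."""
    name: str
    classification: str            # e.g. "high-risk" or "not-high-risk"
    classification_reasoning: str  # why the system was classified this way
    exemption_analysis: str        # basis for any claimed exemption
    owner: str                     # accountable person or team
    registered: bool               # registration decision, even where exempt
    reviewed_on: date              # when the assessment was last reviewed

record = AISystemRecord(
    name="internal-document-summariser",
    classification="not-high-risk",
    classification_reasoning="Performs a narrow preparatory task only",
    exemption_analysis="Assessed against high-risk categories; exemption claimed",
    owner="legal-and-product",
    registered=True,  # registration may still be required despite exemption
    reviewed_on=date(2026, 6, 1),
)
print(record.name, record.classification, record.registered)
```

The point of the structure is auditability: when a regulator or auditor asks why a system was treated as exempt, the reasoning and the registration decision are recorded alongside the classification itself.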

Sectoral complexity: relief for regulated industries

For companies operating in heavily regulated sectors, such as medical devices, machinery, automotive or other product-regulated industries, the agreement introduces a mechanism to address overlapping compliance obligations.

This is a response to one of the most persistent concerns from industry: the risk that companies could be caught between the AI Act and sector-specific legislation that already imposes equivalent or similar AI-related requirements.

The agreement provides for a process through implementing acts to limit the AI Act’s application in certain cases where sectoral law already imposes equivalent AI-specific requirements. The aim is to minimise duplication and reduce unnecessary compliance burden.

For legal and regulatory teams, this is meaningful progress. But it also creates a new dependency: the final compliance position may depend heavily on future Commission guidance and implementing measures. Regulated-sector companies should therefore monitor developments closely and avoid assuming that overlap issues have already been fully resolved.

What becomes easier

  • More time for high-risk AI compliance
  • Later deadline for product-embedded high-risk AI
  • Potential reduction of overlap with sectoral rules
  • More clarity for regulated industries

What becomes stricter

  • Faster deadline for AI-generated content transparency
  • Clearer registration expectations
  • Stricter focus on documentation and justification
  • Explicit prohibition of harmful AI-generated sexual content

Prohibited practices: a new focus on harmful AI-generated content

The agreement also adds a new prohibited practice concerning AI-generated sexual and intimate content. The AI Act will explicitly prohibit AI practices related to the generation of non-consensual sexual and intimate content, as well as child sexual abuse material.

Many companies may assume this is not relevant to them. However, it may have practical implications for platform providers, generative AI developers, AI tool vendors and any organisation deploying open-ended content generation capabilities at scale.

Terms of service, moderation systems, model safeguards, abuse reporting channels and technical restrictions should be reviewed in light of this explicit prohibition.

Regulatory sandboxes: more time to engage

For companies hoping to use national regulatory sandboxes to test innovative AI applications in a supervised environment, the deadline for competent national authorities to establish those sandboxes has been moved to 2 August 2027.

This means that sandbox access may not be available as quickly as some companies expected. For organisations planning to rely on sandbox participation as part of their compliance or market-entry strategy, this should be reflected in product development timelines.

Sandboxes may still become a valuable route for testing AI systems with regulatory engagement. But they should not be the only compliance strategy.

What companies should do now

The agreement is still provisional and must be formally endorsed by both the Council and Parliament before legal and linguistic revision and final adoption. However, the direction is clear enough for companies to act.

The simplification package should be treated as the starting point for practical compliance planning, not as a reason to postpone it.

Company action checklist

  1. Update internal AI Act compliance roadmaps with the revised application dates.
  2. Review AI system inventories and identify high-risk and near-high-risk systems.
  3. Prioritise watermarking and provenance-labelling measures for generative AI outputs.
  4. Document any processing of special-category data for bias detection and correction.
  5. Reassess registration obligations, including for systems considered exempt from high-risk classification.
  6. For regulated-sector businesses, map overlap between the AI Act and sector-specific legislation.
  7. Monitor final adoption, legal-linguistic revision and forthcoming Commission guidance.

From insight to implementation

This is where AI Act compliance becomes a workflow

The AI Act is not just a legal text to monitor. It creates an ongoing need to map AI systems, assess risk, assign responsibilities, document decisions and keep compliance evidence up to date.

That is difficult to manage with scattered spreadsheets and one-off reviews. Companies need a structured way to turn AI Act requirements into practical tasks that legal, privacy, compliance and product teams can actually follow.

If you are asking “how do we actually manage this?”, this is where GDPR Register can help.

Our EU AI Act compliance software helps organisations move from uncertainty to a clear, structured compliance process.

Ready to make AI Act compliance manageable?

Final takeaway

The EU AI Act remains one of the most consequential pieces of technology regulation in a generation. The simplification agreement may make the framework more workable for companies, especially those dealing with high-risk systems or regulated products.

But the key compliance message has not changed. Companies still need to understand where AI is used, classify systems correctly, document decisions, manage risks, and prepare for transparency, oversight and registration obligations.

The AI Act is becoming more practical. It is not becoming optional.

“The AI Act is becoming more practical. It is not becoming optional.”
Companies should use the additional time to build AI governance properly, not to postpone compliance work.

Turn AI Act readiness into a practical workflow

The EU AI Act update may now come with adjusted timelines, but companies still need to understand where AI is used, assess risk, document decisions and prepare for transparency and governance obligations.

GDPR Register’s EU AI Act compliance software helps organisations map AI systems, assess risk and manage AI Act documentation in one structured workflow.

Learn more about GDPR Register’s EU AI Act compliance software ->
