It’s been a full year since the EU Artificial Intelligence Act was published in the Official Journal and began reshaping the compliance landscape across Europe and beyond.
While the legislation’s most sweeping obligations won’t take effect until 2026, the AI Act is already having a measurable impact, particularly on privacy teams. From unacceptable-risk bans to governance requirements for general-purpose AI, many privacy professionals have found themselves at the centre of a rapidly evolving conversation they didn’t necessarily start, but now have to lead.
So, how are privacy teams dealing with it all?
From Privacy Management to 'Privacy & AI Governance'
Traditionally, privacy teams have focused on well-defined domains: personal data, data subject rights, impact assessments, and cross-border data flows. But the rise of AI systems has blurred those boundaries, introducing new risks, expanding the scope of compliance, and increasing the demand for oversight.
As a result, AI governance is increasingly falling under the responsibility of Data Protection Officers (DPOs) or privacy managers. Whether formally tasked with this role or simply pulled into it by necessity, these professionals are now expected to identify and classify AI systems across the organisation, including so-called shadow AI initiatives that may have slipped under the radar. They must assess risk levels using the criteria defined in the AI Act, collaborate with legal, compliance, and technical teams on assessments and mitigation efforts, and track the provenance, purpose, and explainability of each system in use.
All of this comes on top of their existing responsibilities, adding complexity and urgency to already stretched privacy programs. AI governance isn’t replacing privacy work. It’s stacking on top of it.
The operational load is real
The AI Act is often described as "horizontal" regulation, designed to sit alongside existing frameworks like the GDPR. But in practice, there’s significant overlap, especially around lawful basis, transparency, impact assessments, and individual rights.
This creates a dual burden for privacy teams, many of whom are already operating under resource constraints. For example:
- AI Impact Assessments often resemble Data Protection Impact Assessments (DPIAs), but with additional technical and ethical layers.
- AI System Inventories require documentation discipline similar to the Record of Processing Activities (RoPA), but span broader departments and technologies (a minimal register sketch appears below).
- Cross-functional collaboration (with Legal, IT, Risk, Product) is now a necessity, not a luxury.
Privacy teams are well-positioned to lead, but many lack the tools or internal support to scale their work effectively.
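To make that inventory discipline concrete, here is a minimal, illustrative sketch of what a single AI register entry might capture, expressed as a Python dataclass. The field names and risk tiers are our own shorthand for discussion, not a schema prescribed by the AI Act.

```python
from dataclasses import dataclass, field
from enum import Enum

# Simplified risk tiers loosely following the AI Act's risk-based approach.
# These labels are shorthand for illustration, not official legal categories.
class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices (banned since February 2025)
    HIGH = "high"                  # e.g. Annex III use cases with conformity duties
    LIMITED = "limited"            # transparency obligations (e.g. chatbots)
    MINIMAL = "minimal"            # no specific obligations

@dataclass
class AIRegisterEntry:
    """One row in a hypothetical AI system inventory, analogous to a RoPA entry."""
    system_name: str
    business_owner: str                # accountable team or person
    purpose: str                       # what the system is used for
    role_under_act: str                # "provider" or "deployer"
    personal_data_involved: bool       # flags overlap with GDPR/DPIA duties
    risk_tier: RiskTier
    dpia_reference: str | None = None  # link to an existing DPIA, if one exists
    mitigations: list[str] = field(default_factory=list)

# Example: a shadow-AI use case surfaced during a discovery workshop.
entry = AIRegisterEntry(
    system_name="CV screening assistant",
    business_owner="HR",
    purpose="Rank inbound job applications",
    role_under_act="deployer",
    personal_data_involved=True,
    risk_tier=RiskTier.HIGH,           # employment is a high-risk area under Annex III
    dpia_reference="DPIA-2024-017",
    mitigations=["human review of rejections", "quarterly bias testing"],
)
```

The point of a structured entry like this is that the same record can answer both GDPR questions (is personal data involved? is there a DPIA?) and AI Act questions (what is our role, and what tier is the system?), rather than maintaining two disconnected inventories.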
A year of learning and scrambling
Since the Act entered into force in August 2024, the focus for many organisations has been on education and internal alignment. Some key steps privacy teams have taken include:
- Running AI system discovery workshops with business and product teams
- Establishing internal AI registers and governance boards
- Mapping AI Act requirements against existing GDPR processes
- Partnering with legal counsel to clarify risk classifications and obligations
- Experimenting with templates for AI assessments and documentation
Still, there’s widespread uncertainty, especially around general-purpose AI models (GPAI), foundation model risks, and evolving enforcement timelines.
How TrustWorks customers made their AI governance more efficient
At TrustWorks, we launched our AI Governance module in June 2024 to help privacy and risk teams move beyond manual tracking and get a head start on AI Act readiness. Our goal? Make AI governance actionable, collaborative, and scalable, just as we’ve done with Privacy Management: streamlining operations so that privacy teams can finally shift their focus from repetitive admin work to more strategic, high-value tasks.
Today, we’re proud to have customers — including some Fortune 500 companies — using the AI governance platform to:
- Build and maintain a centralised, audit-ready AI Register
- Conduct AI risk and impact assessments across jurisdictions
- Reduce time spent on AI workshops by automating the discovery and classification of AI use cases
- Get real-time insights into the purpose, legal basis, and risk level of each system
- Collaborate with Legal, Compliance, and Engineering on mitigation plans
- Automatically version-control documentation and track governance workflows
And unlike tools that bundle everything into rigid, all-or-nothing suites, TrustWorks takes a modular approach: some customers have started with AI Governance only, because that’s where the pressure was, adopting exactly what they needed without being forced into a full privacy stack from day one.
If you want to see how one of our enterprise customers is streamlining AI governance right now, book a 1:1 with me. I’ll walk you through what real, operational AI governance looks like.
What’s Coming Next?
The AI Act’s enforcement is phased, and we’ve already passed the first milestone. In February 2025, bans on unacceptable-risk systems and AI literacy requirements came into force, marking the start of real regulatory pressure.
The message from EU officials is now crystal clear: “[...] there is no stopping the clock. There is no grace period. There is no pause.”
Any hopes of a soft landing have been officially dismissed. So what’s ahead?
- August 2025: Obligations for general-purpose AI models (GPAI) begin, including documentation, transparency, and disclosures
- August 2026: Full compliance deadline for high-risk AI systems
- August 2027: Additional requirements kick in for high-risk AI embedded in regulated products (e.g. health tech, machinery)

That means “wait and see” is no longer a viable strategy.
Privacy, risk, and governance teams need to be moving now, not later. Workstreams that should already be underway include:
- Mapping AI systems across departments
- Risk-rating each system based on AI Act criteria (see the triage sketch after this list)
- Preparing for conformity assessments, where applicable
- Understanding transparency and disclosure requirements, including for deployers
- Developing internal review templates tailored to AI use cases
- Defining ownership, escalation, and sign-off processes
- Preparing for buyer due diligence: if you operate in B2B, expect buyers to ask for proof of AI Act compliance, especially in regulated industries
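As a companion to the risk-rating workstream above, here is a deliberately simplified, hypothetical triage function. The category sets are assumptions made for illustration; real classification turns on the Act’s Annexes and needs legal review, so treat this as a first-pass filter, not a determination.

```python
# Hypothetical first-pass triage. The sets below are simplified illustrations;
# actual classification depends on the AI Act's Annexes and legal analysis.
PROHIBITED_PRACTICES = {"social scoring", "untargeted facial image scraping"}
HIGH_RISK_AREAS = {"employment", "education", "credit scoring", "law enforcement"}
TRANSPARENCY_ONLY = {"chatbot", "synthetic media generation"}

def triage_risk(use_case: str, application_area: str) -> str:
    """Return a first-pass risk tier for an AI use case (not legal advice)."""
    if use_case in PROHIBITED_PRACTICES:
        return "unacceptable: prohibited since February 2025"
    if application_area in HIGH_RISK_AREAS:
        return "high: documentation and conformity duties likely apply"
    if use_case in TRANSPARENCY_ONLY:
        return "limited: transparency and disclosure obligations"
    return "minimal: record in the register and monitor"

print(triage_risk("chatbot", "customer support"))  # -> limited tier
print(triage_risk("cv ranking", "employment"))     # -> high tier
```

Even a rough filter like this helps teams decide where to spend scarce assessment time: anything landing in the high or unacceptable tiers gets escalated for proper legal review first.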
This isn’t the moment to scramble. It’s the moment to show that governance is already in motion and that your organisation is not just aware of the rules, but actively prepared to meet them.
Final Thoughts
One year into the AI Act, privacy professionals are adapting. AI governance is no longer just a tech or ethics issue. It’s a compliance priority. But this time, they have a valuable advantage: experience.
Many of the professionals now responsible for AI governance also led their organisation’s GDPR implementation. You’ve done this before. You’ve built data maps, created governance frameworks, and translated legal text into operational workflows, often under tight deadlines and with limited resources.
The rollout of the AI Act, however, offers an opportunity to approach things more effectively. This time around, privacy teams can be proactive rather than reactive. They can shift away from checkbox-driven tasks toward operational governance by design. They can move from working in silos to building cross-functional workflows from day one. And critically, they can replace manual, repetitive tasks with automated, efficient processes.
Instead of repeating the fragmented, spreadsheet-heavy approaches many organisations fell into during GDPR readiness, privacy leaders now have access to smarter tooling and frameworks designed for scale. With the right setup, AI governance can become a seamless part of day-to-day operations, not an afterthought or fire drill.
AI regulation is new territory, but for privacy professionals, the core playbook is familiar. And this time, there’s a real chance to build something more resilient, collaborative, and future-proof. From the start.
👉 Ready to streamline your AI Governance? Start with a free trial of the TrustWorks AI Governance module.
No setup fees, no unnecessary add-ons, just a smarter way to get AI Act–ready. Request your AI Governance free trial.
