What if the biggest privacy assumption in your product is wrong?
In this episode of Practical Privacy, Orla Dormer speaks with Noemie Weinbaum, Managing Counsel at UKG, about one of the most common misconceptions in SaaS and AI development: that data can be fully anonymized.
Many companies rely on this assumption to justify analytics, product development, and AI training. But under GDPR standards, true anonymization is extremely difficult to achieve in practice. Noemie challenges this directly, arguing that most datasets used in SaaS environments remain pseudonymized personal data — especially when the same data is used for both service delivery and AI training.
🎥 Watch the full episode
🎧 Listen on your preferred platform
- Listen on Spotify
- Listen on Apple Podcasts
What we cover in this episode
Rather than focusing on theoretical distinctions, this conversation explores the practical risks of misclassifying data and what that means for real-world SaaS companies. Noemie explains:
- Why true anonymization is almost impossible under GDPR
- How using the same dataset for production and AI training creates risk
- Why assuming data is anonymous can lead to re-identification and non-compliance
- The difference between personal, pseudonymized, and anonymized data in practice (see the sketch after this list)
- Why privacy strategies should start with acknowledging risk, not ignoring it
Through real examples, she highlights how false assumptions about anonymization often result in governance gaps, technical debt, and loss of customer trust.
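To make that distinction concrete, here is a minimal Python sketch of pseudonymization, using hypothetical data and a hypothetical salt rather than anything from the episode. Replacing a direct identifier with a salted hash produces a stable token: records for the same person remain linkable, and anyone holding the salt can re-derive the mapping, which is why such data counts as pseudonymized rather than anonymous under the GDPR.

```python
# A minimal sketch of pseudonymization (hypothetical data and salt).
# Identifiers are replaced with salted hashes, but the data remains
# linkable and is therefore still personal data, not anonymous data.
import hashlib

SALT = b"keep-this-secret"  # whoever holds the salt can re-link records

def pseudonymize(email: str) -> str:
    """Replace a direct identifier with a stable, salted hash token."""
    return hashlib.sha256(SALT + email.encode()).hexdigest()[:16]

records = [
    {"email": "a.jones@example.com", "hours_logged": 38},
    {"email": "a.jones@example.com", "hours_logged": 41},
]

pseudonymized = [
    {"user": pseudonymize(r["email"]), "hours_logged": r["hours_logged"]}
    for r in records
]

# The same person always maps to the same token, so their records stay
# linkable across datasets -- pseudonymized, not anonymized.
assert pseudonymized[0]["user"] == pseudonymized[1]["user"]
print(pseudonymized)
```

The point is not that hashing is bad practice, only that it does not take the data out of the GDPR's scope.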
Key lessons from this episode
Treating data as anonymous may feel simpler — but it creates hidden risks. This episode highlights a more practical approach:
- Always assume data can be re-identified, especially with modern AI techniques (a linkage attack is sketched after this list)
- Pseudonymized data is still personal data and must be treated as such
- Privacy should be integrated early into the software development lifecycle
- Transparency with customers builds stronger long-term trust
- Fixing privacy issues late is far more costly than designing for them early
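To ground the first lesson, here is a hypothetical linkage attack in Python; every name and record below is invented for illustration. A dataset with direct identifiers stripped is re-identified simply by joining its remaining quasi-identifiers (birth year, postcode) against a public source that still carries names, and modern AI tooling only makes this kind of matching cheaper and more scalable.

```python
# A hypothetical linkage attack: an "anonymized" dataset still carrying
# quasi-identifiers is joined against a public dataset that has names.
# All records below are fabricated for this illustration.

anonymized = [  # direct identifiers stripped, so it "looks" anonymous
    {"birth_year": 1984, "postcode": "EC1A", "salary_band": "C"},
    {"birth_year": 1991, "postcode": "SW1A", "salary_band": "A"},
]

public_roll = [  # e.g. an electoral roll or scraped profile data
    {"name": "A. Jones", "birth_year": 1984, "postcode": "EC1A"},
    {"name": "B. Smith", "birth_year": 1991, "postcode": "SW1A"},
]

# Join on the quasi-identifiers to re-attach names to "anonymous" rows.
for row in anonymized:
    for person in public_roll:
        if (row["birth_year"], row["postcode"]) == (
            person["birth_year"],
            person["postcode"],
        ):
            print(f'{person["name"]} is in salary band {row["salary_band"]}')
```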
Ultimately, the shift is not just technical — it’s cultural. Organizations that accept and design for this reality build more resilient and credible products.
This episode focuses on a core challenge in modern privacy and AI governance: not just how data is classified, but how assumptions about that data shape risk, compliance, and trust.
Follow the series
If you want more real-world conversations about privacy operations, AI governance, and scaling compliance without unnecessary complexity:
- Follow Orla Dormer on LinkedIn for updates on new episodes
- Subscribe to our YouTube channel
- Follow the podcast on Spotify or Apple Podcasts
New episodes are released regularly as part of the Practical Privacy series.
🟡 If the challenges discussed in this episode resonate, you don’t have to solve them alone. Book a demo to see how organizations like Randstad and other global companies operationalize privacy and AI governance in practice, reducing complexity while aligning compliance with how their business actually works. 👉 Book your demo here