
February 10, 2026

Why privacy incidents persist and what leaders are changing that works

Recent discussions with senior privacy, data, and technology leaders pointed to a frustrating truth. Most organisations already have policies, training modules, and security tooling. Yet privacy incidents keep happening, often in ways that feel preventable in hindsight.

One insight cut through the noise: most breaches are driven by individual errors rather than system failures. When that is the reality, the path to fewer incidents is not “more policy”. It is designing privacy as an everyday operating system, where the safest behaviour is also the easiest behaviour.

This piece focuses on what peers said is actually shifting the outcome curve, including how leaders are reframing privacy as business value, moving from reactive firefighting to proactive management, and redesigning the employee experience so privacy stops being optional in practice.

The real reason privacy incidents persist

Most privacy programmes fail in the same place. They are built as compliance artefacts rather than operational workflows.

Leaders described three drivers that keep showing up.

1) Privacy breaks at the moment of human action

Even strong controls struggle when individuals:

  • share the wrong file to the wrong group
  • copy sensitive details into an email chain
  • export data “just for a quick analysis”
  • use an unapproved tool because it is faster
  • misunderstand what is classified and why

Peers emphasised that incidents often happen in normal work, not during malicious activity. That is why privacy work has to move closer to the point of action.

2) Organisations are still too reactive

Several leaders described spending too much time responding to incidents and requests, and not enough time preventing repeat scenarios. In practice, this shows up as:

  • “post-incident” learning that never becomes new workflow
  • repeated minor incidents that do not trigger change
  • training that is periodic rather than continuous
  • controls that exist, but do not match how people actually work

A reactive posture also creates risk debt. Each incident consumes capacity that should be going into prevention.

3) Privacy is still treated as “customer data only”

Peers flagged a common blind spot. Many organisations focus heavily on customer data and underestimate the exposure in employee data, internal documents, and operational systems. Once leaders broaden the lens, privacy stops being a department issue and becomes an enterprise capability issue.

What peers mean when they say “privacy has to map to business value”

A repeated theme was the need to relate privacy directly to business value. Not as a slogan, but as a practical framing that changes how decisions get made.

When privacy is framed only as “risk avoidance”, it tends to lose priority unless there is a crisis. When it is framed as a business enabler, it gains momentum because it:

  • supports faster decision-making with clearer guardrails
  • reduces rework, escalations, and delay
  • enables safer use of data for innovation and AI
  • protects customer and employee trust, which is hard to rebuild after a breach
  • creates defensible positions when regulators or customers ask difficult questions

Peers noted that this reframing also improves stakeholder engagement. Leaders outside privacy teams are more likely to support investment when privacy is tied to operational outcomes, not abstract compliance.

Moving from reactive to proactive privacy management

Leaders described proactive management as a set of habits and operating practices, not a single initiative.

Maintain a living view of processing activity

A practical starting point raised was understanding data processing activities and keeping records up to date. The aim is not documentation for its own sake. The aim is decision support:

  • What data is processed?
  • Why is it processed?
  • Where does it flow?
  • Who can access it?
  • What is the retention logic?
  • What controls exist at each step?

When these answers are unclear, organisations cannot prevent repeat incidents effectively because they do not have a reliable map of what is actually happening.
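The questions above can be captured as a minimal, queryable record rather than a static document. The sketch below is illustrative only; the field names are assumptions, not a standard schema, and the sample entry is invented.

```python
from dataclasses import dataclass, field

@dataclass
class ProcessingRecord:
    """One entry in a living record of processing activity (illustrative fields)."""
    data_category: str                    # what data is processed
    purpose: str                          # why it is processed
    flows_to: list[str]                   # where it flows
    access_roles: list[str]               # who can access it
    retention: str                        # what the retention logic is
    controls: list[str] = field(default_factory=list)  # controls at each step

    def gaps(self) -> list[str]:
        """Flag unanswered questions, so the record supports decisions, not just audits."""
        missing = []
        if not self.flows_to:
            missing.append("flows")
        if not self.access_roles:
            missing.append("access")
        if not self.controls:
            missing.append("controls")
        return missing

# Hypothetical entry: the gaps() check surfaces that no controls are recorded.
record = ProcessingRecord(
    data_category="employee payroll data",
    purpose="monthly salary processing",
    flows_to=["payroll vendor"],
    access_roles=["payroll-team"],
    retention="7 years after employment ends",
)
print(record.gaps())  # prints ['controls']
```

Keeping the record executable like this makes "up to date" testable: a scheduled job can list every entry with open gaps instead of waiting for an audit to find them.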

Use incident patterns to redesign workflows

Peers talked about the importance of learning from incidents, but the key difference was what they did with that learning.

Rather than writing a lesson-learned document, they redesign the workflow that allowed the mistake. That might mean:

  • making sensitive sharing require an extra confirmation step
  • reducing broad access and forcing request-based access
  • defaulting to restricted sharing for certain file types
  • standardising where certain data can live, and blocking it elsewhere
  • creating clear “red flag” scenarios employees can recognise instantly

The shift is from “tell people to be careful” to “make it hard to be careless by accident”.
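That shift can be made concrete in a sharing pathway. The sketch below shows the first three redesign moves (extra confirmation, restricted defaults, file-type rules) in one guard function; the file extensions, audience labels, and function names are assumptions for illustration, not any specific product's API.

```python
# Make it hard to be careless by accident: sensitive file types default to
# internal-only sharing, and sharing them more broadly requires an explicit
# confirmation step at the point of action.
SENSITIVE_EXTENSIONS = {".csv", ".xlsx"}   # assumed export formats carrying personal data
RESTRICTED_DEFAULT = "internal-only"

def share(path: str, audience: str, confirmed: bool = False) -> str:
    is_sensitive = any(path.endswith(ext) for ext in SENSITIVE_EXTENSIONS)
    if is_sensitive and audience != RESTRICTED_DEFAULT and not confirmed:
        # The extra confirmation step: the unsafe path still exists,
        # but it can no longer be taken by accident.
        raise PermissionError(
            f"Sharing {path} to '{audience}' needs explicit confirmation"
        )
    return f"shared {path} with {audience}"

print(share("report.pdf", "external"))        # non-sensitive: shared normally
print(share("payroll.csv", "internal-only"))  # sensitive, restricted default: shared
# share("payroll.csv", "external") raises unless confirmed=True is passed
```

The design choice is deliberate: the guard never blocks legitimate work outright, it only forces a conscious decision at the exact moment a mistake would otherwise happen.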

Why training often fails and what peers are doing differently

Training was a major theme, but leaders were candid. Many organisations train employees, and incidents still happen.

Peers highlighted what changes training from a tick-box to a risk reducer.

1) Make it continuous, not annual

Leaders emphasised the need for ongoing awareness, not one-time completion. Short reminders and repeated reinforcement work better than long, infrequent modules.

2) Show real consequences, not generic warnings

Training sticks when it connects to realistic scenarios and explains the damage mishandling can cause. People are more careful when they understand the chain reaction of a mistake, not just the policy rule.

3) Focus on the people who handle personal data most often

Rather than training everyone the same way, peers discussed the need to focus on roles with higher exposure. The message is the same, but the depth and scenarios vary by role.

4) Train for judgment, not memorisation

The most useful training teaches people how to think:

  • how to recognise sensitive data
  • when to stop and ask
  • what “safe sharing” looks like
  • what to do when something goes wrong

That reduces the probability of mistakes when real-world complexity appears.

The controls that actually reduce human-driven exposure

Several leaders described being too reliant on security controls alone, especially when smaller breaches keep occurring. The lesson was not that controls are useless. The lesson was that controls have to be designed around behaviour.

Here are the practical control patterns peers emphasised.

Structured access rather than universal access

Leaders described the need to move away from broad access models. When access is universal, privacy incidents become inevitable. When access is structured, accidental exposure falls.

A simple operational principle peers leaned on is:

  • default access should be narrow
  • broader access should be requested and justified
  • approvals should be fast, so people do not bypass the process
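The three principles above can be sketched as a small access model: deny by default, grant on justification, and time-box what is granted. This is a minimal illustration under assumed names, not a reference implementation of any access-management product.

```python
from datetime import datetime, timedelta

# (user, dataset) -> expiry time; no entry means no access (narrow default)
grants: dict[tuple[str, str], datetime] = {}

def request_access(user: str, dataset: str, justification: str) -> bool:
    """Grant time-boxed access when a justification is supplied.

    Granting is instant by design: if approval is slow, people bypass
    the process, and the control stops reflecting how people work.
    """
    if not justification.strip():
        return False
    grants[(user, dataset)] = datetime.now() + timedelta(days=7)  # expires, not permanent
    return True

def can_access(user: str, dataset: str) -> bool:
    expiry = grants.get((user, dataset))
    return expiry is not None and expiry > datetime.now()

print(can_access("ana", "customer-emails"))               # False: narrow default
request_access("ana", "customer-emails", "churn analysis")
print(can_access("ana", "customer-emails"))               # True: justified and time-boxed
```

Time-boxing matters as much as the default: access that expires automatically reverses the usual drift where grants accumulate until everyone effectively has universal access again.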

Privacy is everyone’s responsibility, operationalised

A strong peer point was that privacy cannot sit only with a central team. Everyone has responsibility, but responsibility must be supported by clear cues and simple actions.

Some organisations are introducing mechanisms that make responsibility tangible, such as requiring explicit permissioning or “data visas” for employees who handle data, supported by ongoing communication and reinforcement.

The point is not bureaucracy. The point is making privacy competence visible and expected, the same way safety training works in other industries.

Better handling of “small breaches” before they become big ones

Leaders noted that smaller incidents are often under-addressed. They do not always trigger investment, but they indicate a pattern.

Peers noted that treating small incidents seriously creates a feedback loop:

  • identify recurring scenarios
  • redesign workflow and permissions
  • update training based on those scenarios
  • communicate clearly and repeatedly
  • measure whether recurrence drops

This is how organisations shift from reactive to preventative.
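The last step of that loop, measuring whether recurrence drops, can be as simple as counting incidents per scenario per period. The scenario labels and counts below are invented for illustration.

```python
from collections import Counter

# Each incident is tagged with a period and a recurring-scenario label.
incidents = [
    ("2025-Q3", "wrong-recipient-email"),
    ("2025-Q3", "wrong-recipient-email"),
    ("2025-Q3", "unapproved-tool-export"),
    ("2025-Q4", "wrong-recipient-email"),   # after a workflow redesign
]

by_period = Counter(incidents)
print(by_period[("2025-Q3", "wrong-recipient-email")])  # prints 2
print(by_period[("2025-Q4", "wrong-recipient-email")])  # prints 1: recurrence dropped
```

The value is not the counting itself but the tagging discipline it forces: a small incident only becomes a pattern signal if someone names the scenario consistently each time it recurs.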

The overlooked pressure point: data subject rights requests

Leaders discussed the operational burden of data subject rights requests and how they are evolving. A few themes stood out:

  • Requests can surge at predictable times, creating operational strain.
  • Some requests are used as leverage rather than for legitimate purposes.
  • New tooling is making it easier for individuals to generate complaints at scale, including misleading or exaggerated requests.

Peers described the need for proportionate responses, including negotiating the scope of requests where regulations allow, and understanding the intent behind a request rather than treating every request as identical.

For senior decision-makers, the implication is practical. If you do not operationalise rights requests, they become a hidden tax on your privacy team and a constant source of firefighting.

Third-party data and vendor risk is now part of privacy strategy

Peers highlighted that privacy exposure does not only come from internal systems. Third-party tools and vendors often shape risk posture.

One example discussed was the use of external vendors for data collection, with the legal team reviewing potential vendors as part of compliance management.

The broader lesson is that privacy programmes need an explicit third-party lane:

  • what data is shared externally
  • what tools are approved for processing
  • how vendors are reviewed and monitored
  • how contracts reflect permitted use and retention
  • how teams are prevented from bypassing approvals when deadlines hit

In many large organisations, this is where privacy risk quietly enters, because tool adoption can move faster than governance.

Certifications and “defensible positions” are evolving

A pragmatic point raised was that privacy-focused certification pathways are shifting, including the possibility of pursuing certain privacy management certification independently of broader information security certification.

The practical peer message was not “get certified”. It was:

  • build defensible positions
  • make your privacy posture auditable
  • be ready to explain decisions and controls clearly

This matters because regulators and stakeholders increasingly expect clarity, not just good intentions.

Why privacy incidents persist

Peers described a landscape where the dominant risk driver is human action, even when security controls are strong. A simple way to visualise the implication:

Primary source of privacy incidents (illustrative from peer discussion)

Individual errors  |████████████████████████████████████████████| 96%
System failures    |██                                          | 4%

The point is not precision. The point is prioritisation. If most incidents are behavioural, your programme must treat behaviour as a design surface.

Peer snapshot of what is changing

What leaders are seeing                          | What they are adjusting                                   | Why it is working better
------------------------------------------------ | --------------------------------------------------------- | ------------------------------------------------------------
Incidents keep happening despite policies        | Moving from reactive response to proactive prevention      | Repeat scenarios get redesigned out of workflows
Training completion does not equal behaviour change | Continuous awareness and role-specific scenarios        | Reinforcement matches how people actually learn
Smaller breaches are treated as noise            | Treating small incidents as pattern signals                | Reduces repeat exposure before it compounds
Over-reliance on security controls               | Designing controls around human action                     | The safe path becomes the easiest path
Privacy viewed as customer-data-only             | Expanding focus to employee and internal data              | Reduces a major blind spot in large enterprises
Rights requests create operational strain        | More proportionate handling and clearer scope management   | Reduces privacy team burnout and improves response quality
Third-party tools introduce hidden exposure      | Legal and compliance review of vendors becomes routine     | Reduces unmanaged tool risk pathways
Leaders want privacy aligned to outcomes         | Linking privacy to business value and trust                | Increases stakeholder support and investment

Practical steps leaders are taking now

Peers described a set of actions that can be implemented without waiting for a multi-year transformation.

1) Identify the top five recurring incident scenarios

Not the scariest theoretical risks, but the scenarios that actually happen. Then redesign the workflow around them.

Examples of redesign moves include restricted defaults, clearer permissions, and fast request-based access so people do not bypass process.

2) Make responsibility visible and expected

If your organisation is large, “everyone is responsible” needs structure. Mechanisms like explicit enablement requirements for those handling data can make responsibility concrete, supported by ongoing communication.

3) Treat training as an operating system

Move from annual events to continuous reinforcement, with scenarios that match real work. Make it easy for people to do the right thing in the moment.

4) Rebalance controls towards prevention at the point of action

Policies matter, but controls need to show up where mistakes happen, in sharing, access, and tool usage pathways.

5) Build a rights request operating model

Define scope handling, resourcing, seasonal planning, and escalation. This prevents rights requests from becoming an ongoing disruption.

6) Formalise third-party data pathways

Document the approval route, enforce it, and make it fast enough that teams do not bypass it.

Peers described privacy progress as less about “better policy” and more about redesigning daily work so safe behaviour becomes default. When most incidents come from individual errors, the best privacy programmes behave like good product design: they reduce ambiguity, guide behaviour, and create clear, auditable pathways for doing the right thing.

The organisations that are getting traction are not the ones adding more rules. They are the ones turning privacy into practical operating discipline, tied to business value and reinforced through workflows.