Recent discussions with senior data, privacy, and technology leaders highlighted a reality many large enterprises recognise but few quantify clearly. Only 20 to 30% of the data organisations create is actually used. The rest accumulates quietly as cost, complexity, and exposure.
This is not simply a storage problem. It is a decision-making problem. When most data is unused, organisations pay for effort that does not translate into better outcomes. Confidence in reporting stays fragile, verification stays manual, and teams keep rebuilding data locally because it still feels safer than trusting shared assets.
The shift now underway is subtle but significant. Dashboards are no longer the end product. They are becoming a monitoring layer inside a bigger system that aims to answer a different question:
How do we create trusted data that reliably improves decisions, reduces operational drag, and supports safe adoption of AI?
What follows reflects themes raised by senior peers in recent discussions, written for leaders shaping 2026 data priorities in large enterprises.
The signal leaders keep repeating
The “70 to 80% unused” reality is a symptom of something deeper. Many enterprises have built plenty of reporting capability. The gap is that reporting does not consistently convert into action because the trust and usability layer is incomplete.
Leaders described recurring friction points that keep dashboards from changing outcomes:
- People struggle to find the right dataset, so they default to what is familiar.
- Multiple versions exist, and it is unclear which is safe and current.
- Definitions drift, especially after transformations, acquisitions, or system changes.
- Lineage is unclear, so when a metric changes, teams cannot explain it fast.
- Ownership is fuzzy, so issues bounce across teams and take too long to resolve.
- Governance is seen as a document, not a workflow.
When those conditions exist, dashboards can become a trigger for debate rather than a mechanism for better decisions.
Why dashboards are fading even when they look better
Dashboards are not disappearing. Their role is changing.
In many organisations, dashboards are still treated as the final deliverable. Leaders described moving away from that mindset because it produces two predictable outcomes:
- Reporting expands but decisions do not get faster.
- Visibility increases but confidence does not rise in proportion.
A dashboard can be perfectly designed and still fail to create value if the foundations that make data usable are missing. Leaders repeatedly returned to a small set of practical questions that determine whether dashboards help or frustrate:
- Can people quickly find the right data?
- Do they understand what it means and what it does not mean?
- Do they trust its freshness and quality?
- Can they trace where it came from and what depends on it?
- Are access and privacy rules clear and enforced in real workflows?
If the answer to any of these is no, dashboards lose influence, and leaders start funding different capabilities.
The new role of dashboards
In the emerging model described by peers, dashboards become:
- Transparency: what is happening right now
- Monitoring: where drift and anomalies are appearing
- Governance visibility: who owns the data, what is trusted, what is restricted
- Decision support: a layer that informs judgement, not a system that replaces it
That shift matters because it changes how investment is justified. Dashboards are funded as part of reliability and trust, not as standalone value.
The real cost of 70 to 80% unused data
Leaders framed unused data as a compound cost, not a single line item.
Cost that rarely gets measured cleanly
When most data is unused, the costs are spread across teams and budgets:
- storage and compute that grows quietly
- engineering time building and maintaining pipelines that deliver low adoption
- governance overhead managing assets nobody consumes
- duplicated work because teams rebuild “just to be safe”
- reporting rework after definitions drift or systems change
Over time, this creates an environment where it is easier to produce new outputs than to maintain trusted ones.
Complexity that slows transformation
Unused data increases complexity in subtle ways:
- more datasets to catalogue
- more dependencies that are hard to map
- more access points to control
- more ambiguity about what can be used for AI and automation
That complexity becomes a transformation tax. Even good initiatives slow down because the estate is hard to understand.
Exposure that rises with adoption pressure
Risk and privacy concerns intensify the problem. Leaders noted that human behaviour is a dominant driver of incidents, and that access and classification gaps become far more visible once AI tooling enters the environment.
If organisations cannot confidently classify and govern data, they either slow adoption or accept higher exposure. In both cases, the business feels it.
What peers are changing, and what is working
Leaders described moving from platform-first programmes to operating models designed around trust, reuse, and measurable outcomes.
1) Treating data as a product, not a project output
Instead of treating data as a by-product of projects, peers described investing in reusable data products that can be consumed repeatedly without bespoke engineering.
In practice, a data product approach includes:
- an owner accountable for quality and communication
- clear definitions explained in business language
- quality expectations that are measurable
- lifecycle management (publish, maintain, retire)
- embedded access and privacy rules
This is one of the most direct ways to reduce waste because it shifts effort from repeated rebuilds to reusable assets.
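A minimal sketch of what such a descriptor could look like, written in Python for illustration. The field names, lifecycle states, and classification values are assumptions, not a standard; the point is that ownership, definitions, quality, lifecycle, and permitted use travel with the asset.

```python
from dataclasses import dataclass
from enum import Enum


class LifecycleState(Enum):
    DRAFT = "draft"
    PUBLISHED = "published"
    DEPRECATED = "deprecated"
    RETIRED = "retired"


@dataclass
class DataProduct:
    """Minimal descriptor for a reusable data product (illustrative fields)."""
    name: str
    owner: str                 # an accountable person, not a team alias
    description: str           # definition written in business language
    quality_checks: list[str]  # named, measurable expectations
    sensitivity: str           # e.g. "internal", "restricted"
    permitted_uses: list[str]  # e.g. ["reporting", "ml-training"]
    state: LifecycleState = LifecycleState.DRAFT

    def is_consumable(self) -> bool:
        """Only published products with a named owner should be reused."""
        return self.state is LifecycleState.PUBLISHED and bool(self.owner)
```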
2) Making trust visible, not assumed
A recurring idea was that trust has to be visible to the user at the moment of consumption. Concepts like a marketplace model with trust scoring came up as a way to make adoption easier.
Trust signals tend to include:
- freshness and update cadence
- quality checks passed
- lineage coverage
- ownership and contact points
- permitted usage and sensitivity classification
When trust is visible, reuse rises and duplication falls, which directly reduces the unused data problem.
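As an illustration of how those signals could roll up into something a marketplace can display, here is a hypothetical scoring sketch. The signal names and weights are invented; in practice each organisation would calibrate its own.

```python
# Hypothetical weights; real scoring would be agreed per organisation.
TRUST_WEIGHTS = {
    "fresh_within_sla": 0.30,
    "quality_checks_passed": 0.30,
    "lineage_documented": 0.20,
    "owner_assigned": 0.10,
    "classification_present": 0.10,
}


def trust_score(signals: dict[str, bool]) -> float:
    """Combine boolean trust signals into a 0-1 score for display."""
    return sum(w for name, w in TRUST_WEIGHTS.items() if signals.get(name))


score = trust_score({
    "fresh_within_sla": True,
    "quality_checks_passed": True,
    "lineage_documented": False,
    "owner_assigned": True,
    "classification_present": True,
})
print(f"trust score: {score:.2f}")  # 0.80 -> could render as a badge
```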
3) Using AI to accelerate documentation, while keeping accountability human
Leaders discussed using AI to help generate metadata and documentation, especially in complex estates where manual work cannot keep up.
Where it is working, AI is used to scale coverage, but publishing remains controlled:
- humans approve what is “true”
- sensitive classification is validated
- ownership keeps documentation current
This avoids the common failure mode where documentation exists, but nobody trusts it.
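One way to encode that control is a review step that no AI-generated text can bypass. This is a sketch with hypothetical names: the draft is assumed to arrive from an upstream assistant, and nothing becomes publishable until a named human confirms both the text and the sensitivity label.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class DraftDoc:
    asset: str
    generated_text: str       # produced by an AI assistant (assumed upstream)
    classification: str       # proposed label, pending human validation
    approved_by: Optional[str] = None

    def approve(self, reviewer: str, validated_classification: str) -> None:
        """A named human confirms the description and the sensitivity label."""
        self.classification = validated_classification
        self.approved_by = reviewer

    @property
    def publishable(self) -> bool:
        """Publishing stays blocked until a reviewer has signed off."""
        return self.approved_by is not None
```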
4) Prioritising lineage that answers impact questions quickly
Lineage was discussed in practical terms: it matters when it accelerates decisions and reduces rework.
The lineage questions leaders care about tend to be:
- why did this number change?
- what downstream dashboards, models, or processes depend on this dataset?
- what will break if we change this field?
- is this data safe to use for AI-enabled workflows?
Lineage is increasingly being treated as a risk control and a change management accelerator, not a compliance nice-to-have.
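If lineage is held as a directed graph, most of those impact questions reduce to a reachability query. A toy sketch follows, using `networkx` purely for illustration; any lineage store that supports downstream traversal answers the same question.

```python
import networkx as nx

# Toy lineage graph: edges point from upstream asset to downstream consumer.
lineage = nx.DiGraph()
lineage.add_edges_from([
    ("crm.accounts", "dp.customer_360"),
    ("dp.customer_360", "dash.churn_overview"),
    ("dp.customer_360", "ml.churn_model"),
])


def impact_of_change(asset: str) -> set[str]:
    """Everything downstream that could break if this asset changes."""
    return nx.descendants(lineage, asset)


print(impact_of_change("crm.accounts"))
# e.g. {'dp.customer_360', 'dash.churn_overview', 'ml.churn_model'}
```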
5) Investing in observability and automated testing without adding overhead
Peers discussed automated testing and observability for critical pipelines, while acknowledging that some tooling can create operational overhead if not designed carefully.
The direction is clear: reliability needs to become standard, but the approach has to stay lightweight and focused on what matters most.
A practical pattern is:
- define quality criteria for critical data products
- automate checks with clear exception workflows
- keep the monitoring experience simple
- prioritise fast root-cause over complex visualisation
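A minimal sketch of what "lightweight" can mean in practice: two checks, each returning an exception message routed to the owner rather than another chart. The thresholds and SLAs below are illustrative.

```python
from datetime import datetime, timedelta, timezone
from typing import Optional


def check_freshness(last_updated: datetime, sla: timedelta) -> Optional[str]:
    """Return an exception message if the dataset breaches its freshness SLA."""
    if datetime.now(timezone.utc) - last_updated > sla:
        return f"stale: last update {last_updated.isoformat()}"
    return None


def check_null_rate(null_count: int, row_count: int, max_rate: float) -> Optional[str]:
    """Flag key-field null rates above the agreed threshold."""
    rate = null_count / max(row_count, 1)
    if rate > max_rate:
        return f"null rate {rate:.1%} exceeds {max_rate:.1%}"
    return None


exceptions = [
    msg for msg in (
        check_freshness(datetime(2026, 1, 1, tzinfo=timezone.utc), timedelta(days=1)),
        check_null_rate(null_count=42, row_count=1_000, max_rate=0.02),
    ) if msg
]
# Route anything in `exceptions` to the owner with a clear escalation path.
```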
6) Shifting governance from documents to workflows
A consistent theme was that governance fails when it lives in policy documents rather than in operational workflows.
Peers described governance that works as governance that is:
- embedded into build and publishing processes
- supported by automated controls where possible
- clear about escalation paths when issues arise
- designed to enable innovation early, then tighten as value and risk increase
This is particularly important as AI adoption accelerates, because AI makes classification and access weaknesses visible immediately.
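Sketched as code, a publish-time gate might run a handful of automated controls and block with explicit reasons rather than relying on a policy document. The field names here are assumptions.

```python
def publish_gate(asset: dict) -> list[str]:
    """Automated control run at publish time; returns blocking issues."""
    issues = []
    if not asset.get("owner"):
        issues.append("no accountable owner assigned")
    if not asset.get("classification"):
        issues.append("no sensitivity classification")
    if not asset.get("quality_checks"):
        issues.append("no measurable quality criteria defined")
    return issues


issues = publish_gate({"name": "dp.customer_360", "owner": "jane.doe"})
if issues:
    raise ValueError(f"publish blocked: {issues}")  # escalate, don't silently pass
```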
7) Funding adoption as a workstream, not assuming it will happen
A striking point from peers was how difficult it is to scale capability-building in large organisations, including examples where sheer workforce scale makes traditional training ineffective.
The common insight was that adoption scales through systems:
- embedded guidance in tools
- role-based learning paths that fit real workflows
- communities of practice and champions
- “safe defaults” that reduce accidental misuse
If adoption is not funded, waste persists, and dashboard dependence remains.
A simple way to prioritise: decisions first
An approach peers described for keeping programmes practical is to work backwards from decisions, not datasets.
Step 1: Identify the decisions that matter most
Rather than asking “what data do we have?”, peers described starting with “what decisions drive value and risk?”.
Examples of decision categories that typically create leverage:
- risk and compliance decisions
- customer experience and quality decisions
- operational capacity and service reliability decisions
- investment prioritisation decisions
- workforce capability and change decisions
Step 2: Build a small number of trusted data products for those decisions
The aim is not to fix everything. It is to make a small set of inputs trustworthy and reusable.
For each critical data product, peers described focusing on:
- clear ownership and accountability
- definitions that business leaders understand
- measurable quality criteria
- lineage coverage that supports change management
- access and privacy rules that are enforceable
Step 3: Make trust visible, then measure reuse
If the goal is to reduce waste, reuse becomes a core metric. When reuse rises, duplication falls, and the unused portion of the estate starts shrinking.
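A simple illustration of reuse as a metric, assuming access logs record which team consumed which dataset. The log entries and threshold are hypothetical.

```python
from collections import Counter

# Hypothetical access log: (dataset, consuming team) pairs over a quarter.
access_log = [
    ("dp.customer_360", "marketing"),
    ("dp.customer_360", "risk"),
    ("dp.customer_360", "finance"),
    ("tmp.local_extract_7", "marketing"),
]

# Distinct consuming teams per dataset.
teams_per_dataset = Counter(dataset for dataset, _team in set(access_log))

# Reuse rate: share of datasets consumed by at least two teams.
reuse_rate = sum(1 for n in teams_per_dataset.values() if n >= 2) / len(teams_per_dataset)
print(f"reuse rate: {reuse_rate:.0%}")  # 50% in this toy log
```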
What leaders are measuring beyond dashboard counts
Peers described shifting success measures away from the number of dashboards or datasets produced and towards operational outcomes.
Measures that align to the “70 to 80% unused” reality include:
- time-to-trust: time from “I need data” to “I can use it confidently”
- reuse rate of trusted data products
- reduction in manual verification effort
- fewer incidents caused by incorrect or unauthorised use
- faster root-cause analysis when metrics shift
- reduced cycle time to publish and maintain governed assets
- improved readiness for AI adoption through stronger classification and access control
These measures also translate well across stakeholders because they connect data investment to business speed and risk reduction.
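Time-to-trust, the first of these, is straightforward to compute once request and first-confident-use events are captured. The events below are invented for illustration.

```python
from datetime import datetime
from statistics import median

# Invented events: when a team asked for data vs. first confident use of it.
requests = {
    "req-1": (datetime(2026, 1, 5), datetime(2026, 1, 6)),
    "req-2": (datetime(2026, 1, 7), datetime(2026, 1, 14)),
    "req-3": (datetime(2026, 1, 9), datetime(2026, 1, 10)),
}

days_to_trust = [(first_use - asked).days for asked, first_use in requests.values()]
print(f"median time-to-trust: {median(days_to_trust)} days")  # 1 day here
```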
Peer snapshot of what is changing
| Theme | What peers described | Why it matters for large enterprises | A practical move that helps |
|---|---|---|---|
| Low conversion of data into value | Only 20 to 30% of created data is used | Waste becomes a budget target | Pick the top decision datasets and drive reuse first |
| Human behaviour drives incidents | 96% of breaches linked to individual errors | Controls must match how people work | Embed guidance, safe defaults, and enforceable access workflows |
| Quality is becoming explicit | QA example checking all calls against clear criteria | Leaders want measurable evidence | Define quality criteria for critical data products and automate checks |
| Controlled environments unlock AI value | Example of large savings tied to secure internal use | Production requires monitoring and protocols | Treat monitoring and exception handling as core design |
| Predictive outputs still need oversight | Example showing meaningful recall plus human judgement | Full automation is rarely the goal | Design thresholds and escalation paths into workflows |
| Transformations remain complex | Example of large-scale asset migration progress | Leaders need milestones that map to outcomes | Track phased progress tied to adoption, not just migration completion |
| Enablement must scale | Example of large workforce capability challenges | Adoption fails without systems | Build role-based learning and communities of practice |
| Accountability is being formalised | Data contracts discussed as a direction | Ownership reduces drift and rebuilds | Start lightweight for critical assets and expand over time |
| Privacy posture is evolving | Privacy certification and operational discipline themes | Privacy is becoming operational | Make classification and permitted usage visible at the point of use |
The enterprise data usage gap leaders keep coming back to
```
Data created: 100%   |████████████████████████████████████████████|
Data used:    20–30% |███████████                                 |
Data unused:  70–80% |█████████████████████████████████           |
```
This simple view explains why investment is shifting. Leaders are prioritising converting what already exists into trusted, reusable decision inputs before funding more collection and more reporting.
Practical actions leaders are taking now
Peers described a few moves that tend to create momentum without turning into a multi-year rewrite.
Tighten the focus to a small set of decision-critical data products
Pick a small number of data products tied to decisions leadership cares about, then make those assets:
- easy to find
- easy to understand
- easy to trust
- safe to use
Establish ownership and clear publishing workflows
Ownership reduces drift. Publishing workflows reduce ambiguity about what is official and current.
Make trust signals visible to users
Users adopt what they can trust quickly. Trust signals reduce verification and reduce rebuilds.
Fund enablement as part of the solution
If adoption is a priority, enablement needs to be engineered into how people work, not layered on through occasional communications.
Treat AI readiness as a trust layer investment
AI adoption is forcing visibility on classification, access, and quality. Strengthening the trust layer makes AI initiatives faster and safer.
Recent discussions indicated that enterprises are not abandoning dashboards. They are repositioning them. Dashboards are becoming part of a broader effort to convert data into trusted, repeatable decision advantage.
The most important shift is this: leaders are redirecting data spend away from output volume and towards trust, reuse, and measurable outcomes. In environments where only 20 to 30% of data is used, the biggest opportunity is not more data. It is improving conversion of what already exists into decisions that improve performance and reduce risk.