Recent discussions with senior data and technology leaders highlighted a pattern that feels familiar across large enterprises. The organisation commits to centralisation, then swings towards decentralisation, then recentralises again.
It is easy to frame this as indecision or trend-chasing. In practice, most reversals are rational responses to real constraints. Centralised and data mesh approaches each solve a set of problems, but they also create new ones. As an organisation grows, acquires, globalises, and introduces AI into production workflows, the constraints change. The architecture and operating model change with them.
This piece unpacks why the pendulum keeps swinging and how peers are structuring the next phase so they do not have to keep “resetting” every few years.
The pattern leaders described
Several leaders described the same sequence:
- Centralise data to create control, standardisation, and a shared foundation.
- Hit bottlenecks and slow delivery.
- Decentralise so domains can move faster and own outcomes.
- Lose consistency and confidence as definitions diverge and governance becomes uneven.
- Recentralise elements again to support global reporting, cross-domain AI use cases, and risk management.
In one peer example, an organisation moved from a federated model towards greater centralisation to support globalisation and AI initiatives. Another described a transformation programme that unified four legacy asset systems to achieve standardisation and cost savings through consolidated data. Others noted that most teams are still running data lakes today, with interest in experimenting with decentralised approaches like data mesh, even if implementations vary in maturity.
The key point is not which approach is “right”. The key point is why each approach becomes attractive at different moments.
Why reversals happen
1) Maturity changes what the organisation actually needs
A repeated theme was that architecture choice depends on business maturity.
When data maturity is low, centralisation can be the fastest path to basic control:
- common definitions
- consistent governance
- a place to put data that is discoverable and manageable
- visibility of what exists
As maturity increases, the pain shifts. The bottleneck becomes delivery speed and domain ownership. That is when decentralised approaches start to look more appealing.
The reversal happens when enterprises treat centralisation or decentralisation as a permanent identity rather than a stage-based decision.
2) Globalisation and regional constraints pull in opposite directions
Leaders described oscillation driven by global and regional needs.
- Global leaders want consistency, consolidated reporting, and shared governance.
- Regional teams need autonomy to meet local requirements, local processes, and differing risk profiles.
If you centralise too aggressively, the business experiences friction and local workarounds. If you decentralise too loosely, leadership loses confidence that the organisation is operating on consistent definitions and controls.
This is a core reason strategies reverse. The architecture has to reflect both realities.
3) Acquisitions and rapid growth force rebalancing
One peer described rapid acquisitions and the resulting data management challenges, along with the need to improve data maturity quickly enough to keep pace with that growth.
Acquisitions tend to create:
- multiple overlapping systems for the same function
- conflicting definitions and metrics
- duplicated data pipelines
- uneven governance maturity across entities
In these periods, centralisation becomes attractive again because it creates a practical consolidation path. Over time, as integration progresses, decentralised ownership becomes more viable and often necessary.
4) AI changes the stakes and surfaces weaknesses fast
Leaders described how AI initiatives expose data quality issues, classification gaps, and governance weaknesses quickly. This changes architecture preferences.
A decentralised approach can move fast, but AI increases the need for:
- consistent classification and permitted use
- clear lineage and accountability
- reliable pipelines and monitoring
- shared foundations for cross-domain use cases
When AI becomes a strategic priority, many organisations recentralise specific capabilities to prevent drift, reduce risk, and scale reuse.
5) Stakeholder trust can block progress in either model
Peers talked about the challenge of winning over risk-averse stakeholders during modernisation and platform changes. The specific sector does not matter. The dynamic does.
- Centralisation can be viewed as loss of autonomy.
- Decentralisation can be viewed as loss of control.
Either way, if key stakeholders do not trust the operating model, the organisation will eventually “correct course”, which looks like a reversal.
6) Tooling rarely delivers a single, unified catalogue and lineage view
Leaders discussed cataloguing and lineage challenges across systems. Many platforms provide internal catalogues, but a unified master catalogue remains difficult, especially where legacy systems cannot be scanned cleanly.
This is an underappreciated driver of reversals. Without clear discoverability and lineage:
- decentralised models struggle because domains cannot easily reuse each other’s assets
- centralised models struggle because central teams cannot maintain truth at scale
When catalogue and lineage capability lags, the organisation often reverts to whichever model feels safer in the moment.
What each model solves and why it eventually frustrates
The centralised data lake phase
What it solves:
- Standardisation of definitions and governance
- A shared foundation for analytics
- Consolidated reporting and cost savings through simplified estates
- Easier risk oversight and common controls
Where it often breaks down:
- Delivery slows because the central team becomes a gate
- Domain teams feel disconnected from priorities
- Backlogs grow and workarounds appear
- Reuse still fails if trust and metadata are weak
A centralised lake can still produce low adoption if discoverability, ownership, and trust signals are missing.
The data mesh or decentralised phase
What it solves:
- Domain ownership and accountability
- Faster delivery aligned to business outcomes
- Closer alignment between data products and the decisions they support
- Reduced bottleneck pressure on a central team
Where it often breaks down:
- Definitions diverge and confidence drops
- Governance becomes inconsistent across domains
- Duplication increases
- Cross-domain use cases become slow and political
- Risk posture becomes harder to demonstrate centrally
A decentralised model can still fail if it does not provide a strong shared foundation and clear guardrails.
The hybrid reality most peers are moving toward
The most consistent takeaway from peer discussions is that the best approach tends to be hybrid.
Not “a bit of both” in an abstract way, but a deliberate split between:
- A shared foundation that must be consistent
- Domain ownership where speed and local context matter
A practical hybrid split looks like this:
Shared foundation (centralised):
- classification standards and permitted-use rules
- identity and access patterns
- core metadata approach and publishing workflow
- baseline quality expectations and monitoring
- shared catalogue experience and trust signals
- cross-domain reference data and critical enterprise datasets
Domain ownership (decentralised):
- data product definition for domain decisions
- prioritisation aligned to business outcomes
- product lifecycle ownership and communication
- local transformation and semantic layers
- domain-specific quality rules beyond the baseline
This reduces the need for reversals because it makes the operating model adaptable without changing identity every few years.
The goal is not to stop movement. The goal is to make the movement intentional and contained, so shifts do not feel like a full reset.
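One way to keep that split deliberate is to write it down as an explicit capability map rather than leaving it implicit in team structures. The sketch below is a minimal illustration in Python; the capability names, ownership labels, and rationales are hypothetical, not a prescribed standard.

```python
from dataclasses import dataclass

# Illustrative only: a minimal record of which capabilities sit in the shared
# foundation and which are domain-owned. All names here are hypothetical.
@dataclass(frozen=True)
class Capability:
    name: str
    ownership: str        # "shared-foundation" or "domain"
    rationale: str

OPERATING_MODEL = [
    Capability("classification-standards", "shared-foundation",
               "Inconsistency would create executive-level distrust"),
    Capability("identity-and-access-patterns", "shared-foundation",
               "Reduces risk and enables reuse across domains"),
    Capability("metadata-publishing-workflow", "shared-foundation",
               "Supports cross-domain AI, reporting, and regulatory needs"),
    Capability("domain-data-products", "domain",
               "Outcomes are domain-specific and need local context"),
    Capability("domain-quality-rules", "domain",
               "Built on top of the shared baseline expectations"),
]

# A simple view: what must stay consistent vs what can vary by domain.
shared = [c.name for c in OPERATING_MODEL if c.ownership == "shared-foundation"]
local = [c.name for c in OPERATING_MODEL if c.ownership == "domain"]
print("Shared foundation:", shared)
print("Domain-owned:", local)
```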
The six triggers that usually push organisations to centralise again
Peers repeatedly described centralising elements when one or more of these triggers appear:
- Global reporting confidence drops. Different versions of “truth” start causing friction at executive level.
- Cross-domain initiatives become strategic. AI, risk, customer, or operational priorities require consistent data across domains.
- Governance becomes uneven. Different domains interpret policies differently, increasing exposure.
- Tool sprawl increases cost and complexity. Too many overlapping solutions reduce interoperability.
- Acquisitions force consolidation. Integration pressure requires a practical shared foundation.
- Operational reliability becomes a priority. Leaders want more resilient pipelines, clearer monitoring, and predictable delivery.
Centralisation in these moments is not ideological. It is a risk and efficiency response.
The six triggers that usually push organisations to decentralise again
Conversely, decentralisation becomes attractive when:
- Central backlogs slow business priorities. Domains cannot move fast enough through a central queue.
- Data products fail to match real decisions. Outputs are delivered, but adoption is low because local context is missing.
- Ownership is unclear. No one feels accountable for data quality and communication.
- Business units want autonomy. Especially where regionalisation and local operating models matter.
- Innovation needs room to run. Too much governance too early slows experimentation.
- People resist “one central team” control. Trust declines and shadow approaches appear.
Decentralisation in these moments is usually a delivery response.
How peers are designing 2026 operating models to avoid repeated resets
1) Start with the decisions, then assign ownership
A recurring peer point was that data needs a defined purpose. Where organisations struggle, it is often because they cannot answer a simple question: which decisions is this dataset supposed to improve?
A practical approach is:
- list the most important decisions in each domain
- define the data products needed to support them
- assign accountable owners who can prioritise and communicate change
This reduces the risk that architecture choices become abstract debates.
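As a rough illustration of how this can be made concrete, the sketch below (Python, with entirely hypothetical decisions, products, and owners) links each important decision to a data product and an accountable owner, and flags decisions that nothing currently supports.

```python
from dataclasses import dataclass

# Hypothetical example: map the decisions a domain makes to the data products
# that support them and to the owner accountable for each product.
@dataclass
class DataProduct:
    name: str
    owner: str            # accountable owner who prioritises and communicates change
    decisions_supported: list[str]

DOMAIN_DECISIONS = [
    "Which regions need additional inventory next quarter?",
    "Which customers are at risk of churn this month?",
]

DATA_PRODUCTS = [
    DataProduct("inventory-demand-forecast", "supply-chain-analytics-lead",
                ["Which regions need additional inventory next quarter?"]),
    DataProduct("customer-churn-scores", "customer-insights-lead",
                ["Which customers are at risk of churn this month?"]),
]

# Check that every important decision is supported by at least one product.
covered = {d for p in DATA_PRODUCTS for d in p.decisions_supported}
unsupported = [d for d in DOMAIN_DECISIONS if d not in covered]
print("Decisions without a supporting data product:", unsupported)
```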
2) Build a publishing workflow that makes metadata real
Leaders described challenges around metadata, cataloguing, and keeping information current. Some discussed using AI to accelerate metadata generation, but also noted that human validation remains necessary.
The operating model move is to treat metadata like a product release:
- draft (potentially AI-accelerated)
- review
- publish
- maintain and refresh
This creates accountability without relying on informal documentation that drifts.
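A minimal sketch of that publishing workflow, assuming nothing more than a status field on each metadata record (the states and dataset name are illustrative, not tied to any particular catalogue tool):

```python
# Illustrative sketch: metadata treated like a product release, moving through
# explicit states rather than living in informal documentation.
ALLOWED_TRANSITIONS = {
    "draft": {"in_review"},          # draft may be AI-accelerated, still needs review
    "in_review": {"published", "draft"},
    "published": {"refresh_due"},
    "refresh_due": {"in_review"},    # periodic refresh loops back through review
}

def advance(record: dict, new_status: str) -> dict:
    """Move a metadata record to a new status if the transition is allowed."""
    current = record["status"]
    if new_status not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"Cannot move from {current!r} to {new_status!r}")
    return {**record, "status": new_status}

record = {"dataset": "customer-churn-scores", "status": "draft"}  # hypothetical dataset
record = advance(record, "in_review")   # human validation of the drafted metadata
record = advance(record, "published")
print(record)
```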
3) Use lightweight agreements before formal data contracts
Peers discussed data contracts as a concept gaining traction, with some viewing broader adoption as a 2026 horizon. Others noted that their organisations are not yet mature enough for formal contracts but are exploring accountability mechanisms.
A staged path tends to work better:
Stage 1: lightweight agreements for critical data products
- owner, purpose, key definitions
- quality expectations
- change communication rules
- access and permitted use
Stage 2: more formalised contracts where risk or reuse is high
- SLAs
- versioning and compatibility
- dependency visibility
- stronger governance and auditability
This avoids introducing heavy bureaucracy before the organisation is ready.
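A Stage 1 agreement does not need heavy tooling. It can start as a small, explicit structure that the owning domain fills in and consumers can read; the Python sketch below uses hypothetical field values, and the same content is often captured in YAML or a catalogue tool instead.

```python
from dataclasses import dataclass

# Hypothetical Stage 1 agreement: just enough structure to make ownership,
# purpose, quality, and change communication explicit for a critical data product.
@dataclass
class LightweightAgreement:
    product: str
    owner: str
    purpose: str
    key_definitions: dict[str, str]
    quality_expectations: list[str]
    change_notice_days: int          # warning consumers get before breaking changes
    permitted_use: list[str]

agreement = LightweightAgreement(
    product="customer-churn-scores",
    owner="customer-insights-lead",
    purpose="Support monthly retention decisions",
    key_definitions={"churn": "No purchase or login in the last 90 days"},
    quality_expectations=["refreshed weekly", "null rate on customer_id below 0.1%"],
    change_notice_days=30,
    permitted_use=["internal analytics", "retention campaign targeting"],
)
print(agreement.product, "owned by", agreement.owner)
```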
4) Design governance to accelerate, not to police
Peers noted that governance techniques are still fluid and not yet fully crystallised into universal best practices. They also emphasised the need to tackle high-priority, high-value data first and to balance risk and innovation.
A practical governance approach described was categorising decisions:
- which decisions can be automated safely
- which require human oversight
- where thresholds and escalation paths are needed
This keeps governance grounded in operational reality and makes it easier for stakeholders to trust.
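Categorisation like this can be expressed as a simple routing rule, as in the sketch below; the classification labels and impact thresholds are assumptions for illustration, not recommended values.

```python
# Illustrative sketch: route governance decisions to automation, human
# oversight, or escalation based on simple, agreed criteria.
def route_decision(data_classification: str, estimated_impact: float) -> str:
    """Return how a governance decision should be handled.

    data_classification: e.g. "public", "internal", "restricted" (hypothetical labels)
    estimated_impact: rough business impact score agreed by the governance forum
    """
    if data_classification == "restricted":
        return "escalate"                 # always needs senior review
    if estimated_impact < 10:             # low-impact threshold agreed up front
        return "automate"
    if estimated_impact < 100:
        return "human-oversight"
    return "escalate"

print(route_decision("internal", 5))      # -> automate
print(route_decision("internal", 50))     # -> human-oversight
print(route_decision("restricted", 1))    # -> escalate
```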
5) Invest in reliability where it matters, not everywhere at once
Leaders discussed pipeline reliability, automated testing, and observability. They also acknowledged that visual pipeline tracking can create significant overhead.
The peer direction was pragmatic:
- focus reliability investment on critical business processes first
- automate testing and monitoring for those pipelines
- keep the operational experience simple
- avoid building complexity that becomes its own burden
This is especially relevant in hybrid models where reliability is a shared foundation.
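A minimal sketch of focusing reliability where it matters: monitor only the pipelines tagged as decision-critical with a simple freshness check, rather than instrumenting everything at once. The pipeline names and staleness threshold below are assumptions.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical inventory of pipelines, with only some marked decision-critical.
PIPELINES = [
    {"name": "finance-daily-close", "critical": True,
     "last_success": datetime.now(timezone.utc) - timedelta(hours=3)},
    {"name": "marketing-clickstream", "critical": False,
     "last_success": datetime.now(timezone.utc) - timedelta(days=2)},
]

MAX_STALENESS = timedelta(hours=24)   # illustrative freshness expectation

def stale_critical_pipelines(pipelines: list[dict]) -> list[str]:
    """Return critical pipelines whose last successful run is too old."""
    now = datetime.now(timezone.utc)
    return [p["name"] for p in pipelines
            if p["critical"] and now - p["last_success"] > MAX_STALENESS]

# Only critical pipelines trigger alerts; non-critical ones stay out of the noise.
print("Alert on:", stale_critical_pipelines(PIPELINES))
```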
6) Treat human behaviour as part of the architecture
Peers repeatedly stressed that the key challenge remains cultural change and human behaviour.
Architecture alone does not drive adoption. Leaders talked about training and encouragement, and about making governance accessible and user-friendly through communication methods that fit busy stakeholders.
In practice, that means:
- design for the way people actually work
- make the safe path the easy path
- provide clear signposting inside tools, not just in policy documents
- create communities and champions who translate governance into daily behaviour
This is not separate from architecture. It is what makes architecture stick.
Peer-informed patterns and what they usually mean
| Pattern peers described | What it tends to signal | The question worth asking next |
|---|---|---|
| Most teams still run data lakes, but interest in decentralised models is rising | Centralisation delivered a foundation, but speed and ownership are now constraints | Where are we bottlenecked and what should domains own? |
| Organisations oscillate between centralised and decentralised models | Needs change with growth, globalisation, and risk | Which elements must stay consistent and which can vary? |
| Unifying multiple legacy systems delivered standardisation and cost benefits | Consolidation pressure is real, especially after growth or acquisitions | What is the minimum foundation we need to consolidate safely? |
| Metadata and catalogue visibility remain hard across systems | Discoverability and trust still limit reuse | What would make the top 20 data products easy to find and trust? |
| Lineage is crucial, but legacy scanning remains difficult | Change management and confidence are blocked by poor visibility | Where do we need lineage most and what is the practical coverage plan? |
| Data contracts are appealing, but maturity varies | Accountability is the goal, not paperwork | What lightweight agreements can we enforce now? |
| Pipeline reliability matters, but some approaches add overhead | Reliability investment must be targeted | Which pipelines are decision-critical and deserve monitoring first? |
| Governance needs to be user-friendly and accelerate progress | Governance credibility drives adoption | What would make governance feel like help rather than friction? |
How to decide what to centralise and what to decentralise without a reset
A practical peer-aligned method is to treat centralisation as a choice for specific capabilities, not for the whole estate.
Centralise when:
- the capability reduces risk and enables reuse across domains
- inconsistency would create executive-level distrust
- it supports cross-domain AI, reporting, or regulatory needs
- it provides shared cost efficiency without slowing delivery
Decentralise when:
- the capability depends on local context and fast iteration
- outcomes are domain-specific
- ownership and accountability need to sit with the business
- central teams cannot prioritise all local needs effectively
This approach avoids a single “lake vs mesh” identity and replaces it with a capability-based operating model.
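In practice this can be run as a per-capability checklist rather than a debate about the whole estate. The sketch below shows the shape of that reasoning in Python; the question names and the simple majority rule are assumptions, not a formal scoring method.

```python
# Illustrative checklist: score a single capability against the criteria above.
# The questions mirror the lists; the scoring rule is an assumption, not a standard.
def placement(answers: dict[str, bool]) -> str:
    centralise_signals = sum(answers[q] for q in [
        "reduces_risk_and_enables_cross_domain_reuse",
        "inconsistency_would_erode_executive_trust",
        "supports_cross_domain_ai_reporting_or_regulation",
    ])
    decentralise_signals = sum(answers[q] for q in [
        "depends_on_local_context_and_fast_iteration",
        "outcomes_are_domain_specific",
        "accountability_must_sit_with_the_business",
    ])
    return "centralise" if centralise_signals >= decentralise_signals else "decentralise"

classification_standards = {
    "reduces_risk_and_enables_cross_domain_reuse": True,
    "inconsistency_would_erode_executive_trust": True,
    "supports_cross_domain_ai_reporting_or_regulation": True,
    "depends_on_local_context_and_fast_iteration": False,
    "outcomes_are_domain_specific": False,
    "accountability_must_sit_with_the_business": False,
}
print("classification-standards ->", placement(classification_standards))  # centralise
```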
The repeated reversals between data lake and data mesh strategies are not a sign that organisations do not know what they are doing. They are a sign that enterprise constraints evolve.
What peers are doing differently now is building hybrid models that are stable enough to support global consistency and safe AI adoption, while flexible enough to let domains own the decisions and data products that drive outcomes.
The aim for 2026 is not to pick a side. It is to design an operating model that can adapt without forcing the organisation to replatform, rebrand, and reset every time priorities shift.