70% FAILURE RATE
McKinsey's large-scale survey of global transformation programs places the failure rate at 70%. BCG's research on digital transformation specifically puts it at 70–80%. These figures have remained stubbornly stable for over a decade, despite exponential growth in technology capability and consulting industry investment in methodology. The problem is not the tools. The problem is something more fundamental.
Sources: McKinsey Global Survey on Transformations; BCG Digital Transformation Study 2024
We have been part of 47 transformation engagements — as lead consultants, as diagnostic advisors, and in several cases as the team brought in after an initial transformation attempt failed. Across that experience, a pattern has emerged that is consistent enough to stake a methodology on: the difference between success and failure is almost never about the technology chosen, and almost always about the sequence of decisions made in the first week.
The 30% of transformations that succeed share a set of behaviours that the 70% consistently lack. These behaviours are not complex, technically demanding, or expensive to implement. They are, however, counterintuitive in an era where the pressure to "go digital" arrives top-down, vendor-driven, and in a hurry.
This analysis documents the five most common failure patterns we have observed, and the corresponding success patterns of the organisations that beat the odds.
Five Failure Modes Observed Across 47 Engagements
1. Technology Before Process
This is the most common failure pattern, present in some form in 38 of the 47 engagements we examined. The organisation identifies a technology — a new ERP, an IoT platform, a manufacturing execution system — and commits to implementing it before conducting a rigorous analysis of the processes it will support. The result is a technically successful implementation that is operationally counterproductive: the new system automates broken processes at higher speed and greater cost. When this happens, the technology absorbs the blame; the broken process is rarely examined.
2. The Excluded Frontline
Digital transformation programs that originate in the boardroom and travel downward through management layers frequently arrive on the shop floor as an incomprehensible directive accompanied by a new software system. The people who will use the system daily — production operators, quality technicians, logistics coordinators — were not consulted during the design phase. They receive training on the tool but not on why the tool exists or what problem it solves for them. Adoption is superficial, usage is non-compliant, and the program is retrospectively characterised as a change management failure. It was actually a design failure — a failure to include the users in the design.
The "big bang" transformation — where the entire new system goes live on a single go-live date after 12–18 months of hidden development — has a failure rate that approaches 85% in our experience. The duration of the development phase means that business requirements have shifted by the time the system launches. The compressed testing phase means that issues are discovered in production rather than pre-launch. The all-or-nothing nature of the go-live means that there is no fallback position if something goes wrong. The 90-day sprint model — delivering working capability incrementally, in short cycles, with real users testing real functionality — has the inverse failure profile.
4. Vanity Metrics
Transformation programs frequently define their success metrics before the transformation begins — and those metrics are chosen to satisfy the stakeholders who approved the budget rather than to accurately measure operational improvement. An ERP implementation is declared a success because it went live on budget, not because it improved order fulfilment time. A data platform is celebrated for the number of dashboards deployed, not for the number of operational decisions those dashboards have influenced. When vanity metrics govern the program, the incentive structure rewards completion over impact — and the organisation congratulates itself on a transformation that has not actually transformed anything.
5. The Absent Diagnostic
The final and perhaps most damaging failure pattern is the absent diagnostic. The transformation begins not with a rigorous examination of the current state — what processes exist, how they perform, where the constraints lie, what the organisation's data maturity actually is — but with a solution. The solution is often already partially selected: a vendor demo has been impressive, a competitor is using a particular platform, or a board member has a perspective on the right technology direction. The diagnostic is either skipped entirely or conducted after the solution has been selected, functioning as a justification exercise rather than a discovery one.
"Every transformation failure we have analysed started with a solution. Every success started with a question. That simple inversion is worth more than any methodology."
What the Successful 30% Do Differently
The organisations that succeed at digital transformation are not necessarily better resourced, more technically sophisticated, or led by more visionary executives. They are distinguished primarily by their discipline around sequencing — doing things in the right order — and their willingness to slow down at the beginning in order to accelerate later.
Sequencing
- The 70%: Technology selected before process analysis is complete; vendor demos drive the solution choice.
- The 30%: Process mapped and measured first; technology selected to solve a defined, quantified problem.

Frontline involvement
- The 70%: Frontline informed at or after go-live; training focused on tool mechanics, not operational impact.
- The 30%: Frontline co-designed the solution; operators can articulate why the change makes their work better.

Delivery model
- The 70%: 18-month development cycle with a single go-live date and no incremental user testing.
- The 30%: 90-day sprint cycles delivering working capability to real users, with feedback loops after each sprint.

Success metrics
- The 70%: Metrics tied to project delivery milestones (on time, on budget, live systems).
- The 30%: Metrics tied to operational outcomes: cycle time, defect rate, decision speed, cost per unit.

Diagnostic
- The 70%: Diagnostic phase absent or conducted as post-hoc justification for a pre-selected solution.
- The 30%: A 30-day discovery diagnostic precedes all solution design; the problem is defined before the solution is considered.
Process First → People Alignment → Technology Last
The success pattern across our 47 engagements can be distilled into a three-part sequencing framework. It is simple enough to state in a sentence, and difficult enough to execute that the majority of organisations get it wrong.
Process First
Map, measure, and improve core operational processes before introducing any new technology. Understand what work is actually happening, where the constraints are, what the waste cost is, and what an improved process would look like. Produce a current-state process map and a future-state design. Quantify the gap in financial terms. Only then are you ready to ask: what technology, if any, would accelerate reaching the future state?
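To make "quantify the gap in financial terms" concrete, the sketch below runs the arithmetic on invented numbers. Every input here (cycle times, order volume, hourly cost) is a hypothetical placeholder, not data from any engagement:

```python
# Back-of-envelope sizing of a process gap in financial terms.
# All inputs below are hypothetical placeholders, not client data.

current_cycle_hours = 6.0    # measured current-state cycle time per order
future_cycle_hours = 4.5     # designed future-state cycle time per order
orders_per_year = 12_000     # annual order volume
cost_per_hour = 85.0         # fully loaded cost of an operational hour

hours_saved_per_order = current_cycle_hours - future_cycle_hours
annual_hours_saved = hours_saved_per_order * orders_per_year
annual_gap_value = annual_hours_saved * cost_per_hour

print(f"Hours saved per order: {hours_saved_per_order:.1f}")
print(f"Annual hours saved:    {annual_hours_saved:,.0f}")
print(f"Quantified annual gap: {annual_gap_value:,.0f}")
```

However rough, a figure like this gives the later technology decision a denominator: any candidate system can be judged against the value of the gap it is supposed to close.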
People Alignment
Before any technology is selected or built, secure genuine understanding and commitment from the people who will use it and be affected by it. This is not the same as communicating the transformation plan. It means involving frontline teams in the process design, testing concepts with actual users before finalising them, and ensuring that the case for change is understood at an operational level — not just an executive one. People who understand why a change is being made are far more likely to make it work than people who were simply told what to do.
Technology Last
Technology is the implementation mechanism for a process improvement that has already been designed and socialised. It is not the source of the improvement itself. When selected at this stage — after process clarity and people alignment — the technology choice becomes significantly cleaner: you know exactly what it needs to do, who will use it, what data it needs to access, and how its success will be measured. Vendor selection at this point is a functional fit exercise, not a visionary bet.
The SNZ 90-Day Transformation Model
Our approach to digital transformation is built around this three-part framework, delivered in 90-day sprint cycles that allow the organisation to test, learn, and adjust before committing to the next phase of investment.
Discovery
SNZ embeds with your operational team to map current-state processes, collect performance data from existing systems, and conduct structured interviews with frontline staff. We build a baseline picture of operational performance across all key dimensions: throughput, quality, cost, and speed. No solution is discussed in this phase. The output is a current-state diagnostic report with quantified performance gaps.
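As a sketch of what that baseline looks like once it is reduced to numbers, the example below computes the four dimensions from a handful of invented production records. The record layout and every figure in it are assumptions for illustration, not SNZ's actual diagnostic format:

```python
# Illustrative operational baseline from a few hypothetical production
# records; the field layout and numbers are placeholders, not a real schema.

records = [
    # (units_produced, units_defective, run_hours, operating_cost)
    (480, 12, 8.0, 5200),
    (455, 18, 8.0, 5350),
    (510, 9, 8.5, 5400),
]

units = sum(r[0] for r in records)
defects = sum(r[1] for r in records)
hours = sum(r[2] for r in records)
cost = sum(r[3] for r in records)

throughput = units / hours              # throughput: units per operating hour
defect_rate = defects / units           # quality: share of output defective
cost_per_unit = cost / units            # cost: operating cost per unit
minutes_per_unit = hours * 60 / units   # speed: average minutes per unit

print(f"Throughput:    {throughput:.1f} units/hour")
print(f"Defect rate:   {defect_rate:.1%}")
print(f"Cost per unit: {cost_per_unit:.2f}")
print(f"Speed:         {minutes_per_unit:.2f} min/unit")
```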
Diagnostic
We translate the current-state gaps into a future-state process design, developed collaboratively with your operational team. This phase produces: a future-state process map, a prioritised improvement roadmap, a technology requirements specification, and a change management plan. Only now are specific technology options evaluated — against the requirements specification, not against vendor demo impressiveness.
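One common way to evaluate options against a requirements specification rather than demo impressiveness is a weighted scoring matrix. The sketch below shows the mechanics on invented requirements, weights, and vendors; nothing here comes from an actual specification, and weighted scoring is a generic technique rather than a prescribed SNZ step:

```python
# Hypothetical weighted scoring of vendor options against a requirements
# specification. Requirement names, weights, and fit scores are invented
# for illustration and carry no recommendation.

requirements = {            # requirement -> weight (weights sum to 1.0)
    "order tracking": 0.4,
    "quality data capture": 0.3,
    "ERP integration": 0.2,
    "operator usability": 0.1,
}

vendors = {                 # vendor -> requirement -> fit score, 0-5
    "Vendor A": {"order tracking": 4, "quality data capture": 2,
                 "ERP integration": 5, "operator usability": 3},
    "Vendor B": {"order tracking": 3, "quality data capture": 5,
                 "ERP integration": 3, "operator usability": 4},
}

for name, scores in vendors.items():
    total = sum(weight * scores[req] for req, weight in requirements.items())
    print(f"{name}: weighted fit {total:.2f} / 5")
```

The value of the exercise is less in the final score than in forcing the weights to be argued about explicitly, before any vendor is in the room.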
Quick Wins
Before any technology is implemented, we identify and deliver process improvements that can be made immediately — changes to workflows, scheduling, approval processes, or reporting that require no capital investment and can be live within weeks. These quick wins serve two purposes: they generate immediate financial return, and they build organisational confidence in the transformation program by demonstrating that change is both possible and beneficial.
Transformation
The first 90-day technology sprint begins: scoped tightly, delivered to real users, measured against operational outcomes defined in the diagnostic phase. At day 90, we assess: what worked, what didn't, what has changed in the business context, and what the next sprint should prioritise. This cycle-based approach means the transformation learns and adapts continuously, rather than committing to an 18-month roadmap that becomes obsolete before it completes.
The implication of this model is that the most important conversation in a digital transformation is not "which technology should we implement?" It is: "what problem are we actually trying to solve, and do we understand it well enough to know whether technology is the right answer at all?"
In our experience, roughly one in five operational challenges framed as technology problems is, on close examination, primarily a process problem that requires no new technology at all. The diagnostic reveals this — and in doing so, saves the organisation the cost, disruption, and demoralisation of a failed technology implementation.
"The most valuable thing a consultant can tell a client isn't which technology to buy. It's that they don't need to buy one yet."
Key Takeaways
- The 70% failure rate for digital transformations has remained stable for a decade despite advances in technology — because the problem is not the technology.
- The single most common failure pattern — present in 38 of 47 engagements — is technology selection before process analysis.
- The 30% that succeed follow a consistent sequence: Process First → People Alignment → Technology Last.
- 90-day sprint cycles with working deliverables outperform 18-month big-bang implementations in adoption rate, cost, and operational impact.
- The most important question to ask before any transformation: "What problem are we solving, and do we understand it well enough to know whether technology is the right answer?"
Start Right
Start with a Discovery Call — not a software demo.
The SNZ Sona Discovery Call is a 60-minute structured conversation to assess your current operational state, identify your highest-priority improvement opportunities, and determine whether a formal engagement would deliver measurable value. No pitch, no software. Just an honest operational conversation.