Ignite Thesis
The Environment Always Wins: Why Operating Conditions Determine Whether Change Holds
The market has been solving the wrong problem for twenty years. AI just removed the last excuse.
Miguel Guevara · 2026
Most change programs produce the right artifacts and still fail to hold. The communications went out. The training ran. The system went live. Six months later, the workforce was working around it instead of in it. The reason is structural: nobody redesigned the environment that was producing the old behavior.
Ignite Consulting's change-management plugin handles the work that eats a change team's time: impact assessments, comms plans, training outlines, stakeholder maps. What used to take weeks takes minutes. Human review makes it better.
That frees the team to do the work that determines whether adoption holds. Remove the old path so workarounds disappear. Change what managers get measured on. Force decisions that can't be reversed before go-live. Let one team succeed visibly in the new way so others follow. Watch for workarounds forming early.
We identified the 83 conditions across 14 domains that make this specific and measurable. Each one maps to a place where the environment will push people back to old behavior. Every red or amber result is a predicted workaround.
The premise is simple. You don't enforce permanence. You design it. Make the new behavior easier than the old one. Close the path back. If people reverted, the program didn't fail at adoption. It failed at design.
What follows is the evidence, the market history, and the framework behind that premise.
The Pattern: The Environment Always Wins
We went live on a Tuesday. By Thursday, half the org had returned to the legacy system.
Not because they didn't understand the new process. Not because the training was bad. Because the environment still made the old way easier. The workaround was faster. The manager's scorecard still measured the old metric. The approval chain hadn't changed. Nobody had closed the path back.
That pattern repeats across every type of transformation. Fortune 500 ERP rollouts, post-merger integrations, PE-backed carve-outs, and AI deployments. Different industries, different scales, different technologies. The same breaking point.
The communications went out. The training ran. The system went live. Six months later, the workforce was working around it instead of in it.
The gap is environmental. It closes when the environment changes. It doesn't close when you add more training, send another email, or add a dashboard that measures the wrong thing more precisely.
The cost isn't just failed adoption. It's stranded investment. The technology works. The business case doesn't.
This isn't a people problem. It's an architecture problem. Change fails not because organizations don't know what to do, but because the sponsor hasn't redesigned the environment that's producing the old behavior.
The Evidence: Why the Traditional Model Can't Close That Gap
The conventional change management approach was built around a logical but flawed premise: build the right artifacts, deliver them well, and behavior change will follow.
Success gets measured by what gets produced. Plans created, emails sent, trainings completed, dashboards showing green. None of them measure whether teams are actually ready to work differently on Day One, or whether behavior change is holding ninety days after go-live.
The structural flaw is in who owns what. Change teams build the plans. Leaders own the outcomes. But leaders are kept informed rather than accountable. The change function operates as a support function, working alongside the business rather than inside the accountability structure that actually drives behavior.
BCG studied 225 transformation programs and found the primary drivers of failure were four hard-side factors: project duration, team capability, management commitment, and the additional effort required of employees beyond their day jobs. Twenty years ago, the data already pointed to operating conditions, not people.[1]
Matt MacInnis, COO of Rippling, arrived at the same conclusion from an operator's seat: leaders make a fundamental error by over-rotating on outputs rather than engineering the inputs that produce them.[2] Screaming about the score doesn't change the plays. Change the plays, and the score follows.
The organizational design literature explains why. Ben Horowitz frames organizational design as a communications architecture: every structure optimizes some paths at the expense of others.[3] When new technology changes who has information and who has decision rights, the old paths become liabilities. The structural move is to redesign them deliberately, before they reassert themselves as informal workarounds.
Academic research. Operator experience. Organizational design theory. All three converge on the same point. The data has been pointing at operating conditions for two decades. The market hasn't built around it.
The budget tells the same story. Deloitte found that 93% of AI budgets are spent on technology. Seven percent goes to the people expected to use it. Companies that focus solely on technology are 1.6 times more likely to report that their AI investments fall short.[4]
The misalignment isn't philosophical. It's structural down to the budget line.
Why the Market Never Built Around It
If the data has been pointing at operating conditions for twenty years, why is the market still organized around communications and training?
Not because practitioners missed the problem. Most experienced change managers can describe exactly where the environment works against them. They see the metrics rewarding old behavior. They see the legacy system running in parallel. They see the middle manager carrying four new roles nobody scoped. Prosci's own research identified mid-level managers as the group most resistant to change, with 43% of practitioners naming them as such.[5] The discipline understood the problem. It couldn't sell the solution.
Change management enters most organizations as a percentage of a larger program budget. Five percent. Maybe ten. Gartner recommends allocating at least 15% of the overall system implementation budget to change management. Yet among companies managing large projects over $1.5 million, 77% reported spending less than 10%.[6] The scope is set before the change lead is hired.
The math gets worse at the program level. Consulting labor routinely constitutes 40 to 60 percent of total ERP program cost.[7] If change management is getting less than 10% of that same budget, the structural subordination is built into the deal before anyone writes a scope document. No firm is going to put a $50 million technology program at risk to argue for a bigger change management scope. So the scope gets defined by what fits inside the budget, not by what the transformation requires.
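To make the scale mismatch concrete, here is back-of-the-envelope arithmetic for a hypothetical $50 million program. The figures are illustrative assumptions chosen to match the cited ranges, not data from any engagement.

```python
# Back-of-the-envelope budget math for a hypothetical $50M ERP program.
# All figures are assumptions chosen to match the cited ranges, not
# data from any actual engagement.

program_budget = 50_000_000

# Consulting labor: 40-60% of total ERP program cost [7]
consulting_low = program_budget * 0.40   # $20M
consulting_high = program_budget * 0.60  # $30M

# Change management: Gartner recommends at least 15%, but 77% of large
# programs report spending under 10% [6]
ocm_recommended = program_budget * 0.15  # $7.5M
ocm_typical_cap = program_budget * 0.10  # under $5M in practice

print(f"consulting labor:   ${consulting_low/1e6:.0f}M-${consulting_high/1e6:.0f}M")
print(f"OCM at Gartner min: ${ocm_recommended/1e6:.1f}M")
print(f"OCM in practice:    under ${ocm_typical_cap/1e6:.0f}M")
```

On those assumptions, the function accountable for whether anyone actually uses the system is working with somewhere between a quarter and a sixth of what the consulting labor costs.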
That shapes who survives. The practitioners who make it in that model stop fighting for environment-level work and start optimizing deliverable volume. The firms that staff it optimize accordingly: solution centers, junior practitioners, offshore resources. When you move change management into a shared services model staffed at that level, you've made an architectural decision about what kind of work it can do. Nobody in a solution center is walking into a sponsor's office to say the operating model needs to change.
And the sponsor relationship matters more than anything else. Prosci's research shows that projects with extremely effective sponsors are 79% likely to meet their objectives, compared to just 27% with extremely ineffective sponsors.[5] But the person closest to the problem was never positioned to have the harder conversation. The staffing model prevents it.
McKinsey documented what happens next. Organizations fail to sustain impact because performance disciplines end with the transformation effort, incentives and budgets are not aligned with new objectives, and management teams stop investing in the future.[8] BCG found the same pattern: three out of four transformations fall short, and what often distinguishes success from stagnation is whether incentives are directly linked to transformation goals.[9]
Prosci's reinforcement data closes the loop. Among organizations that allocated resources to reinforcement and sustainment, 67% met or exceeded objectives, compared with 55% among those that did not. But while 81% of organizations plan for reinforcement, only 55% actually resource it.[5] The budget was spent. The project closed. The reinforcement activities that would have addressed the operating environment were the first thing cut when the timeline compressed.
The market didn't accidentally solve the wrong problem. The way change management gets bought and sold made the wrong problem the only one that fit the budget.
What AI Changes, and What It Doesn't
That's the market AI entered.
AI can now generate a change impact assessment, a communications plan, a training outline, and a stakeholder map in minutes, work that previously consumed weeks. With human review and oversight, this work can be produced faster and at higher quality than before.
The impact isn't efficiency. It's the removal of the last excuse.
AI doesn't fix change management. It removes the constraint that justified focusing on the wrong work. Any organization can now produce polished change plans at scale, instantly. “We're too busy building materials to do the harder work” no longer holds.
But AI also makes the problem worse before it makes it better. Previous technologies replaced discrete tasks. AI changes the relationship between a person and their work. It redistributes judgment, collapses decision layers, and moves information access from hierarchical to flat. Those are structural changes to the operating model, not problems that communications and training can solve. Treating them as communications-and-training problems doesn't just fall short. It guarantees the workaround.
McKinsey surveyed nearly 2,000 organizations. Only 6% of AI deployments are generating real returns. AI high performers are 2.8 times more likely to have redesigned workflows before deployment, not added AI to an architecture that was already failing them.[10] OpCo Intelligence found the same thing. Their 2026 survey of 123 senior operators at companies including Stripe, Anthropic, Databricks, and Microsoft showed general-purpose chatbots at nearly 90% adoption. The tools exist. The binding constraint is organizational, not technical.[11]
The Center for Creative Leadership names what this costs at the human level: “We are asking people to take their biggest professional risks at the moment they feel least safe.”[12] That is not a training problem. It is a structural one.
If your team's capacity were freed tomorrow, where would it go? The structural work that determines whether change holds looks like this: remove the old path so the workaround disappears. Change what managers are measured on. Force irreversible decisions before go-live. Make one team visibly succeed in the new way of working. Audit for workarounds in formation.
If teams are still spending most of their time producing artifacts, that's no longer a capacity issue. It's a prioritization decision. And it raises a harder question: if the structural work still isn't happening, what's preventing the conversations required to make change real?
Why Conditions Determine Outcomes
Behavior follows structure, not training.
Every change program eventually hits the same limit: the work it does well (communications, training, stakeholder alignment) changes what people know and what they can do. It does not change what the environment rewards, measures, or makes easy. The question is whether anyone recognizes it as a structural constraint, or just asks for more training.
Consider what happens when a new system goes live, but the approval chain hasn't changed. The system is designed for a three-step process. The organization still uses a seven-step approval. People don't resist the new system. They comply with the existing authority structure. People don't resist change. They comply with the system they're held accountable to. The behavior follows the structure, not the training.
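To see the mismatch in miniature, here is a sketch of the process the system was designed for next to the chain the organization still enforces. The step names are hypothetical.

```python
# Illustrative only: the process the system was designed for versus
# the approval chain the organization still enforces. Step names are
# hypothetical.

designed = ["submit", "manager_approval", "post"]

enforced = ["submit", "manager_approval", "finance_review",
            "compliance_check", "director_signoff", "vp_signoff", "post"]

# The extra steps are where the workaround forms: people route around
# the new system to satisfy the chain they are actually held
# accountable to.
extra = [step for step in enforced if step not in designed]
print(extra)
# ['finance_review', 'compliance_check', 'director_signoff', 'vp_signoff']
```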
Compliance theater is the invisible failure mode.
Projects that don't collapse outright often become something worse. The dashboard shows 85% adoption. The executive sponsor presents it to the board. Beneath the number, the frontline has found three workarounds that let them use the new system to execute the old process. The reports look different. The behavior is identical.
Usage metrics measure login frequency. They don't measure whether workflows have shifted, whether decision rights have moved, or whether the accountability model reflects the new operating reality. AI accelerates this failure mode. When AI moves decisions out of the hierarchy and into the workflow, the gap between what the dashboard shows and what's actually happening widens faster than with any previous technology.
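A minimal sketch of the difference, assuming event logs exist for both the new and legacy systems. The field names, event types, and sample data are all hypothetical.

```python
# Illustrative contrast between activity-level and behavior-level
# adoption metrics. Field names, event types, and the sample log
# are all hypothetical.

events = [
    {"user": "ana",  "case": "PO-1", "type": "login"},
    {"user": "ana",  "case": "PO-1", "type": "completed_new"},
    {"user": "ben",  "case": "PO-2", "type": "login"},
    {"user": "ben",  "case": "PO-2", "type": "legacy_access"},   # workaround
    {"user": "cruz", "case": "PO-3", "type": "login"},
    {"user": "cruz", "case": "PO-3", "type": "legacy_access"},   # workaround
]

# What most dashboards report: everyone logged in, so 100% "adoption".
users = {e["user"] for e in events}
logged_in = {e["user"] for e in events if e["type"] == "login"}
print(f"login adoption: {len(logged_in) / len(users):.0%}")

# What actually matters: cases completed end-to-end in the new system
# with no touch of the legacy path.
by_case = {}
for e in events:
    by_case.setdefault(e["case"], set()).add(e["type"])
clean = [s for s in by_case.values()
         if "completed_new" in s and "legacy_access" not in s]
print(f"behavior adoption: {len(clean) / len(by_case):.0%}")
```

On this toy log, the first number reads 100% while the second reads 33%. The gap between them is the compliance theater the dashboard hides.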
Demonstrated value moves organizations. Mandates don't.
OpCo's 2026 operator survey directly addresses this. One practitioner describes building a performance management tool over a weekend, showing it to skeptical colleagues, and using the tangible result to shift their perception. Not a town hall. Not a communications cascade. A specific person showing a specific team what good looks like, and the environment shifts around that demonstration.
The implication is precise: you cannot communicate your way to sustained behavior change. Someone has to show what the new way of working produces before the organization will reward it. That's a leader conversation, not an artifact.
PE-backed environments concentrate every risk factor at once, and the gap between investment thesis and operating reality has a clock on it.
We led organizational change management for an ERP implementation inside a PE-backed spinout: new ownership, new leadership team, new ERP, and a pending acquisition, all in flight simultaneously. The communications went out. The training ran. The system went live. But the underlying architecture hadn't moved. The workarounds were forming before launch. The technology decision had been made at the deal thesis level. The operating conditions that determined whether it held were never addressed at any level.
That's the pattern. The operating partner models the synergies. The implementation team builds the system. Nobody redesigns the environment that will decide whether the workforce uses it or works around it.
Bain's 2025 Global Private Equity Report quantifies the cost. Carve-out revenue and margin improvements that once reached 31% and 29% before 2012 have fallen to 17% and 2%. The common denominator among top-quartile carve-outs is what Bain calls an unbreakable link between the value-creation thesis and the operating setup of the new company.[13] When that link is missing, the thesis stays on the page and the value leaks through the floor.
The 83 Conditions: What the Diagnostic Framework Measures
This is what it looks like to operationalize the argument.
Each of the 83 operating conditions maps to something the organization was built to reward before the change was introduced.
The 14 domains are organized into four clusters:
Program and technology design. Is the foundation sound before anyone touches the operating model?
Operating model and authority structure. Have reporting lines, scorecards, decision rights, and governance been redesigned for the new way of working?
People readiness. Are middle managers, frontline teams, and the workforce prepared for the weight they’ll carry?
Sustainment and communications. Will the feedback loops, accountability structures, and message architecture hold the change at ninety days and beyond?
Every condition is specific and measurable. Every red or amber result is a predicted workaround. The assessment doesn't ask whether plans were completed or training was delivered. It asks whether the environment has been redesigned to support the new behavior.
That's the difference between a change program and environment design. One produces plans, training, and communications. The other redesigns the conditions.
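For readers who want the shape of the thing, here is a minimal sketch of what a red/amber/green condition profile could look like in code. The domain names, condition names, and workarounds are illustrative placeholders, not Ignite's actual 83-condition inventory.

```python
# Minimal sketch of a red/amber/green condition profile. The domains,
# conditions, and workarounds below are illustrative placeholders,
# not Ignite's actual inventory.

from dataclasses import dataclass
from enum import Enum

class Rating(Enum):
    GREEN = "green"
    AMBER = "amber"
    RED = "red"

@dataclass
class Condition:
    domain: str                # one of the 14 domains
    name: str                  # specific and measurable
    rating: Rating
    predicted_workaround: str  # what reverts if this stays red or amber

profile = [
    Condition("Operating model", "Legacy system access removed at go-live",
              Rating.RED, "Teams keep transacting in the old system"),
    Condition("Authority structure", "Manager scorecards measure the new metric",
              Rating.AMBER, "Managers keep coaching to the old target"),
    Condition("Sustainment", "Reinforcement owner funded past day 90",
              Rating.GREEN, ""),
]

# The output is not a completion report. It is a list of predicted
# workarounds, one per non-green condition.
for c in profile:
    if c.rating is not Rating.GREEN:
        print(f"[{c.rating.value.upper()}] {c.domain}: {c.predicted_workaround}")
```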
From Diagnosis to Permanence: How the Engagement Runs
The 83 conditions are the operational entry points into the environment argument.
An engagement moves through a clear sequence. Orient to the organization's specific context. Map where the change touches the operating model and who holds the decision rights it requires. Diagnose resistance by type (environmental, capability-based, or political) so the intervention targets the root cause. Surface the dependency chain from open decisions to go-live readiness. Every unresolved decision is a potential workaround in formation.
The work then shifts to sponsor activation and readiness measurement. The practitioner provides a specific list of irreversible commitments the sponsor must make before go-live. Not budget approvals or town hall appearances, but the decisions that close optionality. Closing the legacy system instead of running both in parallel. Tying leader accountability to sustained behavior change, not go-live completion. Removing the workaround before it becomes the default.
Readiness gets measured against operating conditions, not activity. “Manager scorecards updated” instead of “training complete.” “Legacy system access removed” instead of “communications sent.”
Before go-live, the full 83-condition inventory produces a readiness profile showing, domain by domain, where the environment will resist the new behavior. Every red or amber condition is a predicted workaround.
At engagement close, the only question that matters is whether the conditions are holding. Not whether artifacts were produced.
The Test
Return ninety days after launch.
Not to check on progress. Not to write the close-out report. To answer one question: is the behavior holding because the structure supports it, or did the conditions pull it back?
Most change programs measure go-live completion. The dashboard turns green, the consulting firm writes the close-out report, and six months later, the CFO asks why the technology isn't delivering the returns the business case promised. The answer is almost always the same: the behavior changed during the program, then the conditions pulled it back.
Permanence isn't enforced. It's designed. The environment makes the new behavior easier than the old one. The workaround path is closed. The scorecard measures the right thing. The approval chain reflects the new process. Nobody has to remind anyone.
If the behavior didn't hold, the program didn't fail at adoption. It failed at design.
That's the only metric that matters. What would yours show?
Find Out in Five Minutes
The Sponsor Assessment covers 14 operating condition domains and produces a red/amber/green readiness profile for your transformation. If you run an engineering-led organization, the Operator Stress Test covers the same domains in operator language. Both are free.
See How This Applies to You
The IGNITE OCM Plugin is a free download at ignitena.com/for-practitioners. All three diagnostics are free at ignitena.com/diagnostics.
Sources
- [1] Perry Keenan, Alan Jackson, and Hal Sirkin, “The Hard Side of Change Management,” Harvard Business Review, October 2005; republished by Boston Consulting Group. Study of 225 large-scale transformation programs. bcg.com
- [2] First Round Review, “Everything in Business is About Fighting Entropy: Here’s How Rippling Does It,” December 15, 2024. Quoting Matt MacInnis, COO of Rippling. review.firstround.com
- [3] Ben Horowitz, “Taking the Mystery Out of Scaling a Company,” Andreessen Horowitz, August 1, 2010. a16z.com
- [4] Deloitte, “Organizations Stand at the Untapped Edge of AI’s Potential,” January 20, 2026. Drawing on the 2025 Deloitte CXO Survey. deloitte.com
- [5] Prosci, Best Practices in Change Management, 12th Edition. 25 years of research from 10,800+ professionals globally. Findings on sponsor impact (79% vs. 27%), mid-level manager resistance (43%), and reinforcement resourcing (81% plan, 55% resource). empower.prosci.com
- [6] Judge Group / Gartner, “The Keys to Successful Organizational Change: Understanding the Cost of OCM.” Gartner’s 15% minimum recommendation; 77% of companies managing projects over $1.5M spend less than 10%. judge.com
- [7] Panorama Consulting Group, cited in IT Consulting Authority, “ERP Consulting Services.” Consulting labor as 40–60% of total ERP program cost. itconsultingauthority.com
- [8] McKinsey & Company, “Common Pitfalls in Transformations: A Conversation with Jon Garcia,” 2022. Analysis of transformation failure patterns, including misaligned incentives and budgets post-transformation. mckinsey.com
- [9] Boston Consulting Group, “To Keep Transformations on Track, Incentives Are Crucial,” 2025. Three out of four transformations fall short; incentive alignment distinguishes success from stagnation. bcg.com
- [10] McKinsey & Company, “The State of AI: Global Survey 2025.” Survey of nearly 2,000 organizations. mckinsey.com
- [11] Operator Collective / OpCo Intelligence, “State of AI Transformation 2026,” LinkedIn, March 3, 2026. Survey of 123 senior operators. linkedin.com
- [12] Center for Creative Leadership, March 2026.
- [13] Bain & Company, “Global Private Equity Report 2025,” March 2025. Carve-out revenue/margin improvements fell from 31%/29% (pre-2012) to 17%/2%. Top-quartile performance linked to alignment between value-creation thesis and operating setup. bain.com