The Decision Debt Curve
- Decision Debt
- The accumulated cost of slow decision processes in an environment where technology accelerates exponentially.
- OODA Loop
- A decision model (Observe, Orient, Decide, Act) designed to match the tempo of rapidly changing environments.
- Strategy Theater
- Processes that feel like progress (meetings, post-its) but change nothing about technical or operational reality.
By Mikkel Frimer-Rasmussen, Frimer-Rasmussen Consulting
Key Takeaways
The Core Problem: AI capabilities grow exponentially (new frontier models every 30 days), while organizational decision-making remains constant (6-month planning cycles). This mathematical mismatch guarantees chronically outdated assumptions.
The Evidence: Between November 17 and December 11, 2025, four major AI models launched in 25 days—delivering more capability advancement than most previous years achieved in total. If your decision process takes 6 months, you’re three generations behind before implementation.
The Solution: Replace 6-month strategies with 2-week experiment cycles. Use the OODA Loop (Observe-Orient-Decide-Act) to match decision tempo to technology velocity. Test one “wasn’t worth doing but now is” project next week.
The Question: Before your next strategy meeting, ask: “What task do I need solved next week where AI could be an attractive solution?” If you can’t answer—or your answer requires months of approval—you’ve identified the constraint. It’s not the technology. It’s the tempo.
Reading time: 12 minutes
The Math That Should Terrify You
Here’s the uncomfortable arithmetic:
- AI capability generation time: ~30 days
- Your decision-making process: 6 months
- Result: You’re three generations behind before you start.
Not three years. Three generations—measured in weeks, not planning cycles.
This isn’t hyperbole. Between November 17 and December 11, 2025, four frontier AI models launched: Grok 4.1, Gemini 3, Claude Opus 4.5, and GPT-5.2—delivering more capability advancement in 25 days than most previous years achieved in total.
If your organization spent those 25 days scheduling the third planning meeting about your AI strategy, you didn’t just lose time. You lost relevance.
It is now January 2026, and the limelight has already shifted to Claude Code, Google Antigravity, and the open-source OpenClaw.
In IT, “technical debt” is a familiar term. Slow decision processes accumulate its counterpart: “decision debt.”
The problem isn’t that you’re moving slowly. It’s that slow is the same as wrong when the ground beneath you shifts every month.
Two Curves Diverging: Exponential Growth vs. Constant Tempo
The core issue is structural, not cultural. AI capabilities are growing exponentially—each generation building on the last, compressing what took years into months, then weeks. Meanwhile, organizational decision-making tempo remains constant.
The Old Equation (2015):
- Major technology shifts: ~12 months
- Strategic planning cycles: 6-12 months
- Result: Parallel curves—decisions kept pace with change

The New Reality (2026):
- AI capabilities: Exponential growth (50+ significant model releases in 36 months, accelerating)
- Strategic planning cycles: Constant tempo (still 6-12 months)
- Result: Diverging curves—the gap widens every cycle
Here’s what exponential vs. constant looks like in practice:
[Figure: exponential AI capability growth vs. constant decision tempo]
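The divergence can also be sketched numerically. The 30% monthly growth rate below is an illustrative assumption chosen to show the shape of the curve, not a measured value:

```python
# Sketch: exponential capability growth vs. a constant planning cycle.
# The 30% monthly growth rate is an illustrative assumption.
MONTHLY_GROWTH = 0.30
PLANNING_CYCLE_MONTHS = 6

capability = 1.0  # Month-0 baseline your ROI calculation was built on
for _ in range(PLANNING_CYCLE_MONTHS):
    capability *= 1 + MONTHLY_GROWTH

print(f"Capability at decision time: {capability:.1f}x the Month-0 baseline")
# prints: Capability at decision time: 4.8x the Month-0 baseline
```

Under this assumed growth rate, a plan finalized in Month 6 rests on assumptions roughly 5x removed from current reality—while the planning cycle itself hasn’t moved at all.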
Your ROI calculation from Month 0 is now based on assumptions that are 5x outdated. Not 5% off—five times removed from current reality.
This isn’t a temporary misalignment. It’s mathematical inevitability: Exponential beats constant, every time. The longer you operate at constant decision tempo, the further behind you fall.
You’re not running slower than before. The ground beneath you is accelerating exponentially—and your stride length hasn’t changed.
When Claude Sonnet 4.5 launched with 1,000,000-token context windows and Opus 4.5 pricing dropped 67% from Opus 4.1, every ROI calculation from May 2025 became fiction. Not “slightly off”—mathematically invalid.
Case Study: When Experienced Leaders Become Structural Liabilities
I work with two Danish companies—both profitable, both well-managed, both failing at AI adoption for the same reason: They don’t realize technology now sets the tempo.
Company A: Enterprise IT Services (100+ employees)
A stable corporation. Large, established IT systems from traditional vendors. Solid customer base built over decades. Leadership is curious about AI—but it’s not driving urgency.
Month 1-2: Executive team discusses whether to invest in AI tooling for proposal generation and client analysis. “Let’s explore the options thoughtfully.”
Month 3-4: They commission internal assessment of “which AI use cases align with our strategic priorities.”
Month 5: Committee recommends three pilot projects. Budget discussions begin. “We need to ensure proper governance.”
Month 6: Approval for one pilot—content summarization for RFP responses.
Reality check: In those six months:
- Claude’s context window expanded from 200k to 400k tokens
- GPT-5.2 launched with 93.2% accuracy on the GPQA Diamond benchmark
- API pricing dropped 67%
- Their original technical architecture assumptions became obsolete
Their pilot is launching with technology two generations old, solving a problem that’s now trivial, at a cost structure that no longer exists.
The “Microsoft Word” problem: As one AI practitioner recently noted: “In 2025 I was proud of some of my ‘new’ AI workflows. Now I look at it like finding a resume where I listed ‘proficient in Microsoft Word.’”
That’s the depreciation curve in action. What feels cutting-edge today becomes table stakes in weeks—and embarrassing in months.
Company B: Manufacturing (50+ employees, 100+ years old)
A century-old company. Craftsmanship matters. Inventory management is core to operations. Leadership values getting things right over getting things fast.
The Pattern: Six months debating whether AI can help with supply chain optimization and customer service documentation. “Let’s make sure we choose the right approach.”
The Miss: While they debated, the World Economic Forum projected that by 2030, 22% of today’s jobs will undergo substantial transformation, with 170 million new positions emerging—meaning their competitors are already retraining staff for AI-augmented workflows.
The Core Problem: Both leadership teams are excellent at running their businesses. They’ve built enduring companies through careful decision-making and operational excellence. But they’re applying constant-tempo decision frameworks to an exponentially accelerating capability landscape.
It’s not incompetence. It’s mathematical mismatch—and it’s lethal.
When one curve grows exponentially and the other stays flat, the gap doesn’t just widen. It explodes.
Here’s the uncomfortable parallel: If your six-month AI strategy from January is finalized in June, you’re essentially putting “proficient in GPT-4o” on your corporate resume—while the market has moved on to GPT-5.2, Claude Opus 4.5, and Gemini 3.
It’s not that you’re wrong. It’s that you’re historically accurate but currently irrelevant.
The OODA Loop: Why Speed Trumps Precision
DARPA’s doctrine is simple: “The only way to win is to make good decisions faster than your opponents.”
John Boyd’s OODA Loop (Observe, Orient, Decide, Act) was designed for fighter pilots, but it’s now the operating system for AI-era business:
- Observe: What capabilities exist today?
- Orient: Which solve real problems now?
- Decide: Go/no-go in days, not months
- Act: Deploy, test, learn—then loop
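As a minimal sketch, the loop can be expressed as a recurring cycle in code; the four callables are placeholders for your own observation, analysis, decision, and deployment steps, and the sample capabilities are illustrative:

```python
def ooda_cycle(observe, orient, decide, act):
    """One pass through Boyd's OODA loop.
    Each argument is a callable supplied by the caller (placeholders here)."""
    capabilities = observe()            # Observe: what exists today?
    candidates = orient(capabilities)   # Orient: which solve real problems now?
    go = decide(candidates)             # Decide: go/no-go in days, not months
    if go:
        act(candidates)                 # Act: deploy, test, learn...
    return go                           # ...then loop again

# Illustrative pass with stub steps:
went = ooda_cycle(
    observe=lambda: ["1M-token context", "agentic tool use"],
    orient=lambda caps: [c for c in caps if "agentic" in c],
    decide=lambda cands: len(cands) > 0,
    act=lambda cands: print("Deploying experiment around:", cands),
)
```

The point of the structure is the return to the top: each pass feeds fresh observations into the next, so assumptions never age more than one cycle.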
Research from Decision Sciences demonstrates that “short-loop experimentation is a fundamental way to gain new information and learn about failures... rapid prototyping and testing seem to increase the potential for getting things right, thus enhancing the speed of new product innovation.”
The companies winning in 2026 aren’t running better six-month planning cycles. They’re completing their sixth OODA loop while competitors schedule their first strategy offsite.
Here’s the uncomfortable truth: In an exponentially accelerating environment, constant decision tempo doesn’t just slow you down. It guarantees exponentially widening gaps between your assumptions and reality.
The Four Categories: Strategic Triage for the Acceleration Era
AI researcher and professor Ethan Mollick’s framework maps perfectly to this tempo problem:
1. “Stuff I should do quickly before it becomes obsolete”
Decision timeline required: 2-4 weeks
Why: These are wrapper solutions—they’ll be platform features within 12 months. The ROI window is narrow. Act fast or skip entirely.
2. “Stuff not worth doing anymore”
Decision timeline required: 1 week
Why: If you’re still building complex rule-based systems for tasks GPT-5.2 handles in a prompt, you’re solving yesterday’s problems with yesterday’s tools. Kill these projects immediately.
3. “Just do it with agents”
Decision timeline required: 1-2 weeks for proof-of-concept
Why: Agentic AI (systems that reason across steps, use tools, self-correct) is the new baseline. Stop asking “should we?” Start asking “which workflow first?”
4. “Stuff that wasn’t worth doing but now is”
Decision timeline required: 2 weeks for validation
Why: This is your competitive edge. GPT-5.2 solving 40.3% of expert-level FrontierMath problems means work requiring PhD-level specialists is now accessible. Projects shelved due to poor ROI are suddenly viable.
Notice the pattern? None of these categories accommodate six-month decision cycles. The technology won’t wait for your planning cadence.
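That mismatch can be checked in a few lines. The category labels below are abbreviated from Mollick’s framework, and the timelines are the upper bounds listed above:

```python
# Required decision timelines (upper bounds, in weeks) from the four categories.
CATEGORY_TIMELINES_WEEKS = {
    "do quickly before obsolete": 4,
    "not worth doing anymore": 1,
    "just do it with agents": 2,
    "wasn't worth doing but now is": 2,
}
SIX_MONTH_CYCLE_WEEKS = 26  # a 6-month planning cycle, in weeks

longest = max(CATEGORY_TIMELINES_WEEKS.values())
print(f"Longest timeline any category tolerates: {longest} weeks")
print(f"A six-month cycle overshoots it by {SIX_MONTH_CYCLE_WEEKS - longest} weeks")
```

Even the most patient category tolerates a four-week decision; a six-month cycle overshoots it by more than five months.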
The Experiment-First Framework: Matching Decision Speed to Technology Velocity
The solution isn’t to eliminate planning. It’s to recognize that planning without current data is fiction.
Intuit Design Hub research demonstrates that rapid experimentation “enables businesses to gather data and insights rapidly, leading to faster decision-making processes. This agility gives organizations a competitive edge in fast-changing markets.”
Week 1: Hypothesis → Prototype
Monday-Tuesday (2 days):
- Identify one Category 4 project (previously infeasible, now viable)
- Define success criteria (specific, measurable, binary)
- Assign one decider (not a committee)
Wednesday-Friday (3 days):
- Build minimal prototype using current tools
- API-first architecture (easy to swap the AI “brain” as models improve)
- Focus on workflow validation, not UI polish
Key principle: Teams using rapid experimentation “identified they had chosen the wrong location for a feature” in 15-minute sketch tests—saving months of misdirected development.
Week 2: Test → Decide
Monday-Wednesday (3 days):
- Test with 5 real users (internal or external)
- Record sessions, document friction and delight
- Measure against success criteria
Thursday (1 day):
- Go/no-go decision by assigned decider
- Go: Allocate production resources
- No-go: Archive learnings, new hypothesis Monday
Friday (1 day):
- Document findings
- Share broadly
- Queue next experiment
This two-week rhythm matches the pace of AI capability advancement. You’re making decisions based on current technology, not six-month-old assumptions.
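The cadence above can be captured as a simple checklist structure; the task wording mirrors the framework and is illustrative, not prescriptive:

```python
# The two-week experiment cycle as data: one entry per time block.
CYCLE = [
    ("Week 1", "Mon-Tue", "Pick one Category 4 project; binary success criteria; one decider"),
    ("Week 1", "Wed-Fri", "Build minimal prototype, API-first, workflow over polish"),
    ("Week 2", "Mon-Wed", "Test with 5 real users; record sessions; measure vs. criteria"),
    ("Week 2", "Thu", "Go/no-go decision by the named decider"),
    ("Week 2", "Fri", "Document findings, share broadly, queue next experiment"),
]

for week, days, task in CYCLE:
    print(f"{week} {days}: {task}")
```

Five time blocks, ten working days, one decision at the end—then the cycle restarts with current capabilities.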
What Fast Companies Do Differently
Decision Sciences research on high-tech innovation found that “companies that innovate and experiment can respond to challenges faster and exploit new products and market opportunities better than non-innovative companies.”
The difference isn’t access to better AI. It’s organizational learning velocity.
Fast Companies:
- Champions embedded in experiments (not committees reviewing PowerPoint)
- Weekly sprint reviews (not quarterly roadmap updates)
- “Time to validation” as primary KPI (not “strategic alignment”)
- One decider per experiment (not consensus-building)
- API-first architecture (swap models without rebuilding)
Frozen Companies:
- Six-month consensus processes
- Monolithic custom builds
- Annual/bi-annual planning cycles
- ROI precision requirements that guarantee outdated assumptions
- Technology decisions by committee vote
LaunchNotes’ research emphasizes: “Rapid Experimentation encourages a culture of innovation and learning within an organization. It fosters a mindset where failure is seen as a learning opportunity rather than a setback.”
The brutal reality: If your organization requires six months to decide, you’ve already decided to lose. The decision just hasn’t been formalized yet.
Measuring Your Decision Tempo: A Self-Assessment
Before implementing any framework, you need to know your baseline. Here’s how to measure whether you’re operating at constant tempo in an exponential environment:
Exercise (10 minutes):
- Identify your last 3 strategic technology decisions (not routine purchases—strategic choices that affected how work gets done)
- For each, measure:
  - Time from “we should explore this” to “decision finalized”
  - Number of meetings/touchpoints required
  - Number of people who had veto power
- Calculate your average decision cycle time
Interpretation:
- Under 4 weeks: You’re matching AI capability cadence. Maintain this.
- 4-8 weeks: Marginal zone. You’re falling behind but recoverable.
- Over 8 weeks: Danger zone. Your assumptions are outdated before implementation.
- Over 12 weeks: Critical. You’re operating 3+ generations behind technology reality.
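The interpretation bands translate directly into a small helper; the thresholds are taken from the list above, and the sample decision times are illustrative:

```python
def tempo_zone(avg_weeks: float) -> str:
    """Map an average decision-cycle time to the interpretation bands."""
    if avg_weeks < 4:
        return "matching AI capability cadence"
    if avg_weeks <= 8:
        return "marginal: falling behind but recoverable"
    if avg_weeks <= 12:
        return "danger: assumptions outdated before implementation"
    return "critical: 3+ generations behind"

# Illustrative sample: three recent decisions took 10, 14, and 9 weeks.
cycles = [10, 14, 9]
avg = sum(cycles) / len(cycles)
print(f"Average cycle: {avg:.0f} weeks -> {tempo_zone(avg)}")
```

An 11-week average lands squarely in the danger zone—slow enough that the technology moves more than two generations while the decision is still in flight.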
The uncomfortable truth: If your average is above 8 weeks, your process isn’t just slow—it’s structurally mismatched to the environment you’re operating in.
This isn’t about working harder. It’s about recognizing that your decision architecture was designed for a different era.
Your Next Two Weeks: A Practical Test
Monday (this week): Identify one “wasn’t worth doing but now is” project:
- Previously shelved due to cost/complexity
- Would create genuine business value if solved
- Could be prototyped in 5 days
Tuesday-Friday (this week): Build minimal prototype. Use off-the-shelf tools (Claude Code, Cursor, similar). Prioritize workflow over polish.
Monday-Wednesday (next week): Test with 5 users. Document everything: What worked? What broke? What delighted? What frustrated?
Thursday (next week): Decide: Scale, pivot, or kill. One person decides. No committee.
Friday (next week): If you’re still debating methodology, you’ve proven the problem: Your process is slower than the technology it’s trying to harness.
Common Objections: Why “Careful” Has Become Dangerous
“But we need thorough analysis before committing resources”
This argument worked when technology changed annually. In an exponential environment, “thorough analysis” based on six-month-old assumptions is less accurate than rapid experimentation with current data.
Research from Decision Sciences demonstrates that short-loop experimentation produces more reliable insights than extended planning cycles. The question isn’t “careful vs. reckless”—it’s “informed by current reality vs. informed by outdated assumptions.”
“Our industry is different—we can’t afford to move fast and break things”
Fair point. But consider the alternative: moving slowly and breaking relevance.
Manufacturing, financial services, healthcare—all have regulatory constraints. But those constraints govern what you can do, not how fast you decide. A two-week validation cycle can include compliance review. A six-month consensus process cannot include current technology assumptions.
The century-old manufacturing company (Company B) didn’t survive 100 years by ignoring risk. They survived by adapting their decision tempo to match environmental change. That tempo needs to shift again.
“We’ve tried rapid experiments before—they didn’t work”
Then you learned something valuable in two weeks instead of six months. That’s the point.
LaunchNotes research emphasizes: “Rapid Experimentation encourages a culture of innovation and learning within an organization. It fosters a mindset where failure is seen as a learning opportunity rather than a setback.”
The companies that succeed aren’t those that never fail. They’re those that fail faster than their competitors, learn from it, and iterate while others are still in their third planning meeting.
“We don’t have the technical capability to move this fast”
Then your first two-week experiment should be: “Can we build a minimal AI prototype using off-the-shelf tools?”
If the answer is no, you’ve identified your constraint in two weeks. If the answer is yes, you’ve proven you can move faster than you thought.
Either way, you’ve learned something concrete instead of debating theoretical capability for six months.
Conclusion: One Question Before Your Next Meeting
HEC Paris research notes that 86% of employers now consider AI transformative to their operations, demanding “more decisive, focused and agile CEOs.”
In an environment where 25 days delivered more AI capability advancement than most previous years achieved in total, competitive advantage doesn’t come from having the best six-month strategy.
It comes from having the fastest OODA loop.
Your competitors aren’t pausing to let you catch up. They’re running their sixth weekly experiment while you’re scheduling your second planning meeting.
The mathematics are unforgiving:
- Technology generation time: 30 days
- Your decision time: 180 days
- Gap: 5 generations behind
This isn’t a technology problem. It’s a tempo problem. And tempo problems don’t get solved with better planning—they get solved with faster cycles.
Before your next strategy meeting, ask yourself this:
“What task do I need solved next week where AI could be an attractive solution?”
Not next quarter. Not after the committee review. Next week.
If you can’t answer that question—or if your answer requires six months of approval to test—you’ve already identified the constraint. It’s not the technology. It’s not the budget. It’s not the talent.
It’s the gap between how fast you decide and how fast the ground is moving beneath you.
The curve is accelerating. You can’t slow it down. You can only choose whether to match its tempo—or watch it pull away.
What will you test next week?
Sources:
- Ethan Mollick, Ralph J. Roberts Distinguished Faculty Scholar, Associate Professor of Management, Co-Director, Generative AI Labs at Wharton, Rowan Fellow
- McKinsey & Company. Decision-Making Effectiveness Research (2025)
- World Economic Forum. Future of Jobs Report 2025
- Intuit Design Hub. Rapid Experimentation Framework (2024)
- Decision Sciences Journal. Speed in Short-Loop Experimental Learning (2021)
- Vertu Technology Report. AI Model Releases Nov/Dec 2025
- Times of AI. AI Model Releases 2025 Roundup
- LaunchNotes. Rapid Experimentation in Product Management
- HEC Paris. Five Critical Trends Reshaping Executive Decision-Making (2025)
- Boyd, John. OODA Loop Framework (Military Strategy)