The CTMO Quarterly Volume 1 · Issue 1 · Q1 2026

The Autonomy Trap

Why most companies are deploying AI at the wrong layer — and what the ones getting it right are doing instead.

John Kirker, CTMO · March 2026 · ctmo.com
Section 01 Editor's Letter

The Real Question Isn't "Are You Using AI?"

John Kirker
Chief Technology Marketing Officer · ctmo.com

Every CEO I've spoken with in the last 18 months is using AI. Most of them are using it for the wrong thing. Not the wrong tools. The wrong layer.

There's a question underneath the AI adoption conversation that almost nobody is asking directly: where, exactly, in your company's system is the AI operating? Is it automating individual tasks — writing copy, summarizing calls, generating images? Or is it operating at the layer where your demand generation system talks to your delivery system, where customer signals reach your product roadmap, where what you learn from marketing informs what engineering builds?

The companies pulling ahead aren't buying better AI tools. They're answering that second question before they buy anything.

"AI adoption is not an AI strategy. It is the beginning of the question, not the answer to it."
John Kirker · The CTMO Quarterly · Q1 2026

This issue is about that distinction. I'm calling it the Autonomy Trap: the pattern of companies deploying AI at the task level, seeing limited compounding returns, and concluding that AI isn't as transformative as advertised. It is transformative. But only at the right layer.

The panel in Section 03 disagrees with me on at least two important points. One of them is partly right. I'll let you decide which one.

The trailing question in Section 07 is the one I've been sitting with for three months. I don't have a clean answer. I suspect you don't either.

Section 02 Featured Analysis

The Autonomy Trap: What Layer Are You Actually Deploying AI At?

The Belief Worth Challenging

The working belief inside most $10M–$100M companies right now sounds like this: "We have an AI strategy. We've adopted the tools." Marketing has ChatGPT and three SaaS AI platforms. Engineering has GitHub Copilot and a vector database. The head of sales is using AI for call summaries. The CEO got a demo of an AI board report generator last quarter and found it impressive. Leadership agreed: the company is "embracing AI."

None of this is wrong. All of it is insufficient.

Tool adoption is not strategy. It is the precondition for strategy — the part that happens before the decisions that actually determine outcomes.

The Autonomy Spectrum

There is a useful way to think about where AI operates in any given company. Call it the autonomy spectrum. Five positions, from fully human-controlled to fully autonomous. The important insight is not where you aspire to be. It's where your AI is actually operating right now, and whether that layer is a leverage point or a convenience.

The Autonomy Spectrum
  1. Human Required: Human performs every step. AI is reference only.
  2. Human Led: Human decides. AI prepares and supports.
  3. Peer: Human and AI produce independent outputs. Human resolves conflicts.
  4. Agent Led: AI executes. Human reviews key outputs and holds override authority.
  5. Fully Autonomous: AI handles the full workflow. Humans monitor and receive exceptions only.

Most companies' task-level AI deployments sit at positions 1 and 2: assisting individual humans with individual tasks. Integration-layer AI operates at positions 3 through 5, connecting systems rather than assisting individuals. The compounding dynamics are completely different.
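For readers who think in code, the spectrum reduces to a small taxonomy. A minimal sketch in Python, assuming the position-to-layer mapping described above; the enum and function names are illustrative, not a standard:

```python
from enum import IntEnum

class Autonomy(IntEnum):
    """The five positions of the autonomy spectrum."""
    HUMAN_REQUIRED   = 1  # Human performs every step; AI is reference only
    HUMAN_LED        = 2  # Human decides; AI prepares and supports
    PEER             = 3  # Independent human and AI outputs; human resolves conflicts
    AGENT_LED        = 4  # AI executes; human reviews and holds override authority
    FULLY_AUTONOMOUS = 5  # AI runs the workflow; humans receive exceptions only

def layer(position: Autonomy) -> str:
    """Positions 1-2 are task-level deployments; 3-5 operate at the integration layer."""
    return "task level" if position <= Autonomy.HUMAN_LED else "integration layer"

# A copilot drafting one rep's emails sits at position 2:
assert layer(Autonomy.HUMAN_LED) == "task level"
# An agent routing customer signals between systems sits at position 4:
assert layer(Autonomy.AGENT_LED) == "integration layer"
```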

A human using AI to write better email copy is more productive. That productivity stays with the individual. An AI operating at the integration layer — routing customer signals to the right part of the growth system, connecting what marketing learns to what engineering builds — creates structural advantage that accumulates over time regardless of which specific human is at the keyboard.
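Mechanically, an integration-layer deployment means the signal crosses a functional boundary through a system rather than a person. A minimal sketch, assuming a simple publish-subscribe shape; every name here (CustomerSignal, subscribe, route_signal, the topic strings) is hypothetical:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class CustomerSignal:
    source: str   # e.g. "sales_call", "campaign", "support_ticket"
    topic: str    # e.g. "feature_request", "pricing_objection"
    payload: dict = field(default_factory=dict)

# Destination systems register handlers for the topics they care about.
_routes: dict[str, list[Callable[[CustomerSignal], None]]] = {}

def subscribe(topic: str, handler: Callable[[CustomerSignal], None]) -> None:
    _routes.setdefault(topic, []).append(handler)

def route_signal(signal: CustomerSignal) -> None:
    # The handoff is mediated by a system, not a meeting, so it keeps
    # working regardless of which human is at the keyboard.
    for handler in _routes.get(signal.topic, []):
        handler(signal)

# What marketing learns on a sales call reaches engineering's backlog directly:
subscribe("feature_request", lambda s: print(f"backlog <- {s.payload}"))
route_signal(CustomerSignal("sales_call", "feature_request",
                            {"feature": "SSO", "deal_size": 80_000}))
```

The design point is the middle of the sketch: once the route exists, the handoff survives staff turnover and calendar conflicts, which is what lets it compound.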

"The companies watching their competitors pull ahead in AI are not usually losing on tools. They're losing because they never asked what the tools were supposed to connect."
From the Featured Analysis · Q1 2026

The Wrong Layer Problem, With Evidence

Three patterns observed across growth-stage companies in the last 18 months. The first two are labeled composites. The third is a public pattern with independently observable markers.

The SaaS content machine (composite: "Meridian Analytics"). A 40-person SaaS company invested in AI content production. Monthly output went from 8 pieces to 35. Traffic increased 40%. Qualified pipeline did not move. The AI was optimizing the task — content production — without any connection to the layer that converted content readers into qualified buyers. The attribution system that would have revealed the disconnect had never been built. So the AI produced more of something that wasn't working, faster.

The AI-enabled sales team (composite: "Vectara Partners"). A services firm deployed AI call summaries and automated follow-up generation. Reps saved 4–6 hours per week. Deal velocity did not improve. The AI was automating the administrative wrapper around the sale, not the decision architecture that determined which deals to pursue. The integration between what reps learned in calls and what marketing was targeting never existed. The AI created no bridge because no one had identified where the bridge needed to go.

The engineering AI silo (public pattern, observable across multiple companies). Engineering teams at growth-stage companies are investing heavily in AI infrastructure: model evaluation pipelines, internal tooling, vector databases. Marketing teams at the same companies are investing in entirely separate AI toolsets. In many cases, both teams are building capabilities that would compound together if connected. Neither team has the brief, the authority, or the architectural context to connect them. Two AI strategies. One company. No integration layer between them.

What Right Looks Like

In 1996, the same structural problem existed in a different form. Direct mail campaigns generated signal. Online behavior generated different signal. The companies that figured out how to connect those two data streams before anyone had language for "attribution" built durable competitive advantages that persisted for years after their competitors caught up on the tactics. The integration layer was the moat — not the campaigns.

The architecture question for 2026 is identical in structure: where are the two streams of signal in your company that should be connected but aren't? The answer is almost always some version of: what your customers are telling your marketing function, and what your engineering function is building in response to a different interpretation of what customers want.

The integration question for 2026: What two systems in your company are generating signals that should compound together but currently can't reach each other? That gap is the real AI opportunity — not the tools on either side of it.
Section 03 The Panel Debate

Is the Integration Layer the Real AI Battleground?

Six voices. One argument. Genuine disagreement on the record.

Dr. Eleanor Vance · Systems Architect
"The question isn't which AI tools to buy. It's what you're integrating them into."

Marcus Okafor · Revenue Operator
"Architecture that can't close a deal by Q3 is theory, not strategy."

Nadia Chen · AI Strategist
"Every company has an AI strategy. About one in ten has an integration architecture."

William Stover · Devil's Advocate
"Tell me the three observable symptoms before I invest $300K in a diagnosis."

Linda Park · Historian
"The gap between what technology can do and what an org can absorb is the oldest problem in business."

John Kirker · CTMO & Practitioner
"The companies getting AI right noticed the integration problem before they had a name for it."
Nadia Chen · Opening Position
The integration layer argument is correct. It is also, for most $10M–$100M companies, practically useless without a precondition the argument assumes but never names. The precondition is: you need someone in the room who can read both maps. Not a CTO who attended a marketing conference. Not a CMO who uses Copilot. Someone who has built both sides of the integration and understands why the handoff point is structural rather than behavioral. Most companies at this stage don't have that person. So the right framework, administered by the wrong diagnostician, produces a prescription that doesn't address the actual disease.
Marcus Okafor · Counter
I'd go further. The integration layer is a red herring for most companies at this stage — not because the concept is wrong, but because the actual bottleneck isn't architectural. It's linguistic. Marketing and engineering don't share a language for what they're each trying to accomplish. You don't need an architect first. You need a translator who has budget authority and sits in the revenue meeting. Once that person exists, the architectural choices become obvious. Without translation, the most elegant architecture in the world sits unused because the two teams are still talking past each other while the system waits for someone to explain why it matters.
William Stover · Challenge
Both of you are describing a solution. Here's what I actually need: how does a CEO diagnose whether this is their problem before committing budget, making a hire, or restructuring a reporting relationship? Give me the three observable symptoms. Not "your AI isn't compounding" — that's a conclusion. The behaviors. The things I can walk into my company on Monday morning and check inside of two hours.
Dr. Eleanor Vance · Response: The Three Symptoms
Three. First: your engineering team and your marketing team each have AI tools the other team doesn't know about — and wouldn't know how to evaluate if they did. Second: your attribution system was built after your campaigns were already running — retrofitted, not architectural. Pull the data source for your most-cited marketing metric. If you trace it back, you find a manual step somewhere. Third: when you ask your CTO what the marketing team's biggest data need is, they give a different answer than your CMO does. If all three are true, you have an integration layer problem. If one is true, you may have a talent problem. If none are true, you're probably not in the room that matters.
Linda Park · Historical Frame
This argument has an older version. When commercial printing scaled in the 15th century, the bottleneck wasn't the press. It was the absence of anyone who understood both the production process and the distribution economics simultaneously. Publishers who hired for one capability and then tried to coordinate with people who had the other ran into exactly what Nadia described: the right framework administered by the wrong diagnostician. The solution wasn't better coordinators. It was a new category of practitioner — someone for whom both sides were native language, not translation. The CTMO function is the current version of that practitioner category. The market always arrives at it. The question is whether your company arrives before your competitors do.
John Kirker · Close
Marcus is partially right, and I want to say that clearly rather than paper over it. "Translator with budget authority" is a real description of what this function does day to day. Where I'd push back is on translator as a sufficient frame. A translator renders one language into another. What this function actually does is build the architecture that eventually makes translation unnecessary. The goal isn't to sit in every meeting and bridge the gap. It's to eliminate the gap structurally so the bridge doesn't have to be rebuilt every quarter. Marketing and engineering should eventually share enough language and enough infrastructure that the CTMO function compresses itself. That's how you know it worked.
Section 04 The Framework

The Two-Layer Test: Diagnosing Where Your AI Is Actually Working

Apply this in the next 24 hours. Two questions. Answer both honestly before drawing any conclusions about what to build, what to buy, or who to hire.

01 · The Layer Question
At what layer is your primary AI deployment actually operating?
  • Drafting content or copy (task level)
  • Summarizing meetings or documents (task level)
  • Generating code suggestions (task level)
  • Routing customer signals between systems (integration layer)
  • Connecting what marketing learns to what engineering builds (integration layer)
  • Automating decisions that cross functional boundaries (integration layer)
02 · The Handoff Question
What handoff point is your AI crossing — and is it architectural or human?
  • Identify the handoff: where does signal from your demand system need to reach your delivery system?
  • Is that handoff currently mediated by a system or by a meeting?
  • If the answer is "a meeting," your AI is not compounding
  • If the answer is "we haven't mapped this," you've found your build target
The diagnostic result: If your primary AI investments fall in the first three items under Question 01, and your answer to Question 02 involves a meeting or an unmapped handoff, you are in the Autonomy Trap. The fix isn't new tools. It is an architectural decision about which handoff point to build toward, and who in your company has the authority and the fluency to build it.
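If you want the test as a literal artifact you can run, here is one way to encode it. A sketch only: the field names, the answer categories, and the wording of the results are assumptions layered on the two questions above:

```python
from dataclasses import dataclass

@dataclass
class TwoLayerAnswers:
    # Question 01: do your primary AI investments fall in the first three
    # (task-level) items rather than the last three (integration-layer) items?
    primary_is_task_level: bool
    # Question 02: what mediates the demand-to-delivery handoff?
    handoff: str  # "system", "meeting", or "unmapped"

def diagnose(a: TwoLayerAnswers) -> str:
    if a.primary_is_task_level and a.handoff in ("meeting", "unmapped"):
        return "Autonomy Trap: choose a handoff point and build toward it"
    if a.handoff == "unmapped":
        return "Unmapped handoff: you've found your build target"
    return "AI is crossing an architectural handoff; it can compound"

print(diagnose(TwoLayerAnswers(True, "meeting")))
# -> Autonomy Trap: choose a handoff point and build toward it
```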
Section 05 The Provocation

The Case Against the CTMO Argument

William Stover · Steel-Manned Counterargument, Written in Good Faith

The CTMO framework is a real description of a real capability gap. I don't dispute the diagnosis. What I dispute is the implied prescription.

The economics don't work at this stage for most companies. A full-time executive who genuinely bridges technology architecture, marketing systems, and revenue strategy at the experience level being described costs $300K–$500K annually. For a $15M or $25M company, that is roughly 1–3% of revenue allocated to a function most of the board has never funded. The ROI math requires either a very fast payoff or a very patient cap table. Most companies have neither.

The org chart often can't support it. Giving a CTMO real authority over both the marketing budget and the engineering roadmap requires a CEO willing to make uncomfortable decisions about two of their most senior direct reports simultaneously. Most CEOs who recognize the gap continue managing it as a coordination problem — because the alternative requires a political restructuring they're not prepared to execute without evidence that it will work.

The fractional model often produces 70–80% of the outcome at 15% of the cost. A CEO who brings in a fractional strategist for critical integration sessions and personally sits in those sessions can achieve most of what the full-time function delivers, if that CEO is technically fluent enough to be the decision point between sessions. The question is not "do you need the CTMO capability." It's "what is the minimum viable form of that capability for your specific stage?"

"The CTMO function is real as a description of what good looks like. It is not always the right hire for companies under $50M — and confusing the function with the title is its own kind of trap."
William Stover · The Provocation · Q1 2026

The more useful question is not "do you have a CTMO?" It's "where is the integration decision being made in your company right now, and is the person making it qualified to make it?" Sometimes the answer is the CEO. Sometimes the answer is no one. That's the diagnosis worth running.

Section 06 The Evidence Locker

The Numbers Behind This Issue

Every claim in this issue has a labeled basis. Composites are identified as composites. Confidence levels are assigned to any figure that involves practitioner analysis or extrapolation from limited data.

1996
Primary — John Kirker
Pre-digital attribution system, built before UTM parameters or Google Analytics existed
Tracked offline direct mail campaigns to online conversion events using a proprietary tagging architecture. The system predated the industry category that formed around the problem by roughly a decade. Outcome: multi-year competitive attribution advantage for a direct response business at scale.
5/5 — Primary source, direct practitioner
1998
Primary — John Kirker
Recruiting industry matching platform, built 12 years before ZipRecruiter launched on the same fundamental architecture
Technology-marketing integration platform matching supply and demand using behavioral signals and automated distribution. ZipRecruiter went public in 2021 at a $4.8B valuation (public record). The concept John's team built preceded the market by over a decade. The architectural parallel is practitioner analysis, not a deposition.
4/5 — Primary for 1998 system; ZipRecruiter valuation is public record; architectural parallel is practitioner analysis
2002–2012
Primary — John Kirker
Terminix eDeals Platform: $500M+ in attributed revenue over a decade of operation
Integrated customer acquisition technology with marketing execution at enterprise scale across a ten-year engagement. Revenue attribution documented throughout. This wasn't a campaign. It was a compounding system, the kind a CMO couldn't build alone and a CTO wouldn't know to build at all. Terminix is a publicly known brand, a ServiceMaster subsidiary during this period.
4/5 — Primary source; revenue figure is practitioner-reported and has not been independently verified
2024–26
Observed Pattern
Dual-AI-strategy pattern: separate engineering and marketing AI investments with no integration architecture between them
Observable across companies in the $10M–$100M range through advisory conversations and publicly described organizational structures. Not a formal study. A pattern. Neither engineering AI nor marketing AI investments in isolation are producing the compounding returns that integrated deployment generates. This is the core observable symptom the Two-Layer Test is designed to diagnose.
3/5 — Advisory observation; not a formal study; high face validity with independently observable markers
Section 07 The Trailing Question

If the companies pulling ahead in AI are doing so because they answered the integration question first — what does it mean that most companies don't know which integration question is theirs to answer?

Take this to your next leadership meeting. Ask your CTO separately from your CMO. Count how many different answers you get. The count is your diagnosis.