AI Is the Most Unpredictable Cost in Your Martech Stack

Milena Traikovich has spent her career at the intersection of analytics and lead generation, helping businesses turn data into high-quality, nurtured leads. As a demand generation expert, she has a ground-level view of how new technologies are adopted and the operational challenges they create. Today, she joins us to discuss one of the most pressing issues in Martech: the unpredictable and often invisible costs of scaling artificial intelligence. We’ll explore why traditional budgeting fails in the age of AI, the growing disconnect between individual productivity gains and enterprise-wide financial returns, and the practical steps marketing leaders must take to build a foundation for profitable AI-driven growth.

With 80% of enterprises reportedly missing AI infrastructure forecasts by over 25%, what makes these costs so fundamentally different from past technology budgets? Please walk us through the first steps a marketing leader should take to gain visibility and control over this new type of spending.

The core difference is that AI cost isn’t a fixed, predictable line item like a software license; it’s a dynamic, consumption-based expense that behaves in non-linear ways. In the past, I signed contracts where I knew my seat count and my features. Now, costs accumulate through millions of invisible inference events triggered by anyone, at any time. The fact that 80% of companies are missing their forecasts by such a wide margin tells you this isn’t a simple planning error—it’s a structural shift in how cost behaves. For a marketing leader, the first step is to demystify AI usage. You have to make it explicit. Start by inventorying every single workflow where AI is being used, from content creation to personalization and decisioning. Then, break those workflows down into their smallest component tasks. Don’t just say “we use AI for content”; say “we use Model A for headlines, Model B for outlines, and Model C for drafts.” This initial mapping is the only way to begin understanding the true cost drivers before you try to scale.
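To make that inventory concrete, here is a minimal sketch of how a team might encode the task-to-model mapping Milena describes, so cost drivers are explicit rather than buried in a cloud bill. The workflow names, model names, and prices are illustrative assumptions, not figures from the interview:

```python
# Hypothetical inventory: each marketing workflow broken into its
# smallest component tasks, each mapped to the model serving it and
# a rough per-call cost. All names and prices are illustrative.
AI_WORKFLOW_INVENTORY = {
    "content_creation": [
        {"task": "headlines", "model": "model-a-small",  "est_cost_per_call": 0.002},
        {"task": "outlines",  "model": "model-b-medium", "est_cost_per_call": 0.010},
        {"task": "drafts",    "model": "model-c-large",  "est_cost_per_call": 0.060},
    ],
    "personalization": [
        {"task": "product_recommendation", "model": "model-b-medium", "est_cost_per_call": 0.020},
    ],
}

def estimated_monthly_cost(inventory, monthly_calls):
    """Roll the inventory up into a forecastable number per workflow."""
    totals = {}
    for workflow, tasks in inventory.items():
        totals[workflow] = sum(
            t["est_cost_per_call"] * monthly_calls.get(t["task"], 0)
            for t in tasks
        )
    return totals

print(estimated_monthly_cost(AI_WORKFLOW_INVENTORY,
                             {"headlines": 50_000, "drafts": 4_000}))
```

Even a rough table like this turns "we use AI for content" into line items a finance partner can interrogate.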

Many teams see individual productivity boosts from AI, yet enterprise-wide financial returns often lag. What causes this disconnect, and what organizational structures are needed to ensure that local team efficiency doesn’t quietly become a systemic margin issue? Please provide a specific example.

This is a classic “can’t see the forest for the trees” problem. An individual content writer might draft articles 50% faster, and that feels like a huge win. But at an enterprise level, that local efficiency is an island. The financial return gets lost because, in parallel, ten other teams are also experimenting, using different tools, racking up duplicative costs, and scaling usage without a clear connection to a larger business outcome. The recent McKinsey report highlights this perfectly: AI adoption is widespread, but very few companies have scaled it to deliver material financial impact. To fix this, you need a shared-platform, distributed-execution model. This means a central tech team owns the core AI infrastructure, but marketing has the autonomy to deploy and iterate agents on top of it. For example, instead of every brand manager spinning up their own “social media copy agent,” a central operational owner maintains a single, optimized agent that everyone uses. This prevents the agent sprawl that leads to inconsistent quality and replicated costs, turning isolated productivity pockets into a consolidated, efficient system.
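One way to picture that shared-platform, distributed-execution model is a central agent registry: the platform team registers one canonical agent per capability, and brand teams fetch it instead of spinning up duplicates. A minimal sketch under that assumption, with hypothetical names:

```python
# Sketch of a central agent registry: one canonical agent per
# capability, owned centrally, reused by every marketing team.
class AgentRegistry:
    def __init__(self):
        self._agents = {}

    def register(self, capability, agent, owner):
        if capability in self._agents:
            raise ValueError(f"'{capability}' already has a canonical agent; "
                             f"iterate on it rather than adding a duplicate")
        self._agents[capability] = {"agent": agent, "owner": owner}

    def get(self, capability):
        return self._agents[capability]["agent"]

registry = AgentRegistry()
registry.register("social_media_copy",
                  agent=lambda brief: f"draft for: {brief}",
                  owner="central-martech-ops")

# Every brand manager reuses the same optimized agent:
print(registry.get("social_media_copy")("spring launch"))
```

The point of the duplicate check is cultural as much as technical: it forces teams to improve the shared agent instead of quietly forking it.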

As AI systems become more agentic, a single user request can trigger multiple model calls and tool invocations. Could you explain how this cost multiplication happens and provide a step-by-step on designing an orchestration layer that minimizes this effect without sacrificing performance?

This is where costs truly spiral out of control, and it’s often invisible to the end-user. Imagine asking an AI agent, “What were our top-performing campaigns last quarter, and what should we do next?” That simple prompt doesn’t trigger one model call; it triggers a cascade. First, the agent might call a model to interpret the question. Then, it invokes a tool to pull campaign data. Next, it calls another model to analyze the data, a third to synthesize a recommendation, and perhaps a fourth for a safety check. Each step consumes tokens and compute resources. This is how a single query fans out and multiplies costs exponentially. The key to controlling this is a strong orchestration layer. The first step is to define explicit rules: which agents are allowed to do what, and when they are allowed to call expensive tools. Second, invest in shared context and memory. A well-designed system shouldn’t have to re-fetch the same company background information for every single query. Caching that context dramatically reduces redundant calls. Finally, enforce a “reason before acting” protocol. The agent must determine if it truly needs to invoke a tool rather than just calling it by default. These architectural choices are critical; research shows that smart orchestration can slash operational costs by over 28% with almost no drop in performance.
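As a rough illustration of those three controls (explicit tool rules, shared context and memory, and a "reason before acting" gate), here is a minimal orchestrator sketch. The agent names, tool names, and the keyword-based reasoning check are all stand-in assumptions; real systems would use a model call for that step:

```python
import functools

# Explicit rules: which agents may invoke which (expensive) tools.
TOOL_PERMISSIONS = {
    "campaign_analyst": {"query_campaign_data"},
    "copy_drafter": set(),  # no tool access at all
}

@functools.lru_cache(maxsize=128)
def shared_context(key):
    """Shared context/memory: fetch company background once and reuse
    it across queries instead of re-fetching on every call."""
    return f"<cached background for {key}>"

def needs_tool(question):
    """'Reason before acting': a cheap check deciding whether a tool
    invocation is truly required. Stubbed here as a keyword test."""
    return any(w in question.lower() for w in ("last quarter", "data", "metrics"))

def orchestrate(agent, question):
    if needs_tool(question):
        tool = "query_campaign_data"
        if tool not in TOOL_PERMISSIONS.get(agent, set()):
            raise PermissionError(f"{agent} may not invoke {tool}")
        data = f"<results of {tool}>"  # the expensive call happens only here
    else:
        data = "<no tool needed>"
    context = shared_context("company_background")  # cached, not re-fetched
    return f"answer({question!r}, {data}, {context})"

print(orchestrate("campaign_analyst", "Top campaigns last quarter?"))
```

Each control attacks a different leak: permissions stop unauthorized fan-out, caching removes redundant fetches, and the gate prevents tools from being called by default.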

A phenomenon known as “capability creep” can cause AI costs to spiral as agents are used for tasks they weren’t designed for. How should teams map specific workflows to the minimum required model, and what governance practices can prevent this expensive drift? Please share a metric for tracking this.

Capability creep is such a subtle but dangerous problem. It starts innocently: a team builds an agent to summarize customer feedback. It works well, so someone tries using it to draft email responses, and it kind of works. Before you know it, this simple summarization agent is being used for complex, nuanced tasks it was never optimized for, leading to higher costs and poorer results. The solution begins with rigorous workflow mapping. For every task, you must explicitly match it to the minimum viable model. A simple classification task doesn’t need a frontier model; a less powerful, cheaper one will do. This requires discipline. Governance is about creating clear ownership. You need a product owner who sets the roadmap for a class of agents, an analyst who reviews performance and cost patterns, and an operational owner who retires redundant or inefficient agents. A great metric for tracking this is “Task-Model Mismatch Rate.” This measures how often a high-cost, high-capability model is being invoked for a low-complexity task that a cheaper model could handle. If that rate starts climbing, you know capability creep is setting in.
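As a sketch of how a Task-Model Mismatch Rate could be computed from invocation logs, assuming hypothetical log fields for the model used and the task's complexity:

```python
# Hypothetical invocation log: which model handled each task, and how
# complex the task actually was. Model names are illustrative.
EXPENSIVE_MODELS = {"frontier-xl"}

def task_model_mismatch_rate(log):
    """Share of expensive-model calls that handled a low-complexity
    task a cheaper model could have served."""
    expensive_calls = [e for e in log if e["model"] in EXPENSIVE_MODELS]
    if not expensive_calls:
        return 0.0
    mismatches = [e for e in expensive_calls if e["complexity"] == "low"]
    return len(mismatches) / len(expensive_calls)

log = [
    {"task": "classify_ticket", "model": "frontier-xl", "complexity": "low"},
    {"task": "draft_brief",     "model": "frontier-xl", "complexity": "high"},
    {"task": "tag_feedback",    "model": "mini-v2",     "complexity": "low"},
]
print(f"{task_model_mismatch_rate(log):.0%}")  # 50% -> creep is setting in
```

A rising trend line on this number is the early-warning signal; the absolute value matters less than the direction.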

For marketing teams, who are often early AI adopters, what does a practical ownership model look like without needing full-stack engineers for every change? How can a metric like Levelized Cost of AI (LCOAI) help surface costs directly to these teams to encourage efficiency?

Marketing teams are in the blast radius because they move fast and experiment constantly, often long before enterprise guardrails are in place. A practical ownership model is one where marketing isn’t responsible for building the foundational AI plumbing but has full control over deploying and iterating on top of it. Think of it like a content management system: the IT team maintains the core platform, but the marketing team can create pages, run campaigns, and analyze results without filing an engineering ticket for every change. This requires clear roles: product owners in marketing define the “what” and “why” for an AI agent, while analysts track its performance and costs. This is where a metric like Levelized Cost of AI, or LCOAI, is a game-changer. Instead of just seeing a giant, abstract cloud bill, LCOAI tells a marketer, “The true cost for one AI-powered personalized product recommendation is $0.02.” By surfacing the cost at the action level, it makes the economics tangible. It empowers the marketing team to ask the right questions—is this recommendation driving more than $0.02 in value? Could we redesign the workflow to get that cost down to $0.01? It shifts the mindset from just using AI to using it efficiently.
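LCOAI follows the levelized-cost idea: the all-in cost of delivering an AI-powered action, divided by the number of actions delivered. A minimal sketch of the calculation with illustrative numbers (the cost categories and figures are assumptions chosen to land on the $0.02-per-recommendation example above):

```python
def lcoai(inference_cost, infra_cost, ops_cost, actions_delivered):
    """Levelized Cost of AI: all-in cost per completed AI action.
    Cost buckets here are illustrative; real ledgers are more granular."""
    return (inference_cost + infra_cost + ops_cost) / actions_delivered

# e.g. one month of the personalized-recommendation workflow
cost_per_rec = lcoai(inference_cost=14_000, infra_cost=4_000,
                     ops_cost=2_000, actions_delivered=1_000_000)
print(f"${cost_per_rec:.3f} per recommendation")  # $0.020
```

Because the output is denominated in actions a marketer already thinks in, it plugs straight into the ROI question: is each recommendation worth more than it costs?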

What is your forecast for AI cost management in marketing over the next 18-24 months?

Over the next 18-24 months, I believe we’re going to see a significant market-wide maturation from a phase of pure experimentation to one of economic accountability. The initial excitement of “what can AI do?” will be replaced by the more pragmatic question of “what is the ROI of this AI-powered workflow?” I predict that AI cost visibility will become a standard feature, not an afterthought, in Martech platforms. We will see the rise of tools and dashboards designed specifically for non-technical marketing leaders to understand and manage their AI spend, much like we have for ad spend today. Companies that fail to build this cost literacy will face a painful reckoning as their margins continue to erode, while those who treat AI as core operational infrastructure—with clear governance, ownership, and economic transparency—will be the ones who unlock its value sustainably and pull ahead of the competition. The winners won’t be the ones who adopt the most AI, but the ones who understand its economics the best.
