Milena Traikovich is a seasoned expert in demand generation and marketing operations, specializing in bridging the gap between sophisticated technology and tangible revenue growth. With a deep background in analytics and performance optimization, she helps organizations navigate the complexities of lead nurturing and operational strategy. In this discussion, we explore the evolving landscape of marketing technology, focusing on the integration of agentic AI and the critical need for structural readiness in a shifting digital economy.
The conversation covers the high failure rates of autonomous AI projects, the collapse of ROI confidence across sectors like retail, and the necessity of separating experimental environments from revenue-critical operations. Milena also addresses the changing role of human judgment in an automated world and how to adapt to buyers who utilize personal AI assistants to bypass traditional marketing funnels.
Many autonomous AI projects face cancellation by 2027 due to escalating costs and weak business cases. How should leaders vet agentic systems during the demo phase to avoid these pitfalls, and what specific guardrails prevent these tools from simply scaling existing workflow dysfunctions at machine speed?
The primary issue is that a demo often exists in a vacuum where data is pristine and decision paths are linear, which is rarely the case in a live environment. To avoid the 40% cancellation rate predicted by Gartner, leaders must look past the “wow” factor of a system that plans campaigns autonomously and instead test it against their messiest, most undocumented spreadsheets. You have to ask the vendor how the agent handles “secret workarounds” or missing data inputs that your team currently manages manually. Guardrails start with process auditing; if you haven’t fixed a broken workflow, the AI will only execute that failure faster and at a much higher price point. It is essential to define clear decision authority within the software so the agent doesn’t spiral into costly, unoptimized spend without a human-in-the-loop checkpoint.
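The human-in-the-loop checkpoint described above can be reduced to a simple policy: any agent action whose projected spend exceeds a defined authority limit is escalated to a person. The sketch below is purely illustrative; the names (`AgentAction`, `SPEND_CAP`, `requires_human_approval`) and the dollar figures are assumptions, not part of any vendor's API.

```python
from dataclasses import dataclass

# Illustrative spend cap: the maximum an agent may commit without sign-off.
SPEND_CAP = 5_000.0

@dataclass
class AgentAction:
    description: str
    projected_spend: float

def requires_human_approval(action: AgentAction) -> bool:
    """Route any action above the spend cap to a human checkpoint."""
    return action.projected_spend > SPEND_CAP

small = AgentAction("Boost top-performing ad set", 1_200.0)
large = AgentAction("Launch new cross-channel campaign", 48_000.0)

print(requires_human_approval(small))  # below the cap: agent may proceed
print(requires_human_approval(large))  # above the cap: escalate to a human
```

The point of a rule this blunt is that decision authority is explicit and auditable, rather than buried in the agent's own optimization loop.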
ROI confidence for AI initiatives has dropped significantly, particularly in the retail sector, as early gains in content speed lose their luster. What measurement infrastructure is required to connect AI outputs to actual revenue growth, and how can teams translate these results into language finance departments trust?
We’ve seen ROI confidence in retail plummet from 54% to 38% because the initial excitement over faster content production didn’t translate into a healthier pipeline. To fix this, you need a measurement infrastructure that moves beyond superficial metrics like “volume of assets created” and instead instruments the entire journey to capture revenue contribution. This means layering AI insights onto attribution models that actually work, rather than onto the same broken reporting processes that existed pre-AI. When speaking to finance, you must stop talking about “engagement” and start talking about “pipeline velocity” and “customer acquisition cost reduction.” Finance departments trust data that shows a direct line from a specific AI-driven intervention to a dollar sign in the bank.
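The two finance-facing metrics named above have standard definitions: pipeline velocity is qualified opportunities × win rate × average deal size, divided by sales-cycle length, and CAC is total acquisition spend divided by customers acquired. A minimal sketch, with entirely illustrative numbers:

```python
def pipeline_velocity(opportunities, win_rate, avg_deal_size, cycle_days):
    """Expected revenue flowing through the pipeline per day."""
    return opportunities * win_rate * avg_deal_size / cycle_days

def cac(total_spend, customers_acquired):
    """Customer acquisition cost: total spend per customer won."""
    return total_spend / customers_acquired

# Hypothetical before/after: an AI intervention shortens the cycle 90 -> 75 days.
before = pipeline_velocity(200, 0.20, 12_000, 90)
after = pipeline_velocity(200, 0.20, 12_000, 75)
print(f"Velocity gain: ${after - before:,.0f}/day")
print(f"CAC: ${cac(150_000, 60):,.0f}")
```

Framing an AI result as "the cycle shortened by 15 days, worth roughly $X per day of pipeline velocity" is exactly the direct line to a dollar sign that a finance team can verify.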
Successful marketing operations often separate experimental “laboratory” work from the “factory” that runs revenue-critical programs. Why does using a single set of KPIs for both environments typically lead to failure, and how should teams restructure to ensure strategists and analysts are no longer working in silos?
Using one set of KPIs is a recipe for disaster because the “Laboratory” is designed for high-risk experimentation where failure is a data point, whereas the “Factory” must prioritize efficiency and predictable revenue. When you force a laboratory project to meet factory-style ROI targets immediately, you kill innovation; conversely, if the factory operates with laboratory-style looseness, your core revenue stream becomes unstable. To break down silos, you need to restructure so that analysts aren’t just building reports in a vacuum, but are embedded with strategists to understand the “why” behind a campaign. This ensures that the data being collected actually informs the strategy, rather than just serving as an autopsy of a failed initiative. It requires a fundamental shift where organizational goals dictate the tool usage, rather than the tools dictating the team’s daily tasks.
As AI agents begin drafting positioning and messaging, middle-layer marketing roles are facing a crisis of purpose. What specific skills define the human judgment needed to identify the 20% that AI gets wrong, and how can leaders manage the “quiet disengagement” spreading through teams during this transition?
The human element is now defined by the ability to spot the “uncanny valley” of marketing—that 20% of AI output that is technically correct but emotionally or strategically tone-deaf. Marketers need to evolve from content creators into high-level editors and strategic auditors who can interpret why a specific AI-generated message might alienate a specific segment of the audience. To combat quiet disengagement, leaders must stop framing AI as a replacement and start framing it as a shift in capability, where the “clicking buttons” part of the job is gone, but the “business outcome” part is more critical than ever. We must invest in training that prioritizes judgment and interpretation of surprising data results, rather than just teaching people how to use the latest SaaS features. If people feel their expertise is being replaced by a machine, they will check out; if they feel empowered to steer that machine, they will lean in.
Modern buyers often use personal AI assistants to shortlist brands before they ever visit a company website. How must traditional lead nurturing sequences evolve to address buyers who skip the standard conversion funnel, and which metrics should replace old attribution models that no longer capture these interactions?
The traditional funnel is being bypassed because a buyer’s AI assistant can aggregate reviews, pricing, and features before a human ever clicks a “Contact Us” button. Our nurturing sequences have to become much more sophisticated, focusing on providing high-value, technical, or highly personalized content that an AI assistant can consume and summarize accurately for its user. We need to move away from “last-click” or even standard “multi-touch” models that rely on website visits, as those are becoming lagging indicators. Instead, we should look at metrics like “AI visibility” or “mention share” within AI-driven search environments to understand if we are even making it onto the shortlist. The goal is to be the brand that the assistant recommends, which requires a data structure that is easily readable by third-party agents, not just humans.
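A metric like “mention share” can be estimated by sampling AI-assistant responses to relevant buying queries and counting how often your brand appears on the resulting shortlists. The sketch below is a simplified illustration; the brand names and the `mention_share` helper are hypothetical, and a real program would sample many queries across multiple assistants.

```python
def mention_share(shortlists, brand):
    """Fraction of sampled AI-assistant shortlists that include a brand."""
    hits = sum(1 for shortlist in shortlists if brand in shortlist)
    return hits / len(shortlists)

# Hypothetical shortlists returned by assistants for one buying query.
samples = [
    {"Acme", "Globex", "Initech"},
    {"Acme", "Umbrella"},
    {"Globex", "Initech"},
    {"Acme", "Globex"},
]

print(f"Acme mention share: {mention_share(samples, 'Acme'):.0%}")
```

Tracked over time, a number like this tells you whether you are even making it onto the shortlist, which website-visit metrics can no longer show.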
Organizations frequently spend more on unused software features than on building the skills of the people running them. What are the practical steps for identifying a broken manual workflow that should be fixed before buying new technology, and how does operational muscle determine the ultimate value of a platform?
The first step is to identify any workflow that requires constant “manual intervention” or a secondary spreadsheet to make it function—these are the red flags of a broken process. You have to ask your team which tasks they hate because of the “workarounds” they’ve had to invent; if you buy a sophisticated platform to “fix” that without addressing the underlying logic, you’ll just end up with an expensive, automated mess. Operational muscle is the internal ability to take an adequate tool and make it perform exceptionally through rigorous process and skilled execution. A company with high operational muscle will always outperform a company with a billion-dollar tech stack but no clear strategy. Before your next purchase, pick one workflow, remove the workarounds, and ensure the team can run it flawlessly on your current setup.
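The audit described above can start as nothing more than a list of workflow steps with a flag for each manual workaround. The step names and data shape here are invented for illustration; the technique is simply inventorying the process before any purchase decision.

```python
# Hypothetical workflow inventory: each step, flagged if it relies on a
# manual workaround (secondary spreadsheet, re-keying, copy-paste, etc.).
workflow = [
    {"step": "Import leads",             "manual_workaround": False},
    {"step": "De-dupe in spreadsheet",   "manual_workaround": True},
    {"step": "Score leads",              "manual_workaround": False},
    {"step": "Re-key scores into CRM",   "manual_workaround": True},
]

red_flags = [s["step"] for s in workflow if s["manual_workaround"]]
print("Fix before buying new tech:", red_flags)
```

Any step that lands on the red-flag list is a candidate for the exercise the answer ends with: remove the workaround and run it flawlessly on the current stack first.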
What is your forecast for martech?
I believe 2026 will be the “Year of Accountability,” where the gap between organizations that invested in human capability and those that merely bought tools will become an unbridgeable chasm. We are moving away from the era of “AI experimentation” and into a period where every project will be scrutinized for its contribution to the bottom line, leading to a massive consolidation of the 40% of projects that fail to prove their worth. Success will belong to the “Martech Healers”—those who can fix the underlying process dysfunction and build the operational muscle necessary to turn agentic AI from a flashy demo into a revenue-generating engine. My advice for readers is to stop looking for the next “silver bullet” feature and start investing in the judgment and strategic depth of the people who will have to steer these autonomous systems.
