Why Do Companies Keep Failing at AI Implementation?

I’m thrilled to sit down with Milena Traikovich, a seasoned expert in demand generation who has helped countless businesses craft impactful campaigns to nurture high-quality leads. With her deep expertise in analytics, performance optimization, and lead generation, Milena brings a unique perspective on how technology, particularly AI, intersects with business operations. Today, we’re diving into the often rocky road of AI implementation, exploring why promising rollouts sometimes fall short, the real-world consequences of these missteps, and how companies can better navigate the integration of AI tools to avoid disruption and build trust.

How do you see the disconnect between AI’s potential and its real-world performance playing out in businesses today?

I think the disconnect often stems from overblown expectations and a lack of understanding of what AI can realistically achieve. Companies get sold on the idea of AI as a magic bullet—something that will instantly cut costs or boost efficiency. But in practice, AI needs a ton of groundwork, like clean data and clear objectives, to deliver. I’ve seen this gap most prominently in industries like healthcare and real estate, where the stakes are high, and the data is messy or unpredictable. Without proper alignment between the tech and the business goals, you’re just setting yourself up for disappointment.

What can we learn from major AI setbacks, like the challenges faced with IBM’s Watson for Oncology?

The Watson case is a classic example of overpromising and under-delivering. One big issue was that it performed well with synthetic data in controlled environments but struggled with real patient data, which is often incomplete or inconsistent. This led to recommendations that weren’t just inaccurate but sometimes unsafe. The lesson here is clear: real-world testing is non-negotiable. Companies can’t just rely on lab results. After investing over $5 billion and eventually selling the health division at a fraction of that, it’s a stark reminder that even big players can falter if they don’t prioritize practical validation and transparency with users.

Another well-known AI stumble was with Zillow Offers. Can you break down what went wrong there from your perspective?

Zillow’s AI model for predicting home values was built on historical data, but it couldn’t adapt to rapid market shifts, like the cooling housing market at the time. The algorithm kept overvaluing properties, leading Zillow to overpay massively. The result? A staggering half-billion-dollar loss, mass layoffs, and the program shutting down in under a year. It shows how critical it is for AI to be dynamic and responsive to real-time changes. If your model can’t pivot, it’s not just a technical failure—it’s a business disaster.

Let’s talk about more recent AI rollouts, like the issues some businesses faced with QuickBooks’ AI-powered update. How do forced updates like that impact operations?

Forced updates, like the one with QuickBooks, can be a nightmare for businesses, especially small ones that rely on these tools for core functions like payroll and invoicing. When you’re pushed into a new AI version without a choice or proper warning, it disrupts workflows instantly. I’ve heard of businesses struggling with basic tasks because the system wasn’t ready for prime time. It’s frustrating because you’re not just dealing with a learning curve—you’re firefighting issues that shouldn’t exist in a mission-critical tool. That kind of rollout erodes trust fast.

What specific challenges have you seen or experienced with AI tools miscategorizing data or making unpredictable decisions?

I’ve come across cases where AI, like in accounting software, makes bizarre decisions—say, categorizing payments based on arbitrary factors like invoice amounts rather than actual context. This isn’t just a minor glitch; it messes up financial records, which can have downstream effects on taxes or client billing. One business I worked with had to spend hours manually correcting random category assignments that their AI tool kept making. It’s ironic because the promise of AI is to save time, but when it hallucinates data like that, you’re stuck doing more work to fix it.

Why do you think communication falls short during so many AI implementations, and how does that affect businesses?

Communication often falls short because companies are so focused on the tech rollout that they forget the human element. They don’t explain what’s changing, why it’s happening, or how to adapt. For instance, with something like the QuickBooks update, a simple heads-up and some guidance on troubleshooting could have saved businesses a lot of grief. Poor communication leaves users feeling blindsided and unsupported, which amplifies frustration when things go wrong. It’s not just about the tool—it’s about ensuring people feel prepared and valued during the transition.

There’s a notion that AI sometimes creates more work for humans rather than less. Have you seen this in action?

Absolutely, and it’s a huge irony. I’ve worked with teams where AI tools made errors—like misclassifying leads or generating inaccurate reports—and fixing those mistakes took way more time than doing the task manually in the first place. One example was a marketing automation tool that kept flagging high-value leads as low-priority due to a glitch in its scoring system. My team spent days re-evaluating everything by hand. It’s a reminder that AI isn’t a set-it-and-forget-it solution. If the system isn’t rock-solid, you’re just trading one kind of work for another.

How crucial is thorough testing and real-world validation in ensuring AI tools don’t backfire on businesses?

Testing and validation are everything. AI can look flawless in a lab, but the real world is messy—data isn’t perfect, and user behavior is unpredictable. If you don’t test with real customer scenarios, you’re rolling the dice. I always advise businesses to run pilots and involve end-users early to catch issues before a full rollout. It’s not just about avoiding errors; it’s about building confidence in the tool. When users see that it’s been vetted in conditions like theirs, they’re far more likely to embrace it.
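The pilot-first approach described above can be sketched in code: before an AI categorizer runs unattended, evaluate it in "shadow mode" against decisions humans have already made and measure agreement. This is an illustrative sketch only; the function names, record format, and the 95% approval threshold are assumptions, not taken from any specific product.

```python
# Shadow-mode pilot: compare an AI categorizer's output against
# human-assigned labels before letting it run unattended.
# All names and the 95% threshold here are illustrative assumptions.

def shadow_pilot(records, ai_categorize, approval_threshold=0.95):
    """Return (agreement_rate, mismatches) over human-labeled records.

    records: list of (payment, human_category) pairs
    ai_categorize: the model under test
    """
    mismatches = []
    agree = 0
    for payment, human_category in records:
        predicted = ai_categorize(payment)
        if predicted == human_category:
            agree += 1
        else:
            mismatches.append((payment, human_category, predicted))
    rate = agree / len(records) if records else 0.0
    return rate, mismatches

# A naive model that categorizes by invoice amount alone -- the kind
# of context-free heuristic that misfires in practice.
def amount_only_model(payment):
    return "office supplies" if payment["amount"] < 500 else "equipment"

history = [
    ({"amount": 120, "vendor": "Staples"}, "office supplies"),
    ({"amount": 450, "vendor": "AWS"}, "cloud services"),  # misfires
    ({"amount": 2500, "vendor": "Dell"}, "equipment"),
]

rate, errors = shadow_pilot(history, amount_only_model)
# The agreement rate falls below the approval threshold, so the tool
# stays in assist mode and the mismatches go back to the team for review.
```

The point of the sketch is the workflow, not the model: the tool earns autonomy only after its agreement rate on real, human-vetted cases clears a bar the business sets, which is exactly the end-user involvement the answer above recommends.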

What’s your forecast for the future of AI implementation in business operations over the next few years?

I’m cautiously optimistic. I think we’ll see smarter, more user-centric AI tools as companies learn from past missteps. The focus will shift toward better integration—tools that adapt to specific business needs rather than forcing a one-size-fits-all solution. But it’s going to take a cultural shift too. Businesses need to prioritize transparency, real-world testing, and user feedback over rushing to market. If that happens, AI could truly transform operations without the growing pains we’ve seen so far. If not, we’ll keep seeing these costly stumbles.
