Artificial intelligence has firmly established itself as a cornerstone of technological advancement, yet Chief Information Officers (CIOs) continue grappling with how to define its business value accurately. They invest heavily in AI-related projects, but the true measure of AI’s success remains elusive. A recent survey by Gong, a revenue intelligence provider, highlights this predicament: many CIOs are still trying to determine the most appropriate metrics for evaluating AI’s effectiveness.
The Dual Metrics of AI Success
Balancing Productivity Gains and Revenue Growth
One of the principal findings from the Gong survey indicates that 53% of CIOs place equal importance on productivity gains and revenue growth when assessing AI’s success. However, this focus on quantitative metrics often overlooks qualitative aspects such as employee satisfaction, which only 42% of CIOs actively monitor. The prevailing confusion underscores a broader issue: uncertainty about where AI can deliver the most business value. This ambiguity complicates the decision-making process and makes it challenging for CIOs to justify AI investments to stakeholders who demand clear, tangible outcomes.
The prioritization of productivity gains and revenue growth reflects a pragmatic approach to leveraging AI capabilities. For instance, AI can automate routine tasks, freeing up employees’ time for more strategic activities and driving productivity. In sales, AI-driven insights can help close deals more efficiently, directly impacting revenue. Yet many companies lack robust systems to measure both metrics simultaneously, because capturing the combined impact of revenue growth and time savings remains difficult. Their reliance on self-reported benchmarks or crude proxies, such as the number of deals closed per representative, underscores the need for more sophisticated measurement tools.
The Global Perspective on Measuring AI Returns
Globally, the perception of AI’s value is skewed towards financial metrics, with 61% of CIOs believing that increased revenue alone justifies the costs associated with AI initiatives. Similarly, 60% of CIOs think that time savings by themselves are sufficient to warrant AI investments. However, this polarized view highlights a critical gap: only 32% of companies measure both revenue growth and time savings, suggesting an incomplete assessment of AI’s overall business impact. This incomplete measurement framework presents a significant challenge for CIOs who need to create a holistic picture of AI’s value proposition.
Moreover, the complexity of measuring AI’s impact on both fronts stems from the multifaceted nature of AI applications. Revenue growth is easier to quantify in financial terms, while time savings translate indirectly into cost reductions and operational efficiencies. As a result, CIOs face the daunting task of quantifying time saved in monetary terms to provide a comprehensive evaluation. This necessitates advanced analytical tools and methodologies capable of integrating disparate data sources to offer a coherent assessment. Until such systems become commonplace, CIOs may continue to struggle with a fragmented understanding of AI’s true value.
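In its simplest form, the exercise the survey points to, folding monetized time savings and revenue growth into a single figure, can be sketched in a few lines. The function and all numbers below are hypothetical illustrations, not figures from the Gong survey:

```python
# A minimal sketch of one way to combine revenue growth and monetized
# time savings into a single ROI ratio. All names and numbers are
# hypothetical examples, not data from the survey.

def ai_roi(revenue_gain: float, hours_saved: float,
           hourly_cost: float, ai_cost: float) -> float:
    """Return ROI as (total benefit - cost) / cost."""
    time_savings_value = hours_saved * hourly_cost  # convert hours to dollars
    total_benefit = revenue_gain + time_savings_value
    return (total_benefit - ai_cost) / ai_cost

# Example: $120k in attributed revenue, 2,000 hours saved at $45/hour,
# against a $150k annual AI spend.
roi = ai_roi(revenue_gain=120_000, hours_saved=2_000,
             hourly_cost=45, ai_cost=150_000)
print(f"Combined ROI: {roi:.0%}")  # prints "Combined ROI: 40%"
```

The hard part in practice is not the arithmetic but the inputs: attributing revenue to AI and pricing an hour of saved time both require the kind of integrated measurement systems most companies in the survey lack.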
Trends in AI Adoption
Rising Interest in Predictive AI
While generative AI currently receives widespread attention, there is a burgeoning interest in predictive AI among CIOs. According to the survey, 54% of tech leaders prioritize generative AI, followed closely by the 51% who emphasize automation, while 31% focus on predictive AI. This growing interest in predictive AI reflects its potential to transform business operations by providing foresight into future trends and outcomes. Predictive AI models are increasingly deployed to support workflow automation and to anticipate market shifts, customer behavior, and other critical business variables.
However, successful deployment of predictive AI requires models that go beyond off-the-shelf solutions. Whereas generative AI models are beneficial for generic tasks like content creation and data synthesis, predictive AI demands customized, proprietary solutions with substantial data integration. These models need to be tailored to specific business contexts, drawing upon extensive datasets to enhance their accuracy and reliability. This bespoke approach ensures that predictive AI can deliver actionable insights, enabling businesses to make informed decisions and stay ahead of the competition.
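At its core, the "predictive" idea is extrapolation from historical business data. The toy example below fits a linear trend to hypothetical monthly revenue and projects the next month; real predictive AI models are far richer, but the principle of learning from proprietary historical data is the same:

```python
# A minimal sketch of trend-based prediction in pure Python.
# The revenue series is hypothetical and purely illustrative.

def fit_trend(series: list[float]) -> tuple[float, float]:
    """Ordinary least-squares fit of y = a + b*t over t = 0..n-1."""
    n = len(series)
    t_mean = (n - 1) / 2
    y_mean = sum(series) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(series))
    den = sum((t - t_mean) ** 2 for t in range(n))
    b = num / den          # slope: average change per month
    a = y_mean - b * t_mean  # intercept
    return a, b

def forecast_next(series: list[float]) -> float:
    """Project the series one step past its last observation."""
    a, b = fit_trend(series)
    return a + b * len(series)

monthly_revenue = [100, 104, 109, 113, 118, 121]  # hypothetical, in $k
print(round(forecast_next(monthly_revenue), 1))
```

A model this simple ignores seasonality, external signals, and everything else that makes proprietary predictive systems valuable, which is precisely why the bespoke, data-integrated approach described above matters.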
Customization and Data Integration Challenges
Creating proprietary predictive AI models involves substantial customization and data integration challenges. Unlike generic AI solutions, predictive models must be deeply embedded into the existing business infrastructure, ensuring they pull relevant data from multiple sources. This integration process is often complex and resource-intensive, requiring robust data governance frameworks to maintain data quality and integrity. Moreover, the success of these models hinges on continuous learning and adaptation—AI systems need to evolve based on new data inputs and changing business dynamics.
Therefore, businesses must invest in building an AI architecture that supports seamless data integration and real-time analytics. Such an infrastructure not only enhances the performance of predictive models but also ensures they remain aligned with strategic business objectives. By addressing these integration challenges, companies can fully leverage the capabilities of predictive AI, driving innovation and achieving sustainable competitive advantage.
ROI Priorities: Small vs. Large Companies
Strategic Differences Based on Company Size
The approach to proving AI’s return on investment (ROI) varies significantly between smaller and larger companies. For smaller U.S. firms (250 to 500 employees), 40% are prepared to halt AI projects that do not demonstrate clear ROI, reflecting a heightened focus on immediate and measurable returns. This conservative stance stems from budget constraints, compelling smaller companies to prioritize projects with tangible benefits. In contrast, only 19% of larger firms adhere to this stringent ROI criterion, suggesting they have more latitude for long-term experimentation without immediate financial proof of success.
Smaller companies must adopt a disciplined approach to AI experimentation, setting clear boundaries for timelines, financial expenditures, and success metrics. By defining these parameters upfront, they can manage risks and allocate resources more efficiently. This strategy also enables them to iterate rapidly, learning from each experiment to refine their AI initiatives continually.
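The discipline described above amounts to a go/no-go gate evaluated against parameters fixed upfront. The sketch below shows one hypothetical form such a gate could take; the fields and thresholds are illustrative examples, not prescriptions from the survey:

```python
# A minimal sketch of a go/no-go gate for an AI pilot. The thresholds
# (6 months, $50k, 10% ROI) are hypothetical examples of the kind of
# boundaries a smaller firm might set upfront.

from dataclasses import dataclass

@dataclass
class PilotResult:
    months_elapsed: int
    spend: float         # cumulative spend in dollars
    measured_roi: float  # e.g. 0.15 means a 15% return so far

def should_continue(pilot: PilotResult,
                    max_months: int = 6,
                    max_spend: float = 50_000,
                    min_roi: float = 0.10) -> bool:
    """Keep the pilot running unless it has exceeded its timeline or
    budget without clearing the agreed ROI bar."""
    over_limits = pilot.months_elapsed > max_months or pilot.spend > max_spend
    return (not over_limits) or pilot.measured_roi >= min_roi

# Over budget but clearing the ROI target: continue.
print(should_continue(PilotResult(7, 60_000, 0.25)))  # True
# Over budget and under target: halt.
print(should_continue(PilotResult(7, 60_000, 0.02)))  # False
```

Codifying the gate, rather than debating each project ad hoc, is what lets a smaller firm iterate quickly while keeping its 40%-style kill discipline credible to stakeholders.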
Long-Term Experimental Approaches
Larger companies, by contrast, can afford to treat AI as a longer-term bet. With only 19% prepared to halt projects lacking immediate ROI, these firms have more latitude to experiment, accepting that some initiatives may take years to demonstrate measurable returns. This patience allows them to pursue more ambitious applications, but it also raises the stakes for eventually developing the precise evaluation criteria and measurable outcomes that CIOs across the board still lack. Until such criteria mature, even well-resourced organizations risk funding costly experiments rather than investments with proven, tangible contributions to business operations and overall growth.