
AI Strategy Is Failing—And It's Not the Tech's Fault | Image Source: sloanreview.mit.edu
NEW YORK, April 12, 2025 - Artificial Intelligence (AI) has long been hailed as a transformative force across industries. From automating routine processes to improving complex decision-making, AI tools – particularly generative AI – hold unprecedented promise. Yet despite years of investment and experimentation, most organizations have not seen the seismic business benefits once predicted. And contrary to popular belief, the bottleneck is not technical; it is cultural, strategic, and deeply human.
According to Foundry's 2024 State of the CIO survey, 85% of IT leaders recognize their growing role as orchestrators of change. Yet only 28% of respondents rank transformation as a top priority. Meanwhile, 91% of data leaders at large companies cite culture and change-management challenges as the biggest obstacle to becoming truly data-driven. The technology is ready; the people and processes are lagging behind.
Why? Because AI implementation continues to be treated as an IT project rather than an organization-wide transformation. It is a disconnect that costs companies millions and endangers their reputations. Take Zillow, for example. Its mishandling of AI-driven property valuations cost the company more than $300 million and triggered a sharp decline in its share price. The lesson is clear: AI is not just a technology problem. It is a leadership challenge, a cultural puzzle, and an emotional reckoning.
What kind of leadership does AI require?
Modern AI systems do not just perform tasks; they shape how organizations think and operate. These tools challenge existing workflows, decision-making hierarchies, and job functions. So why do organizations keep appointing only traditional CIOs and CTOs to lead this charge? As authors Faisal Hoque and Thomas H. Davenport argue, it is time for a new leadership role: the Chief Innovation and Transformation Officer (CITO).
The CITO is not just a technical visionary. The role draws on emotional intelligence, behavioral science, and strategic foresight; it is about managing the human-machine partnership. In organizations where AI truly succeeds, leaders rethink traditional roles, reskill workers, and foster psychological safety. They know that while machines can automate tasks, it is people who adapt and innovate. Leadership must therefore go beyond technology to cultivate a culture in which human-machine collaboration is seamless and meaningful.
As Rasmus Hougaard and Jacqueline Carter argue in their book, "More Human," the goal is not only to make AI smarter; it is to make leadership more human. By cultivating awareness, wisdom, and compassion, leaders can delegate tasks to AI without delegating the human essence of their work. AI should strengthen leadership, not replace it.
Why aren't large-scale AI transformations taking hold?
Despite the massive adoption of generative AI – which gained more than 100 million users within months – most companies have not undergone widespread transformation. Why? Because they aim too big, too fast. As Melissa Webster and George Westerman pointed out in MIT Sloan Management Review, real progress is being made through "small t" transformations.
These are incremental, targeted projects that improve processes without trying to restructure the entire business model. Think of AI copilots that assist call-center representatives in real time, or back-office tools that automate paperwork. According to Paul Baier and John J. Sviokla, such implementations can create a compounding effect on organizational learning. When machines and humans collaborate, they generate knowledge that feeds future innovation.
Yet many organizations skip this crucial step. They dream of AI revolutionizing every corner of their operations but fail to build the foundational culture and workflows needed to support even a modest transformation. In essence, they are trying to build a skyscraper without laying the bricks first.
Why Culture Eats AI Strategy for Breakfast
"Culture eats strategy for breakfast," Peter Drucker famously said, and nowhere is this more obvious than in AI adoption. Leaders often underestimate how deeply rooted behaviors and beliefs must change for AI to take root. Ganes Kesari, in his article for MIT SMR, noted that more than 57% of companies struggle to build a data-driven culture despite investing in AI tools.
Creating this kind of culture involves far more than buying software. It requires fostering an environment in which data is trusted and routinely used to inform decisions. That means rewarding data-driven behavior, investing in training, and creating feedback loops that reinforce the value of AI outcomes. Crucially, it means ensuring that every level of the organization – from front-line workers to C-suite executives – understands and embraces the change.
Sohrab Rahimi, a senior technologist at McKinsey's QuantumBlack, echoed this sentiment, emphasizing the value of designing AI solutions in collaboration with end users. When AI tools are integrated into daily workflows and account for real edge cases, adoption rates rise. According to Rahimi, taking the time to co-design with users is what leads to long-term success. "Reliability is not just a feature," he said. "It's a mindset."
What should businesses do differently now?
- Appoint cross-functional AI leaders who understand both technology and human behavior. The Chief Innovation and Transformation Officer is not just a new title—it’s a necessary evolution.
- Focus on “small t” wins to build momentum and trust in AI. These projects are the training wheels for larger transformation.
- Invest in culture change as much as you invest in tools. Without a data-driven mindset, even the best AI tools will gather dust.
- Evaluate and iterate. As Rama Ramakrishnan noted, underinvesting in evaluation processes can derail even the most promising GenAI applications (see the sketch below).
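To make "evaluate and iterate" concrete, here is a minimal, hypothetical sketch of a lightweight evaluation harness for a GenAI application. The generate_answer function, the test cases, and the keyword-based scoring are illustrative assumptions, not a method described by Ramakrishnan or any source cited here.

```python
# Hypothetical sketch of a lightweight GenAI evaluation harness:
# run a fixed set of test prompts through the application and check each
# answer against required keywords, so regressions surface before release.
# generate_answer() is a placeholder for whatever model call the app uses.

def generate_answer(prompt: str) -> str:
    # Placeholder: in a real system this would call the GenAI application.
    return "Our refund policy allows returns within 30 days of purchase."

TEST_CASES = [
    {"prompt": "What is the refund window?", "required": ["30 days"]},
    {"prompt": "Can I return an opened item?", "required": ["return"]},
]

def evaluate() -> float:
    passed = 0
    for case in TEST_CASES:
        answer = generate_answer(case["prompt"]).lower()
        if all(keyword.lower() in answer for keyword in case["required"]):
            passed += 1
        else:
            print(f"FAILED: {case['prompt']!r} -> {answer!r}")
    score = passed / len(TEST_CASES)
    print(f"Passed {passed}/{len(TEST_CASES)} checks ({score:.0%})")
    return score

if __name__ == "__main__":
    evaluate()
```

Even a simple harness like this, run on every iteration, turns "evaluate and iterate" from a slogan into a habit.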
Companies must also accept that AI strategy is a moving target. GenAI capabilities are evolving rapidly; what seemed impossible last spring could be table stakes by fall. Flexibility, therefore, is not optional – it is the backbone of a sustainable AI strategy.
How should leaders manage the ethics and risks of AI?
Risk management and ethical concerns remain among the thorniest challenges of AI strategy. Leaders must now grapple with issues ranging from algorithmic bias to regulatory compliance and the confidentiality of employee data. As Koenraad Schelfaut and Prashant P. Shukla of Accenture pointed out, tackling technical debt is not about eliminating it but about managing it intelligently.
Similarly, AI risk management is not about banning tools or stifling innovation. According to Nick van der Meulen and Barbara H. Wixom, banning BYOAI (Bring Your Own AI) tools can actually increase risk by pushing employees to work around governance frameworks. Instead, organizations should focus on developing clear, adaptable policies that encourage responsible experimentation. Trust, again, plays a central role. When employees trust the organization to support thoughtful use of AI, they are more likely to engage in ways that drive innovation without compromising security.
Leaders must also invest in causal ML – a form of machine learning that answers "what if" questions. As Stefan Feuerriegel and his colleagues have pointed out, causal ML can guide decision-making in scenarios where outcomes are uncertain, such as pricing strategies or marketing effectiveness. This is another example of how philosophy – yes, philosophy – becomes a critical component of AI strategy.
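To illustrate the "what if" idea, here is a minimal, hypothetical sketch of causal ML in Python: a simple T-learner estimating how offering a discount changes conversion, on synthetic data. The feature names, numbers, and use of scikit-learn are assumptions for illustration, not details from Feuerriegel's work.

```python
# Hypothetical causal ML sketch: estimate the effect of a discount
# (treatment) on conversion with a T-learner on synthetic data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 5_000

# Customer features: past spend and tenure in months (synthetic).
past_spend = rng.gamma(shape=2.0, scale=50.0, size=n)
tenure = rng.integers(1, 60, size=n)
X = np.column_stack([past_spend, tenure])

# Randomly assigned discount (as in an A/B test) and observed conversion.
discount = rng.integers(0, 2, size=n)
base_rate = 0.05 + 0.002 * tenure
lift = 0.10 * (past_spend < 40)  # discount helps low spenders most
converted = rng.random(n) < np.clip(base_rate + lift * discount, 0, 1)

# T-learner: fit one outcome model per treatment arm...
model_treated = GradientBoostingClassifier().fit(X[discount == 1], converted[discount == 1])
model_control = GradientBoostingClassifier().fit(X[discount == 0], converted[discount == 0])

# ...then the estimated individual effect is the difference in predicted
# conversion probability with vs. without the discount (the "what if" answer).
uplift = model_treated.predict_proba(X)[:, 1] - model_control.predict_proba(X)[:, 1]
print(f"Average estimated uplift from the discount: {uplift.mean():.3f}")
```

A model like this supports decisions ("offer the discount only where the estimated uplift is positive") rather than mere predictions, which is the point of investing in causal methods.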
Will AI make work more or less human?
This may be the most important question of all. As Rasmus Hougaard and Jacqueline Carter point out in "More Human," the impact of AI on work depends entirely on how we use it. Used thoughtfully, AI can strip away repetitive tasks, speed up routine work, and free employees to focus on what really matters. But without emotional intelligence and human-centered design, AI risks becoming a dehumanizing force.
As one McKinsey leader recalled of a rocky deployment, it was not only the technology that needed fixing; it was the team's mindset. By investing in psychological safety and open communication, they turned a near-failure into a success. The takeaway? AI transformation is not a sprint. It is a marathon, and the winners will be those who put people at the heart of their strategy.
So if your AI strategy feels stuck, don't blame the tools. Look inward. Examine your culture. Renew your leadership. Because in 2025, success with AI is not only about what your machines can do; it is about what your people can become.