
AI Won’t Fix a Broken Business. It Will Just Break It Faster

Before we can integrate AI-based systems into our business, we need a much deeper and more explicit understanding of how our business actually runs.

Gee Ranasinha  /   March 6, 2026   /   Marketing

Most of the AI-implementation conversations happening right now focus on what we might gain: speed, efficiency, cost reduction, competitive advantage, and so on. I don’t think many people would disagree that these are clear, tangible business benefits, and organizations treating AI as a passing fad are clearly making an expensive mistake.

But there’s another version of this question that almost never gets asked: What might we be giving away in the process, and would we even know?

Fast is only an advantage if you were already headed in the right direction

One of the things AI does really well, among many, is accelerate whatever is already happening. Feed it a well-understood, clearly documented process and it will run that process faster, at scale, with less friction. But feed it something poorly defined, inconsistently executed, or built around assumptions nobody has tested in years, and it will do that at scale too.

AI amplifies both good and bad inputs. An error a human makes once can be amplified thousands of times before anyone notices, because the machine never has an off day that prompts a second look. AIIM’s industry research showed that 80% of organizations believed their data was AI-ready before deployment, yet 95% of those same businesses ran into significant data problems once they actually tried to deploy.

We tend to evaluate AI through an economic lens. Faster execution, lower costs, improved margins. FTI Consulting’s analysis of AI transformation describes this as the dominant framing among organizations that later discover they were asking the wrong questions. They measured the speed of the output without first checking whether the output was worth producing quickly. Businesses that had spent years managing around process fragility discover that AI removes the humans who were, it turns out, actually doing all the managing.

McKinsey’s State of AI research found that 88% of organizations now regularly use AI in at least one business function, while only around 5.5% are driving significant value from the deployment. Is underperforming tech the root cause of that huge gap? Nope – the real reason is that businesses deployed AI into processes they hadn’t understood clearly enough for acceleration to help.

The question to ask before deploying AI into a workflow isn’t which tool to use. It’s whether we understand the underlying business processes – our business “operating system”, if you will – well enough to explain them to a machine. Because if the answer is no, the machine will fill in the blanks itself, consistently, and at scale.

The sequence most businesses have backwards

For most of the past century, businesses built themselves around a particular logical process. We hire the people, design the workflow around them, then buy the systems and/or software needed to support it. This is still how most organizations operate, which is precisely the problem.

The reason is that the sequence which served us so well for the past 100+ years has flipped, and today runs the other way. It now makes far more sense to start with what the technology can actually, reliably do, and work backwards from there. What that capability makes possible in the real world determines how the workflow should be structured, which in turn determines what kind of organization is needed to run it. From there, we can see clearly the types of people we need to hire, how to train them, and what kind of underlying culture glues the whole thing together.

Most businesses are skipping straight from “here’s the tool” to “how do we fit this into what we already do?” That’s the path of least resistance, and it explains why so many AI deployments produce incremental gains rather than structural ones. Making a broken process 10% faster isn’t transformational. It’s a more efficient version of the original problem. Using tech to emulate the way we did things with people is doing things exactly the wrong way round.

The AI-deployment success stories we read about come from businesses that treated implementing AI capability as a reason to ask a much more fundamental business question: what would this organization look like if we were building it now, knowing what the technology can actually do? That’s a very different question from “where can we shoehorn AI into what we’re already doing?” It requires thinking about the shape of the operation itself. Which functions look different when AI is doing the reasoning-intensive work? Which roles exist now because the process required them, rather than because the outcome requires them? Where are we staffing for process friction that AI removes?

This is both an organizational design question, and a cultural question. Businesses that treat it primarily as a technology question tend to end up with expensive software sitting inside structures that were never designed to use it well. Meanwhile, the characteristics that make AI deployment actually work, such as the willingness to rethink assumptions, or being comfortable with uncertainty, are all cultural. You won’t find that kind of stuff in the release notes.

The economists and finance people won’t like me saying this (since they’re looking at AI purely as a way to reduce costs and maximize profits), but the more AI capability a business deploys, the more the quality of its people matters. A team that’s clear on what it’s trying to accomplish and understands the strategy well enough to make good decisions about what to hand to a model and what to keep human, will use AI capability in ways that a disengaged, confused, or poorly led team simply won’t. The gap between those two outcomes is larger with AI than it was without it.

The knowledge nobody thought to write down

Whether they’ll admit it or not, most businesses are running on knowledge and process that has never been formally documented. There’s a huge difference between the way a client relationship actually gets managed, for example, compared to how the account management template says it should. Research estimates that around 90% of the total knowledge in an organization exists in this tacit form. It lives in people, rather than documents, because it was developed over the years through experience rather than taught through formal instruction. That’s also what makes it hard to replicate. While the competition can copy our pricing model, our service structure, or our sales spiel, it’s much harder to emulate 15 years of learned judgment about how to handle a particular kind of client problem.

So when businesses feel compelled to implement AI in some part of their organization, it forces them to ask questions, and deliberate over processes, that most of them have never fully worked through. When we automate a process that runs on undocumented expertise, we’re making a choice, probably without realizing we’re making it. We can do the slow, painful work of surfacing and documenting what we actually know before we automate, or we can let the AI infer the process from available data and just get on with it. Yes, option two gets us to launch faster, but it’s also how institutional knowledge gets replaced by an approximation of itself that the organization no longer owns, and didn’t notice when it left the building.

Research into knowledge management at engineering and science companies found that 40–60% of valuable insights were not being effectively reused, mainly because nobody could locate them when needed. In those cases, the knowledge existed somewhere in the organization, but was inaccessible enough to be practically useless. AI solves the accessibility problem, which is a real benefit. But what it also does, in the wrong conditions, is solve it on behalf of a vendor’s platform rather than the organization itself.

What AI does to a business model that nobody warned us about

Deploying AI doesn’t just change how efficiently we execute a business model. In many cases, it changes the model itself, as a side effect. MIT Sloan’s research on how AI reshapes organizational value creation draws a distinction between businesses that design such shifts deliberately, and those that arrive there by accident. The difference tends to show up much later than you’d expect, which is part of what makes the accidental version so costly.

As AI makes operational efficiency standard rather than differentiating, pricing compression follows, and margin defense shifts entirely toward proprietary assets, domain expertise, and brand. The businesses with something left to defend in that environment are the ones whose competitive advantage can’t be replicated by a model available to everyone. The businesses in trouble are the ones whose advantage was, without them realizing it, a proprietary way of operating that they handed to a shared system.

A big part of the client value of many B2B businesses, especially those selling services, lies in how they think about a client problem, not just what they produce in response to it. That accumulated thinking isn’t a feature of what they’re selling, it’s the actual product. When it gets encoded into a third-party AI platform through data submitted in prompts, those businesses haven’t just adopted a tool. They’ve spent their own money training something that, unless they’re careful, will eventually be available to their competitors on the same terms.

This is where the vendor dependency question becomes strategic rather than technical. CTO Magazine’s analysis of AI platform risk describes the practical reality: proprietary APIs, closed data formats, and training pipelines that can’t be extracted without significant cost. When a vendor changes its terms, alters pricing, or simply fails, organizations built on closed systems have limited options. The language used in enterprise AI architecture research to describe this point is pretty ominous: organizations have outsourced not just their infrastructure, but their competitive intelligence itself. The keys to the car we’re driving now belong to someone else.

Asking the harder question first

Organizations gaining headway with AI deployment first interrogated their own processes, in detail, and without concern for what skeletons they might unearth. The goal was to work out whether the underlying (manual) processes were actually sound, and whether they could be articulated without relying on people whose institutional memory would leave with them. That kind of interrogation surfaces things most of us would rather sweep under the rug: workflows held together with Scotch tape and string, software running on Jurassic machines because it doesn’t support a newer OS, or because the one person who understood it left five years ago. We pretend to customers, and to ourselves, that we’re squeaky clean and state-of-the-art. The reality is decision logic that was never as consistent as we’d all assumed, and stages in the customer experience that worked only because of relationships nobody bothered to formalize.

Finding those things before AI touches them isn’t fun, but it’s a whole lot better than finding out this stuff after the fact once the dysfunction is automated and running continuously. MIT Technology Review’s survey on AI and industry disruption found that 60% of respondents expect generative AI to substantially reshape their industry within five years. To me, for most businesses, that disruption is already internal. It’s the exposure of process fragility that was always there, now visible because a machine is running it faster than the workarounds can keep up.

The sovereignty question deserves a mention here. Every piece of organizational knowledge fed into a closed AI system is a potential asset transfer. Not necessarily large, not necessarily intentional, but still a transfer. Before deploying, it’s worth knowing which parts of our business model are defensibly proprietary, which can survive commoditization, and which would quietly disappear into a vendor’s platform and not come back. Only a tiny number of businesses are bothering to have that conversation right now. And the pressure toward commoditization won’t wait until the CEO or the Board get comfortable with the question.

To be clear: I’m not arguing against AI deployment – far from it, in fact. We’ve been helping B2B businesses implement AI-powered systems intelligence and customer value tools for some years. But FTI Consulting’s framing of “AI intuition debt” highlights a real risk on the other side: organizations that sit this out are accumulating a compounding deficit that will be expensive to close.

The case for moving forward is clear. But the case for moving with our eyes wide open is even clearer.



Blowing our own trumpet

Many thanks to the team at Feedspot for selecting us as one of the Top Small Business Blogs of 2026. It’s really appreciated.

ABOUT THE AUTHOR

photo of Gee Ranasinha, CEO of marketing agency KEXINO

Gee Ranasinha is CEO and founder of KEXINO. He's been a marketer since the days of 56K modems and AOL CDs, and lectures on marketing and behavioral science at two European business schools. An international speaker at various conferences and events, Gee was noted as one of the top 100 global business influencers by sage.com (those wonderful people who make financial software).

Originally from London, today Gee lives in a world of his own in Strasbourg, France, tolerated by his wife and teenage son.

Find out more about Gee at kexino.com/gee-ranasinha. Follow him on LinkedIn at linkedin.com/in/ranasinha or Instagram at instagram.com/wearekexino.