
Unfortunately, most corporate strategy meetings follow a predictable script.
A business team walks into the meeting room with a 127-slide deck packed with regression models, sensitivity analyses, or Monte Carlo simulations. The only goal seems to be to justify decisions that could be tested in a couple of weeks using real people, people who represent our buyers. Somewhere along the way we became convinced that “difficulty” equals “rigor”. We’ve concluded that piling on complexity makes decisions trustworthy, and that anything achieved without visible struggle lacks any real value.
Behavioral scientists call this the effort heuristic. When we can’t measure quality directly, we use effort as a proxy for it. What started as a mental shortcut has metastasized into organizational doctrine. We’ve built entire company structures to reward the performance of effort over results, systematically punishing the fast, iterative thinking that actually wins in markets.
In the real world, buyers don’t care how we arrived at our pricing strategy. As far as they’re concerned, a two-year regression analysis is the same as a quick sketch drawn on a napkin. What they actually want to know is whether the price feels right. Shareholders don’t pay for elegant methodology; they pay for outcomes. Yet inside most businesses, we behave in exactly the opposite way.
Why we build complexity when simplicity works
An 18-month digital transformation involving multiple vendors and endless stakeholder meetings looks safer than it is. When something fails, responsibility starts to diffuse faster than a fart in an elevator. It’s the vendor’s problem. Or the consultant screwed up. Or the cross-functional team that couldn’t align. At the end of the day, nobody’s career takes a direct hit, since we’re all looking to pin the blame elsewhere.
Compare that to a three-week pricing test done in the real world. If it fails, there’s nowhere to hide. The failure sits there as clear as day for all to see, immediately traceable to specific people who made specific bets.
So what do we do? We avoid running those kinds of tests!
We avoid the small, reversible decisions that teach us faster than competitors. Instead we build elaborate models and justify them through meetings. A customer observation like “people abandon carts because they can’t find the checkout button” gets elevated into heat maps and journey funnels and engagement matrices. Instead of adding clarity, we’re adding bloat.
What happened when a brilliant model met reality
While their competitors poured millions into predictive modeling, Airbnb decided to zig when everyone else was zagging. Instead of running one ginormous experiment with a gazillion variables, they ran thousands of tiny, quick-to-run tests. They watched what customers actually did instead of modeling what they might do. When the experiments revealed something significant, they adjusted. When they didn’t, they left things alone and moved on. The decision-making hierarchy was amazingly flat: there were no lengthy committee meetings full of people with little else to do than validate the model, no six-month projection cycles. All they had was real-world user learning that compounded quarter after quarter. The rest is history.
Starbucks didn’t scale by running endless customer surveys. Amazon didn’t build customer obsession through mathematical models. These businesses won because they made fast, reversible bets (Amazon’s famous “two-way door” decisions) and learned from actual user behavior instead of projected behavior.
The tyranny of quantitative thinking
Too many businesses continue to treat quantitative data as evidence and qualitative data as anecdote. Interviews, observations, and direct customer feedback all get filed under “soft”, because they’re difficult to quantify and measure. Finance demands five-year models before approving experiments, even though those models rest on “waving a wet finger in the air” assumptions more fragile than the actual initiatives they were meant to evaluate. Meanwhile, the actual competitive advantages that move markets often can’t be put in a spreadsheet.
Suppose the marketing team discovers, through three customer conversations, that a specific phrase makes people more likely to convert. They bring this to the leadership meeting and watch as everyone nods politely. Then they bring up a dashboard with engagement metrics from 100,000 users and watch as everyone in the room leans forward. The dataset is noisier, and the insight is weaker. But because it’s got a bunch of big numbers and it’s been done in a pseudo-scientific manner (look! it’s got pivot tables and everything!) it looks more legitimate and is taken more seriously.
What rigor actually means
Sure, there are many decisions that warrant complexity. I don’t want my airline pilot to take a punt on doing something different this time “…just to see what happens.” Safety systems, regulatory environments, or bets we can’t reverse are all instances where ‘giving it a go’ probably isn’t the best move. But the thing is, most business decisions aren’t like that. We’re applying nuclear-reactor level rigor and process complexity to banner headlines and pricing adjustments. We treat every choice like it’s irreversible when most of them aren’t.
Real rigor isn’t about data volume or methodological sophistication. It’s about clear assumptions and fast learning. A hypothesis such as, “This banner increases sign-ups by 15% because it removes confusion about the next step” is more rigorous than a 100-slide model built on stacked assumptions. The first one tests in days, while the other one locks us into thinking in circles for months. Businesses that act on incomplete information, make reversible bets quickly, and commit to irreversible bets deliberately are the ones that move faster than those waiting for a level of certainty that never arrives.
The real cost of modeling instead of testing
Six months modeling a decision doesn’t just cost budget, it costs learning cycles. A competitor running twelve small experiments in that same window learns more than we do. Their judgment improves, as does their competitive position. The more iterations we can run in a given space of time, the more we can learn. Fail fast, fail often, right?
Markets don’t reward thorough analysis or impressive methodology. Speed and timing matter far more. Every dollar we spend justifying decisions through “complexity theater” is a dollar spent preventing us from learning what works. The business that figures things out faster, wins.
Not because they’re smarter, but because they’ve had more practice.
ABOUT THE AUTHOR
Gee Ranasinha is CEO and founder of KEXINO. He's been a marketer since the days of 56K modems and AOL CDs, and lectures on marketing and behavioral science at two European business schools. An international speaker at various conferences and events, Gee was noted as one of the top 100 global business influencers by sage.com (those wonderful people who make financial software).
Originally from London, today Gee lives in a world of his own in Strasbourg, France, tolerated by his wife and teenage son.
Find out more about Gee at kexino.com/gee-ranasinha. Follow him on LinkedIn at linkedin.com/in/ranasinha or Instagram at instagram.com/wearekexino.