
Selling Something No One Understands

Lessons from enterprise AI.

The hardest part of building RootCause hasn't been the algorithms. It hasn't been the engineering. It hasn't even been competing with companies a thousand times our size.

The hardest part has been this: explaining what we do to people who don't have the conceptual machinery to understand it.

I don't mean they're stupid. I mean they literally lack the mental category for what we've built. They can't reject it, because rejection requires understanding. They can't evaluate it, because evaluation requires a frame of reference. They just... can't see it.

And that's far harder to overcome than skepticism.

The Translation Problem

I've had hundreds of meetings where the same pattern plays out.

I start explaining what causal AI actually does-how we discover the structure of cause and effect in enterprise data, how we can simulate interventions before you make them, how we answer "why" questions that traditional analytics can't touch.

About five minutes in, someone interrupts.

"So it's like a dashboard?"

"So it's predictive analytics?"

"So it's an AI copilot for business intelligence?"

And from that moment, the conversation is effectively over. Not because they've rejected the idea-because they've translated it into something familiar so their brain can stop working.

This isn't a character flaw. It's how human cognition operates. We anchor new concepts to existing ones. The problem is that some things genuinely don't fit existing categories, and forcing them to fit destroys exactly what makes them valuable.

To protect confidentiality, I've intentionally replaced specific terms, company details, and industries in the stories that follow.

I watched this happen in real time during a pilot. We'd built a causal model of their business and found something counterintuitive: their new channel wasn't cannibalizing their traditional business. The traditional business was declining due to secular trends, and the new channel was backfilling lost customers.

Their analyst looked at our output and asked: "But what's the correlation coefficient?"

He wasn't being difficult. He was genuinely trying to understand. But he only had one mental model for analyzing relationships in data-regression-and our output didn't fit that model. He kept asking questions that assumed we'd done a fancier version of what he already knew how to do.

We hadn't. We'd done something categorically different. But I couldn't get him to see it, because seeing it required abandoning the frame he'd used his entire career.

Why Investors Pass

For the first year of fundraising, I thought investors were passing because they didn't believe in the market, or didn't trust the team, or had concerns about the technology.

Then I realized something more frustrating: they were passing because they didn't understand what they were looking at.

When I said "we solved the computational complexity problem that's limited causal AI for forty years," they heard "we have better algorithms." When I said "we achieved O(n log n) where the field has been stuck at O(n²)," they heard "it's faster." When I said "we can process datasets that competitors can't even load," they heard "it scales."

All of these translations are technically true. None of them capture what actually happened.

What actually happened is that we broke through a fundamental barrier that the entire field had assumed was permanent. We didn't make causal discovery incrementally better-we made it possible at enterprise scale for the first time. That's not a feature improvement. That's a category creation.

But VCs pattern-match. They have to-they see thousands of companies. And there's no existing pattern for "solved a problem everyone assumed was unsolvable." So they reach for the closest available template: "claims to have better technology than competitors." And that template comes with a built-in skepticism, because everyone claims to have better technology.

One investor told me, after passing, that our positioning was confusing. "You're either a BI tool or a data science platform," he said. "Pick one."

But we're neither. We're something that doesn't have a name yet. And I couldn't pick a lane without lying about what we'd built.

The Regression Mental Model

Here's the specific shape of the problem in our space.

Almost everyone who works with data professionally has been trained to think in terms of regression. When they hear "analyzing relationships between variables," their brain automatically loads the regression framework: dependent variable, independent variables, coefficients, p-values, R-squared.

This framework is so deeply embedded that it's invisible. It's not a conscious choice-it's the water they swim in.

Causal inference is a fundamentally different framework. It asks different questions. It makes different assumptions. It produces different outputs. You can't translate between the two without losing essential information.

When I show someone a causal graph, they often ask: "What are the coefficients?" There aren't coefficients. That's not how this works.

When I explain that we can simulate interventions, they ask: "So you're predicting what would happen if we change X?" Sort of, but not in the way you're thinking. We're computing the causal effect of an intervention, which is mathematically distinct from predicting an outcome from observed correlations: conditioning on X is not the same as intervening on X.

When I say we can identify confounders, they ask: "So you're controlling for them like in a regression?" No. Controlling for a variable in regression and identifying a confounder in a causal model are completely different operations with completely different implications.

Each of these conversations requires me to first unteach what they think they know, then teach a new framework, then show how our system implements that framework. This takes hours. Sometimes days. And at the end, many people nod politely and go back to thinking in regression terms, because that's what they know.
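The conditioning-versus-intervening distinction can be made concrete with a toy simulation. This is an illustration I'm sketching for this essay, not our actual system, and every variable in it is made up: a hidden factor Z drives both X and Y, so regression sees a strong association between them even though intervening on X does nothing.

```python
import random

random.seed(0)
n = 100_000

# Observational world: Z causes both X and Y; X does NOT cause Y.
data = []
for _ in range(n):
    z = random.gauss(0, 1)
    x = z + random.gauss(0, 0.1)
    y = z + random.gauss(0, 0.1)
    data.append((x, y))

# Regression view: the least-squares slope of Y on X looks like a strong effect.
mx = sum(x for x, _ in data) / n
my = sum(y for _, y in data) / n
slope = (sum((x - mx) * (y - my) for x, y in data)
         / sum((x - mx) ** 2 for x, _ in data))

# Causal view: simulate the intervention do(X := x0). Setting X by fiat severs
# the Z -> X edge; Y's own mechanism (Y depends only on Z) is unchanged, so x0
# never enters Y's equation.
def mean_y_under_do(x0: float) -> float:
    return sum(random.gauss(0, 1) + random.gauss(0, 0.1) for _ in range(n)) / n

effect = mean_y_under_do(2.0) - mean_y_under_do(0.0)

print(f"regression slope: {slope:.2f}")         # close to 1: spurious association
print(f"causal effect of do(X): {effect:.2f}")  # close to 0: no true effect
```

Same data, two frameworks, opposite answers. That gap is the thing the regression mental model can't see.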

The Real Product

When you're building something in an established category, you're selling features. Faster, cheaper, easier, more accurate. The buyer already knows what the thing is-you're just arguing about specifications.

When you're building something in a new category, you're not selling features. You're selling a different way of understanding the problem.

This is why our best customers aren't the ones who were already looking for causal AI. Our best customers are the ones who had a problem they couldn't solve, tried everything else, failed, and were desperate enough to consider that maybe the problem wasn't the tools-maybe it was the entire approach.

A large manufacturer came to us because they had a production line problem that had been unsolved for months. Their data science team had thrown every standard technique at it. Nothing worked. When we showed them a causal graph of their process and identified the root cause in an hour, they didn't ask about coefficients. They asked: "How did you find that?"

That's the conversation you want. Not "how does this compare to what I already use," but "how is this even possible."

The problem is, most buyers aren't desperate. They have existing tools that sort of work. They have processes that are good enough. They're not looking for a paradigm shift-they're looking for incremental improvement.

And when you offer them a paradigm shift, it registers as friction, not value.

The Consultant Problem

Here's something that took me too long to understand: consultants are not our competitors. They're our obstacle.

McKinsey and BCG and the rest have spent decades training executives to think in a particular way. They've created a shared language-frameworks, two-by-twos, driver trees-that feels sophisticated but is actually quite shallow.

This language optimizes for one thing: making complex situations feel manageable. It does this by imposing structure that may or may not reflect reality. The map is clean. The territory is messy. The consultant's job is to make the map feel authoritative enough that no one looks too closely at the territory.

Causal AI does the opposite. It exposes the actual structure of the territory. And that structure is often uncomfortable. It reveals that the things executives think are driving outcomes aren't. It shows that interventions they've invested in don't work. It surfaces confounders they didn't know existed.

This is valuable-genuinely, enormously valuable-but it's not what executives have been trained to want. They've been trained to want clarity and confidence. We provide uncertainty quantification and conditional answers.

"This intervention will increase revenue by 8%, with 95% confidence the true effect is between 6% and 11%, assuming the competitive environment doesn't shift significantly."

That's an honest answer. It's also much harder to act on than "implement Strategy A to achieve 10% growth."
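For the curious, an interval like that can come from something as simple as a percentile bootstrap over estimated effects. This is a toy illustration with invented numbers (the lift distribution here stands in for the output of an effect estimator), not how any production system works:

```python
import random

random.seed(1)

# Hypothetical per-customer revenue lifts (in %) from an intervention,
# standing in for the output of a causal effect estimator.
lifts = [random.gauss(8.0, 12.0) for _ in range(500)]

def mean(xs):
    return sum(xs) / len(xs)

# Percentile bootstrap: resample the lifts with replacement many times and
# read the 2.5th and 97.5th percentiles of the resampled means.
boot_means = []
for _ in range(2_000):
    resample = [random.choice(lifts) for _ in lifts]
    boot_means.append(mean(resample))
boot_means.sort()

lo = boot_means[int(0.025 * len(boot_means))]
hi = boot_means[int(0.975 * len(boot_means))]
print(f"estimated lift: {mean(lifts):.1f}%  (95% CI: {lo:.1f}% to {hi:.1f}%)")
```

The interval is the honesty. Collapsing it to a single number is how you get the consultant's answer.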

The consultant's answer is wrong. Ours is right. But theirs is easier to put in a board deck.

The Emotional Reality

No one tells you how isolating this is.

You spend years building something you believe in deeply. You know it works-you've seen it work, you have the case studies, you have the benchmarks. But most of your conversations aren't about whether it works. They're about what it even is.

You explain the same concepts hundreds of times. You watch people nod along and then ask questions that reveal they didn't understand anything. You hear your own positioning described back to you in ways that make you wince.

And the worst part: sometimes you start to doubt yourself. Maybe the market is right. Maybe if this many people don't get it, the problem is the product, not the market.

This is where most deep-tech founders give up. Not because they stop believing, but because they get tired. Tired of explaining. Tired of being misunderstood. Tired of watching worse solutions win because they're easier to comprehend.

The temptation to simplify is enormous. Just call it a dashboard. Just say it's predictive analytics. Just give them the story they already know how to process.

But if you do that, you've killed the thing that made it valuable in the first place.

Why Deep-Tech Fails

Most deep-tech startups don't fail because their technology doesn't work.

They fail because the market doesn't have the language to describe what they've built. Buyers don't have the frameworks to evaluate it. Investors don't have the comparables to value it. And the founders don't have the patience to wait for the world to catch up.

So they pivot. They simplify. They rebrand into something familiar.

And the original insight-the thing that was genuinely new, genuinely valuable-dies quietly. Sometimes another company reinvents it five years later and gets all the credit, because by then the market is ready.

Timing isn't just about market conditions. It's about whether the conceptual infrastructure exists for people to understand what you're offering.

We were fortunate. The AI hype cycle, for all its downsides, created an opening. Executives started asking questions about causation versus correlation. Regulators started demanding explainability. The failures of black-box ML became visible enough that people started looking for alternatives.

Three years earlier, no one would have listened. Three years later, someone else might have built it first.

What I've Learned

After years of selling something no one understands, here's what I know:

The first conversation is never about closing. It's about planting a seed. You're trying to create enough cognitive dissonance that they go home and think about it. The sale happens later, after they've realized their existing tools can't answer a question they now know to ask.

Find the desperate ones. People with unsolved problems are willing to learn new frameworks. People with "good enough" solutions aren't. Don't waste time on the latter.

The demo is everything. Abstract explanations fail. Showing someone their own data, with relationships they didn't know existed, creates the paradigm shift that words can't.

Don't fight the translation. When someone says "so it's like a dashboard," don't say "no, it's completely different." Say "yes, but imagine the dashboard could tell you why the numbers are what they are, and what would happen if you changed them." Start from their frame and expand it.

Protect the core. You can simplify the message without simplifying the product. You can use familiar words without building familiar things. The goal is translation, not corruption.

Patience is a competitive advantage. Most founders in new categories give up too early. The ones who win are the ones who can hold a complex model of reality in their heads long enough for the world to catch up.

Why I Keep Doing It

Despite everything, I still prefer building things no one understands.

Because once they do understand-once the shift happens, once the frame changes-you've done something that actually matters. You haven't given them a better tool for the same job. You've changed what jobs they think are possible.

And that kind of product lasts. Everything else is just UI on top of assumptions that will eventually be obsolete.

The world catches up eventually. The question is whether you can survive long enough for it to happen.

So far, we have.

Ayman Elhalwagy is the CEO and co-founder of RootCause.ai. He's spent four years explaining causal AI to people who think in regression, and he's getting better at it.
