👋 Hi, I’m Andre and welcome to my newsletter Data Driven VC which is all about becoming a better investor with Data & AI.
📣 Secure one of the last 33%-discounted SUPER EARLY BIRD tickets for our Virtual DDVC Summit (23-25th March) here to learn how leading investment firms transform their operations with alternative data, AI, and automation.
Brought to you by Affinity - 7 Modern Workflows to Win Deals Faster
Most private capital firms are sitting on networks worth millions in deal flow. The firms pulling ahead have built systems to actually activate those relationships.
In this new guide, Affinity breaks down 7 real workflows used by firms like BlackRock ($13T AUM), Bessemer Venture Partners, SpeedInvest (€1.2B), and Notable Capital to win proprietary deals faster, prevent critical relationships from going cold, and reclaim hundreds of hours per year.
The guide goes beyond theory, showing exactly what worked, what didn’t, and why these systems matter heading into 2026.
Every investor I talk to right now is asking some version of the same question: where do moats come from in the age of AI?
Traditional SaaS defensibility was built on data silos, switching costs, and learned interfaces. All three are eroding.
So what's left?
I've been thinking about this a lot, both as an investor evaluating software deals and as someone who has spent the past months rebuilding my own workflows from scratch with AI-native tools.
And I think the answer requires understanding an evolution that's playing out in three distinct stages, each one dissolving the moats of the stage before it.
Here's my framework.
Stage 1: Vertical SaaS and the Era of Data Silos
The first generation of enterprise software created value by capturing and organizing data that was previously trapped in spreadsheets, emails, and people's heads. CRMs, ERPs, HRIS platforms, deal flow tools, portfolio monitoring systems.
Each one became a system of record for a specific domain.
The moat was straightforward: once your data lived inside Salesforce or HubSpot or Carta, switching was painful.
Not because the interface was hard to learn (it was, but that's a weak moat), but because the data itself was locked in. Your pipeline history, your customer interactions, your portfolio metrics, all structured in a proprietary format that didn't talk to anything else.
That’s what we call backward-looking lock-in effects.
But it also created a fundamental problem: every tool became an island. Your CRM didn't know what was happening in your calendar. Your deal flow tool didn't know what your partners were discussing on Slack. Your portfolio monitoring system couldn't see the LP emails that provided context for why certain metrics mattered more than others.
The data was captured, but the context was fragmented.
For years, this didn't matter much. Humans were the integration layer. You, the investor or operator, were the one who held the full picture in your head and moved between tools to synthesize information.
The value of each tool was measured by how well it handled its specific domain, not by how well it connected to everything else.
AI changed that equation entirely.

Join 900+ investors in our free Slack group as we automate our VC job end-to-end with AI. Live experiment. Full transparency.

Stage 2: The AI Context Problem
When vendors started integrating AI into their vertical solutions (and nearly every software company has by now), they hit a wall that most didn't anticipate: AI is only as good as the context it can see.
A CRM with an AI copilot can summarize your pipeline and draft outreach emails. But it can only work with what's inside the CRM.
It doesn't know that the founder you're about to email just posted something on LinkedIn that changes the conversation.
It doesn't know that your partner mentioned concerns about the space in yesterday's IC meeting notes.
It doesn't know that an LP asked about your exposure to this sector last week.
The AI lives in the silo. And a siloed AI, no matter how capable the underlying model, produces incomplete and often misleading outputs.
This is the core tension of the current moment. Every vertical software company is racing to add AI features. But the AI features are constrained by the same data boundaries that defined the product in the first place.
The CRM's AI only sees CRM data. The deal flow tool's AI only sees deal flow data. The portfolio monitoring AI only sees portfolio data.
The result is that users end up doing the same thing they always did: manually synthesizing across tools. Except now they're synthesizing across AI-generated outputs from multiple tools, which is arguably worse than synthesizing raw data because each AI confidently presents its partial view as if it were the full picture.
The industry tried to solve this with integrations. Connect your CRM to your email to your calendar to your Slack. And to be fair, this works for narrow, well-defined use cases within specific industries.
But anyone who has spent time with enterprise integrations knows the reality: they're brittle, they break, they cover 80% of the data but miss the 20% that matters most, and maintaining them is a full-time job.
Integrations are plumbing. They move data between systems.
What they don't do is create understanding.
Stage 3: The Horizontal AI Layer and Full Context
This is where the current shift gets interesting, and where I think most people underestimate what's happening.
Tools like Claude Code, Cowork, ChatGPT, and the like represent something genuinely new: a horizontal AI layer that sits on top of all your vertical tools, connects to all your data sources through protocols like MCP, and for the first time gives an AI system full context across your entire workflow.
This isn't just another integration platform. The difference is fundamental.
An integration platform moves data between systems. A horizontal AI layer with full context can actually reason across systems.
It can connect the dots between an LP email, a portfolio company's latest metrics, a market report you saved last week, and a Slack conversation from this morning, and produce an output that none of your individual tools could generate on their own.
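To make that concrete, here's a minimal sketch of what "full context" looks like in code. The connector functions are hypothetical placeholders, not real MCP calls; the point is simply that every source lands in one prompt, so the model reasons across silos instead of inside one.

```python
# Minimal sketch of "full context" assembly, assuming hypothetical connectors.
# None of these functions are real MCP SDK calls; they stand in for whatever
# servers (email, CRM, notes, chat) an agent would actually query.

from dataclasses import dataclass

@dataclass
class ContextItem:
    source: str   # which tool the snippet came from
    content: str  # the raw snippet the agent can reason over

def fetch_lp_emails() -> list[ContextItem]:
    # Placeholder: in practice this would query an email connector
    return [ContextItem("email", "LP asked about our fintech exposure last week")]

def fetch_portfolio_metrics() -> list[ContextItem]:
    # Placeholder: portfolio monitoring system
    return [ContextItem("portfolio", "Company X: net revenue retention up 12% QoQ")]

def fetch_slack_threads() -> list[ContextItem]:
    # Placeholder: team chat
    return [ContextItem("slack", "Partner flagged valuation concerns this morning")]

def build_prompt(question: str) -> str:
    """Pull every source into one prompt so the model can reason across silos."""
    items = fetch_lp_emails() + fetch_portfolio_metrics() + fetch_slack_threads()
    context = "\n".join(f"[{i.source}] {i.content}" for i in items)
    return f"Context:\n{context}\n\nTask: {question}"

if __name__ == "__main__":
    print(build_prompt("Draft an LP update paragraph on our fintech exposure."))
```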
I've experienced this firsthand.
When I rebuilt and automated my own workflows with tools like Claude Code, the shift wasn't incremental. It was qualitative. For the first time, I had an AI that could see across my GDrive, Notion, Granola, emails, Slack, and more, all at once. The outputs weren't just faster versions of what I was doing before. They were different in kind, because the AI could draw connections I wouldn't have made myself.
And here's where the evolution takes its next step.
The logical endpoint of "AI that sees everything" is that the AI doesn't need its own interface at all. If the agent has full context and can take actions across all your tools, why would you open a separate app to interact with it? Why not just talk to it where you already are: WhatsApp, Slack, Telegram, iMessage?
That's the OpenClaw model. That's the agentic abstraction. The AI becomes ambient. It's not a tool you switch to; it's a layer that operates continuously in the background and surfaces through whatever messenger you prefer.
You describe what you need in natural language, and the agent orchestrates across all your systems to deliver it. If needed, with human-in-the-loop.
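Here's a rough sketch of what that loop can look like, with made-up function names standing in for the model's planner and your tools; the one design choice shown is that high-stakes actions wait for a human reply in the messenger before anything executes.

```python
# Hypothetical sketch of a messenger-native agent loop with human-in-the-loop.
# plan_actions() and execute() are stand-ins, not any vendor's API.

HIGH_STAKES = {"send_email", "wire_funds", "sign_document"}

def plan_actions(request: str) -> list[dict]:
    # Placeholder for the model's planning step
    return [{"tool": "draft_memo", "args": {"topic": request}},
            {"tool": "send_email", "args": {"to": "lp@example.com"}}]

def execute(action: dict) -> str:
    # Placeholder for tool execution across the user's systems
    return f"executed {action['tool']}"

def handle_incoming(request: str, approve) -> list[str]:
    """Run the agent; pause for approval before any high-stakes action."""
    results = []
    for action in plan_actions(request):
        if action["tool"] in HIGH_STAKES and not approve(action):
            results.append(f"skipped {action['tool']} (not approved)")
            continue
        results.append(execute(action))
    return results

if __name__ == "__main__":
    # Simulate a WhatsApp/Slack request and a human approving via a reply
    print(handle_incoming("Update our LPs on Q1", approve=lambda a: True))
```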
So the evolution looks like this:
Vertical SaaS (data silos) → Horizontal AI layer (full context) → Agentic abstraction (messenger-native)
Each stage dissolves the moats of the previous one. Vertical SaaS moats erode when context can flow freely. The horizontal AI layer's advantage erodes when the interface itself becomes commoditized. Which brings us to the question that matters most.
So Where Does the Moat Actually Come From?

Here's the uncomfortable truth: in this three-stage evolution, the most obvious positions are also the least defensible.
Being the "horizontal integration layer" sounds powerful. But MCP is open, APIs are standardizing, and Anthropic, Google, OpenAI, and Microsoft are all racing to be the connective tissue between your tools. The integration layer is a commodity race among the best-funded companies on earth. Building a startup moat there is like opening a coffee shop between three Starbucks locations.
Being the "messenger-native interface" sounds sticky. But WhatsApp and Slack are platforms, not products you control. And the foundation model companies will inevitably embed agentic capabilities directly into the messaging platforms themselves. The interface layer is thin and vulnerable.
So where does the moat actually form?
I see four positions worth investing in.
1. Compounding context over time
Not access to data (that's commoditized), but learned, persistent understanding of how a specific person, team, or organization works.
Every correction, every preference, every pattern of behavior that the system absorbs and uses to get better. Two systems can both connect to your email, CRM, and documents. But the one that has six months of interaction history with you, that knows your decision patterns, your communication style, which outputs you edited and how: that system is meaningfully better for you.
And that advantage compounds with every interaction. That’s the data flywheel, the most underappreciated moat in AI right now. It's not about having context once. It's about context that gets smarter over time.
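If you want a mental model for the flywheel, here's a toy sketch, nothing more than illustrative names and a JSON file, of what it means to persist every correction and feed it back into the next request. A real system would use embeddings and retrieval, but the compounding logic is the same.

```python
# Toy sketch of "context that compounds": persist every correction and
# preference, then replay it as context for the next task.

import json
from pathlib import Path

MEMORY_FILE = Path("user_memory.json")  # illustrative storage, not a real product

def load_memory() -> list[dict]:
    return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []

def remember(kind: str, note: str) -> None:
    """Append a learned preference or correction; never throw context away."""
    memory = load_memory()
    memory.append({"kind": kind, "note": note})
    MEMORY_FILE.write_text(json.dumps(memory, indent=2))

def memory_prompt() -> str:
    """Turn accumulated memory into a preamble for the next request."""
    lines = [f"- ({m['kind']}) {m['note']}" for m in load_memory()]
    return "Known preferences and past corrections:\n" + "\n".join(lines)

if __name__ == "__main__":
    remember("style", "Prefers one-paragraph deal summaries, no bullet points")
    remember("correction", "Always report ARR in EUR, not USD")
    print(memory_prompt())
```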
2. Orchestration reliability at the domain level
Full context is necessary but not sufficient. You also need to know what to do with it. In finance, getting things right 90% of the time is the same as getting them wrong 100% of the time. In legal, a missed clause can invalidate an entire deal. In healthcare, an incorrect interpretation can harm a patient.
The companies that build deep, domain-specific orchestration (knowing when to trust the model and when to check, what order to feed data, how to format outputs for specific roles and firms) create a moat that's genuinely hard to replicate.
A horizontal layer can't fake this. It requires the grinding, firm-by-firm, team-by-team work of learning how professionals actually do their jobs. Better models don't erode this advantage; they amplify it, because better models make the orchestration layer more powerful, not less relevant.
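As a hedged illustration of what domain-level orchestration means in practice, here's a sketch with made-up rules: the model's answer never reaches the user until finance-flavored checks pass, and anything that fails gets escalated to a human.

```python
# Sketch of domain-level orchestration: never pass model output straight
# through; run domain checks first and escalate when they fail.
# generate_answer() and the rules are illustrative, not any vendor's API.

def generate_answer(question: str) -> dict:
    # Placeholder for the model call
    return {"text": "Fund exposure to fintech is 23% of NAV", "cited_sources": []}

def domain_checks(answer: dict) -> list[str]:
    """Finance-flavored example rules; real ones are learned firm by firm."""
    problems = []
    if not answer["cited_sources"]:
        problems.append("no source cited for a quantitative claim")
    if "%" in answer["text"] and "as of" not in answer["text"]:
        problems.append("percentage given without an as-of date")
    return problems

def answer_with_guardrails(question: str) -> str:
    answer = generate_answer(question)
    problems = domain_checks(answer)
    if problems:
        return "Escalated to human review: " + "; ".join(problems)
    return answer["text"]

if __name__ == "__main__":
    print(answer_with_guardrails("What is our fintech exposure?"))
```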
3. Network effects around shared AI workflows
If an entire team's collaboration patterns are mediated through and learned by a specific system, replacing it means losing collective institutional memory. Not individual preferences, but shared workflows, team-specific templates, collaborative patterns that developed over months of usage.
This is the Bloomberg effect applied to AI: shared tooling creates shared language, and shared language creates lock-in that's social, not technical.
The companies that become the coordination point (not just the execution point) for teams will build this kind of network effect. It's achievable at the horizontal layer, but only if the agent becomes the place where teamwork happens, not just where individual tasks get completed.
4. Trust as a moat in high-stakes domains
In finance, legal, healthcare, and government, the marginal value of accuracy is enormous and the cost of errors is equally extreme.
The companies that earn trust in these critical domains (through audit trails, compliance frameworks, provable reliability, and institutional relationships) create a moat that no amount of technical capability can shortcut.
Trust takes time. And in domains where getting things wrong can cost millions or end careers, the time it takes to earn trust is itself a barrier to entry.


My Investment Thesis
Simple take: Don't invest in layers, invest in compounding advantages.
The integration layer (connecting everything) will be commoditized within months. The interface layer (messenger-native, voice-native, ambient) will be commoditized shortly after. Neither is a durable moat on its own.
What's durable is the accumulated intelligence about how work actually gets done: at the individual level, the team level, and the organizational level. Whether that intelligence lives in a vertical tool, a horizontal platform, or a messenger-native agent matters less than whether it compounds with every interaction and becomes harder to replicate over time.
The winning companies in this next phase likely won't be the ones with the best model access (everyone will have that), the most integrations (that's table stakes), or the slickest interface (that's a feature, not a business).
They'll be the ones that build learning systems: systems that get measurably better for each user and each organization with every passing week, creating a gap that widens over time rather than narrowing.
The question I ask myself for every startup I evaluate: are you building a pipe, or are you building a memory? Pipes get commoditized. Memory compounds.
That's where the moats are. At least that’s what I believe ;)
Stay driven,
Andre
PS: Reserve your seat for our Virtual DDVC Summit 2026, where 30+ expert speakers will share their workflows and tool stacks and discuss the latest insights about AI for VC.