

How Atomico evaluates AI startups
Sarah Guemouri sits at an unusual vantage point. As Head of Insights at Atomico, she sees how hundreds of companies are being built, pitched, and evaluated in real time. Her perspective is less about any single product trend and more about the deeper shift underneath: in the AI era, advantage is moving away from features and toward judgment, adaptability, trust, and distribution.
Main Takeaways
- In AI, shipping fast is no longer a signal. It’s a hygiene factor.
- Product differentiation is getting weaker, so founder quality matters more than ever.
- AI is great at compressing work, but it still struggles with the most important judgment calls.
- The winners won’t just add AI. They’ll use it to create operational leverage and more human connection.
- As products get easier to copy, brand, trust, and distribution get harder to ignore.
Venture is becoming more human, not less
I lead insights at Atomico, and my job is basically to help the firm make better decisions based on data rather than feelings. In practice, that can mean anything from a broad strategic question about the future of venture to a very tactical diligence on a company we’re evaluating. It’s varied, which I love. No two days look the same. I use frameworks a lot, but honestly, I also make them up on the go. There’s a lot of creativity in the work.
What’s interesting is that venture has always had this boutique core to it. You have partners with deep networks, real pattern recognition, strong instincts. That still matters. But data, and now AI, let you scale some of those superpowers. You can consolidate more information, narrow an opportunity set faster, and give more people access to insights that used to live with a few insiders.
That sounds like progress, and it is. But it also creates a different problem. As access to information gets democratized, venture gets more consensus-driven. More people are looking at the same signals, chasing the same companies, and reaching similar conclusions. So the question becomes: where does edge come from when everyone has access to the same machine?
The founder profile is getting cleaner. I’m not sure that’s a good thing.
One thing I keep coming back to is how much the founder archetype has changed.
Especially in Europe, being a founder used to feel like a strange choice. It was risky. It wasn’t the obvious high-status path. The people who did it often had a slightly irrational level of conviction. Now it’s different. Founder has become a status job. More people want it, which is great on one level. But it also means more people are learning how to game the system.
They know what investors want to hear. They know how to present the perfect profile. They know how to tick every box.
So now you have this cohort of founders who look amazing on paper. But I’m not sure they’re always the archetype you need to build a moonshot company. Sometimes it feels like they want the upside of being a founder without really taking on the downside. And that changes the texture of what we’re underwriting.
In early-stage AI, the product matters less than it used to
At seed and Series A, you used to look at the team, the product, and the market. The product was still a real signal. You looked at velocity. You asked how fast they shipped. You wanted proof that they could execute.
Now, in AI, shipping fast is table stakes. It’s a hygiene factor.
Everyone is shipping fast. And because companies like Anthropic keep releasing something mind-blowing every week, the benchmark keeps moving. So even teams that are objectively moving much faster than startups did a few years ago can still look slow by comparison.
At the same time, product differentiation is weaker. You look at a space and there are ten companies doing roughly the same thing, plus incumbents, plus the foundation models themselves pushing down into the stack. So it’s genuinely harder to know which product is going to win.
That pushes much more weight onto the team. Their adaptability. Their judgment. Their ability to learn faster than everyone else. I think founding team assessment has never been more important.
And that’s the part AI doesn’t really solve. It can help consolidate intel. It can summarize the market. It can surface patterns. But at the end of the day, the bet is still deeply human.
The growth is real. The defensibility often isn’t.
The other thing we’re all looking at is this crazy adoption curve. You see companies going from zero to $100 million in revenue in what feels like no time. That changes the psychology of the whole market. Suddenly, everyone is asking why one company has pulled it off and another hasn’t.
But when you drill into some of those businesses, the picture can feel fragile.
The revenue is there, but the usage is more brittle, because the actual workflow differentiation often isn’t quite there. A lot of it still looks like a wrapper around a large language model. The vision is much bigger, of course. Everyone wants to be the copilot for some vertical or function. But in practice, many of these products are still mostly chat. They’re not yet changing the day-to-day job in a way that feels structurally different from what a general-purpose model can do.
That doesn’t mean there won’t be great companies built here. There will be. We already see the shift from wrapper to harness, and the step-up in output quality despite running on the same underlying model. But the bar is much higher than people think.
The real question is: what job does this tool actually own?
This is something I think about a lot, both as an investor and as a user.
We use tools like Claude internally and the productivity gain is obvious. It’s real. People love it. But I’ve found it surprisingly hard to define what the tool actually is in the stack. What job does it own? What does it replace? Where does it really sit in the workflow?
That’s where a lot of AI products still feel unresolved to me. They’re powerful, but the surface area isn’t quite right. They help you do many things a bit faster, but they don’t always own a clear, durable job.
Right now, tools like Claude often feel like a kick-starter for thinking and doing. A launchpad. They help with research, drafting, synthesis, shaping an idea and creating outputs. But then you still end up in Docs. You still end up in Sheets. You still go back to the tools that anchor the workflow.
So the gain is real, but the displacement is slow.
And that’s why I think a lot of AI companies are going to struggle. It’s not enough to have a nice set of capabilities that gets you 80% there. You need to solve a deep enough problem end-to-end so that someone will keep paying for your product even when general-purpose models keep getting better.
We are still designing for today’s workflow, not tomorrow’s
What feels unfinished right now is that most AI tools compress steps. They don’t remove them.
They help you do the same job faster. They help you get a better first draft, produce a quick prototype, generate a brief, summarize a meeting, structure a memo. All of that is valuable. But it still assumes the old workflow is the right workflow.
I’m not convinced it is.
The more interesting question is not how I do my current job faster. It’s how the job itself changes. Do I still need the same sequence of steps? Do I still need the same tools? Do I still need the same role boundaries?
If you push it to the extreme, right now it feels like the human is doing more of the mechanical work. Moving output from one place to another. Passing context between stages. Acting like the API between systems that don’t quite connect yet.
That can’t be the end state.
I think the next shift is that we stop optimizing each step and start redesigning the whole flow. The winners will be the teams that rethink the system, not just the task. And then maybe we can start talking about a “platform shift”.
AI is a consensus machine, so proprietary insight matters more
One thing I worry about, especially in venture, is that AI can flatten thinking if you let it.
It’s very good at giving you the plausible view. The market map. The summary. The rational case. But investing is not about reproducing consensus. It’s about deciding when consensus is wrong.
So you need proprietary data. You need differentiated insight. You need judgment. You need to actively challenge the machine rather than let it close the loop for you.
That’s true for investors, but honestly it’s true for product teams too. If everyone uses the same models, the same prompts, the same public information, then a lot of output starts to feel similar. The thing that matters is what you bring to it. Your taste. Your weightings. Your perspective. What you consider important that others don’t.
That’s where non-consensus decisions still come from.
Competitive advantage is shifting toward trust, distribution, and resilience
If products are easier to copy, then advantage has to come from somewhere else.
I still think the strongest companies are the ones that can articulate value clearly and get customers to it faster than competitors. Execution still matters. Getting to better outcomes still matters. But in AI, I think there are a few things becoming even more important.
The first is trust. Especially in regulated markets, trust is not a soft thing. It’s a barrier to entry. It’s why we spend time looking at industries where credibility, compliance, and reliability really matter.
The second is distribution. I think this only gets harder. There is more noise, more competition, more products promising the same thing. The ability to establish a brand, build relationships, and earn attention will matter even more. And that too will look very different in the agentic era.
The third is resilience. Not just whether the product works today, but whether it still has relevance in future workflows. Does it sit somewhere meaningful in the stack? Does it have switching costs, network effects, proprietary data, real depth? Or is it just a neat layer that gets absorbed as the stack consolidates?
That’s the scrutiny now.
The best use of AI might be to make companies feel more human
I actually think one of the biggest mistakes companies can make right now is to assume customers want more AI in every visible interaction.
A lot of people are uneasy about it. Some are actively resisting it. So when companies market AI in a way that feels deceptive or cold, I think they’re underestimating the backlash.
For me, the better story is the opposite. AI should help companies create a high-love experience. It should remove admin, reduce friction, and give people more time to care.
Some of the most thoughtful founders I’ve spoken to are saying exactly that. They’re not planning to hire huge engineering teams because AI covers part of that. They want to invest more in support and success teams, because that human layer is what will differentiate them.
I find that really compelling.
We think about AI in a similar way internally. If it can reduce administrative burden, then our team has more time to spend with founders, with companies, with people. That’s where the value is. The brand becomes a reflection of how you show up, how responsive you are, how much trust you build.
If you use AI to deepen relationships, it strengthens the company. If you use it to hide from customers, it weakens it.
There are still businesses you simply can’t fake your way into
For all the talk about how easy it is to build now, there are still categories where depth really matters.
If you’re building in an industry that is highly regulated, operationally complex, and hard to access, you can’t just spin up a product over the weekend and compete. You need domain expertise. You need the right relationships. You need to understand workflows at a painful level of detail. You need trust.
That’s why I’m still drawn to companies where AI is not the product story on its own. It’s a tool that gives operational leverage inside a business that is already solving a serious problem.
In those cases, AI can be incredibly powerful. It can improve the revenue line, the cost line, the service quality, the speed. But it’s not the whole moat. It’s an amplifier.
That feels more durable to me than a product whose entire proposition is just access to intelligence.
My own job has already changed
On a personal level, AI has absolutely changed how I work.
I now have this very dynamic workforce on demand. A researcher, a writer, an analyst, a thought partner. I can use it as a context agent, brief generator, presentation builder, speech writer. That part is real, and I use it all the time.
Recently, I used it to help amplify some of our content work. We do a lot around the State of European Tech, and I found I could basically turn the material into presentations and event prep with very little lift. It was kind of wild. It made me realize that the mechanics of producing polished output are getting easier very quickly.
Which brings me back to brand.
If anyone can generate the polished output, then what people are really buying is not just the content. They’re buying the person, the perspective, the credibility, the signal attached to it. The messenger matters more.
I think that’s true for funds. I think it’s true for founders. And I think it’s true for product leaders too.
What matters now is still what mattered before, just under more pressure
I don’t think AI changes everything. But I do think it exposes what was already true.
Good companies still need to learn faster than others. They still need to build trust. They still need to understand what job they own. They still need clear positioning, real distribution, strong judgment, and the ability to say no to things that look impressive but won’t last.
What’s changed is the pressure. The market moves faster. Product edges erode faster. Consensus forms faster. Noise compounds faster.
So the work now is to get clearer, not louder.
The teams that win won’t be the ones with the most features. They’ll be the ones that know what matters, what doesn’t, and what still feels true when everyone else is reacting to the latest wave.
Perspectives: Honest conversations on crafting great products over a cup of coffee. I sit down with friends across design, data, engineering, ops, and more—people who work closely with product leaders, from PMs to CPOs. We talk about how teams really work, where things break down, and how AI and new ways of working are reshaping the future of product.