Winner-take-all requires a winner
The software industry spent three decades producing winner-take-all outcomes. The entire VC model depends on it. What if AI applications simply don't generate them?
Every previous technological cycle came with its own set of startup challenges, and venture was the right tool to combat them: winning new surfaces like the browser, rewiring procurement, proving reliability long before the market bought in. SaaS was disruptive to the business model of software. Cloud was foreign, requiring enterprises to abandon the comfort of their own data centers. Mobile demanded entirely new interaction paradigms. These cycles were contrarian, misunderstood, or structurally disruptive. Venture capital thrived because incumbents were slow to respond and startups had the window to build compounding advantages before anyone else showed up.
AI is none of these things. It works. Every CEO wants to adopt it now. For the first time in decades, startups and incumbents are chasing the same territory with the same urgency. It's an execution race. And if all that is being funded is a go-to-market apparatus, the advantage evaporates the moment a well-run competitor watches the first mover navigate the product maze, and then walks the cleared path at half the cost.
Risk capital, to be justified, has to offer asymmetric returns. To paraphrase Howard Marks: for great returns it's not enough to be right, everyone else has to be wrong. In AI, everyone is right. So even in the best case, the results of these investments will be average. Average is not how venture funds work.
The product isn't a product
The cost of building software is collapsing toward zero. This single fact restructures everything about product competition in AI. When anyone can build anything, product advantages don't compound. Without network effects or deep operational complexity, all you are doing as a first mover is charting the path for everyone behind you.
AI application companies positioned themselves as middleware, sitting between foundation models and enterprise customers and filling gaps that raw models couldn't handle: limited context windows, hallucinations, no access to private data. Early movers like Harvey, Hebbia, and Legora deserve credit for identifying and solving those gaps. But the gaps are closing with every model generation, and labs now deliver comparable experiences with far less boilerplate.
Speed of execution is not a moat. Being first to market, as Thiel would put it, means you are paid to make the expensive mistakes that help everyone else. Unless you hold the last technological advantage in a chain, you are perpetually exposed to whoever builds next. And in AI, "next" arrives every few months.
Scale doesn't save you either. In a world where generating code costs nearly nothing, it becomes viable to build narrowly scoped, domain-expert agents that outperform general-purpose tools on every metric that matters. A specialized regulatory compliance agent will always beat a do-everything legal AI for the customers who care most.
Even when you do capture a market, there is no pricing power. The knowledge that your software can be replicated shifts leverage to the buyer; they don't need to actually switch, only to credibly threaten to.
It's the distribution, stupid
So the product isn't durable. What about distribution?
Distribution has always been the decisive advantage in vertical software: the company that pushes through procurement barriers, in a category where innovation is exhausted and the channel carries friction, occupies a position that is nearly impossible to dislodge. This is last-mover advantage.
The counterarguments
Some will say go-to-market and product are inseparable, and that forward-deployed engineers (FDEs) are really de facto product managers who embed with customers, extract tacit knowledge, and feed those insights back into the software over time. Yet any feature that proves critical for a particular customer is trivial to copy. The real question is not whether FDEs contribute to product development, but whether that contribution translates into pricing power. It doesn't. Customers that know what they want can increasingly get those modifications built directly with AI coding agents.
Others point to switching costs. Once an enterprise has spent months implementing one vendor's system, trained workflows around its quirks, and built institutional knowledge about its failure modes, the appetite for starting over dwindles. But the calculus is shifting. Many buyers already view AI products as simultaneously expensive and easy to replace. As costs continue falling, the ROI equation for building and customizing internally only becomes more attractive. Switching costs matter less when the alternative is not switching to a competitor but building your own.
The outcome-based pricing thesis is the strongest case: land with a customer, iterate on their specific workflows, then transition to pricing tied to measurable results. If executed well, this is genuinely disruptive. But the implication is non-trivial. Outcome-based pricing means assuming responsibility for a function that was previously strategic to the buyer. At that point, one is no longer just a vendor but a direct competitor to the customer. Any provided solution must be costly enough to justify outsourcing while remaining sufficiently non-strategic that the buyer is comfortable handing it over. Navigating this balance at scale will prove challenging.