ChatGPT is the new browser and memory is the new cookie

OpenAI launched an Apps SDK on Monday. This effectively means that developers can now build apps inside ChatGPT:
[Screenshot: the ChatGPT interface showing a Booking.com search result - two hotel cards with photos, names, ratings, and nightly prices in USD.]

This was an obvious play by OpenAI, and it was only a matter of time until more dynamic experiences made it into chat. After all, it's very difficult to do some of the most profitable internet activities (i.e. e-commerce, gaming, etc.) with text only. The writing was on the wall here ever since MCP-UI came out and Shopify started incorporating it earlier this year.
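
To make this concrete: the Apps SDK is built on top of MCP, so a ChatGPT app is essentially an MCP server that exposes tools the model can call, with the host rendering the results. The sketch below uses the official MCP TypeScript SDK; the hotel-search tool, its parameters, and the stub inventory are hypothetical stand-ins, and the Apps SDK layers its own UI rendering on top of plain tool results like these.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// A hypothetical "hotel search" app exposed as an MCP server.
const server = new McpServer({ name: "hotel-search", version: "0.1.0" });

// Stub inventory standing in for a real booking API.
const HOTELS = [
  { name: "Hotel du Pont", city: "Paris", rating: 4.6, pricePerNightUsd: 240 },
  { name: "Riverside Lobby", city: "Paris", rating: 4.3, pricePerNightUsd: 180 },
];

// Register a tool the model can call. The host (ChatGPT, in this case)
// decides how to present the result - and, crucially, it sees every call.
server.tool(
  "search_hotels",
  { city: z.string(), maxPricePerNightUsd: z.number().optional() },
  async ({ city, maxPricePerNightUsd }) => {
    const results = HOTELS.filter(
      (h) =>
        h.city.toLowerCase() === city.toLowerCase() &&
        (maxPricePerNightUsd === undefined || h.pricePerNightUsd <= maxPricePerNightUsd)
    );
    return { content: [{ type: "text", text: JSON.stringify(results, null, 2) }] };
  }
);

// Connect over stdio so a host process can launch and talk to the server.
const transport = new StdioServerTransport();
await server.connect(transport);
```

The strategic point doesn't depend on the exact API surface: whatever the app returns, it returns through the chat host, and the host gets to observe and remember it.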

Adding apps within the chat experience does two things:
1. it keeps the user inside ChatGPT for longer
2. it gives OpenAI a significant amount of context on user behavior

From a strategic point of view, the second is incredibly valuable. Users would traditionally leave ChatGPT and go to booking.com once they had generated their trip plan. That prevented OpenAI from learning important information about the user - did they book the hotel that was presented to them? Did they decide to do all of the suggested activities? This is personalization context that can help OpenAI be more useful to the user in the future.

It wasn't too long ago that ChatGPT gained the ability to reference prior conversations - a feature now commonly referred to as memory. Memory is the personalization layer that helps ChatGPT answer questions better for you in the future. In a world where model capabilities across labs are increasingly converging, what prevents users from switching LLMs is the huge cost of moving their "memories".

Once ChatGPT remembers the hotel you like to stay in, the type of food you order on weekends, and the type of books you like to read before bed, it is very hard to move away without losing a huge amount of personalization. As far as OpenAI is concerned, the more activities the user can do within the chat app, the harder it becomes to leave.

In the same way that cookies allowed Google to become a default transaction layer for the internet, memory helps OpenAI own the personalization layer. In a world where Google owns the browser and Apple owns the device, it is very hard to stand out as the aggregator of personalization information across multiple applications - and yet that is exactly what memory + apps allows OpenAI to do, despite not owning the device or the browser. It will be interesting to see whether Apple will allow ChatGPT "apps", since they technically violate its mega-app policy.

The timing here is perhaps notable. There is no real advantage in being a first mover. As the MCP-UI experience shows, there are a lot of hard problems to figure out, and there is a real chance that some of this blows up in OpenAI's face. Google will inevitably add this capability to Gemini, and it has all the right relationships to do so. Launching early doesn't help OpenAI tremendously, so why now?

Perhaps the most obvious reason to get going now is that OpenAI's device efforts are very real, and the sooner they can stand up an ecosystem (one that spans all the apps customers already use today), the sooner they can have a viable standalone device.

Notably, there are more platform plays coming to ChatGPT soon. Sam Altman brought up "Sign In With ChatGPT" in May. What better way to ensure that you have context on your users than allowing them to bring their OpenAI identity with them everywhere they go?

So, there it is: the inklings of a new platform emerging. Whether app publishers will be willing to play along is a totally different question. Giving your app away in this new modality has its risks, and seeing how OpenAI is trying to do everything, everywhere, I would be cautious about moving in this direction. Ultimately, users will decide for them. If chat is the way people browse the internet from now on, app publishers won't have a choice.

Amazon's Fat, Peculiar Ways

Amazon is an outlier in how it enters markets, ships products, and builds flywheels. You might assume it embodies “lean startup” dogma. After two years inside, I can say it doesn’t. The advantage comes from doing almost the opposite of what the Silicon Valley echo chamber preaches.

There is no “build an MVP, ship fast, measure and validate, iterate, stay efficient.” We aim for an MLP—a Minimum Lovable Product. Where others “measure and validate,” Amazon “dives deep” and “invents and simplifies.” Where others chase efficiency, Amazon is “customer obsessed.” I can’t speak to whether this was designed in opposition to SV, but I can tell you the Amazon way starts with an uncompromising vision and then figures out how to execute it.

MLP vs. MVP

We don’t try to prove an idea is viable; we try to delight the customer. You can’t beta-test your way into a product—you need vision. MVP thinking strips away anything not needed to prove viability. MLP thinking asks what we can add to make the experience unmistakably great.

When the first Kindle was built, Jeff Bezos insisted on cellular connectivity even though the accepted pattern was syncing via a PC. An MVP would have cut wireless. Wireless was the riskiest piece and could have sunk the program. It stayed anyway, because the vision required it.

This is the opposite of hypothesis-testing your way to a product. Customers shouldn’t carry the burden of our viability experiments. Our job is to innovate on their behalf, not hand them half-baked ideas.

“Measure and validate” vs. “dive deep” and “invent and simplify”

Every product starts with a crisp vision, ruthless execution, and good judgment. Instead of “ship quick and fail fast,” we “insist on the highest standards.” Instead of “incorporate user feedback,” we “invent and simplify.” Instead of “iterate,” we “dive deep.” We focus on inputs we control to solve the customer problem, not just outputs.

The PRFAQ is famous; what’s less known is how many brutal revisions it goes through. You sharpen clarity by rewriting and debating until the story is airtight. That written culture extends to technical designs, security reviews, and operational readiness. The bar is high on purpose.

Speed vs. quality

Quality is non-negotiable, but it doesn’t have to slow you down. The one-way door vs. two-way door framework keeps decisions fast when they’re reversible and careful when they aren’t. “Disagree and commit” turns debate into forward motion.

We still run segments, VOC, user studies, alphas/betas, and A/B tests. They’re tools for refinement, not for steering the product away from its vision. A/B tests pick the button that converts better; CSAT confirms the app worked as intended. Occasionally data exposes an outlier that deserves a strong response, but the vision doesn’t swing with every experiment. We don’t “find” product-market fit; we forge it.

Nimble at scale

A common argument for “lean” is that shared components and tight efficiency keep you fast. I thought so too. When our team grew from 10 to 100, I pushed hard on shared infrastructure and libraries to boost reuse across sub-teams. It slowed us down. Growth got gated by the slowest dependency.

The Amazon answer is counterintuitive: in the short term, even teams building similar things should often duplicate. Coordination costs and external interfaces drag more than duplication does. At Amazon’s scale, any service might serve millions, and the engineering bar is high; duplication feels expensive. What actually kills products is being late and getting bogged down. Duplication is an investment in speed you can consolidate later. You can fix organizational and technical inefficiencies; you can’t buy back time with customers you failed to delight.

Startup lessons

Should startups copy this? Sometimes. It’s easier to run “fat” when your survival doesn’t hinge on a single bet. Many startups throw ideas into the world to see what sticks. That can be necessary, but speed and leanness are not substitutes for vision and execution. Limited resources don’t force you to abandon a point of view.

There was a phase in the digitization cycle where moving offline workflows to the browser was enough. Adoption risk was low; the risk was speed and distribution. In that world, iterating quickly made sense. When you’re actually inventing, vision matters more than velocity theater.

Building products

Culture moves like fashion, and Silicon Valley isn’t immune. You won’t win an argument with a VC about the “right” way to build; the dominant culture reinforces itself. What does endure is craft, vision, and the timeless rules of customer choice. If you’re looking for an alternative to lean orthodoxy, here’s a live data point: Amazon commits to a strong vision, sets a high bar, duplicates when it buys speed, and forges product-market fit. It’s another way to build—and it works.