ChatGPT launched in November 2022. By the following month at Copy.ai, our PLG dashboard was broken.
We had spent two years building a clean PLG motion. Marketers signed up, used the product to draft blog intros and ad copy, and converted to paid plans like clockwork. Within a few weeks, our cancel reasons had quietly shifted from "I forgot to use it" to "I just use ChatGPT now."
The board meeting was two weeks out. We had to make one big call: defend the PLG business and bleed slowly into a competitor that was free, or pivot the company into something the model providers could not trivially eat. We picked pivot. The eighteen months after that were a different company with a different ICP, a different go-to-market motion, and a different revenue line.
I think about that quarter a lot. The version of the problem we faced in 2022 was the easy version.
The cycles got shorter and the bites got bigger
Today the earthquakes come every few weeks, and they are bigger.
A model release that would have been a 10% benchmark jump in 2023 is now a step change in coding, browsing, voice, agentic tool use, or some other vertical that an entire startup is built around. Each launch expands the native-capability surface of the frontier models themselves. What I called the Copy.ai moment back in 2022 has since happened to image generation companies, coding-agent companies, and more.
If you build on top of frontier models, you are running the same race we ran in 2022, except the gun fires every month and the pack is faster.
Execution stopped being the bottleneck
For most of the last decade, the binding constraint on a startup was execution. Could you build it, could you ship it, could you sell it. Strategy mattered, but the team's ability to physically move fast enough was usually what gated the outcome.
That has flipped.
Most teams I talk to can build something close to whatever they decide to build in weeks rather than quarters. AI-native engineering organizations move at speeds that would have been considered impossible in 2021.
Distribution is harder than building, but it has also gotten cheaper and more measurable in the same window. What is actually gating outcomes now is whether the team is pointed at the right problem in a market where "right" has a half-life of about one model release.
The strategy I mean here is operational: a recurring discipline of asking a few questions honestly, every few weeks. Three of them in particular:
Is what we are building still defensible against the next model release? The interesting question is whether the underlying model provider could ship our wedge as a checkbox in their next launch and zero us out.
Is the customer we serve still the right customer? ICPs are moving fast. The team that paid you in 2024 might be doing the work themselves in Cursor now. The team that wouldn't take your call in 2024 might urgently need exactly what you sell.
What would we build if we were starting today, knowing what we know now? When the answer is meaningfully different from what we are doing, that gap is the most important thing on the agenda.
What flawless strategy looks like at this size
I keep using the word flawless because the margin for error is genuinely smaller than it used to be. A bad call in 2018 cost you a quarter of misallocated engineering. A bad call now can cost you the company, because the model providers and your sharper-strategy competitors are both moving on you at the same time.
A few habits I see working at smaller companies right now:
Tight strategic loops. Quarterly is too slow. The teams I see staying ahead are running a real strategic review every four to six weeks with the founder, the head of product, and the head of go-to-market in a room. The output is a re-derivation of the bet, with the option to keep it or change it on the spot.
A short, written thesis the leadership team can recite. One paragraph: who we serve, what problem we solve, why we win against both the legacy incumbents and the model providers. If three different leaders would write three different versions of that paragraph, you do not have a thesis yet.
A pre-mortem on every major model release. When OpenAI, Anthropic, or Google ships, the leadership team should already have a draft answer ready for the obvious questions: what does this change for us, what do we cut, what do we accelerate, what changes about pricing or positioning. The teams that wing this read the release notes, hold a meeting two weeks later, and arrive at decisions a competitor already made.
Permission to kill things on the spot. The asymmetry now is that you can build the wrong thing very quickly. Killing it has to be just as fast. Founders who treat shipped product as sacred are losing to founders who treat shipped product as evidence.
The November 2022 lesson
The thing that saved Copy.ai in late 2022 was that we faced the question head-on, made a real call inside a tight window, and the call turned out to be roughly correct.
What felt like a once-in-a-cycle event in November 2022 has become a roughly monthly event, and most planning cadences have not caught up.
The Bottleneck Talent Network
Searching for your next role? Fill out this form, and we'll intro you to the best companies in the world.
Hiring? Just respond to this email! We've got dozens of vetted operators standing by.
