Mercor grew 46× in 2025, Claude Code’s Unusual Playbook, and a 15× AI Research Platform
PMF must be re-earned every quarter
Claude Code’s Unusual Playbook
In a recent interview with Peter Yang, Claude Code Product Lead Cat Wu shared how the team actually works. I learned a lot from it.
Many of her points closely mirror what Elena Verna, Head of Growth at Lovable, previously described in her breakdown of Lovable’s $200M ARR growth.
Compared with traditional software teams, the Claude Code team operates very differently: minimal documentation, rapid prototyping, heavy dogfooding, tight feedback loops, and constant validation in real user contexts.
1. No grand strategy — the product grew bottom-up
Claude Code was never a top-down “strategic initiative.” It began in a grassroots manner. An engineer named Boris built a small internal tool to understand Anthropic’s APIs better. There was no roadmap, no OKRs, and no explicit intention to ship a product.
But the tool turned out to be genuinely useful. It spread organically across engineering teams, and eventually across research, data, and product roles as well. By the time Claude Code launched publicly, it had effectively completed its market validation inside the company.
Cat Wu joined early and helped translate raw engineering prototypes into clear user value, while guiding the product from a loose set of tools toward stable workflows. She emphasizes that Claude Code is not about “building a new IDE,” but about embedding AI into the highest-leverage moments of real developer and knowledge-worker workflows.
2. No PRDs — prototype first
Traditional PRDs barely exist on the Claude Code team. Most members are product-minded engineers with end-to-end ownership. When an idea emerges, the first step isn’t discussion or documentation — it’s building a prototype and pushing it into dogfooding.
With over 1,000 Anthropic employees using Claude Code internally, feedback arrives almost instantly: what’s confusing, what’s broken, and what’s worth continuing.
As Cat Wu puts it:
“Our best features are usually built by engineers first — then we watch how people actually use them.”
This doesn’t mean process is ignored. For large, long-cycle efforts like IDE integrations, the team still runs more formal reviews. The key is defaulting to speed and slowing down only when the stakes truly require it.
3. Product decisions driven by negative feedback
Unlike most teams that chase praise, Claude Code explicitly prioritizes negative feedback. They maintain two high-intensity channels:
An internal feedback group with over 1,000 users, generating actionable input every few minutes
Deep relationships with ~10 enterprise customers who are encouraged to be brutally honest
Blocking issues are surfaced quickly, fixed fast, and shipped back into production. This creates an extremely short iteration loop.
4. Code is the documentation
The team relies very little on Google Docs. Context, decisions, and tradeoffs live in pull requests, commit history, and the code itself.
When someone asks why something was built a certain way, they don’t look for an outdated document — they query the repository directly using Claude Code.
With modern code-understanding AI, the codebase itself becomes a living, queryable document, something that simply wasn’t possible a few years ago.
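The underlying record that makes this work is plain git history: if commit messages carry the “why,” anyone can query decisions later, and a tool like Claude Code layers natural-language search on top of that same record. A minimal sketch (the file name and commit message here are invented for illustration):

```shell
# Create a throwaway repo to show that decision context can live in commits.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email dev@example.com
git config user.name dev

# The "why" is recorded where the change happens, not in a separate doc.
echo "retries=3" > client.conf
git add client.conf
git commit -qm "Add retry config: 3 attempts balances latency vs. flakiness"

# Later, "why 3 retries?" is answerable straight from history:
git log --oneline -- client.conf
```

Code-aware AI tools extend this from keyword search (`git log`, `git blame`) to free-form questions over the whole repository.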
5. Forget long-term roadmaps
When asked about Claude Code’s one- or two-year vision, Cat Wu was blunt: that horizon is too far. She prefers focusing on the next few months.
In a space where model capabilities leap forward every few months, long-term roadmaps often become constraints rather than guidance. Staying flexible and continuously re-evaluating direction is a more effective strategy — very similar to Lovable’s belief that PMF must be re-earned every quarter.
Peter Yang made a related point: Anthropic’s product success comes from its focus on enterprise use cases, especially coding and real work. They didn’t chase consumer apps, hardware, or multimodal arms races; they stayed focused. I fully agree.
Explosive revenue growth in 2025: Mercor at 46× and an AI research platform at 15×
Recent year-end updates from several AI companies show just how extreme this growth can be.




