Frontier AI labs are no longer just technology companies. They are positioning themselves as cultural brands, and the choices they make reveal a lot about what kind of future they’re building.

Anthropic: The Pacifist Lab

Anthropic is currently going head-to-head with the Pentagon. The company refused to let Claude be used for autonomous weapons and mass surveillance, and Defense Secretary Pete Hegseth threatened to designate it a “supply chain risk”, a label normally reserved for foreign adversaries. Dario Amodei is meeting with Hegseth today at the Pentagon.

I think they have a valid point here. If we do get some form of AGI, and that AGI is a pacifist, that is probably the best outcome for humanity. Anthropic is genuinely trying to encode as little capacity for warfare as possible into the weights. That is huge. That is valuable.

Then there’s the ads thing. Anthropic ran Super Bowl ads mocking ChatGPT’s ad model with dark comedy about chatbots steering users toward cougar dating sites and height-boosting insoles. The tagline: “There is a time and place for ads, and AI chats aren’t it.” Claude jumped from #41 to #7 on the App Store. Sam Altman called the Super Bowl ad “clearly dishonest”.

Their blog post “Claude is a space to think” makes the argument clearly: advertising creates misaligned incentives. A user asking about sleep problems would get answers optimized for a transaction rather than what is genuinely helpful. This positions Anthropic with the more intellectual, left-leaning crowd: we should not be modifying the attention of users, and that business model leads to really problematic outcomes. I agree.

OpenAI: The People’s Model

On the other side, OpenAI is positioning itself as the model of the people. Its plans are more heavily subsidized, the product is cheaper to use, and the ChatGPT interface never tells you you’ve hit your usage limit. They launched a low-cost tier, ChatGPT Go, specifically to reach more users. That is a valid approach: they are serving that market.

But the cost is ads baked into the conversation. OpenAI started testing sponsored content in January 2026, with initial partners like Mercedes-Benz and JPMorgan Chase at a $200,000 minimum commitment. They’re even exploring “generative ads”, where ChatGPT itself writes the ad copy. As Ossama Chaib puts it in “The A in AGI stands for Ads”, these are ads you can’t even block, because they are baked into the streamed probabilistic word selector, purposefully skewed to output the highest bidder’s marketing copy. That is a terrifying sentence. According to Chaib, the unit economics of OpenAI adding ads make sense on paper: hundreds of millions of users is a lot of attention real estate. But that is also the most bullish case (“what SoftBank is praying for”).
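To make the “attention real estate” claim concrete, here is a back-of-envelope sketch of the ad math. All inputs are hypothetical placeholders I chose for illustration; they are not figures from OpenAI or from Chaib’s piece.

```python
# Back-of-envelope ad revenue: active users x average ad revenue per user (ARPU).
# Both inputs below are hypothetical placeholders, not reported figures.

def annual_ad_revenue(active_users: int, ad_arpu_per_year: float) -> float:
    """Estimated yearly ad revenue = user count * ad revenue per user per year."""
    return active_users * ad_arpu_per_year

# Hypothetical: 800M users at $10/user/year -- a small fraction of what
# search engines reportedly earn per user -- already lands in the billions.
estimate = annual_ad_revenue(800_000_000, 10.0)
print(f"${estimate / 1e9:.0f}B per year")  # → $8B per year
```

Even with conservative placeholder numbers, the appeal is obvious, which is exactly why the “last resort” did not stay one for long.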

OpenAI previously called ads a “last resort”, then launched them anyway. I think we are too saturated as a society for the ad-subsidized attention model to keep working. But here we are; that’s reality right now.

The Numbers

This is where it gets interesting. Anthropic’s valuation is around $380B with a revenue run rate that reached $14B annualized in February 2026. Their cash burn dropped from $5.6B in 2024 to ~$3B in 2025, and internal projections say they stop burning cash in 2027. They are on a path to break even. Claude Code alone hit $2.5B in annualized revenue.

OpenAI is seeking an $830B valuation with $13.1B in 2025 revenue, while projecting a $14B loss for 2026, roughly tripling its losses year over year. The company is not expected to turn a profit any time soon; more recent projections show $25B in cash burn in 2026 and $57B in 2027.

Why This Matters

I’m biased. I’m a fan of Anthropic’s approach. Things are expensive and someone has to pay the bills, but Anthropic is not hiding the cost behind the misappropriation of attention that ads entail. That is a privileged stance, though. It’s super expensive: a Max subscription sets you back $100 or $200 a month, which for a student is probably the biggest expense after tuition, rent, and food.

This is also why I’m a fan of Kagi, the paid search engine that operates on the same principle: you are the customer, not the product. No tracking, no sponsored results, no algorithmic manipulation for advertiser benefit. It’s small, but it’s honest.

The question is whether the market can sustain companies that refuse to monetize attention. Anthropic is proving it might be possible. Their revenue is growing faster than OpenAI’s in relative terms, and they’re approaching profitability while their competitor burns through tens of billions. The rational good guys might also be the rational business guys.