Clarity Signal Field Notes
May 3, 2026 · Field Notes

Saying Yes Is Not a Plan

A builder's read on what's happening at OpenAI. Not a takedown — a flare. The cracks are early enough that course correction is still possible.

I've been building software for thirty years. You develop a sense for when a system is sound and when it's running on momentum. When I look at OpenAI lately, I get the second feeling.

OpenAI matters. The mission matters. The people inside who signed up to build AI carefully deserve a company that can deliver on it. The cracks I'm seeing are still early enough to fix, but they get worse if nobody names them. So here's what I see.

A company saying yes to everything that walks in the door. Yes to a $50 billion Amazon investment that required tearing up the Microsoft contract. Yes to ads in ChatGPT, then yes to a steady erosion of the firewall meant to protect users from those ads. Yes to a Public Wealth Fund proposal in a sweeping 13-page policy paper. Yes to over $1.4 trillion in total AI infrastructure commitments. Yes to a "third era" as an infrastructure provider. Yes to a Jony Ive hardware project. Yes to a Disney partnership, then no. Yes to Sora, then no.

Saying yes isn't a plan. It's what a company does when it has no plan, or when the cash burn is high enough that every yes feels like survival. They're throwing things at the wall to see what sticks. That's not even hope. It's desperation dressed up as ambition.

The founder is asking for help

At Stripe Sessions on April 29, Sam Altman said he's "definitely not a hands-on manager" while admitting he messages a few hundred employees a day, and that he might need to hire new leaders or even build an AI manager to handle the company's scaling demands. He's said versions of this before — that he's not sure he's the right fit for where the company is headed.

Founders who can do the zero-to-one rarely have the same instincts for the ten-to-a-hundred, and the honest ones say so. He's saying so. The problem is that the org around him isn't responding the way the situation calls for. Sarah Friar runs finance. Fidji Simo runs Applications. Brad Lightcap is COO. On paper the executive layer exists. But Altman is still messaging hundreds of employees a day. A functioning executive layer means the CEO isn't the integration point for every decision. He doesn't need new titles. He needs leaders with the authority to say no on his behalf.

That authority has to come from somewhere. At OpenAI, the board already tried to exert authority once and got rolled within a weekend by Microsoft, the employees, and Altman himself. Any COO walking in knows the board can't actually back them. The only authority they'd have is whatever Altman personally delegates — which makes them a chief of staff with a bigger title. That's not the help he needs.

And the foundational agreements keep getting rewritten. The Microsoft partnership has been renegotiated twice between October 2025 and April 2026. On April 27, the AGI clause was scrapped, Azure exclusivity ended, Microsoft's IP license became non-exclusive, and OpenAI is now free to serve customers on AWS and Google Cloud. That's not stability adapting to growth. That's a company continuously rebuilding its foundations while trying to operate at hyperscale.

Watch how the firewall drifts

On February 9, OpenAI started running ads in ChatGPT — Free and Go tiers only, sponsored units that appear below the response, visually separated, clearly labeled. "Ads do not influence the answers ChatGPT gives you." It's the Google search ad model. Defensible.

Watch what's happened in the eighty days since.

March 2: Criteo became the first ad-tech partner to integrate with the pilot. By late March, StackAdapt joined as a DSP — a demand-side platform selling placements based on "prompt relevance." Adobe is running ads for Acrobat Studio and Firefly through their agency WPP. April 30: OpenAI updated its U.S. privacy policy to formalize data sharing with what it now calls "marketing partners" — explicitly acknowledging receipt of purchase data from advertisers, sharing user information for third-party ad targeting, and using user data to market OpenAI's own products. The vendor disclosure category was renamed.

In parallel, the ad industry is publishing guides describing future ChatGPT formats: Sponsored Comparison Tables embedded in structured comparison responses, "Contextual Native Recommendations" that "blend into the conversational fabric more seamlessly, functioning more like a trusted recommendation than a traditional ad unit." That language is in marketing playbooks today. The drift toward in-conversation placement isn't hypothetical — it's what advertisers are being trained to expect, while OpenAI's official position remains that ads are separate from the answer.

The "ads don't influence answers" line is the first promise that gets renegotiated when revenue pressure meets ad-tech demands.

Trust is the product (and Google already owns ads)

Here's the harder problem. Google is going to win the AI ad space. They built a $300 billion-a-year business teaching users to expect sponsored results next to organic ones. ChatGPT's users came for the opposite contract — direct answers without commercial bias.

At Davos in January, Demis Hassabis said flatly that Gemini had no ad plans. No qualifiers, no hedge. On Alphabet's April 28 earnings call, Chief Business Officer Philipp Schindler was asked the same question and the answer had transformed: "specifically on monetization in the Gemini app, our focus right now is on AI Mode... at the right moment, we'll share any plans, as we have said, but we're not rushing anything here."

That's not a denial. That's a $2 trillion ad company pretending it's still figuring out how to sell ads. "Right now." "At the right moment." "Not rushing." Three hedges from the company that wrote the playbook on monetizing commercial intent. They've figured it out. They're waiting for the air to clear. Hassabis's flat denial in January was the before. Schindler's hedging in late April was the after. The decision happened somewhere in between, almost certainly the moment OpenAI confirmed the February 9 ad launch and the sky didn't fall. Google was waiting for cover. OpenAI gave it to them.

Google already runs ads in AI Overviews and AI Mode. The ad-tech stack, the advertiser relationships, the auction infrastructure — all already operational. Gemini is the last holdout, kept clean for premium positioning. They'll ship into a market they already own.

So OpenAI is taking the asset that makes their product defensible — trust — and putting it in tension with a revenue stream they're reaching for under financial pressure, in a market segment where the incumbent has every structural advantage. The Wall Street Journal reported on April 27 that they expect to burn $25 billion in cash in 2026 against $30 billion in revenue, that they missed an internal target of one billion weekly active users by end of 2025, and that subscriber defections to Gemini and Anthropic are now a real concern. CFO Sarah Friar has reportedly warned colleagues the company might not be able to fund future compute contracts. They need money. But the cure may be worse than the disease — and Google already filled the prescription.

The Ive project, in miniature

The Jony Ive partnership is the whole pattern in one project. OpenAI bought Ive's company io for $6.5 billion in May 2025. Altman told staff it was "the coolest piece of technology the world will have ever seen." Ship date: 2026.

Then the Financial Times reported they were struggling with compute shortages, privacy questions about always-on cameras and microphones, and basic disagreements about the device's "personality." They lost the name "io" in a trademark fight with iyO, an audio computing startup spun out of Google's moonshot factory — a fight Ive's spokesperson had called "utterly baseless" and vowed to fight "vigorously." Court filings in February 2026 confirmed the device won't ship before February 2027. No packaging exists. No marketing exists. On March 24, OpenAI killed Sora six months after launch and ended a planned $1 billion Disney partnership.

Yes to a $6.5B acquisition before they had the compute. Yes to a hardware product before they had the operational layer to ship one. Yes to a name without clearing the trademark.

Ive himself put real meaning behind it. He said "everything I've learned over the last 30 years has led me to this place and to this moment." That's a man at the end of his career staking his legacy. He deserved a partner with the operational maturity to ship it. He got a partner that says yes to everything.

The headwinds don't care

The macro environment is brutal for everyone. HBM memory is sold out through 2027. Power-ready data center sites are scarce, permitting runs three to five years. Nuclear restarts won't deliver electrons until 2027 at the earliest. None of this gets meaningfully better before 2030.

A company with a clear strategy can navigate that. A company that says yes to everything cannot. OpenAI has committed more than $1.4 trillion in infrastructure spending against $30 billion of revenue in 2026. The math doesn't have to fail. But the capital has to be allocated, and allocation requires a plan.

Consolidation is coming

When an industry has demand outrunning supply for five years, brutal capital requirements, and a leader visibly losing operational coherence, consolidation happens. Elon Musk made an unsolicited $97 billion bid for OpenAI's nonprofit assets in February 2025 and was rejected. He took the stand in Oakland in late April, testifying against the for-profit conversion and seeking up to $134 billion in damages plus a forced unwinding. He runs xAI with its own compute, its own capital, and a personal grievance about being pushed out. I would not be shocked to wake up in eighteen months and find Musk has acquired OpenAI in some form — a fire-sale rescue, a forced merger, a structured deal extracted under financial duress. He'd bring capital and operational will, but also his own chaos and a different relationship to safety than the original mission promised.

I don't know who buys whom. I do know an industry burning $25 billion a year to chase $30 billion in revenue, against a fixed supply curve and a leader asking for help that isn't coming, is not an industry that stays at seven independent labs forever.

A different foundation

Anthropic isn't free of contradictions, but the structure is more coherent. Public Benefit Corporation. Long-Term Benefit Trust with actual board authority. Founders who are mostly still there. A research-led culture where the safety framing isn't bolted on after the fact — it's the founding premise. When Sutskever, Murati, Schulman, and a long list of others left over the course of 2024, that wasn't bad luck. The pattern is what matters. The same brain trust is building the same thing somewhere with a different foundation.

The case for local-first

I build local-first software — mini apps that run without a cloud dependency, work offline, and keep user data on the user's own machine. The pitch has usually been ideological: your data should belong to you. But when the foundations of centralized AI keep getting rewritten every six months, when the firewall between your conversations and an ad-tech auction can be redefined in a privacy policy update overnight, when the leading lab might be acquired by anyone within a few years — the case stops being ideological and starts being practical. Build so that whatever happens upstream, your work and your customers' data don't go with it. That's not a political statement. It's risk management.
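What that looks like in practice can be sketched in a few lines. This is a minimal, illustrative example, assuming Python and SQLite; the `LocalStore` name and the schema are mine, not a real app of mine. The point is structural: the persistence layer is a single file the user owns, there are no network calls, and "works offline" is true by construction rather than by a provider's policy.

```python
# Minimal local-first sketch: all user data lives in one SQLite file on the
# user's own machine. No cloud dependency, so nothing upstream -- a privacy
# policy update, an acquisition, a shutdown -- can redefine access to it.
import sqlite3


class LocalStore:
    """A tiny offline-first store: one SQLite file, owned by the user."""

    def __init__(self, path="notes.db"):
        # sqlite3 is in the standard library; the "database" is just a file.
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)"
        )

    def add(self, body):
        cur = self.conn.execute("INSERT INTO notes (body) VALUES (?)", (body,))
        self.conn.commit()
        return cur.lastrowid

    def all(self):
        return [row[0] for row in self.conn.execute("SELECT body FROM notes")]


if __name__ == "__main__":
    # ":memory:" keeps this demo self-contained; a real app would pass a path
    # inside the user's own documents or data directory.
    store = LocalStore(":memory:")
    store.add("work survives whatever happens upstream")
    print(store.all())
```

Swapping the file path for a cloud endpoint is exactly the dependency this pattern refuses: the app's failure modes are the user's disk, not a vendor's roadmap.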

What I hope

I want OpenAI to succeed. I want Altman to get the help he's been asking for, in the form of real leaders with real authority, before the headwinds or the consolidation does the deciding for him. There's an honest defense: this is a company in the steepest part of an S-curve, building something that may genuinely be the most important technology in human history, on a timeline they didn't choose. Eighteen months from now, this piece may read as too pessimistic. I would be glad if it did.

But hope is not a plan either. Right now, from outside, it doesn't look like a plan is what's happening. It looks like a company throwing things at the wall, very fast, very expensively, hoping something sticks before the compute, the cash, or the trust runs out.

I hope I'm wrong. I'm telling you what I see.