BoSacks Speaks Out: Ads, AI, and Why Publishers Should Be Paying Attention
By Bob Sacks
Wed, Feb 18, 2026

When a former insider walks out the door and says, “We are repeating Facebook’s mistakes,” editors should lean in, not shrug. That is exactly what Zoë Hitzig, a former OpenAI researcher, just did in a New York Times guest essay warning that OpenAI’s move toward advertising risks the same incentive drift that warped social platforms.
Most coverage frames this as an ethics story. It is not. This is a business model story. And for publishers, it is a competitive threat disguised as a philosophical debate.
The Story the Times Is Telling
The Times follows a familiar arc.
First comes the Facebook analogy. A company that once promised restraint feels the gravitational pull of advertising and engagement, then gradually trades idealism for optimization.
Then comes the twist. ChatGPT is not just another feed. It is intimate. People disclose medical fears, financial stress, relationship breakdowns, and existential dread. Place ads in that environment and you are not monetizing clicks. You are monetizing confession.
Finally, Hitzig argues that AI platforms do not have to choose between two bad options: a luxury subscription for the few or surveillance advertising for the many. She points to cross-subsidies, public interest funding, and governance structures as ways to soften the incentives.
That is the moral scaffolding. What it leaves out is the group with the most to lose. Publishers.
Where the Argument Slips
Ads Are Not the Automatic Villain
Let us start with a basic truth. Advertising is not inherently immoral, and even Hitzig acknowledges that. Advertising does not automatically mean spying.
Publishers know how to do this without crossing the line. Contextual ads work. Sponsorships work. Underwriting works. Newspapers ran car ads next to auto reviews long before cookies existed. Medical journals still sell pharma sponsorships without harvesting reader confessions. Newsletters sell category-based placements without building surveillance dossiers.
The real fault line is simple. Does conversational data become ad targeting data?
If chat histories are truly walled off from the ad stack, the risk shrinks. If they quietly inform targeting models, even indirectly, the danger multiplies. Right now OpenAI says ads appear below responses and do not influence answers. That is a claim, not a guarantee.
Publishers should focus on that distinction.
Engagement Is Not a Sin, It Is a Symptom
There is another sleight of hand in treating engagement as an advertising disease.
Facebook chases engagement. So do subscription newsrooms. So do streamers optimizing watch time, SaaS companies tracking daily active users, and game studios measuring retention. Anyone with a churn chart lives and dies by engagement.
Engagement is a metric, not a moral failing. The real question is what the model rewards and what guardrails exist when the numbers start climbing.
A subscription newsroom can still be tempted by outrage headlines. A paywalled publisher can still optimize for anxiety and habit. Ads may accelerate bad incentives, but subscriptions do not magically cleanse them.
Publishers live with this tension daily. They know how easily KPIs become editorial policy if no one is paying attention.
The Economics Are Not Theoretical
This is not a thought experiment.
ChatGPT now serves hundreds of millions of weekly users across consumers, businesses, and governments. That scale turns a product into infrastructure.
Running frontier models at that level requires tens of thousands of graphics processing units (GPUs), massive energy consumption, and capital expenditures that look more like utilities than startups. Losses in the billions are not a surprise. They are the cost of scale.
Cross-subsidies and governance boards sound noble. They do not pay for silicon. At this scale, advertising is not a philosophical detour. It is an almost inevitable line item on the P&L.
What Publishers Should Actually Be Worried About
This is where abstraction ends.
If ChatGPT becomes a serious advertising platform, it will become a serious media platform. That puts it in direct competition with publishers.
Consider a few realistic examples.
Pharma sponsors placed beneath explanations of diabetes treatments or cholesterol management.
Travel brands integrated into itinerary planning, visa questions, or “best time to visit” conversations.
Banks and fintech firms attached to retirement planning, tax strategy, debt consolidation, or student loan advice.
This is not banner spam. This is moment-of-need inventory. High intent. High trust. Publishers built entire verticals around this exact behavior in health, travel, personal finance, careers, and B2B services.
Now imagine those conversations moving into AI assistants at scale while OpenAI refines targeting, measurement, and pricing. Budgets that once funded publisher verticals will begin to leak. Quietly at first. As tests. Then suddenly, once performance data arrives.
This is not just a trust issue. It is competitive displacement.
To be fair, the Times essay nails an essential point. Incentive drift is real.
Once revenue rewards a behavior, that behavior expands until something forces a correction. Facebook's moral collapse did not happen overnight. It was the logical outcome of its own engagement-driven model.
The essay is also right that conversational AI handles a different class of data. A confession about depression or debt is not the same as liking a meme. Monetizing vulnerability deserves far more scrutiny than selling ads next to cat videos.
Publishers should not mock this warning. They should amplify it. Tomorrow’s AI advertising regulations may be the only thing preventing a world where every reader’s concern is monetized before they ever reach a publisher’s site.
The Questions That Actually Matter
Instead of shouting “ads or no ads,” publishers should be asking sharper questions.
Will conversational histories ever be used for ad targeting, directly or via derived profiles?
Will there be independent audits of how conversational data is segmented, stored, and monetized? (I am laughing as I write this… independent audits: haha)
Those answers matter far more than whether ads appear above or below a response.
This is not a morality play. It is an infrastructure moment.
AI is becoming a global distribution layer for answers and advice. If that layer is funded by advertising, it competes directly with the intent-based revenue publishers rely on in health, finance, travel, careers, and B2B.
The Facebook analogy should be read as a warning label, not a prophecy. The real mistake would be treating this as an abstract ethics seminar while brand dollars quietly test, then normalize, AI-native ad buys.
History does not repeat, but it rhymes loudly for anyone willing to listen.
Follow the incentives and you will see where the money and the power are headed.
The question is not whether AI will sell ads.
The question is whether publishers will have a strategy before those budgets move.
This is how disruption actually happens: quietly, compliantly, and paid for by budgets that used to belong to you.
