
Yesterday, The Information scooped my well-laid plans for today’s health- and AI-related predictions. If you’ve been following along this past week, you know I decided to write one prediction post a day for the first working week of the year. Today marks #5, which predicts that health will become a central player in society’s debate around AI, and #4, which predicts OpenEvidence will be acquired. I knew that OpenAI was working on health-related product offerings – the company said as much when it hired Fidji Simo from Instacart. But I didn’t know OpenAI would announce its health product so early in the year. Oh, and by the way, Google is expected to quickly do the same.
That said, I think there’s a lot more room to run in this story. OpenAI’s announcement is just the prelude. Health offers the perfect test case of just about every crucial limitation – and every massive opportunity – that AI represents in society today.
The first rule of journalism in a capitalist age is to follow the money. The healthcare industry accounts for nearly 20 percent of US GDP – and that number is only increasing. Healthcare marketing spend is projected to be more than $32 billion this year alone. If you are running (or building) an advertising platform – and OpenAI most certainly will be – you can’t afford to ignore such a huge chunk of money. Google, Apple, Amazon, and Meta certainly don’t!
Offerings like Instagram and Google Search can happily take Big Pharma’s marketing budgets for one simple reason: The companies themselves aren’t in the business of directly offering medical advice. Established law protects these platforms from liability related to the content they host (for more, read up on Section 230). The wicket gets a bit stickier when the product carrying the advertising has an editorial point of view on, say, the benefits of taking GLP-1s. And those are exactly the kinds of questions that ChatGPT Health will be fielding, millions upon millions of times each day.
Things only get stickier when you contemplate the tsunami of personal health data already being collected by consumer applications like Apple Health, Google Health, Oura, Peloton, and the thousands of smaller health apps like Flo or Mindspace. OpenAI has already announced plans to integrate with these kinds of health apps, and it’s promised to ring-fence how that data will be used. But those fences have holes, and the tech industry doesn’t have a strong track record of protecting personal data, to put it mildly. Add in Big Tech’s reliance on AI and data-driven advertising systems, plus the gravitational pull of healthcare marketing dollars, and… well, it’s not hard to imagine a fair bit of controversy brewing in the coming year.
At the core of that controversy is the fundamental concept of trust. Can we trust an AI system to diagnose medical conditions and offer appropriate remedies? If something goes wrong, can we examine the AI’s chain of logic to explain and correct its shortcomings? And who is liable if harm is done? These questions are central across all potential applications of AI, but nowhere are they more pressing than with the life-or-death consequences of medical decisions.*
With all that in mind, I am paying close attention to OpenEvidence, a five-year-old startup recently valued at a staggering $12 billion. Its product is elegantly simple: The company has built a custom AI research tool designed for medical clinicians, leveraging a verified and rigorous corpus of scientific literature. All my doctors already use OpenEvidence, as do millions more around the world.
The company’s business model is equally simple: It sells advertising, and lots of it. The Information reports that the company is selling an estimated $12 million a month of advertising, mostly to pharmaceutical companies, and it’s tripled revenue since last summer. Oh, and the company’s executives claim they’ve only monetized ten percent of available inventory. That’s a lot of potential gold in them thar hills!
In OpenEvidence we find a present-day example of an advertising-driven AI health application at scale – but one operating in the relatively safe world of business-to-business marketing. It’s one thing to advertise drugs to doctors and other medical professionals. It’s quite another to open those floodgates to consumers.
All that said, this is a prediction post, so I’d best get to the actual predictions. When it comes to AI and health, I’ve got two: First, 2026 will be the year that health takes center stage in the societal debate around AI. And second, OpenEvidence will be acquired by OpenAI, Google, Apple, Microsoft, or another advertising-driven big tech player (let’s not forget that Microsoft owns LinkedIn). To my mind, a better suitor would be Anthropic, which already understands the enterprise market and has corporate DNA I’d trust, but I think that company might shy away from entering the messy world of advertising this year. Regardless, in 2026, everyone will have a point of view on how health and AI interact. (This will be a major theme of DOC this year, so if you want to follow along, sign up for our newsletter here!)
*I’ll be writing a lot more about trust in future predictions….
—
This is the sixth in a series of posts I’ll be doing on predictions for 2026. The first five are here, here, here, here, and here. When I get to #1, I’ll post a roundup like I usually do.
