As generative AI reaches a fever pitch of investment, product releases, and hype, most of us have ignored a profound flaw as we march relentlessly toward The Next Big Thing. Our most dominant AI products and services (those from OpenAI, Google, and Microsoft, for example) are deployed in the cloud via a “client-server” architecture – “a computing model where resources, such as applications, data, and services, are provided by a central server, and clients request access to these resources from the server.”
Now, what’s wrong with that? Technically, nothing. A client-server approach isn’t controversial; in fact, it’s an efficient and productive approach for a company offering data-processing products and services. The client – that would be you and your device – provides input (a prompt, for example), which is relayed to the server. The server takes that input, processes it, and delivers an output back to the client.
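To make that flow concrete, here’s a toy sketch in Python – emphatically not any vendor’s actual stack, just the request/response shape of the client-server model, with a trivial string transform standing in for real model inference:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# The "server": holds the application and the data, and does all the processing.
class PromptHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        prompt = self.rfile.read(int(self.headers["Content-Length"])).decode()
        output = f"echo: {prompt.upper()}"  # stand-in for real model inference
        self.send_response(200)
        self.end_headers()
        self.wfile.write(output.encode())

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("localhost", 8000), PromptHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "client": you and your device, sending input and receiving output.
req = urllib.request.Request("http://localhost:8000/", data=b"hello, model")
print(urllib.request.urlopen(req).read().decode())  # -> echo: HELLO, MODEL
server.shutdown()
```

The point isn’t the code; it’s that every prompt you type makes a round trip to someone else’s machine, where it can be logged, stored, and reused.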
Non-controversial, right? Well, sure, if the “server” in question is a neutral platform that’s only in the business of processing your data so you can use the services it offers. Banks, for example, use neutral client-server architectures to provide online financial services, as do most health care providers. The data you share with them isn’t used for anything other than the provision of services.
But that’s not how most consumer-facing technology companies view your input data. For Apple, Google, Microsoft, Amazon, Netflix, Spotify, and countless other consumer apps, the “input” you send to their servers is used for far more than just providing a specific service. As you are probably well aware, input data – and a lot of other data – forms the critical mass driving the tech industry’s most profitable business models: lock-in (more on that soon) and advertising.
If you know my work, you know I’m a huge fan of advertising – I’ve built my career in the related fields of publishing, media, and advertising technology. But that doesn’t mean I’m a fan of how the technology industry has eclipsed traditional advertising models in favor of what critics call “surveillance capitalism.” I love the idea that personal data can make advertising work for … people. But so far, we’ve not built a system that works that way. Instead, we’ve built an unchecked engine of extractive capitalism that seeks to lock us in through data capture. Core to this lock-in is the aforementioned client-server model of data input, as well as a thicket of legal contracts – “terms of service” and “privacy policies” – that govern how companies can use that data.
Enter Your AI Identity
This week, well-known technology analyst Ben Thompson posted about OpenAI’s recent news that it was adding “memory” to its ChatGPT product. OpenAI hasn’t documented exactly how this memory feature works, but Thompson suggests it “seems to be some sort of RAG search over structured summaries of previous chats.” If that last sentence left you a bit puzzled, don’t fret. I think of it this way: OpenAI is taking all your previous input data, processing it in various ways, and combining it with whatever you ask it next so as to give you better output.
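For the curious, here’s a minimal sketch of what RAG-style memory might look like – assuming an embedding function and a similarity search, neither of which OpenAI has documented; the toy embed() below is a stand-in for a real embedding model:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in embedding: hash words into a fixed-size vector.
    A real system would call an embedding model instead."""
    vec = np.zeros(64)
    for word in text.lower().split():
        vec[hash(word) % 64] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

# Structured summaries of previous chats -- the "memories."
memories = [
    "User is planning a trip to Lisbon in June.",
    "User prefers code examples in Python.",
    "User runs a small publishing business.",
]
memory_vectors = np.stack([embed(m) for m in memories])

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k stored memories most similar to the query."""
    scores = memory_vectors @ embed(query)
    return [memories[i] for i in np.argsort(scores)[::-1][:k]]

def build_prompt(user_message: str) -> str:
    """Prepend retrieved memories to the new message before it goes
    to the model -- the essence of RAG-style memory."""
    context = "\n".join(retrieve(user_message))
    return f"Relevant memories:\n{context}\n\nUser: {user_message}"

print(build_prompt("What should I pack for my trip?"))
```

Note where everything lives: the summaries, the vectors, and the retrieval all sit on the provider’s servers – which is exactly where the lock-in story begins.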
So far, so good, right? Well…maybe. But maybe not. Thompson notes that ChatGPT’s new memory feature only works for “memories” that are on OpenAI’s servers – memories from Google’s Gemini or Anthropic’s Claude may as well have been written in sand. Thompson calls this fact a “strategic moat” for OpenAI, because, as with Google, Meta, Apple, and countless others in the tech world, OpenAI has built a product that will own the identity you’re building as you interact with it. He presses his point further, delivering this stunning conclusion: “What is interesting about OpenAI’s gambit here is that the identity they are seeking to own is not your identity but rather the identity of your AI.”
Thompson continues his analysis by theorizing that once OpenAI establishes dominance as a central repository of our AI identities, it can, through its API, become something of a master connector for all other AI services. He mentions a new AI chatbot called Auren that he’s been testing. Wouldn’t it be cool if Auren, which claims to offer “superhuman emotional intelligence, memory, and therapeutic capability,” could connect with and be informed by your OpenAI identity? “Presumably,” he continues, “the details of [our conversations with Auren] could then flow the other way to my AI identity [at OpenAI].”
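To make that presumption concrete, here’s what bi-directional flow through a central “identity hub” might look like. To be loud about it: nothing below exists. The URL, endpoints, and payloads are all invented for illustration; no such OpenAI API has been announced:

```python
import requests

BASE = "https://identity-hub.example.com/v1"  # hypothetical; no such service exists

def pull_identity(token: str) -> list[dict]:
    """A third-party app (Auren, say) reads the user's stored AI memories."""
    resp = requests.get(
        f"{BASE}/identity/memories",
        headers={"Authorization": f"Bearer {token}"},
    )
    resp.raise_for_status()
    return resp.json()

def push_memory(token: str, summary: str) -> None:
    """...and writes a new memory back after its own conversation."""
    resp = requests.post(
        f"{BASE}/identity/memories",
        headers={"Authorization": f"Bearer {token}"},
        json={"summary": summary},
    )
    resp.raise_for_status()
```

Whoever runs that hub sits in the middle of every such exchange – which is precisely why Thompson calls it a strategic moat, and why I went and read the terms of service.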
It’s always the presumptions, dammit. As soon as I read that line, I realized it was time to read OpenAI’s terms of service. Yes, it’d be great if data flowed bi-directionally between all manner of apps – AI and otherwise. But – as I’ve written over, and over, and over again – that’s not how the tech industry works. Driven as it is by data capture and rivalrous economic incentives, the tech industry has built its business models on “lock-in” – making it very difficult for consumers to leave one service for another. Ever tried to leave the iPhone for Android, Google for Bing, or the Mac for a PC? That’s lock-in at work.
Put another way, Thompson is praising OpenAI’s move toward “owning AI identity” as consistent with the standard technology industry playbook of locking us into its services, forever. And what is OpenAI’s method of lock-in? You guessed it: your data. How do I know this? Because I did, in fact, read OpenAI’s terms of service and its other data-related policies. And they’re depressingly similar to the rest of the technology industry’s. To wit:
- Users may not “automatically or programmatically extract data or Output.” In other words, folks like you and me cannot use our own AI agents to pull our own information from OpenAI’s servers. Goodbye, my dreams of genies!
- OpenAI “may use Content to provide, maintain, develop, and improve our Services.” This is an industry-wide catchall that allows the company to do pretty much anything it wants with your data, within applicable law (which, in the US, ain’t much).
- And as with all other tech companies, OpenAI “may share your Personal Data, including information about your interaction with our Services, with government authorities, industry peers, or other third parties in compliance with the law.”
I don’t know if you find that last one unsettling given headlines like “How DOGE led a ‘hostile takeover’ at the IRS to use taxpayer data,” but I find the idea that my “AI identity” is locked in a private company where it is subject to potentially authoritarian government decree…a bit disturbing of late.
OpenAI’s policies did have one bright spot – as of today, the company promises not to get directly into the advertising business. From its privacy policy: “We do not ‘sell’ Personal Data or ‘share’ Personal Data for cross-contextual behavioral advertising, and we do not process Personal Data for ‘targeted advertising’ purposes.”
Comforting, right? Well, kind of. Let’s not forget that Apple, which makes similar promises, rakes in more than $20 billion a year from its relationship with Google, and it also operates a massive advertising business of its own. Once OpenAI owns our identities, will it ignore that kind of bonanza? I doubt it. In fact, I’m quite sure the company is counting on it. AI may well be The Next Big Thing in tech, but one thing’s certain: there’s nothing new about how it plans on making money from us, and that’s a damn shame.

The very definition of enshittification…
This excerpt stresses a serious point: AI memory could be an enabler of good user experience, but as built it centralizes control over your AI identity. Until users decide the fate of their own data, these systems will remain platform-centric rather than people-centric.