Predictions 2020: Facebook Caves, Google Zags, Netflix Sells Out, and Data Policy Gets Sexy

A new year brings another run at my annual predictions: For 17 years now, I’ve taken a few hours to imagine what might happen over the course of the coming twelve months. And my goodness did I swing for the fences last year — and I pretty much whiffed. Batting .300 is great in the majors, but it kind of sucks compared to my historical average. My mistake was predicting events that I wished would happen. In other words, emotions got in the way. So yes, Trump didn’t leave office, Zuck didn’t give up voting control of Facebook, and weed’s still illegal (on a federal level, anyway). 

Chastened, this year I’m going to focus on less volatile topics, and on areas where I have a bit more on-the-ground knowledge — the intersection of big tech, marketing, media, and data policy. As long time readers know, I don’t prepare in advance of writing this post. Instead, I just clear a few hours and start thinking out loud. So…here we go.

  1. Facebook bans microtargeting on specific kinds of political advertising. Of course I start with Facebook, because, well, it’s one of the most inscrutable companies in the world right now. While Zuck & Co. seem deeply committed to their “principled” stand around a politician’s right to paid prevarication, the pressure to do something will be too great, and as it always does, the company will enact a half-measure, then declare victory. The new policy will probably roll out after Super Tuesday (sparking all manner of conspiracies about how the company didn’t want to impact its Q1 growth numbers in the US). The company’s spinners will frame this as proof they listen to their critics, and that they’re serious about the integrity of the 2020 elections. As with nearly everything it does, this move will fail to change anyone’s opinion of the company. Wall St. will keep cheering the company’s stock, and folks like me will keep wondering when, if ever, the next shoe will drop. 
  2. Netflix opens the door to marketing partnerships. Yes, I’m aware that the smart money has moved on from this idea. But in a nod to increasing competition and the reality of Wall St. expectations, Netflix will at least pilot a program — likely not in the US — where it works with brands in some limited fashion. Mass hysteria in the trade press will follow once this news breaks, but Netflix will call the move a pilot, a test, an experiment…no big deal. It may take the form of a co-produced series, or branded content, or some other “native” approach, but at the end of the day, it’ll be advertising dollars that fuel the programming. And while I won’t predict the program augurs a huge new revenue stream for the company, I can predict what won’t happen, at least in 2020: a free, advertising-driven version of Netflix. Just not in the company’s culture.
  3. CDA 230 will get seriously challenged, but in the end, nothing gets done, again. Last year I predicted there’d be no federal data privacy legislation, and I’m predicting the same for this year. However, there will be a lot of movement on legislation related to the tech oligarchy. The topic that will come the closest to passage will be a revision to CDA 230 — the landmark legislation that protects online platforms from liability for user-generated content. Blasphemy? Sure, but here we are, stuck between free speech on the one hand, massive platform economics on the other, and a really, really bad set of externalities in the middle. CDA 230 was built to give early platforms the room to grow unhindered by traditional constraints on media companies. That growth has now metastasized, and we don’t have a policy response that anyone agrees upon. And CDA 230 is an easy target, given conservatives in Congress already believe Facebook, Google, and others have it out for their president. There’ll be a serious run at rewriting 230, but it will ultimately fail. Related…
  4. Adversarial interoperability will get a moment in the sun, but also fail to make it into law. In the past I (and many others) have written about “machine readable data portability.” But for the debate we’re about to have (and need to have), I like “adversarial interoperability” better. Both are mouthfuls, and neither is easy to explain. Data governance and policy are complicated topics which test our society’s ability to have difficult long form conversations. 2020 will be a year where the legions of academics, policy makers, politicians, and writers who debate economic theory around data and capitalism get a real audience, and I believe much of that debate will center on whether or not large platforms have a responsibility to be open or closed. As Cory Doctorow explains, adversarial interoperability is “when you create a new product or service that plugs into the existing ones without the permission of the companies that make them.” As in, I can plug my new e-commerce engine into Amazon, my new mobile operating system into iOS, my new social network into Facebook, or my new driving instruction app into Google Maps. I grew up in a world where this kind of innovation was presumed. It’s now effectively banned by a handful of data oligarchs, and our economy – and our future – suffer for it.
  5. As long as we’re geeking out on catchphrases only a dork can love, 2020 will also be the year “data provenance” becomes a thing. As with many nerdy topics, the concept of data provenance started in academia, migrated to adtech, and is about to break into the broader world of marketing, which is struggling to get its arms around a data-driven future. The ability to trace the origin, ownership, permissions, and uses of data is a fundamental requirement of an advanced digital economy, and in 2020, we’ll realize we have a ton of work left to do to get this right. Yes, yes, blockchain and ledgers are part of the discussion here, but the point isn’t the technology, it’s the policy enabling the technology. 
  6. Google zags. Saddled with increasingly negative public opinion and driven in large part by concerns over retaining its workforce, Google will make a deeply surprising and game changing move in 2020. It could be a massive acquisition, a move into some utterly surprising new industry (like content), but my money’s on something related to data privacy. The company may well commit to both leading the debate on the topics described above, as well as implementing them in its core infrastructure. Now that would really be a zag…
  7. At least one major “on demand” player will capitulate. Gig economy business models may make sense long term, but that doesn’t mean we’re getting the execution right in the first group of on demand “unicorns.” In fact, I’d argue we’re mostly getting them wrong, even if as consumers, we love the supposed convenience gig brands bring us. Many of the true costs of these businesses have been externalized onto public infrastructure (and the poor), and civic patience is running out. Plus, venture and public finance markets are increasingly skeptical of business models that depend on strip mining the labor of increasingly querulous private contractors. A reckoning is due, and in 2020 we’ll see the collapse of one or more larger players in the field.
  8. Influencer marketing will fall out of favor. I’m not predicting an implosion here, but rather an industry wide pause as brands start to ask the questions consumers will also be pondering: who the fuck are these influencers and why are we paying them so much attention? A major piece of this — on the marketing side anyway — will be driven by a massive increase in influencer fraud. As with other fast growing digital marketing channels, where money pours in, fraud fast follows — nearly as fast as fawning New York Times articles, but I digress. 
  9. Information warfare becomes a national bogeyman. If we’ve learned anything since the 2016 election, it’s this: We’ve taken far too long to comprehend the extent to which bad actors have come to shape and divide our discourse. These past few years have slowly revealed the power of information warfare, and the combination of a national election with the compounding distrust of algorithm-driven platforms will mean that by mid year, “fake news” will yield to “information warfare” as the catchphrase describing what’s wrong with our national dialog. Deep fakes, sophisticated state-sponsored information operations, and good old fashioned political info ops will dominate the headlines in 2020. Unfortunately, the cynic in me thinks the electorate’s response will be to become more inured and distrustful, but there’s a chance a number of trusted media brands (both new and old) prosper as we all search for a common set of facts.
  10. Purpose takes center stage in business. 2019 was the year the leaders of industry declared a new purpose for the corporation — one that looks beyond profits for a true north that includes multiple stakeholders, not just shareholders. 2020 will be the year many companies will compete to prove that they are serious about that pledge. Reaction from Wall St. will be mixed, but I expect plenty of CEOs will feel emboldened to take the kind of socially minded actions that would have gotten them fired in previous eras. This is a good thing, and likely climate change will become the issue many companies will feel comfortable rallying behind. (I certainly hope so, but this isn’t supposed to be about what I wish for…)
  11. Apple and/or Amazon stumble. I have no proof as to why I think this might happen but…both these companies just feel ripe for some kind of major misstep or scandal. America loves a financial winner — and both Amazon and Apple have been runaway winners in the stock market for the past decade. Both have gotten away with some pretty bad shit along the way, especially when it comes to labor practices in their supply chain. And while neither of them are as vulnerable as Facebook or Google when it comes to the data privacy or free speech issues circling big tech, both Apple and Amazon have become emblematic of a certain kind of capitalism that feels fraught with downside risk in the near future. I can’t say what it is, but I feel like both these companies could catch one squarely on the jaw this coming year, and the post-mortems will all say they never saw it coming. 
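The “data provenance” idea in prediction 5 — tracing the origin, ownership, permissions, and uses of a piece of data — can be sketched concretely. Here’s a minimal, hypothetical record in Python; every name and field here is illustrative, not drawn from any real standard or library:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    """Hypothetical sketch of a provenance record for a single datum."""
    origin: str                                    # where the datum was first collected
    owner: str                                     # who currently controls it
    permissions: set = field(default_factory=set)  # uses the subject has granted
    lineage: list = field(default_factory=list)    # (timestamp, event) history

    def record(self, event: str) -> None:
        # Append a timestamped event so every use of the datum is auditable
        self.lineage.append((datetime.now(timezone.utc).isoformat(), event))

    def may(self, use: str) -> bool:
        # A use is allowed only if it was explicitly granted
        return use in self.permissions

# Trace a datum from collection through a transfer
rec = ProvenanceRecord(origin="signup-form", owner="acme.example",
                       permissions={"analytics"})
rec.record("collected")
rec.record("transferred to ad partner")

print(rec.may("analytics"))  # granted at collection
print(rec.may("resale"))     # never permitted
```

The point the prediction makes is visible even in this toy: the hard part isn’t the data structure, it’s the policy question of who is obligated to maintain such a chain of custody and honor the permissions it encodes.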

So there you have it — 11 predictions for the coming year. I was going to stop at 10, but that Apple/Amazon one just forced itself out — perhaps that’s me wishing again. We’ll see. Let me know your thoughts, and keep your cool out there. 2020 is going to be one hell of a year. 



It’s Not Facebook’s Fault: Our Shadow Internet Constitution

Those of us fortunate enough to have lived through the birth of the web have a habit of stewing in our own nostalgia. We’ll recall some cool site from ten or more years back, then think to ourselves (or sometimes out loud on Twitter): “Well damn, things were way better back then.”

Then we shut up. After all, we’re likely out of touch, given most of us have never hung out on Twitch. But I’m seeing more and more of this kind of oldster wistfulness, what with Facebook’s current unraveling and the overall implosion of the tech-as-savior narrative in our society.

Hence the chuckle many of us had when we saw this trending piece suggesting that perhaps it was time for us to finally unhook from Facebook and – wait for it – get our own personal webpage, one we updated for any and all to peruse. You know, like a blog, only for now. I don’t know the author – the editor of the tech-site Motherboard – but it’s kind of fun to watch someone join the Old Timers Web Club in real time. Hey Facebook, get off my lawn!!!

That Golden Age

So as not to bury the lede, let me state something upfront: Of course the architecture of our current Internet is borked. It’s dumb. It’s a goddamn desert. It’s soil where seed don’t sprout. Innovation? On the web, that dog stopped hunting years ago.

And who or what’s to blame? No, no. It’s not Facebook. Facebook is merely a symptom. A convenient and easy stand-in – an artifact of a larger failure of our cultural commons. Somewhere in the past decade we got something wrong; we lost our narrative – we allowed Facebook and its kin to run away with our culture.

Instead of focusing on Facebook, which is structurally borked and hurtling toward Yahoo-like irrelevance, it’s time to focus on that mistake we made, and how we might address it.

Just 10-15 years ago, things weren’t heading toward our currently crippled version of the Internet. Back in the heady days of 2004 to 2010 – not very long ago – a riot of innovation had overtaken the technology and Internet world. We called this era “Web 2.0” – the Internet was becoming an open, distributed platform, in every meaning of the word. It was generative, it was Gates Line-compliant, and its increasingly muscular technical infrastructure promised wonder and magic and endless buckets of new. Bandwidth, responsive design, data storage, processing on demand, generously instrumented APIs; it was all coming together. Thousands of new projects and companies and ideas and hacks and services bloomed.

Sure, back then the giants were still giants – but they seemed genuinely friendly and aligned with an open, distributed philosophy. Google united the Internet, codifying (and sharing) a data structure that everyone could build upon. Amazon Web Services launched in 2006, and with the problem of storage and processing solved, tens of thousands of new services were launched in a matter of just a few years. Hell, even Facebook launched an open platform, though it quickly realized it had no business doing so. AJAX broke out, allowing for multi-state data-driven user interfaces, and just like that, the web broke out of flatland. Anyone with passable scripting skills could make interesting shit! The promise of Internet 1.0 – that open, connected, intelligence-at-the-node vision we all bought into back before any of it was really possible – by 2008 or so, that promise was damn near realized. Remember LivePlasma? Yeah, that was an amazing mashup. Too bad it’s been dormant for over a decade.

After 2010 or so, things went sideways. And then they got worse. I think in the end, our failure wasn’t that we let Facebook, Google, Apple and Amazon get too big, or too powerful. No, I think instead we failed to consider the impact of the technologies and the companies we were building. We failed to play our hand forward, we failed to realize that these nascent technologies were fragile and ungoverned and liable to be exploited by people less idealistic than we were.

Our Shadow Constitution

Our lack of consideration deliberately aided and abetted the creation of an unratified shadow Constitution for the Internet – a governance architecture built on assumptions we have accepted, but are actively ignoring. All those Terms of Service that we clicked past, the EULAs we mocked but failed to challenge, those policies have built walls around our data and how it may be used. Massive platform companies have used those walls to create impenetrable business models. Their IPO filings explain in full how the monopolization and exploitation of data were central to their success – but we bought the stock anyway.

We failed to imagine that these new companies – these Facebooks, Ubers, Amazons and Googles – might one day become exactly what they were destined to become, should we leave them ungoverned and in the thrall of unbridled capitalism.  We never imagined that should they win, the vision we had of a democratic Internet would end up losing.

It’s not that, at the very start at least, tech companies were run by evil people in any larger sense. These were smart kids, almost always male, testing the limits of adolescence in their first years after high school or college. Timing mattered most: In the mid to late oughts, with the winds of Web 2 at their back, these companies had the right ideas at the right time, with an eager nexus of opportunistic capital urging them forward.

They built extraordinary companies. But again, they built a new architecture of governance over our economy and our culture – a brutalist ecosystem that repels innovation. Not on purpose – not at first. But protected by the walls of the Internet’s newly established shadow constitution and in the thrall of a new kind of technology-fused capitalism, they certainly got good at exploiting their data-driven leverage.

So here we are, at the end of 2018, with all our darlings, the leaders not only of the tech sector, but of our entire economy, bloodied by doubt, staggering from the weight of unconsidered externalities. What comes next?

2019: The Year of Internet Policy

Whether we like it or not, Policy with a capital P is coming to the Internet world next year. Our newly emboldened Congress is scrambling to introduce multiple pieces of legislation, from an Internet Bill of Rights to a federal privacy law modeled on – shudder – the EU’s GDPR. In the past month, I’ve read draft policy papers suggesting we tax the Internet’s advertising model, that we break up Google, Facebook, and Amazon, or that we back off and just let the market “do its work.”

And that’s a good thing, to my mind – it seems we’re finally coming to terms with the power of the companies we’ve created, and we’re ready to have a national dialog about a path forward. To that end, a spot of personal news: I’ve joined the School of International and Public Affairs at Columbia University, and I’m working on a research project studying how data flows in US markets, with an emphasis on the major tech platforms. I’m also teaching a course on Internet business models and policy. In short, I’m leaning into this conversation, and you’ll likely be seeing a lot more writing on these topics here over the course of the next year or so.

Oh, and yeah, I’m also working on a new project, which remains in stealth for the time being. Yep, has to do with media and tech, but with a new focus: Our political dialog. More on that later in the year.

I know I’ve been a bit quiet this past month, but starting up new things requires a lot of work, and my writing has suffered as a result. But I’ve got quite a few pieces in the queue, starting with my annual roundup of how I did in my predictions for the year, and then of course my predictions for 2019. But I’ll spoil at least one of them now and just summarize the point of this post from the start: It’s time we figure out how to build a better Internet, and 2019 will be the year policymakers get deeply involved in this overdue and essential conversation.


Andrew Yang Deserves to Be Heard. Will Our Politics Let Him Speak?

Let’s be honest with ourselves, shall we? We’re in the midst of the most significant shift in our society since at least the Gilded Age – a tectonic reshaping of economic systems, social mores, and political institutions. Some even argue our current transition to a post-digital world, one in which technology has lapped our own intelligence and automation may displace the majority of our workforce within our lifetimes, is the most dramatic change to ever occur in recorded history. And that’s before we tackle a few other existential threats, including global warming – which is inarguably devastating our environment and driving massive immigration, drought, and famine – or income inequality, which has already fomented historic levels of political turmoil.

Any way you look at it, we’ve got a lot of difficult intellectual, social, and policy work to do, and we’ve got to do it quickly. Lucky for us, two major political events loom before us: The midterm elections this November, and a presidential election two years after that. Will we use these milestones to effect real change?

Given our current political atmosphere, it’s hard to imagine that we will. I fervently hope that the midterms will provide an overdue check on the insane clown show that the White House has delivered to us so far, but I’ve little faith that the build up to the 2020 Presidential election will be much more than an ongoing circus of divisive theatrics. Will there be room for serious debate about reshaping our fundamental relationship to government? If we are truly in an unprecedented period of social change, shouldn’t we be talking about how we’re going to manage it?

We could be, if Andrew Yang can poll above 15 percent in time for the Democratic debates next year.
Wait, who?!

Andrew Yang currently labors in near obscurity, but he is one of only two declared democratic candidates for president so far, and he’s been spending a lot of time in Iowa and New Hampshire lately. Yang is smart, thoughtful, and has the backing of a lot of folks in the technology world. He’s the founder of Venture for America, a program that trains college grads to work as entrepreneurs in “second cities” around the country like St. Louis, Pittsburgh, and Cleveland. He’s in no way a typical presidential candidate, but then again, we seem to be tired of those lately.

If you have heard of Yang, it might be as the “UBI candidate,” though he rankles a bit at that description. Yang is a proponent of what he calls the “Freedom Dividend,” a version of universal basic income that he argues will fundamentally reshape American culture. To get there, we’ll need to radically rethink our current social safety net, adopt an entirely new approach to taxation (he argues for a European-style value added tax), and get over our uniquely American love affair with the Horatio Alger mythos.

Can a candidate like Yang actually win the Democratic nomination for president, much less the presidency itself? I’ve not met a political professional who thinks he can, but then again, much stranger things have already happened.  Regardless, it’s critical that we debate the ideas his campaign represents during the build up to our national elections in 2020, and for that reason alone I’m supporting Yang’s candidacy.

I met Yang two weeks ago at Thrival, an event that NewCo helps to produce in Pittsburgh (the video of that event will be up soon; when it is, I’ll post a link here). For nearly an hour on stage at the Carnegie museum, I grilled Yang about his economic theories, his chances of actually becoming president, and his agenda beyond the Freedom Dividend. I do a lot of interviews with well-known folks, and I must say, if the reaction Yang got from the Pittsburgh audience is any indication, the man’s platform resonates deeply with voters.

For anyone who wants to get to know Yang better, I recommend his recently published book The War on Normal People. But read it with this caveat: The thing is damn depressing. Yang lays out how structurally and fundamentally broken our society already is. He persuasively argues that we’re already in the midst of a “Great Displacement” across tens of millions of workers, a displacement that we’ve failed to identify, much less address. Echoing the recent work of Anand Giridharadas, Rana Foroohar, Edward Luce, and Andy Stern, Yang cites example after example of how perilously close we are to social collapse.

It’s hard to win a presidential election if fear is your primary motivator. But we live in strange, fearful times, and despite the pessimism of his book, I found Yang an optimistic, genuine, and actually pretty funny guy. He calls himself “the opposite of Trump – an Asian man who likes numbers.”

For Yang to actually shift the dialog of presidential politics, he’ll need to poll at or above 15 percent by early next year. That’s going to be a long shot, to be sure. But I for one hope he makes it to the debate stage, and that as a society, we will seriously discuss the ideas he proposes. We can no longer afford politics as usual – not the politics we have now, and certainly not a return to the cliché-ridden blandishments of years past. The time to traffic in new ideas – radically new ideas – is upon us.


(Cross posted from NewCo Shift)


Governance, Technology, and Capitalism.

Or, Will Nature Just Shrug Its Shoulders?

If you pull far enough back from the day to day debate over technology’s impact on society – far enough that Facebook’s destabilization of democracy, Amazon’s conquering of capitalism, and Google’s domination of our data flows start to blend into one broader, more cohesive picture – what does that picture communicate about the state of humanity today?

Technology forces us to recalculate what it means to be human – what is essentially us, and whether technology represents us, or some emerging otherness which alienates or even terrifies us.  We have clothed ourselves in newly discovered data, we have yoked ourselves to new algorithmic harnesses, and we are waking to the human costs of this new practice. Who are we becoming?

Nearly two years ago I predicted that the bloom would fade from the technology industry’s rose, and so far, so true. But as we begin to lose faith in the icons of our former narratives, a nagging and increasingly urgent question arises: In a world where we imagine merging with technology, what makes us uniquely human?

Our lives are now driven in large part by data, code, and processing, and by the governance of algorithms. These determine how data flows, and what insights and decisions are taken as a result.

So yes, software has, in a way, eaten the world. But software is not something being done to us. We have turned the physical world into data, we have translated our thoughts, actions, needs and desires into data, and we have submitted that data for algorithmic inspection and processing. What we now struggle with is the result of these new habits – the force of technology looping back upon the world, bending it to a new will.  What agency – and responsibility – do we have? Whose will? To what end?

• • •

Synonymous with progress, asking not for permission, fearless of breaking things – in particular stupid, worthy-of-being-broken things like government, sclerotic corporations, and fetid social norms – the technology industry reveled for decades as a kind of benighted warrior for societal good. As one Senator told me during the Facebook hearings this past summer, “we purposefully didn’t regulate technology, and that was the right thing to do.” But now? He shrugged. Now, maybe it’s time.

Because technology is already regulating us. I’ve always marveled at libertarians who think the best regulatory framework for government is none at all. Do they think that means there’s no governance?

In our capitalized healthcare system, data, code and algorithms now drive diagnosis, costs, coverage and outcomes. What changes on the ground? People are being denied healthcare, and this equates to life or death in the real world. 

In our public square, data, code and algorithms drive civil discourse. We no longer share one physical, common square, but instead struggle to comprehend a world comprised of a billion Truman Shows. What changes on the ground? The election results of the world’s most powerful country.

Can you get credit to start a business? A loan to better yourself through education? Financial decisions are now determined by data, code, and algorithms. Job applications are turned to data, and run through cohorts of similarities, determining who gets hired, and who ultimately ends up leaving the workforce.

And in perhaps the most human pursuit of all – connecting to other humans – we’ve turned our desires and our hopes to data, swapping centuries of cultural norms for faith in the governance of code and algorithms built – in necessary secrecy – by private corporations.

• • •

How does a human being make a decision? Individual decision making has always been opaque – who can query what happens inside someone’s head? We gather input, we weigh options and impacts, we test assumptions through conversations with others. And then we make a call – and we hope for the best.

But when others are making decisions that impact us, well, those kinds of decisions require governance. Over thousands of years we’ve designed systems to ensure that our most important societal decisions can be queried and audited for fairness, that they are defensible against some shared logic, that they will benefit society at large.

We call these systems government. It is imperfect but… it’s better than anarchy.

For centuries, government regulations have constrained social decisions that impact health, job applications, credit – even our public square. Dating we’ve left to the governance of cultural norms, which share the power of government over much of the world.

But in just the past decade, we’ve ceded much of this governance to private companies – companies motivated by market imperatives which demand their decision making processes be hidden. Our public government – and our culture – have not kept up.

What happens when decisions are taken by algorithms of governance that no one understands? And what happens when those algorithms are themselves governed by a philosophy called capitalism?

• • •

We’ve begun a radical experiment combining technology and capitalism, one that most of us have scarcely considered. Our public commons – that which we held as owned by all, to the benefit of all – is increasingly becoming privatized.

Thousands of companies are now dedicated to revenue extraction in the course of delivering what were once held as public goods. Public transportation is being hollowed out by Uber, Lyft, and their competitors (leveraging public goods like roadways, traffic infrastructure, and GPS).  Public education is losing funding to private schools, MOOCs, and for-profit universities. Public health, most disastrously in the United States, is driven by a capitalist philosophy tinged with technocratic regulatory capture. And in perhaps the greatest example of all, we’ve ceded our financial future to the almighty 401K – individuals can no longer count on pensions or social safety nets – they must instead secure their future by investing in “the markets” – markets which have become inhospitable to anyone lacking the technological acumen of the world’s most cutting-edge hedge funds.

What’s remarkable and terrifying about all of this is the fact that the combinatorial nature of technology and capitalism outputs fantastic wealth for a very few, and increasing poverty for the very many. It’s all well and good to claim that everyone should have a 401K. It’s irresponsible to continue that claim when faced with the reality that 84 percent of the stock market is owned by the wealthiest ten percent of the population.

This outcome is not sustainable. When a system of governance fails us, we must examine its fundamental inputs and processes, and seek to change them.

• • •

So what truly is governing us in the age of data, code, algorithms and processing? For nearly five decades, the singular true north of capitalism has been to enrich corporate shareholders. Other stakeholders – employees, impacted communities, partners, customers – do not directly determine the governance of most corporations.

Corporations are motivated by incentives and available resources. When the incentive is extraction of capital to be placed in the pockets of shareholders, and a new resource becomes available which will aide that extraction, companies will invent fantastic new ways to leverage that resource so as to achieve their goal. If that resource allows corporations to skirt current regulatory frameworks, or bypass them altogether, so much the better.

The new resource, of course, is the combination of data, code, algorithms and processing. Unbridled, replete with the human right of speech and its attendant purchasing of political power, corporations are quite literally becoming our governance model.

Now the caveat: Allow me to state for the record that I am not a socialist. If you’ve never read my work, know I’ve started six companies, invested in scores more, and consider myself an advocate of transparently governed free markets. But we’ve leaned too far over our skis – the facts no longer support our current governance model.

* * *

We turn our world to data; leveraging that data, technocapitalism then terraforms that world. Nowhere is this more evident than with automation – the largest cost of nearly every corporation is human labor, and digital technologies are getting extraordinarily good at replacing that cost.

Nearly everyone agrees this shift is not new – yes yes, a century or two ago, most of us were farmers. But this shift is coming far faster, and with far less considered governance. The last great transition came over generations. Technocapitalism has risen to its current heights in ten short years. Ten years. 

If we are going to get this shift right, we urgently need to engage in a dialog about our core values. Can we perhaps rethink the purpose of work, given work no longer means labor? Can we reinvent our corporations and our regulatory frameworks to honor, celebrate and support our highest ideals? Can we prioritize what it means to be human even as we create and deploy tools that make redundant the way of life we’ve come to know these past few centuries?

These questions beg a simpler one: What makes us human?

I dusted off my old cultural anthropology texts, and consulted the scholars. The study of humankind teaches us that we are unique in that we are transcendent toolmakers – and digital technology is our most powerful tool. We have nuanced language, which allows us both recollection of the past, and foresight into the future. We are wired – literally at the molecular level – to be social, to depend on one another, to share information and experience. Thanks to all of this, we have the capability to wonder, to understand our place in the world, to philosophize. The love of beauty, philosophers will tell you, is the most human thing of all.

Oh, but then again, we are uniquely capable of intentionally destroying ourselves. Plenty of species can do that by mistake. We’re unique in our ability to do it on purpose.

But perhaps the thing that makes us most human is our love of storytelling, for narrative weaves nearly everything human into one grand experience. Our greatest philosophers even tell stories about telling stories! The best stories employ sublime language, advanced tools, deep community, profound wonder, and inescapable narrative tension. That ability to destroy ourselves? That’s the greatest narrative driver in the history of mankind.

How will it turn out?

* * *

We are storytelling engines uniquely capable of understanding our place in the world. And it’s time to change our story, before we fail a grand test of our own making: Can we transition to a world inhabited by both ourselves, and the otherness of the technology we’ve created? Should we fail, nature will indifferently shrug its shoulders. It has billions of years to let the whole experiment play over again.

We are the architects of this grand narrative. Let’s not miss our opportunity to get it right.

Adapted from a speech presented at the Thrival Humans X Tech conference in Pittsburgh earlier this week. 

Cross posted from NewCo Shift. 



Facebook, Twitter, and the Senate Hearings: It’s The Business Model, Period.

“We weren’t expecting any of this when we created Twitter over 12 years ago, and we acknowledge the real world negative consequences of what happened and we take the full responsibility to fix it.”

That’s the most important line from Twitter CEO Jack Dorsey’s testimony yesterday – and in many ways it’s also the most frustrating. But I agree with Ben Thompson, who this morning points out (sub required) that Dorsey’s philosophy on how to “fix it” was strikingly different from that of Facebook COO Sheryl Sandberg (or Google, which failed to send a C-level executive to the hearings). To quote Dorsey (emphasis mine): “Today we’re committing to the people and this committee to do that work and do it openly. We’re here to contribute to a healthy public square, not compete to have the only one. We know that’s the only way our business thrives and helps us all defend against these new threats.”

Ben points out that during yesterday’s hearings, Dorsey was willing to tie the problems of public discourse on Twitter directly to the company’s core business model, that of advertising. Sandberg? She ducked the issue and failed to make the link.

You may recall my piece back in January, Facebook Can’t Be Fixed. In it I argue that the only way to address Facebook’s failings as a public square would be to totally rethink its core advertising model, a golden goose which has driven the company’s stock on a six-year march to the stratosphere. From the post:

“[Facebook’s ad model is] the honeypot which drives the economics of spambots and fake news, it’s the at-scale algorithmic enabler which attracts information warriors from competing nation states, and it’s the reason the platform has become a dopamine-driven engagement trap where time is often not well spent.

To put it in Clintonese: It’s the advertising model, stupid.

We love to think our corporate heroes are somehow super human, capable of understanding what’s otherwise incomprehensible to mere mortals like the rest of us. But Facebook is simply too large an ecosystem for one person to fix.”

That one person, of course, is Mark Zuckerberg, but what I really meant was one company – Facebook. It’s heartening to see Sandberg acknowledge, as she did in her written testimony, the scope and the import of the challenges Facebook presents to our democracy (and to civil society around the world). But regardless of sops to “working closely with law enforcement and industry peers” and “everyone working together to stay ahead,” it’s clear Facebook’s approach to “fixing” itself remains one of going it alone. A robust, multi-stakeholder approach would quickly identify Facebook’s core business model as a major contributor to the problem, and that’s an existential threat.

Sandberg’s most chilling statement came at the end of her prepared remarks, in which she defined Facebook as engaged in an “arms race” against actors who co-opt the company’s platforms. Facebook is ready, Sandberg implied, to accept the challenge of lead arms producer in this race: “We are determined to meet this challenge,” she concludes.

Well I’m sorry, I don’t want one private company in charge of protecting civil society. I prefer a more accountable social structure, thanks very much.

I’ve heard this language of “arms races” before, in a far less consequential context: advertising fraud, in particular on Google’s search platforms. To combat this fraud, Google locked arms with a robust network of independent companies, researchers, and industry associations, eventually developing a solution that tamed the issue (it’s never going to go away entirely). That approach – an open and transparent process, subject to public checks and balances – is what is desperately needed now, and what Dorsey endorsed in his testimony. He’s right to do so. Unlike Google’s ad fraud issues of a decade ago, Facebook and Twitter’s problems extend to life or death, on-the-ground consequences – the rise of a dictator in the Philippines, genocide in Myanmar, hate crimes in Sri Lanka, and the loss of public trust (and possibly an entire presidential election) here in the United States. The list is terrifying, and it’s growing every week.

These are not problems one company, or even a heterogeneous blue ribbon committee, can or should “fix.” Facebook does not bear full responsibility for these problems – any more than Trump is fully responsible for the economic, social, and cultural shifts which swept him into office. But just as Trump has become the face of what’s broken in American discourse today, Facebook – and tech companies more broadly – have become the face of what’s broken in capitalism. Despite its optimistic, purpose-driven, and ultimately naive founding principles, the technology industry has unleashed a mutated version of steroidal capitalism upon the world, failing along the way to first consider the potential damage its business models might wreak.

In an OpEd introducing the ideas in his new book “Farsighted”, author Steven Johnson details how good decisions are made, paying particular attention to how important it is to have diverse voices at the table capable of imagining many different potential scenarios for how a decision might play out. “Homogeneous groups — whether they are united by ethnic background, gender or some other commonality like politics — tend to come to decisions too quickly,” Johnson writes.  “They settle early on a most-likely scenario and don’t question their assumptions, since everyone at the table seems to agree with the broad outline of the interpretation.”

Sounds like the entire tech industry over the past decade, no?

Johnson goes on to quote the economist and Nobel laureate Thomas Schelling: “One thing a person cannot do, no matter how rigorous his analysis or heroic his imagination, is to draw up a list of things that would never occur to him.”

It’s clear that the consequences of Facebook’s platforms never occurred to Zuckerberg, Sandberg, Dorsey, or other leaders in the tech industry. But now that the damage is clear, they must be brave enough to consider new approaches.

To my mind, that will require objective study of tech’s business models, and an open mind toward changing them. It seems Jack Dorsey has realized that. Sheryl Sandberg and her colleagues at Facebook? Not so much.



Hey Jack, Sheryl, and Sundar: It’s Time to Call Out Trump On Fake News.

Next week Sheryl Sandberg, COO of Facebook, and Jack Dorsey, CEO of Twitter, will testify in front of Congress. They must take this opportunity to directly and vigorously defend the role that real journalism plays not only on their platforms, but also in our society at large. They must declare that truth exists, that facts matter, and that while reasonable people can and certainly should disagree about how to respond to those facts, civil society depends on rational discourse driven by an informed electorate.

Why am I on about this? I do my very best to ignore our current president’s daily doses of Twitriol, but I couldn’t whistle past today’s rant about how tech platforms are pushing an anti-Trump agenda.

Seems the president took a look at himself in Google’s infinite mirror, and he apparently didn’t like what he saw. Of course, a more cynical reading would be that his advisors reminded him that senior executives from Twitter, Facebook, and Google* are set to testify in front of Congress next week, providing a perfect “blame others and deflect narrative from myself” moment for our Bully In Chief.

Trump’s hatred for journalism is legendary, and his disdain for any truth that doesn’t flatter is well established. As numerous actual news outlets have already shown, there’s simply no evidence that Google’s search algorithms do anything other than reflect the reality of Trump news, which, in the world of *actual journalism* where facts and truth matter, is fundamentally negative. This is not because of bias – this is because Trump creates fundamentally negative stories. You know, like failing to honor a war hero, failing to deliver on his North Korea promises, failing to fix his self-imposed policy of imprisoning children, failing to hire advisors who can avoid guilty verdicts… and all that was just in the last week or so.

But the point of this post isn’t to go on a rant about our president. Instead, I want to make a point about the leaders of our largest technology platforms.

It’s time Jack, Sheryl, Sundar, and others take a stand against this insanity.  Next week, at least two of them actually have just that chance.

I’ll lay out my biases for anyone reading who might suspect I’m an agent of the “Fake News Media.” I’m on the advisory board of NewsGuard, a startup that ranks news sites for accuracy and reliability. I’m running NewsGuard’s browser plug-in right now, and every single news site that comes up for a Google News search on “Trump News” is flagged as green – or reliable.

NewsGuard is run by two highly respected members of the “real” media – one of whom is a longstanding conservative, the other a liberal.

I’m also an advisor and investor in RoBhat Labs, which recently released a plugin that identifies fake images in news articles. Beyond that, I’ve taught journalism at UC Berkeley, where I graduated with a master’s after two years of study and remain on the advisory board. I’m also a member of several ad-hoc efforts to address what I’ve come to call the “Real Fake News,” most of which peddles far right wing conspiracy theories, often driven by hostile state actors like Russia. I’ve testified in front of Congress on these issues, and I’ve spent thirty years of my life in the world of journalism and media. I’m tired of watching our president defame our industry, and I’m equally tired of watching the leaders of our tech industry fail to respond to his systematic dismantling of our civil discourse (or worse, pander to it).

So Jack, Sheryl, and whoever ends up coming from Google, here’s my simple advice: Stand up to the Bully in Chief. Defend civil discourse and the role of truth telling and the free press in our society. A man who endlessly claims that the press is the enemy is a man to be called out. Heed these words:

“It is the press, above all, which wages a positively fanatical and slanderous struggle, tearing down everything which can be regarded as a support of national independence, cultural elevation, and the economic independence of the nation.”

No one would claim these are Trump’s words; the prose is far too elegant. But the sentiment is utterly Trumpian. With apologies to Mike Godwin, those words belong to Adolf Hitler. Think about that, Jack, Sheryl, and Sundar. And speak from your values next week.

*Google tried to send its SVP of Global Affairs and General Counsel, Kent Walker, but members of Congress have said they are tired of hearing from lawyers. It’s uncertain if the company will step up and send a leader of an actual business P&L, like Jack or Sheryl. 



The Accountable Capitalism Act: It’ll Never Happen, But At Least Now the Conversation Will

The past week or so has seen a surge in commentary on the role of corporations in society, a theme familiar to readers of this site. While it might be convenient to peg the trend to Senator Elizabeth Warren’s newly minted Accountable Capitalism Act (more on that in a second), I think it’s more likely that – finally – our collective will is turning to our most logical and obvious instrument of social change, namely, the instrument of business.

We humans like to organize ourselves into social units. They range from the informal (pickup basketball games) to the elaborately structured (Senate hearings). Our ability to harness collective will is unsurpassed in the animal kingdom; it’s one of our key evolutionary adaptations, driving the success of our species across the globe.

As I’ve argued elsewhere, one of our most sophisticated social structures is the corporation, which has co-evolved with our various systems of government over the past half millennium or so. The very first corporations were in fact formed (or chartered) by governments – the Dutch East India Company is the most common example of this. In the past century, however, corporations have largely sought to shake the yoke of government regulation – and nowhere have corporations won more freedoms than in the United States, where firms are now considered legal persons with an unrestrained right to “free speech” (i.e., the ability to fund political positions).

So this is where we are today: Large corporations have the legal right to exercise unlimited influence over our political sphere, and the commercial imperative to control (and profit from) nearly all our society’s data. That kind of power will necessarily produce a backlash, one that’s found an articulate, but highly unlikely, argument in Senator Warren’s proposed legislation. From the release announcing the Accountable Capitalism Act:

For most of our country’s history, American corporations balanced their responsibilities to all of their stakeholders – employees, shareholders, communities – in corporate decisions. It worked: profits went up, productivity went up, wages went up, and America built a thriving middle class.

But in the 1980s a new idea quickly took hold: American corporations should focus only on maximizing returns to their shareholders. That had a seismic impact on the American economy. In the early 1980s, America’s biggest companies dedicated less than half of their profits to shareholders and reinvested the rest in the company. But over the last decade, big American companies have dedicated 93% of earnings to shareholders – redirecting trillions of dollars that could have gone to workers or long-term investments. The result is that booming corporate profits and rising worker productivity have not led to rising wages.

Additionally, because the wealthiest top 10% of American households own 84% of all American-held shares – while more than 50% of American households own no stock at all – the dedication to “maximizing shareholder value” means that the multi-trillion dollar American corporate system is focused explicitly on making the richest Americans even richer.

Here are a few of the act’s key proposals:

  • Companies with more than $1 billion in revenues must register with, and agree to be regulated by, a new Federal oversight body known as the Office of United States Corporations.  By registering, firms are obliged to “consider the interests of all corporate stakeholders – including employees, customers, shareholders, and the communities in which the company operates.” This enshrines what is often called a “multi-stakeholder philosophy,” the underpinning of B Corps like Patagonia and Kickstarter, into federal law.
  • A corporation’s workers would be empowered to elect at least forty percent of the firm’s board of directors.
  • Long-term restrictions on the sale of stock by board directors and corporate officers – three years for stock buybacks, and five years for everything else. This is to ensure that a large firm’s managers plan for the long term.
  • A prohibition on political spending of any kind without approval from 75 percent of both directors and shareholders.

There’s more, but I think you’ve got the point – this is a sweeping and presently impossible piece of legislation that radically rethinks the governance of our most powerful corporations. It guts corporate political spending, upends business’s current compensation structure (often based on stock grants), radically reshapes board governance (giving a near majority control to workers), and creates a massive conservative bogeyman in the form of yet another Federal government oversight entity. In today’s political environment, Warren’s legislation is DOA.

But in tomorrow’s? Quite possibly not. Senator Warren is widely considered a front-runner for the Democratic nomination in 2020, and her initial opponent won’t be Trump – it’ll be Bernie Sanders, whose supporters likely will find plenty to love in Warren’s new plan.

Regardless of whether the act has any chance of passing without a strong Democratic majority in both houses of Congress, Warren has smartly identified a central issue in our country’s political conversation, and declared it to be fundamental to the Democrats’ platform for 2020. It’s about time someone did.

More recent reading on the role of capitalism in our society: 

Louis Hyman: It’s Not Technology That’s Disrupting Our Jobs

L.M. Sacasas: Technopoly and Anti-Humanism

Tom Wheeler: Time to Fix It: Developing Rules for Internet Capitalism

Neil Irwin: Are Superstar Firms and Amazon Effects Reshaping the Economy? 



Google and China: Flip, Flop, Flap

Google’s Beijing offices in 2010, when the company decided to stop censoring its results and exit the market.

I’ve been covering Google’s rather tortured relationship with China for more than 15 years now. The company’s off again, on again approach to the Internet’s largest “untapped” market has proven vexing, but as today’s Intercept scoop informs us, it looks like Google has yielded to its own growth imperative, and will once again stand up its search services for the Chinese market. To wit:

GOOGLE IS PLANNING to launch a censored version of its search engine in China that will blacklist websites and search terms about human rights, democracy, religion, and peaceful protest, The Intercept can reveal.

The project – code-named Dragonfly – has been underway since spring of last year, and accelerated following a December 2017 meeting between Google’s CEO Sundar Pichai and a top Chinese government official, according to internal Google documents and people familiar with the plans.

If I’m reading the story correctly, it looks like Google’s China plans, which were kept secret from nearly all of the company’s employees, were leaked to The Intercept by concerned members of Google’s internal “Dragonfly” team, one of whom was quoted:

“I’m against large companies and governments collaborating in the oppression of their people, and feel like transparency around what’s being done is in the public interest,” the source said, adding that they feared “what is done in China will become a template for many other nations.”

This news raises any number of issues – for Google, certainly, but given the US/China trade war, for anyone concerned with the future of free trade and open markets. And it revives an age-old question about where the line is between “respecting the rule of law in markets where we operate,” a standard tech company response to doing business on foreign soil, and “enabling authoritarian rule,” which is pretty much what Google will be doing should it actually launch the Dragonfly app.

A bit of history. Google originally refused to play by China’s rules, and in my 2004 book, I reviewed the history, and gave the company props for taking a principled stand, and forsaking what could have been massive profits in the name of human rights. Then, in 2006, Google decided to enter the Chinese market, on government terms. Google took pains to explain its logic:

We ultimately reached our decision by asking ourselves which course would most effectively further Google’s mission to organize the world’s information and make it universally useful and accessible. Or, put simply: how can we provide the greatest access to information to the greatest number of people?

I didn’t buy that explanation then, and I don’t buy it now. Google is going into China for one reason, and one reason alone: Profits. As Google rolled out its service in 2006, I penned something of a rant, titled “Never Poke A Dragon While It’s Eating.” In it I wrote:

The Chinese own a shitload of our debt, and are consuming a shitload of the world’s export base of oil. As they consolidate their power, do you really believe they’re also planning parades for us? I’m pretty sure they’ll be celebrating decades of US policy that looked the other way while the oligarchy used our technology (and that includes our routers, databases, and consulting services) to meticulously undermine the very values which allowed us to create companies like Google in the first place. But those are not the kind of celebrations I’m guessing we’d be invited to.

So as I puzzle through this issue, understanding how in practical terms it’s really not sensible to expect that some GYMA pact is going to change the world (as much as I might wish it would), it really, honestly, comes down to one thing: The man in the White House.

Until the person leading this country values human rights over appeasement, and decides to lead on this issue, we’re never going to make any progress. 

Google pulled out of China in 2010, using a China-backed hacking incident as its main rationale (remember that?!).  The man in the White House was – well let’s just say he wasn’t Bush, nor Clinton, and he wasn’t Trump. In any case, the hacking incident inconveniently reminded Google that the Chinese government has no qualms about using data derived from Google services to target its own citizens.

Has the company forgotten that fact? One wonders. Back in 2010, I praised the company for standing up to China:

In this case, Google is again taking a leadership role, and the company is forcing China’s hand. While it’s a stretch to say the two things are directly connected, the seeming fact that China’s government was behind the intrusions has led Google to decide to stop censoring its results in China. This is politics at its finest, and it’s a very clear statement to China: We’re done playing the game your way.

Seems Google’s not done after all. Which is both sad, and utterly predictable. Sad, because in today’s political environment, we need our companies to lead on moral and human rights issues. And predictable, because Android has a massive hold on China’s internet market, and Google’s lack of a strong search play there threatens not only the company’s future growth in its core market, but its ability to leverage Android across all its services, just as it has in Europe and the United States.

Google so far has not made a statement on The Intercept’s story, though I imagine smoke is billowing out of some communications war room inside the company’s Mountain View headquarters.  Will the company attempt some modified version of its 2006 justifications? I certainly hope not. This time, I’d counsel, the company should just tell the truth: Google is a public company that feels compelled to grow, regardless of whether that growth comes at a price to its founding values. Period, end of story.

I’ll end with another quote from that 2006 “Never Poke a Dragon” piece:

…companies like Yahoo and Google don’t traffic in sneakers, they traffic in the most powerful forces in human culture – expression. Knowledge. Ideas. The freedom of which we take as fundamental in this country, yet somehow, we seem to have forgotten its importance in the digital age – in China, one protesting email can land you in jail for 8 years, folks.

…Congress can call hearings, and beat up Yahoo, Google and the others for doing what everyone else is doing, but in the end, it’s not (Google’s) fault, nor, as much as I wish they’d take it on, is it even their problem. It’s our government’s problem….Since when is China policy somehow the job of private industry?

Until that government gives (the tech industry) a China policy it can align behind, well, they’ll never align, and the very foundation of our culture – free expression and privacy, will be imperiled.

After all, the Chinese leaders must be thinking, as they snack on our intellectual property, we’re only protecting our citizens in the name of national security.

Just like they do in the US, right?


When Senators Ask Followup Questions, You Answer Them.

Following my Senate testimony last month, several Senators reached out with additional questions and clarification requests. As I understand it, this is pretty standard. Given I published my testimony here earlier, I asked if I could do the same for my written followup. The committee agreed; the questions and my answers are below.

Questions for the Record from Sen. Cortez Masto (D. Nevada)

Facebook Audits

On April 4, 2018, following the public controversy over Cambridge Analytica’s use of user data, Facebook announced several additional changes to its privacy policies. The changes include increased restrictions on apps’ ability to gather personal data from users and also a policy of restricting an app’s access to user data if that user has not used the app in the past three months. In addition, Facebook has committed to conducting a comprehensive review of all apps gathering data on Facebook, focusing particularly on apps that were permitted to collect data under previous privacy policies. Facebook will also notify any users affected by the Cambridge Analytica data leak.

Question 1: What steps can the government take to ensure that there is proper oversight of these reviews and audits?

John Battelle’s response:

I think this is a simple answer: Make sure Facebook does what it says it will do, and make sure its response is a matter not only of public record, but also public comment. This should include a full and complete accounting of how the audit was done and the findings.

Question 2: From a technical standpoint, how effective are forensic methods at ascertaining information related to what data was transferred in these cases?

John Battelle’s response:

I’m not a technologist; I’m an entrepreneur, author, analyst, and commentator. I’d defer to someone who has more knowledge than I do on issues of forensic data analysis.

Technology for Consumer Protection

Question 1: Are there any technological solutions being developed that can help address some of the issues of consumers’ privacy being violated online?

John Battelle’s response:

Yes, there are many – likely too many to mention. Instead, what I’d like to highlight is the importance of the architecture of how data flows in our society. We should be creating a framework that allows data to flow ethically, securely, and with key controls around permissioning, editing, validation, revocation, and value exchange. Blockchains hold great promise here, but are still underdeveloped (though they’re evolving rapidly).
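To make those key controls concrete, here is a minimal sketch in Python of what a single permissioned data grant might look like. Everything here is a purely illustrative assumption – the class and field names mirror no existing standard or product:

```python
from dataclasses import dataclass, field


@dataclass
class DataGrant:
    """A hypothetical record of one consumer granting one recipient
    scoped access to their data, with revocation built in."""
    subject: str                              # the person the data describes
    recipient: str                            # the service receiving access
    scopes: set = field(default_factory=set)  # e.g. {"email", "location"}
    revoked: bool = False

    def permit(self, scope: str) -> None:
        """Grant access to one category of data (permissioning)."""
        self.scopes.add(scope)

    def revoke(self) -> None:
        """Withdraw the entire grant (revocation)."""
        self.revoked = True

    def allows(self, scope: str) -> bool:
        """The validation check a data holder would run before sharing."""
        return (not self.revoked) and scope in self.scopes
```

A real framework would layer on the remaining controls – editing, audit trails, value exchange – but even this toy shape makes revocation a first-class operation rather than an afterthought.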

Data Retention

Question 1: What should we, as legislators, be thinking about to verify that – when Americans are told that their data has been destroyed – that deletion can actually be confirmed?

John Battelle’s response:

Independent third-party auditing services that companies such as Facebook must employ seem the most straightforward response. “Trust us” is not enough; we must trust and verify.

Law Enforcement

During the hearing we had a brief discussion on the balance between privacy and sharing data with law enforcement.

Question 1: What should companies keep in mind to ensure that they can appropriately assist in law enforcement investigations?

John Battelle’s response:

This is a delicate balance, as evinced in the varied responses to these kinds of cases from companies like Apple, Twitter, Yahoo, and others. Valid search warrants, not fishing expeditions, should be the rule. We’ve got the framework for this already. The issue of how governments and law enforcement deal with encryption is unresolved. However, I fall on the side of enabling strong encryption, as I believe all citizens have the right to privacy. Lose that, and we lose democracy.

Question 2: As lawmakers, what should we be aware of as we try to strike the right balance between privacy and safety in this area?

John Battelle’s response:

Democracy is open, messy, transparent, and has many failures. But it’s the best system yet devised (in my humble opinion) and privacy lies at its core. That means criminals will be able to abuse its benefits. That is a tradeoff we have to accept and work around. Sure, it’d be great if law enforcement had access to all the data created by its citizens. Until it’s abused, and cases of this kind of abuse by government are easy to find.

Senator Richard Blumenthal (D-Conn.) Questions for the Record

Privacy Legislation

Across hearings and questions for the record, members of Congress have raised concerns about the data collection tactics used by Facebook that are not made clear to its users. As I stated during the hearing, I am interested in putting into place rules of the road for online privacy, taking into consideration the European General Data Protection Regulation. During the hearing Mr. Battelle and others offered support for the intent of GDPR, but expressed reservations about the implementation and unintended consequences. I look forward to any further thoughts from the panelists regarding how to implement data privacy rules in the United States.

 Question for All Panelists:

Question 1. In addition to any recommendations or comments on what types of legislation or other measures could help protect consumer privacy, what lessons and principles of the California Consumer Privacy Act and the GDPR should Congress consider in privacy legislation?

 John Battelle’s response:

Implementation of sweeping legislation like those mentioned above is extremely onerous for small businesses. Instead of using that as an excuse to avoid legislation, the policy should incorporate remedies for smaller businesses (i.e., enabling federation of resources and of response/compliance, and enabling trusted intermediaries).

The principle of empowering the consumer is embodied in both GDPR and CCPA. While well intentioned, neither envisions how that empowerment will truly be effective in a modern digital marketplace. Take the principle of data portability. It’s one thing to allow consumers to download a copy of their data from a platform or service. But for that data to drive innovation, it must be easily uploaded, in a defined, well-governed, machine-readable format, so that new kinds of services can flourish. Watch how large tech platforms chip away at CCPA and attempt to prevent that ecosystem from taking root. Consider how best to ensure that ecosystem will in fact exist. I’m not a legislative analyst, but there must be an enlightened way to encourage a class of data brokers (and yes, they’re not all bad) who enable re-aggregation of consumer data, replete with permissions, revocation, validation, editing, and value exchange. Happy to talk more about this.

Questions for Mr. Battelle:

Question 2. You have written at length about the influence of Facebook and Google on the advertising and third party data market. In your experience, has Facebook driven the ad market as a sector to more invasively collect data about people? What other changes in the ad market can be attributed to the dominance of Google and Facebook?

John Battelle’s response:

Yes, without question, Facebook has driven what you describe in your initial question – but not for entirely negative reasons. Because Facebook has so much information on its users, larger advertisers feel at a disadvantage. This is also true of publishers who use Facebook for distribution (another important aspect of the platform, especially as it relates to speech and democratic discourse). Both advertisers and publishers wish to have a direct, one-to-one dialog with their customers, and should be able to do so on any platform. Facebook, however, has forced its business model into the middle of this dialog – you must purchase access to your followers and your readers. A natural response is for advertisers and publishers to build their own sophisticated databases of their customers and potential customers. This is to be expected, and if the data is managed ethically and transparently, should not be considered an evil.

As for other changes in the ad market that might be attributed to FB and GOOG, let’s start with the venture funding of media startups, or advertising-dependent startups of any kind. Given the duopoly’s dominance of the market, it’s become extremely hard for any entrepreneur to find financing for ideas driven by an advertising revenue stream. Venture capitalists will say “Well, that’s a great (idea, service, product), but no way am I going to fund a company that has to compete with Google or Facebook.” This naturally encourages a downward spiral in innovation.

Another major problem in ad markets is the lack of portable data and insights between Facebook and Google. If I’m an advertiser or publisher on Facebook, I’d like a safe, ethical, and practical way to know who has responded to my messaging on that platform, and to take that information across platforms, say to Google’s YouTube or Adwords. This is currently far too hard to do, if not impossible in many cases. This also challenges innovation across the business ecosystem.

Questions for the Record

Senator Margaret Wood Hassan (D-N.H.)

Question 1. The internet has the potential to connect people with ideas that challenge their worldview, and early on many people were hopeful that the internet would have just that effect. But too often we have seen that social media sites like Facebook serve instead as an echo chamber that polarizes people instead of bringing them together, showing them content that they are more likely to agree with rather than exposing them to new perspectives. Do you agree this is a problem? And should we be taking steps to address this echo chamber effect?

John Battelle’s response:

Yes, this filter bubble problem is well defined and I agree it’s one of the major design challenges we face not only for Facebook, but for our public discourse as well. The public square, as it were, has become the domain of private companies, and private companies do not have to follow the same rules as, say, UC Berkeley must follow in its public spaces (Chancellor Carol Christ has been quite eloquent on this topic, see her interview at the NewCo Shift Forum earlier this year).

As to steps that might be taken, this is a serious question that balances a private corporation’s right to conduct its business as it sees fit against the rights and responsibilities of a public space/commons. I’d love to see those corporations adopt clear and consistent rules about speech, but they are floundering (see Mr. Zuckerberg’s recent comments on Holocaust deniers, for example). I’d support a multi-stakeholder commission on this issue – including policymakers, company representatives, legal scholars, and civic leaders – to address it.

Question 2. In your testimony you discuss the value of data. You stated that you think in some ways, “data is equal to – or possibly even more valuable than – monetary currency.” We in Congress are seeking to figure out the value of data as well to help us understand the costs and benefits of protecting this data. Can you expand on what value you think data has, and how we should be thinking about measuring that value – both as citizens and as legislators?

John Battelle’s response:

Just as we had no idea of the value of oil when it first came into the marketplace (it was used for lamps and for paving streets, and no one could have imagined the automobile industry), we still have not conceived of the markets, products, and services that could be enabled by free-flowing, ethically sourced, and permissioned data in our society. It’s literally too early to know, and therefore too early to legislate in a sweeping fashion that might limit or retard innovation. However, one thing I am certain of is that data – which is really a proxy for human understanding and innovation – is the most fundamentally valuable resource in the world. All money is simply data, when you think about it, and therefore a subset of data.

So how to measure its value? I think at this point it’s impossible – we must instead treat it as an infinitely valuable resource, and carefully govern its use. I’d like to add my response to another Senator’s question here, about new laws (GDPR and the California Ballot initiative) as added reference:

Implementation of sweeping legislation like those mentioned above is extremely onerous for small businesses. Instead of using that as an excuse to avoid legislation, the policy should incorporate remedies for smaller businesses (i.e., enabling federation of resources and of response/compliance, and enabling trusted intermediaries).

The principle of empowering the consumer is embodied in both GDPR and CCPA. While well intentioned, neither envisions how that empowerment will truly be effective in a modern digital marketplace. Take the principle of data portability. It’s one thing to allow consumers to download a copy of their data from a platform or service. But for that data to drive innovation, it must be easily uploaded, in a defined, well-governed, machine-readable format, so that new kinds of services can flourish. Watch how large tech platforms chip away at CCPA and attempt to prevent that ecosystem from taking root. Consider how best to ensure that ecosystem will in fact exist. I’m not a legislative analyst, but there must be an enlightened way to encourage a class of data brokers (and yes, they’re not all bad) who enable re-aggregation of consumer data, replete with permissions, revocation, validation, editing, and value exchange. Happy to talk more about this.

Question 3. Mark Zuckerberg has said that he sees Facebook more as a government than a traditional company.  Among other things, governments need to be transparent and open about the decisions they make. Many large institutions have set up independent systems — such as offices of inspectors general or ombudsmen and ethics boards — to ensure transparency and internally check bad decisions.  Facebook has none of those controls. What kinds of independent systems should companies like Facebook have to publicly examine and explain their decision-making?

John Battelle’s response:

OK, this one is simple. Facebook is NOT a government. If it is, I don’t want to be a “citizen.” I think Mr. Zuckerberg fails to understand what a government truly is. If indeed Facebook wishes to become a nation state, then first it must decide what kind of nation state it wishes to be. It needs a constitution, a clear statement of rights, roles, responsibilities, and processes. None of these things exist at the moment. A terms of service does not a government make.

However, all of the ideas you mention make a ton of sense for Facebook at this juncture. I’d be supportive of them all.


Do We Want A Society Built On The Architecture of Dumb Terminals?

God, “innovation.” First banalized by undereducated entrepreneurs in the oughts, then ground to pablum by corporate grammarians over the past decade, “innovation” – at least when applied to business – deserves an unheralded etymological death.

But.

This will be a post about innovation. However, whenever I feel the need to peck that insipid word into my keyboard, I’m going to use some variant of the verb “to flourish” instead. Blame Nobel laureate Edmund Phelps for this: I recently read his Mass Flourishing, which outlines the decline of western capitalism, and I find its titular terminology far less annoying.

So flourishing it will be.

In his 2013 work, Phelps (who received the 2006 Nobel in economics) credits mass participation in a process of innovation (sorry, there’s that word again) as central to mass flourishing, and further argues – with plenty of economic statistics to back him up – that it’s been more than a full generation since we’ve seen mass flourishing in any society. He writes:

…prosperity on a national scale—mass flourishing—comes from broad involvement of people in the processes of innovation: the conception, development, and spread of new methods and products—indigenous innovation down to the grassroots. This dynamism may be narrowed or weakened by institutions arising from imperfect understanding or competing objectives. But institutions alone cannot create it. Broad dynamism must be fueled by the right values and not too diluted by other values.

Phelps argues the last “mass flourishing” economy was the 1960s in the United States (with a brief but doomed resurgence during the first years of the open web…but that promise went unfulfilled). And he warns that “nations unaware of how their prosperity is generated may take steps that cost them much of their dynamism.” Phelps further warns of a new kind of corporatism, a “techno nationalism” that blends state actors with corporate interests eager to collude with the state to cement market advantage (think Double Irish with a Dutch Sandwich).

These warnings were proffered largely before our current debate about the role of the tech giants now so dominant in our society. But they set an interesting context and raise important questions. What happens, for instance, when large corporations capture the regulatory framework of a nation and lock in their current market dominance (and, in the case of Big Tech, their policies around data use)?

I began this post with Phelps to make a point: The rise of massive data monopolies in nearly every aspect of our society is not only choking off shared prosperity, it has also blinkered our shared vision for the kind of future we could possibly inhabit, if only we architected our society to enable it. But to imagine a different kind of future, we first have to examine the present we inhabit.

The Social Architecture of Data 

I use the term “architecture” intentionally; it’s been front of mind for several reasons. Perhaps the most difficult thing for any society to do is to share a vision of the future, one that a majority might agree upon. Envisioning the future of a complex living system – a city, a corporation, a nation – is challenging work, work we usually outsource to trusted institutions like government, religions, or McKinsey (half joking…).

But in the past few decades, something has changed when it comes to society’s future vision. Digital technology became synonymous with “the future,” and along the way, we outsourced that future to the most successful corporations creating digital technology. Everything of value in our society is being transformed into data, and extraordinary corporations have risen which refine that data into insight, knowledge, and ultimately economic power. Driven as they are by this core commodity of data, these companies have acted to cement their control over it.

This is not unusual economic behavior, in fact, it’s quite predictable. So predictable, in fact, that it’s developed its own structure – an architecture, if you will, of how data is managed in today’s information society. I’ve a hypothesis about this architecture – unproven at this point (as all are) – but one I strongly suspect is accurate. Here’s how it might look on a whiteboard:

We “users” deliver raw data to a service provider, like Facebook or Google, which then captures, refines, processes, and delivers that data back as services to us. The social contract we make is captured in these services’ Terms of Service – we may “own” the data, but for all intents and purposes, the power over that information rests with the platform. The user doesn’t have much creative license to do anything with the data he or she “owns” – it lives on the platform, and the platform controls what can be done with it.

Now, if this sounds familiar, you’re likely a student of early computing architectures. Back before the PC revolution, most data, refined or not, lived on a centralized platform known as a mainframe. Nearly all data storage and compute processing occurred on the mainframe. Applications and services were broadcast from the mainframe back to “dumb terminals,” in front of which early knowledge workers toiled. Here’s a graph of that early mainframe architecture:

 

This mainframe architecture had many drawbacks – a central point of failure chief among them – but perhaps its most damning characteristic was its hierarchical, top-down design. From a user’s point of view, all the power resided at the center. This was great if you ran IT at a large corporation, but suffice to say the mainframe architecture didn’t encourage creativity or a flourishing culture.

The mainframe architecture was supplanted over time by a “client-server” architecture, in which processing power migrated from the center to the edge, or node. This was due in large part to the rise of the networked personal computer (servers were used for storing services or databases of information too large to fit on PCs). Because they put processing power and data storage into the hands of the user, PCs became synonymous with a massive increase in productivity and creativity (Steve Jobs called them “bicycles for the mind”). With the PC revolution, power transferred from the “platform” to the user – a major architectural shift.

The rise of networked personal computers became the seedbed for the world wide web, which had its own revolutionary architecture. I won’t trace it here (many good books exist on the topic), but suffice to say the core principle of the early web’s architecture was its distributed nature. Data was packetized and distributed independent of where (or how) it might be processed. As more and more “web servers” came online, each capable of processing data as well as distributing it, the web became a tangled, hot mess of interoperable computing resources. What mattered wasn’t the pipes or the journey of the data, but the service created or experienced by the user at the point of that service delivery, which in the early days was of course a browser window (later on, those points of delivery became smartphone apps and more).

If you were to attempt to map the social architecture of data in the early web, your map would look a lot like the night sky – hundreds of millions of dots scattered in various constellations across the sky, each representing a node where data might be shared, processed, and distributed. In those early days the ethos of the web was that data should be widely shared between consenting parties so it might be “mixed and mashed” so as to create new products and services. There was no “mainframe in the sky” anymore – it seemed everyone on the web had equal and open opportunities to create and exchange value.

This is why the late 1990s through the mid-oughts were a heady time in the web world – nearly any idea could be tried out, and as the web evolved into a more robust set of standards, one could be forgiven for presuming that the open, distributed nature of the web would inform its essential social architecture.

But as web-based companies began to understand the true value of controlling vast amounts of data, that dream began to fade. As we grew addicted to some of the most revelatory web services – first Google search, then Amazon commerce, then Facebook’s social dopamine – those companies began to centralize their data and processing policies, to the point where we are now: Fearing these giants’ power over us, even as we love their products and services.

An Argument for Mass Flourishing

So where does that leave us if we wish to heed the concerns of Professor Phelps? Well, let’s not forget his admonition: “nations unaware of how their prosperity is generated may take steps that cost them much of their dynamism.” My hypothesis is simply this: Adopting a mainframe architecture for our most important data – our intentions (Google), our purchases (Amazon), our communications and social relationships (Facebook) – is not only insane, it’s also massively deprecative of future innovation (damn, sorry, but sometimes the word fits). In Facebook, Tear Down This Wall, I argued:

… it’s impossible for one company to fabricate reality for billions of individuals independent of the interconnected experiences and relationships that exist outside of that fabricated reality. It’s an utterly brittle product model, and it’s doomed to fail. Banning third party agents from engaging with Facebook’s platform ensures that the only information that will inform Facebook will be derived from and/or controlled by Facebook itself. That kind of ecosystem will ultimately collapse on itself. No single entity can manage such complexity. It presumes a God complex.

So what might be a better architecture? I hinted at it in the same post:

Facebook should commit itself to being an open and neutral platform for the exchange of value across not only its own services, but every service in the world.

In other words, free the data, and let the user decide what to do with it. I know how utterly ridiculous this sounds, in particular to anyone reading from Facebook proper, but I am convinced that this is the only architecture for data that will allow a massively flourishing society.

Now this concept has its own terminology: data portability. And this very concept is enshrined in the EU’s GDPR legislation, which took effect one week ago. However, there’s data portability, and then there’s flourishing data portability – and the difference between the two really matters. The GDPR applies only to data that a user *gives* to a service, not data *co-created* with that service. Nor can you gather any insights the service may have inferred about you based on the data you either gave or co-created with it. Not to mention, none of that data is exported in a machine-readable fashion, essentially limiting its utility.
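To make that three-way distinction concrete, here’s a sketch of what a fuller, machine-readable export might contain – the “provided” bucket is roughly what today’s downloads cover, while the co-created and inferred buckets are what’s missing. The schema name and every field below are invented for illustration; no such standard exists today:

```python
import json

# A hypothetical machine-readable personal-data export. All names are
# illustrative, not any real platform's format or any real standard.
export = {
    "schema": "portable-profile/0.1",   # a defined, well-governed format (invented)
    "provided": {                       # data the user *gave* the service
        "profile": {"name": "Jane Doe"},
        "posts": ["hello world"],
    },
    "co_created": {                     # data generated *with* the service
        "friend_graph": ["alice", "bob"],
    },
    "inferred": {                       # insights the service *derived* about the user
        "interests": ["cycling", "jazz"],
    },
}

# Machine-readable means another service could parse and reuse it directly.
print(json.dumps(export, indent=2))
```

The point isn’t the particular fields – it’s that all three buckets travel together, in a format a new service could actually ingest.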

But imagine if that weren’t the case. Imagine instead you could download your own Facebook or Amazon “token,” a magic data coin containing not only all the useful data and insights about you, but a control panel that allows you to set and revoke permissions around that data for any context. You might pass your Amazon token to Walmart, set its permissions to “view purchase history,” and ask Walmart to determine how much money it might have saved you had you purchased those items on Walmart’s service instead of Amazon. You might pass your Facebook token to Google, set the permissions to compare your social graph with others across Google’s network, and then ask Google to show you search results based on your social relationships. You might pass your Google token to a startup that already has your genome and your health history, and ask it to munge the two to see whether your 20-year history of searching might yield some insights into your health outcomes.
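Purely as a thought experiment, the token idea above might be sketched like this – every name is invented, and nothing here is a real platform API:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a personal data "token": the user's data plus a
# control panel of grants the user (not the platform) administers.
@dataclass
class DataToken:
    owner: str
    payload: dict                                # data and derived insights, by scope
    grants: dict = field(default_factory=dict)   # party -> set of permitted scopes

    def grant(self, party: str, *scopes: str) -> None:
        """Permit a named party to read the given scopes of the payload."""
        self.grants.setdefault(party, set()).update(scopes)

    def revoke(self, party: str) -> None:
        """Withdraw all access previously granted to a party."""
        self.grants.pop(party, None)

    def read(self, party: str, scope: str):
        """Return one scope of the payload, but only if the party holds a grant."""
        if scope not in self.grants.get(party, set()):
            raise PermissionError(f"{party} has no grant for {scope!r}")
        return self.payload[scope]

# The Amazon-to-Walmart example from above, in miniature:
token = DataToken(owner="me", payload={"purchase_history": ["kettle", "socks"]})
token.grant("walmart", "purchase_history")
print(token.read("walmart", "purchase_history"))  # Walmart may now compare prices
token.revoke("walmart")                           # ...until I change my mind
```

The sketch leaves out everything hard (auditing, validation, value exchange, enforcement), but it makes the architectural point: the grant table lives with the user, and revocation is a one-line operation at the edge rather than a plea to the platform.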

This might seem like a parlor game, but this is the kind of parlor game that could unleash an explosion of new use cases for data, new startups, new jobs, and new economic value. Tokens would (and must) have privacy, auditing, trust, value exchange, and the like built in (I tried to write this entire post without mentioning blockchain, but there, I just did it), but presuming they did, imagine what might be built if we truly set the data free, and instead of outsourcing its power and control to massive platforms, we took that power and control and, just like we did with the PC and the web, pushed it to the edge, to the node…to ourselves?

I rather like the sound of that, and I suspect Mssr. Phelps would as well. Now, how might we get there? I’ve no idea, but exploring possible paths certainly sounds like an interesting project…
