Once again, Google and Microsoft are battling for the AI spotlight – this time with news around their offerings for developers and the enterprise. These are decidedly less sexy markets – you won’t find breathless reports about the death of Google search this time around – but they’re far more consequential, given their potential reach across the entire technology ecosystem.
Highlighting that consequence is Casey Newton’s recent scoop detailing layoffs impacting Microsoft’s “entire ethics and society team within the artificial intelligence organization.” This team was responsible for thinking independently about how Microsoft’s use of AI might create unintended negative consequences in the world. While the company continues to tout its investment in responsible AI (as does every firm looking to make a profit in the field), Casey’s reporting raises serious questions, particularly given the Valley’s history of ignoring inconvenient truths.
Today let’s think out loud about TikTok, perhaps the most vexing and fascinating expression of Big Tech power since Google in the early 2000s. I’ve written about TikTok several times, and today’s news, from the Wall Street Journal, raises fresh questions that feel under-appreciated.
First, the background. As most of you likely know, TikTok is owned by a large Chinese company called ByteDance. In less than five years, TikTok has hijacked the very heart of Big Tech’s consumer business in the United States – our attention. Nearly 100 million US consumers will spend an average of more than 90 minutes a day watching TikTok this year. That’s time that Google, Facebook, Instagram, Twitter, and every other consumer tech and media company can’t get back. Here’s Scott Galloway’s visualization of the trend, from a piece last fall:
As has been my practice for nearly two decades, I penned a post full of prognostications at the end of last year. As 2021 subsequently rolled by, I stashed away news items that might prove (or disprove) those predictions – knowing that this week, I’d take a look at how I did. How’d things turn out? Let’s roll the tape…
My first prediction: Disinformation becomes the most important story of the year. At the time I wrote those words, Trump’s Big Lie was only two months old, and January 6th was just another day on the calendar. A year later, that Big Lie has spawned countless others, culminating in one of the most damaging shifts in our nation’s politics since the Civil War. The Republican party is now fully captured by bullshit, and countless local, state, and national politicians are busy undermining democracy thanks to the Big Lie’s power. A significant percentage of the US population has become unmoored from truth – and an equally significant group of us have simply thrown our hands up about it. Trust is at an all-time low. This Barton Gellman piece in The Atlantic served as a wake-up call late in the year – and its conclusions are terrifying: “We face a serious risk that American democracy as we know it will come to an end in 2024,” Gellman quotes an observer stating. “But urgent action is not happening.” I’m not happy about getting this one right, but as far as I’m concerned, this is still the most important story of the year – and the most terrifying.
The video above is from a conversation at The Recount’s SHIFT event last month, between Nick Clegg, Facebook VP, Global Affairs and Communications, and myself. If you can’t bear to watch 30 or so seconds of video, the gist is this: Clegg says “Thank God Mark Zuckerberg isn’t editing what people can or can’t say on Facebook, that’s not his or our role.”
As the coronavirus crisis built to pandemic levels in early March, a relatively unknown tech company confronted a defining opportunity. Zoom Video Communications, a fast-growing enterprise videoconferencing platform with roots in both Silicon Valley and China, had already seen its market cap grow from under $10 billion to nearly double that. As the coronavirus began dominating news reports in the western press, Zoom announced its first full fiscal year results as a public company. The company logged $622.7 million in revenue, up 88 percent from the year before. Zoom’s high growth rate and “software as a service” business model guaranteed fantastic future profits, and investors rewarded the company by driving its stock up even further. On March 5th, the day after Zoom announced its earnings, the company’s stock jumped to $125, more than double its price on the day of its public offering eleven months before. Market analysts began issuing bullish guidance, and company executives noted that as the coronavirus spread, more and more customers were flocking to Zoom’s easy-to-use video conferencing platform.
But as anyone paying attention to business news for the past month knows, it’s been a tumultuous ride for Zoom ever since. As the virus forced the world inside, demand for Zoom’s services skyrocketed, and the company became a household name nearly overnight. Zoom’s “freemium” model – which offers a basic version of its platform for free, with more robust features available for a modest monthly subscription fee – allowed tens of millions of new users to sample the company’s wares. Initially, Zoom was a hit with this new user base – stories of Zoom seders, Zoom cocktail parties, and even Zoom weddings gave the company a consumer-friendly vibe. Just like Google or Facebook before it, here was the story of a scrappy Valley startup with just the right product at just the right time. According to the company, Zoom’s monthly users leapt from 10 million to more than 200 million – an unimaginable twentyfold increase in just one month.
Andrew Yang has dropped out, which means the presidential campaign just got a lot less fun (you must watch this appreciation from The Recount, embedded above). The race also lost a credible and important voice on issues related to the impact of technology on our society. The fact that Yang’s campaign didn’t make it past New Hampshire didn’t surprise the political experts I know, but his rabid base both online and at campaign events clearly did.
Perhaps Yang’s message of a “Freedom Dividend” never really caught fire because stock markets are at all-time highs, and his warnings about tech-driven job losses have yet to come to fruition. It’s hard to get folks to care about something that requires thinking beyond the daily headlines, and harder still to ask them to consider long-term trends like AI-driven automation or the wholesale reconstruction of our social safety net. But when Yang started his quest, these issues rarely made it to the national stage. Now they’re part of our shared vocabulary.
A new year brings another run at my annual predictions: For 17 years now, I’ve taken a few hours to imagine what might happen over the course of the coming twelve months. And my goodness did I swing for the fences last year — and I pretty much whiffed. Batting .300 is great in the majors, but it kind of sucks compared to my historical average. My mistake was predicting events that I wished would happen. In other words, emotions got in the way. So yes, Trump didn’t leave office, Zuck didn’t give up voting control of Facebook, and weed’s still illegal (on a federal level, anyway).
Chastened, this year I’m going to focus on less volatile topics, and on areas where I have a bit more on-the-ground knowledge — the intersection of big tech, marketing, media, and data policy. As long-time readers know, I don’t prepare in advance of writing this post. Instead, I just clear a few hours and start thinking out loud. So…here we go.
Facebook bans microtargeting on specific kinds of political advertising. Of course I start with Facebook, because, well, it’s one of the most inscrutable companies in the world right now. While Zuck & Co. seem deeply committed to their “principled” stand around a politician’s right to paid prevarication, the pressure to do something will be too great, and as it always does, the company will enact a half-measure, then declare victory. The new policy will probably roll out after Super Tuesday (sparking all manner of conspiracies about how the company didn’t want to impact its Q1 growth numbers in the US). The company’s spinners will frame this as proof they listen to their critics, and that they’re serious about the integrity of the 2020 elections. As with nearly everything it does, this move will fail to change anyone’s opinion of the company. Wall St. will keep cheering the company’s stock, and folks like me will keep wondering when, if ever, the next shoe will drop.
Netflix opens the door to marketing partnerships. Yes, I’m aware that the smart money has moved on from this idea. But in a nod to increasing competition and the reality of Wall St. expectations, Netflix will at least pilot a program — likely not in the US — where it works with brands in some limited fashion. Mass hysteria in the trade press will follow once this news breaks, but Netflix will call the move a pilot, a test, an experiment…no big deal. It may take the form of a co-produced series, or branded content, or some other “native” approach, but at the end of the day, it’ll be advertising dollars that fuel the programming. And while I won’t predict the program augurs a huge new revenue stream for the company, I can predict what won’t happen, at least in 2020: A free, advertising-driven version of Netflix. Just not in the company’s culture.
CDA 230 will get seriously challenged, but in the end, nothing gets done, again. Last year I predicted there’d be no federal data privacy legislation, and I’m predicting the same for this year. However, there will be a lot of movement on legislation related to the tech oligarchy. The topic that will come the closest to passage will be a revision to CDA 230 — the landmark legislation that protects online platforms from liability for user-generated content. Blasphemy? Sure, but here we are, stuck between free speech on the one hand, massive platform economics on the other, and a really, really bad set of externalities in the middle. CDA 230 was built to give early platforms the room to grow unhindered by traditional constraints on media companies. That growth has now metastasized, and we don’t have a policy response that anyone agrees upon. And CDA 230 is an easy target, given conservatives in Congress already believe Facebook, Google, and others have it out for their president. There’ll be a serious run at rewriting 230, but it will ultimately fail. Related…
Adversarial interoperability will get a moment in the sun, but also fail to make it into law. In the past I (and many others) have written about “machine readable data portability.” But for the debate we’re about to have (and need to have), I like “adversarial interoperability” better. Both are mouthfuls, and neither are easy to explain. Data governance and policy are complicated topics which test our society’s ability to have difficult long form conversations. 2020 will be a year where the legions of academics, policy makers, politicians, and writers who debate economic theory around data and capitalism get a real audience, and I believe much of that debate will center on whether or not large platforms have a responsibility to be open or closed. As Cory Doctorow explains, adversarial interoperability is “when you create a new product or service that plugs into the existing ones without the permission of the companies that make them.” As in, I can plug my new e-commerce engine into Amazon, my new mobile operating system into iOS, my new social network into Facebook, or my new driving instruction app into Google Maps. I grew up in a world where this kind of innovation was presumed. It’s now effectively banned by a handful of data oligarchs, and our economy – and our future – suffers for it.
As long as we’re geeking out on catchphrases only a dork can love, 2020 will also be the year “data provenance” becomes a thing. As with many nerdy topics, the concept of data provenance started in academia, migrated to adtech, and is about to break into the broader world of marketing, which is struggling to get its arms around a data-driven future. The ability to trace the origin, ownership, permissions, and uses of data is a fundamental requirement of an advanced digital economy, and in 2020, we’ll realize we have a ton of work left to do to get this right. Yes, yes, blockchain and ledgers are part of the discussion here, but the point isn’t the technology, it’s the policy enabling the technology.
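For readers who want a concrete picture of what “tracing the origin, ownership, permissions, and uses of data” might look like in practice, here’s a minimal sketch in Python. The class and field names are purely illustrative assumptions on my part — there’s no single standard here, which is exactly the policy problem:

```python
from dataclasses import dataclass, field

@dataclass
class ProvenanceRecord:
    """Illustrative metadata that travels with a piece of data,
    recording where it came from and what may be done with it."""
    origin: str                # who or what originally produced the data
    owner: str                 # the current rights holder
    permissions: list = field(default_factory=list)  # allowed uses, e.g. ["analytics"]
    history: list = field(default_factory=list)      # append-only chain of past owners

    def transfer(self, new_owner: str) -> None:
        """Record a change of ownership without erasing the past."""
        self.history.append(self.owner)
        self.owner = new_owner

# Hypothetical example: data collected by a publisher, later sold to an ad platform.
record = ProvenanceRecord(origin="example-publisher",
                          owner="example-publisher",
                          permissions=["analytics"])
record.transfer("example-ad-platform")
print(record.owner)    # example-ad-platform
print(record.history)  # ['example-publisher']
```

The point of the sketch is the append-only history: provenance only works if the chain of custody can’t be quietly rewritten — which is why ledgers keep coming up in the discussion, even though, as noted above, the policy matters more than the technology.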
Google zags. Saddled with increasingly negative public opinion and driven in large part by concerns over retaining its workforce, Google will make a deeply surprising and game-changing move in 2020. It could be a massive acquisition or a move into some utterly surprising new industry (like content), but my money’s on something related to data privacy. The company may well commit to both leading the debate on the topics described above, as well as implementing them in its core infrastructure. Now that would really be a zag…
At least one major “on demand” player will capitulate. Gig economy business models may make sense long term, but that doesn’t mean we’re getting the execution right in the first group of on demand “unicorns.” In fact, I’d argue we’re mostly getting them wrong, even if as consumers, we love the supposed convenience gig brands bring us. Many of the true costs of these businesses have been externalized onto public infrastructure (and the poor), and civic patience is running out. Plus, venture and public finance markets are increasingly skeptical of business models that depend on strip mining the labor of increasingly querulous private contractors. A reckoning is due, and in 2020 we’ll see the collapse of one or more larger players in the field.
Influencer marketing will fall out of favor. I’m not predicting an implosion here, but rather an industry-wide pause as brands start to ask the questions consumers will also be pondering: who the fuck are these influencers and why are we paying them so much attention? A major piece of this — on the marketing side anyway — will be driven by a massive increase in influencer fraud. As with other fast-growing digital marketing channels, where money pours in, fraud fast follows — nearly as fast as fawning New York Times articles, but I digress.
Information warfare becomes a national bogeyman. If we’ve learned anything since the 2016 election, it’s this: We’ve taken far too long to comprehend the extent to which bad actors have come to shape and divide our discourse. These past few years have slowly revealed the power of information warfare, and the combination of a national election with the compounding distrust of algorithm-driven platforms will mean that by mid-year, “fake news” will yield to “information warfare” as the catchphrase describing what’s wrong with our national dialog. Deep fakes, sophisticated state-sponsored information operations, and good old-fashioned political info ops will dominate the headlines in 2020. Unfortunately, the cynic in me thinks the electorate’s response will be to become more inured and distrustful, but there’s a chance a number of trusted media brands (both new and old) prosper as we all search for a common set of facts.
Purpose takes center stage in business. 2019 was the year the leaders of industry declared a new purpose for the corporation — one that looks beyond profits for a true north that includes multiple stakeholders, not just shareholders. 2020 will be the year many companies will compete to prove that they are serious about that pledge. Reaction from Wall St. will be mixed, but I expect plenty of CEOs will feel emboldened to take the kind of socially minded actions that would have gotten them fired in previous eras. This is a good thing, and likely climate change will become the issue many companies will feel comfortable rallying behind. (I certainly hope so, but this isn’t supposed to be about what I wish for…)
Apple and/or Amazon stumble. I have no proof as to why I think this might happen but…both these companies just feel ripe for some kind of major misstep or scandal. America loves a financial winner — and both Amazon and Apple have been runaway winners in the stock market for the past decade. Both have gotten away with some pretty bad shit along the way, especially when it comes to labor practices in their supply chain. And while neither of them are as vulnerable as Facebook or Google when it comes to the data privacy or free speech issues circling big tech, both Apple and Amazon have become emblematic of a certain kind of capitalism that feels fraught with downside risk in the near future. I can’t say what it is, but I feel like both these companies could catch one squarely on the jaw this coming year, and the post-mortems will all say they never saw it coming.
So there you have it — 11 predictions for the coming year. I was going to stop at 10, but that Apple/Amazon one just forced itself out — perhaps that’s me wishing again. We’ll see. Let me know your thoughts, and keep your cool out there. 2020 is going to be one hell of a year.
Those of us fortunate enough to have lived through the birth of the web have a habit of stewing in our own nostalgia. We’ll recall some cool site from ten or more years back, then think to ourselves (or sometimes out loud on Twitter): “Well damn, things were way better back then.”
Then we shut up. After all, we’re likely out of touch, given most of us have never hung out on Twitch. But I’m seeing more and more of this kind of oldster wistfulness, what with Facebook’s current unraveling and the overall implosion of the tech-as-savior narrative in our society.
Hence the chuckle many of us had when we saw this trending piece suggesting that perhaps it was time for us to finally unhook from Facebook and – wait for it – get our own personal webpage, one we updated for any and all to peruse. You know, like a blog, only for now. I don’t know the author – the editor of the tech-site Motherboard – but it’s kind of fun to watch someone join the Old Timers Web Club in real time. Hey Facebook, get off my lawn!!!
That Golden Age
So as to not bury the lede, let me state something upfront: Of course the architecture of our current Internet is borked. It’s dumb. It’s a goddamn desert. It’s soil where seed don’t sprout. Innovation? On the web, that dog stopped hunting years ago.
And who or what’s to blame? No, no. It’s not Facebook. Facebook is merely a symptom. A convenient and easy stand in – an artifact of a larger failure of our cultural commons. Somewhere in the past decade we got something wrong, we lost our narrative – we allowed Facebook and its kin to run away with our culture.
Instead of focusing on Facebook, which is structurally borked and hurtling toward Yahoo-like irrelevance, it’s time to focus on that mistake we made, and how we might address it.
Just 10-15 years ago, things weren’t heading toward our currently crippled version of the Internet. Back in the heady days of 2004 to 2010 – not very long ago – a riot of innovation had overtaken the technology and Internet world. We called this era “Web 2.0” – the Internet was becoming an open, distributed platform, in every sense of the word. It was generative, it was Gates Line-compliant, and its increasingly muscular technical infrastructure promised wonder and magic and endless buckets of new. Bandwidth, responsive design, data storage, processing on demand, generously instrumented APIs; it was all coming together. Thousands of new projects and companies and ideas and hacks and services bloomed.
Sure, back then the giants were still giants – but they seemed genuinely friendly and aligned with an open, distributed philosophy. Google united the Internet, codifying (and sharing) a data structure that everyone could build upon. Amazon Web Services launched in 2006, and with the problem of storage and processing solved, tens of thousands of new services were launched in a matter of just a few years. Hell, even Facebook launched an open platform, though it quickly realized it had no business doing so. AJAX broke out, allowing for multi-state data-driven user interfaces, and just like that, the web broke out of flatland. Anyone with passable scripting skills could make interesting shit! The promise of Internet 1.0 – that open, connected, intelligence-at-the-node vision we all bought into back before any of it was really possible – by 2008 or so, that promise was damn near realized. Remember LivePlasma? Yeah, that was an amazing mashup. Too bad it’s been dormant for over a decade.
After 2010 or so, things went sideways. And then they got worse. I think in the end, our failure wasn’t that we let Facebook, Google, Apple and Amazon get too big, or too powerful. No, I think instead we failed to consider the impact of the technologies and the companies we were building. We failed to play our hand forward, we failed to realize that these nascent technologies were fragile and ungoverned and liable to be exploited by people less idealistic than we were.
Our Shadow Constitution
Our lack of consideration aided and abetted the creation of an unratified shadow Constitution for the Internet – a governance architecture built on assumptions we have accepted, but are actively ignoring. All those Terms of Service that we clicked past, the EULAs we mocked but failed to challenge – those policies have built walls around our data and how it may be used. Massive platform companies have used those walls to create impenetrable business models. Their IPO filings explain in full how the monopolization and exploitation of data were central to their success – but we bought the stock anyway.
We failed to imagine that these new companies – these Facebooks, Ubers, Amazons and Googles – might one day become exactly what they were destined to become, should we leave them ungoverned and in the thrall of unbridled capitalism. We never imagined that should they win, the vision we had of a democratic Internet would end up losing.
It’s not that, at the very start at least, tech companies were run by evil people in any larger sense. These were smart kids, almost always male, testing the limits of adolescence in their first years after high school or college. Timing mattered most: In the mid to late oughts, with the winds of Web 2 at their back, these companies had the right ideas at the right time, with an eager nexus of opportunistic capital urging them forward.
They built extraordinary companies. But again, they built a new architecture of governance over our economy and our culture – a brutalist ecosystem that repels innovation. Not on purpose – not at first. But protected by the walls of the Internet’s newly established shadow constitution and in the thrall of a new kind of technology-fused capitalism, they certainly got good at exploiting their data-driven leverage.
So here we are, at the end of 2018, with all our darlings, the leaders not only of the tech sector, but of our entire economy, bloodied by doubt, staggering from the weight of unconsidered externalities. What comes next?
2019: The Year of Internet Policy
Whether we like it or not, Policy with a capital P is coming to the Internet world next year. Our newly emboldened Congress is scrambling to introduce multiple pieces of legislation, from an Internet Bill of Rights to a federal privacy law modeled on – shudder – the EU’s GDPR. In the past month, I’ve read draft policy papers suggesting we tax the Internet’s advertising model, that we break up Google, Facebook, and Amazon, or that we back off and just let the market “do its work.”
And that’s a good thing, to my mind – it seems we’re finally coming to terms with the power of the companies we’ve created, and we’re ready to have a national dialog about a path forward. To that end, a spot of personal news: I’ve joined the School of International and Public Affairs at Columbia University, and I’m working on a research project studying how data flows in US markets, with an emphasis on the major tech platforms. I’m also teaching a course on Internet business models and policy. In short, I’m leaning into this conversation, and you’ll likely be seeing a lot more writing on these topics here over the course of the next year or so.
Oh, and yeah, I’m also working on a new project, which remains in stealth for the time being. Yep, has to do with media and tech, but with a new focus: Our political dialog. More on that later in the year.
I know I’ve been a bit quiet this past month, but starting up new things requires a lot of work, and my writing has suffered as a result. But I’ve got quite a few pieces in the queue, starting with my annual roundup of how I did in my predictions for the year, and then of course my predictions for 2019. But I’ll spoil at least one of them now and just summarize the point of this post from the start: It’s time we figure out how to build a better Internet, and 2019 will be the year policymakers get deeply involved in this overdue and essential conversation.
Let’s be honest with ourselves, shall we? We’re in the midst of the most significant shift in our society since at least the Gilded Age – a tectonic reshaping of economic systems, social mores, and political institutions. Some even argue our current transition to a post-digital world, one in which technology has lapped our own intelligence and automation may displace the majority of our workforce within our lifetimes, is the most dramatic change to ever occur in recorded history. And that’s before we tackle a few other existential threats, including global warming – which is inarguably devastating our environment and driving massive immigration, drought, and famine – or income inequality, which has already fomented historic levels of political turmoil.
Any way you look at it, we’ve got a lot of difficult intellectual, social, and policy work to do, and we’ve got to do it quickly. Lucky for us, two major political events loom before us: The midterm elections this November, and a presidential election two years after that. Will we use these milestones to effect real change?
Given our current political atmosphere, it’s hard to imagine that we will. I fervently hope that the midterms will provide an overdue check on the insane clown show that the White House has delivered to us so far, but I’ve little faith that the build-up to the 2020 Presidential election will be much more than an ongoing circus of divisive theatrics. Will there be room for serious debate about reshaping our fundamental relationship to government? If we are truly in an unprecedented period of social change, shouldn’t we be talking about how we’re going to manage it?
We could be, if Andrew Yang can poll above 15 percent in time for the Democratic debates next year.
Andrew Yang currently labors in near obscurity, but he is one of only two declared Democratic candidates for president so far, and he’s been spending a lot of time in Iowa and New Hampshire lately. Yang is smart, thoughtful, and has the backing of a lot of folks in the technology world. He’s the founder of Venture for America, a program that trains college grads to work as entrepreneurs in “second cities” around the country like St. Louis, Pittsburgh, and Cleveland. He’s in no way a typical presidential candidate, but then again, we seem to be tired of those lately.
If you have heard of Yang, it might be as the “UBI candidate,” though he rankles a bit at that description. Yang is a proponent of what he calls the “Freedom Dividend,” a version of universal basic income that he argues will fundamentally reshape American culture. To get there, we’ll need to radically rethink our current social safety net, adopt an entirely new approach to taxation (he argues for a European-style value added tax), and get over our uniquely American love affair with the Horatio Alger mythos.
Can a candidate like Yang actually win the Democratic nomination for president, much less the presidency itself? I’ve not met a political professional who thinks he can, but then again, much stranger things have already happened. Regardless, it’s critical that we debate the ideas his campaign represents during the build-up to our national elections in 2020, and for that reason alone I’m supporting Yang’s candidacy.
I met Yang two weeks ago at Thrival, an event that NewCo helps to produce in Pittsburgh (the video of that event will be up soon; when it is, I’ll post a link here). For nearly an hour on stage at the Carnegie museum, I grilled Yang about his economic theories, his chances of actually becoming president, and his agenda beyond the Freedom Dividend. I do a lot of interviews with well-known folks, and I must say, if the reaction Yang got from the Pittsburgh audience is any indication, the man’s platform resonates deeply with voters.
For anyone who wants to get to know Yang better, I recommend his recently published book The War on Normal People. But read it with this caveat: The thing is damn depressing. Yang lays out how structurally and fundamentally broken our society already is. He persuasively argues that we’re already in the midst of a “Great Displacement” across tens of millions of workers, a displacement that we’ve failed to identify, much less address. Echoing the recent work of Anand Giridharadas, Rana Foroohar, Edward Luce, and Andy Stern, Yang cites example after example of how perilously close we are to social collapse.
It’s hard to win a presidential election if fear is your primary motivator. But we live in strange, fearful times, and despite the pessimism of his book, I found Yang an optimistic, genuine, and actually pretty funny guy. He calls himself “the opposite of Trump – an Asian man who likes numbers.”
For Yang to actually shift the dialog of presidential politics, he’ll need to poll at or above 15 percent by early next year. That’s going to be a long shot, to be sure. But I for one hope he makes it to the debate stage, and that as a society, we will seriously discuss the ideas he proposes. We can no longer afford politics as usual – not the politics we have now, and certainly not a return to the cliché-ridden blandishments of years past. The time to traffic in new ideas – radically new ideas – is upon us.
If you pull far enough back from the day to day debate over technology’s impact on society – far enough that Facebook’s destabilization of democracy, Amazon’s conquering of capitalism, and Google’s domination of our data flows start to blend into one broader, more cohesive picture – what does that picture communicate about the state of humanity today?
Technology forces us to recalculate what it means to be human – what is essentially us, and whether technology represents us, or some emerging otherness which alienates or even terrifies us. We have clothed ourselves in newly discovered data, we have yoked ourselves to new algorithmic harnesses, and we are waking to the human costs of this new practice. Who are we becoming?
Nearly two years ago I predicted that the bloom would fade from the technology industry’s rose, and so far, so true. But as we begin to lose faith in the icons of our former narratives, a nagging and increasingly urgent question arises: In a world where we imagine merging with technology, what makes us uniquely human?
Our lives are now driven in large part by data, code, and processing, and by the governance of algorithms. These determine how data flows, and what insights and decisions are taken as a result.
So yes, software has, in a way, eaten the world. But software is not something being done to us. We have turned the physical world into data, we have translated our thoughts, actions, needs and desires into data, and we have submitted that data for algorithmic inspection and processing. What we now struggle with is the result of these new habits – the force of technology looping back upon the world, bending it to a new will. What agency – and responsibility – do we have? Whose will? To what end?
Synonymous with progress, asking not for permission, fearless of breaking things – in particular stupid, worthy-of-being-broken things like government, sclerotic corporations, and fetid social norms – the technology industry reveled for decades in its role as a kind of knighted warrior for societal good. As one Senator told me during the Facebook hearings this past summer, “we purposefully didn’t regulate technology, and that was the right thing to do.” But now? He shrugged. Now, maybe it’s time.
Because technology is already regulating us. I’ve always marveled at libertarians who think the best regulatory framework for government is none at all. Do they think that means there’s no governance?
In our capitalized healthcare system, data, code and algorithms now drive diagnosis, costs, coverage and outcomes. What changes on the ground? People are being denied healthcare, and this equates to life or death in the real world.
Can you get credit to start a business? A loan to better yourself through education? Financial decisions are now determined by data, code, and algorithms. Job applications are turned to data, and run through cohorts of similarities, determining who gets hired, and who ultimately ends up leaving the workforce.
And in perhaps the most human pursuit of all – connecting to other humans – we’ve turned our desires and our hopes to data, swapping centuries of cultural norms for faith in the governance of code and algorithms built – in necessary secrecy – by private corporations.
How does a human being make a decision? Individual decision making has always been opaque – who can query what happens inside someone’s head? We gather input, we weigh options and impacts, we test assumptions through conversations with others. And then we make a call – and we hope for the best.
But when others are making decisions that impact us, well, those kinds of decisions require governance. Over thousands of years we’ve designed systems to ensure that our most important societal decisions can be queried and audited for fairness, that they are defensible against some shared logic, that they will benefit society at large.
We call these systems government. It is imperfect but… it’s better than anarchy.
For centuries, government regulations have constrained social decisions that impact health, job applications, credit – even our public square. Dating we’ve left to the governance of cultural norms, which share the power of government over much of the world.
But in just the past decade, we’ve ceded much of this governance to private companies – companies motivated by market imperatives which demand their decision making processes be hidden. Our public government – and our culture – have not kept up.
What happens when decisions are taken by algorithms of governance that no one understands? And what happens when those algorithms are themselves governed by a philosophy called capitalism?
We’ve begun a radical experiment combining technology and capitalism, one that most of us have scarcely considered. Our public commons – that which we held as owned by all, to the benefit of all – is increasingly becoming privatized.
Thousands of companies are now dedicated to revenue extraction in the course of delivering what were once held as public goods. Public transportation is being hollowed out by Uber, Lyft, and their competitors (leveraging public goods like roadways, traffic infrastructure, and GPS). Public education is losing funding to private schools, MOOCs, and for-profit universities. Public health, most disastrously in the United States, is driven by a capitalist philosophy tinged with technocratic regulatory capture. And in perhaps the greatest example of all, we’ve ceded our financial future to the almighty 401K – individuals can no longer count on pensions or social safety nets – they must instead secure their future by investing in “the markets” – markets which have become inhospitable to anyone lacking the technological acumen of the world’s most cutting-edge hedge funds.
What’s remarkable and terrifying about all of this is the fact that the combinatorial nature of technology and capitalism outputs fantastic wealth for a very few, and increasing poverty for the very many. It’s all well and good to claim that everyone should have a 401K. It’s irresponsible to continue that claim when faced with the reality that 84 percent of the stock market is owned by the wealthiest ten percent of the population.
This outcome is not sustainable. When a system of governance fails us, we must examine its fundamental inputs and processes, and seek to change them.
So what truly is governing us in the age of data, code, algorithms and processing? For nearly five decades, the singular true north of capitalism has been to enrich corporate shareholders. Other stakeholders – employees, impacted communities, partners, customers – do not directly determine the governance of most corporations.
Corporations are motivated by incentives and available resources. When the incentive is extraction of capital to be placed in the pockets of shareholders, and a new resource becomes available which will aide that extraction, companies will invent fantastic new ways to leverage that resource so as to achieve their goal. If that resource allows corporations to skirt current regulatory frameworks, or bypass them altogether, so much the better.
Now the caveat: Allow me to state for the record that I am not a socialist. If you’ve never read my work, know I’ve started six companies, invested in scores more, and consider myself an advocate of transparently governed free markets. But we’ve leaned too far over our skis – the facts no longer support our current governance model.
We turn our world to data; leveraging that data, technocapitalism then terraforms our world. Nowhere is this more evident than with automation – the largest cost of nearly every corporation is human labor, and digital technologies are getting extraordinarily good at replacing that cost.
Nearly everyone agrees this shift is not new – yes yes, a century or two ago, most of us were farmers. But this shift is coming far faster, and with far less considered governance. The last great transition came over generations. Technocapitalism has risen to its current heights in ten short years. Ten years.
If we are going to get this shift right, we urgently need to engage in a dialog about our core values. Can we perhaps rethink the purpose of work, given work no longer means labor? Can we reinvent our corporations and our regulatory frameworks to honor, celebrate and support our highest ideals? Can we prioritize what it means to be human even as we create and deploy tools that make redundant the way of life we’ve come to know these past few centuries?
These questions beg a simpler one: What makes us human?
I dusted off my old cultural anthropology texts, and consulted the scholars. The study of humankind teaches us that we are unique in that we are transcendent toolmakers – and digital technology is our most powerful tool. We have nuanced language, which allows us both recollection of the past, and foresight into the future. We are wired – literally at the molecular level – to be social, to depend on one another, to share information and experience. Thanks to all of this, we have the capability to wonder, to understand our place in the world, to philosophize. The love of beauty, philosophers will tell you, is the most human thing of all.
Oh, but then again, we are uniquely capable of intentionally destroying ourselves. Plenty of species can do that by mistake. We’re unique in our ability to do it on purpose.
But perhaps the thing that makes us most human is our love of storytelling, for narrative weaves nearly everything human into one grand experience. Our greatest philosophers even tell stories about telling stories! The best stories employ sublime language, advanced tools, deep community, profound wonder, and inescapable narrative tension. That ability to destroy ourselves? That’s the greatest narrative driver in the history of mankind.
How will it turn out?
We are storytelling engines uniquely capable of understanding our place in the world. And it’s time to change our story, before we fail a grand test of our own making: Can we transition to a world inhabited by both ourselves, and the otherness of the technology we’ve created? Should we fail, nature will indifferently shrug its shoulders. It has billions of years to let the whole experiment play over again.
We are the architects of this grand narrative. Let’s not miss our opportunity to get it right.