John Battelle's Search Blog - Thoughts on the intersection of search, media, technology, and more.

Data Concentration In Platforms – A Modest Proposal

By - December 12, 2017

(I’ve been writing on NewCo Shift for the most part, but I wanted my Searchblog readers to know two things: One, I’m working on getting this site totally redone, and will be posting here in the New Year. And two, I really feel awful about how I’ve neglected this site. All of that will change next year.)

—-

Over the past few years I’ve been looking for a grand unifying theory that explains my growing discomfort with technology, an industry for which I’ve been a mostly unabashed cheerleader these past three decades.

I think it all comes down to how our society manages its most crucial new resource: Data.

That our largest technology companies have cornered the market on the data that powers our society’s most important functions is not in question. Who better than Amazon understands at-scale patterns in commerce (and with AWS, our demand for compute-related resources)? Who better than Google understands what products, services, and knowledge we want, and our path to finding them? Who better than Facebook understands our relationships to others and our interaction with (often bad) ideas? And who better than Apple (and Google) understand the applications, services, and entertainment we choose to engage with every day (not to mention our location, our ID, our most personal data, and on and on)?

These companies also dominate two crucial assets related to data: The compute power necessary to translate data into actionable insights, and the human talent required to leverage them both. Taken together, these three assets — massive amounts of data, massive compute platforms, and legions of highly trained engineers and data scientists — represent our society’s best path to understanding itself, and thereby improving all of our lives.

If anything should be defined as a public good — “a commodity or service provided without profit to all members of a society” — it should be the ability to study and understand society toward a goal of improving everyone’s lives.

But over the past decade, the most valuable data, processing power, and people have become concentrated in a handful of private companies that have demonstrated an almost genetic unwillingness to share their platform as a public good. Sure, they’ll happily share their platforms’ output — their consumer products — as for-profit services. And yes, each of us as consumers can benefit greatly from free social media, free search, free access to the “Everything Store,” and expensive but oh-so-worth-it smart phones.

But while each of us gets to benefit individually, none of us gets to benefit from the holistic, aggregated view of the world that the tech oligarchy has over its billions of consumers. And only the tech platforms — and their shareholders — are accruing the benefit of that perspective.

Why am I on about this? Because having access to good, at-scale data, and the platforms and people to learn from that data, is a clear proxy for progress in our society. We all marvel at the extraordinary capabilities, profits, and market caps of the tech platforms. They are the modern equivalents of the industrial powerhouses that transformed the American landscape in the early 20th century.

Back then, what was good for GM was good for the USA. But when we went to war, we went to war in partnership with those companies. GM, Alcoa, US Steel and their peers’ capitalistic platforms became our government’s most important wartime assets.

And while it feels odd to write this, no serious scholar of modern geopolitics disputes that we are now at war — a new kind of information-based war, but war, nevertheless — with Russia in particular, but in all honesty, with a multitude of nation states and stateless actors bent on destroying western democratic capitalism. They are using our most sophisticated and complex technology platforms to wage this war — and so far, we’re losing. Badly.

Why? According to sources I’ve talked to both at the big tech companies and in government, each side feels the other is ignorant, arrogant, misguided, and incapable of understanding the other side’s point of view. There’s almost no data sharing, trust, or cooperation between them. We’re stuck in an old model of lobbying, soft power, and the occasional confrontational hearing.

Not exactly the kind of public-private partnership we need to win a war, much less a peace.

Am I arguing that the government should take over Google, Amazon, Facebook, and Apple so as to beat back Russian info-ops? No, of course not. But our current response to Russian aggression illustrates the lack of partnership and co-ordination between government and our most valuable private sector companies. And I am hoping to raise an alarm: When the private sector has markedly better information, processing power, and personnel than the public sector, the former will only strengthen, while the latter will weaken. We’re seeing it play out in our current politics, and if you believe in the American idea, you should be extremely concerned.

 

During WWII, the US economy mobilized, growing at more than 10 percent for several years in a row. Sweeping new partnerships were established between large American corporations, new entrants to the workforce (black Americans and women in particular), and the government. And when the war was won, the peace dividend drove the United States to its current position as the most powerful nation — and economy — on the planet.

We desperately need a new compact between business and government, in particular as it relates to the most important resources in our society: data, processing power, and human intellectual capital.

In my next column I’ll dive into ideas for how we might mitigate our current imbalance, and the role that anti-trust may — or may not — play in that rebalancing. (Update, here it is.)


Amazon’s HQ2 Isn’t a Headquarters. So What Is It?

By - September 28, 2017

Crossposted from NewCo Shift.

Everyone’s favorite parlor game is “where will Amazon go?” Better to ask: Why does Amazon need a second headquarters in the first place?

It’s the future! Rendering of Amazon’s new Seattle HQ. The first and original one. 

Why does Amazon want a new headquarters? Peruse the company’s RFP, and the company is frustratingly vague on the question. “Due to the successful growth of the Company,” Amazon says of itself in the royal third person, “it now requires a second corporate headquarters in North America.”

“It requires”?

Is this a request for bulk discounts on toner ink? Did Jeff Bezos outsource this momentous and extremely public communication to his purchasing department? Is there really no more room in Seattle?

So…Why? Why is Amazon doing this? If I were one of the hundreds of Mayors and local civic boosters huddling in meeting rooms around North America, that would be my first — and pretty much my only — question. After all, if you don’t know why Amazon is looking for a “second headquarters,” then your response to their RFP is going to end up pretty rudderless. If Amazon’s true reason for another HQ boils down to, say, Latin American expansion, then Chicago, Toronto, and Philly should pretty much pack it in, no?

While the RFP is comprehensive in requirements (transportation networks, nearby international airports, sustainable office space, etc.), it nevertheless demonstrates a stunning lack of vision — the very vision that once defined “startups” like Amazon. The current accepted mythology about our fabled tech companies, those lions of our present economic theatre, is that they are fonts of vision — driven not just by profit, but by outsized missions to change the world, and to make it better. So what mission, exactly, will this new headquarters actually be charged with? Can anyone answer that? Absent any serious data, the default becomes “to expand Amazon.” And what, exactly, might that mean?

Amazon’s lists of current and projected businesses include e-commerce (its core), entertainment, home automation, cloud services, white label products, logistics and delivery, and any number of adjacent businesses yet to be scaled. It also harbors serious international expansion plans (one would presume). Any and all of these businesses might inform the “why” of its Bachelor-like RFP. But nowhere in the RFP does the company deliver a clue as to whether these factors play into its decision.

I have a theory about why Amazon issued such a vision-free RFP — and why the world responded with a parlor game instead of a serious inquiry as to the motivations of “the most valuable company in the world.” And that theory comes down to this: Amazon needs a place to put workers that are secondary but necessary — back office service, lower level engineering talent, accounting, compliance, administrative support. It will move those support positions to the city that has the cheapest cost per seat, and consolidate its “high value” workers in Seattle, where such talent is already significantly concentrated.

Put another way, “HQ2” isn’t a headquarters at all. But calling it one ensures a lot more attention, a lot more concessions, and a lot more positive PR. Maybe Amazon doesn’t have an answer to the question, and is hoping its call for proposals will deliver it a fresh new vision for the future. But I doubt it.

I’d love to be wrong, but absent any other vision the most likely reasoning behind this beauty pageant boils down to money. It may sound like the cynical logic of a rapacious capitalist — but more often than not, that’s what usually drives business in the first place.

This Is What Happens When Context Is Lost.

By - September 15, 2017

Buzzfeed Google Ads

(Cross posted from NewCo Shift)

Facebook and Google’s advertising infrastructure is one of humanity’s most marvelous creations. It’s also one of its most terrifying, because, in truth, pretty much no one really understands how it works. Not Mark Zuckerberg, not Larry Page, and certainly not special counsel Robert Mueller, the man investigating Russian interference, although of the bunch, it seems Mueller is the most interested in that fact.

And that’s a massive problem for Facebook and Google, who have been dragged to the stocks over their algorithms’ inability to, well, act like a rational and dignified human being.

So how did the world’s most valuable and ubiquitous companies get here, and what can be done about it?

Well, let’s pull back and consider how these two tech giants execute their core business model, which of course is advertising. You might want to pour yourself an adult beverage and settle in, because by the end of this, the odds of you wanting the cold comfort of a bourbon on ice are pretty high.

In the beginning (OK, let’s just say before the year 2000), advertising was a pretty simple business. You chose your intended audience (the target), you chose your message (the creative), and then you chose your delivery vehicle (the media plan). That media plan involved identifying publications, television programs, and radio stations where your target audience was engaged.

Those media outlets lived in a world regulated by certain hard and fast rules around what constituted appropriate speech. The FCC made sure you couldn’t go full George Carlin in your creative execution, for example. The FTC made sure you couldn’t commit fraud. And the FEC — that’s the regulatory body responsible for ensuring fairness and transparency in paid political speech — the FEC made sure that when audiences were targeted with creative that supports one candidate or another, those audiences could know who was behind that creative.

But that neat framework has been thoroughly and utterly upended on the Internet, which, as you might recall, has mostly viewed regulation as damage to be routed around.

After all, empowering three major Federal regulatory bodies dedicated to old media advertising practices seems like an awful lot of liberal overkill, n’est-ce pas? What waste! And speaking of waste, honestly, if you want to “target” your audience, why bother with “media outlets” anyway?! Everyone knows that Wanamaker was right — in the offline world, half your advertising is wasted, and thanks to offline’s lack of precise targeting, no one has a clue which half that might be.

But as we consider tossing the offline baby out with the bathwater waste, it’s wise to remember a critical element of the offline model that may well save us as we begin to sort through the mess we’re currently in. That element can be understood via a single word: Context. But we’ll get to that in a minute. First, let’s go back to our story of how advertising has shifted in an online world, and the unintended consequences of that shift (if you want an even more thorough take, head over to Rick Webb’s NewCo Shift series: Which Half Is Wasted).

Google: Millions Flock to Self Service, Rise of the Algos

Back in the year 2000, Google rolled out AdWords, a fantastically precise targeting technology that allowed just about anyone to target their advertisements to…just about anyone, as long as that person was typing a search term into Google’s rapidly growing service. (Keep that “anyone” word in mind, it’ll come back to haunt us later.) AdWords worked best when you used it directly on Google’s site — because your ad came up as a search result right next to the “organic” results. If your ad was contextually relevant to a user’s search query, it had a good chance of “winning” — and the prize was a potential customer clicking over to your “landing page.” What you did with them then was your business, not Google’s.

As you can tell from my fetishistic italicization, in this early portion of the digital ad revolution, context still mattered. Google next rolled out “AdSense,” which placed AdWords on publishers’ pages around the Internet. AdSense didn’t work as well as AdWords on Google’s own site, but it still worked pretty well, because it was driven by context — the AdSense system scanned the web pages on which its ads were placed, and attempted to place relevant AdWords in context there. Sometimes it did so clumsily, sometimes it did so with spectacular precision. Net net, it did it well enough to start a revolution.
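If you’ll forgive a nerdy aside: here’s a tiny, purely illustrative sketch (in Python, and emphatically not Google’s actual code) of what “placing relevant ads in context” boils down to. The scoring is invented for illustration: count how often each ad’s keywords appear on the page, then serve the best match.

```python
import re
from collections import Counter

def tokenize(text):
    """Lowercase the text and count its words. A crude stand-in for the far
    more sophisticated page analysis a real system would do."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def context_score(page_text, ad_keywords):
    """Score an ad by how often its keywords appear on the page,
    normalized by page length."""
    words = tokenize(page_text)
    total = sum(words.values()) or 1
    hits = sum(words[k.lower()] for k in ad_keywords)
    return hits / total

def pick_ad(page_text, ads):
    """Serve the ad whose keywords best match the page's context."""
    return max(ads, key=lambda ad: context_score(page_text, ad["keywords"]))

page = "Review: the best trail running shoes for muddy mountain routes"
ads = [
    {"name": "running shoe ad", "keywords": ["running", "shoes", "trail"]},
    {"name": "piano lessons ad", "keywords": ["piano", "lessons"]},
]
print(pick_ad(page, ads)["name"])  # -> running shoe ad
```

Real systems weigh far more signals than this (bids, quality, language, spam defenses), but the heart of the early model really was that simple idea: match the ad to the page.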

Within a few years, AdWords and AdSense brought billions of dollars of revenue to Google, and reshaped the habits of millions of advertisers large and small. In fact, AdWords brought an entirely new class of advertiser into the fold — small time business owners who could compete on a level playing field with massive brands. It also reshaped the efforts of thousands of publishers, many of whom dedicated small armies of humans to game AdWords’ algorithms and fraudulently drink the advertisers’ milk shakes. Google fought back, employing thousands of engineers to ward off spam, fraud, and bad actors.

AdWords didn’t let advertisers target individuals based on their deeply personal information, at least not in its first decade or so of existence. Instead, you targeted based on the expressed intention of individuals — either their search query (if on Google’s own site), or the context of what they were reading on sites all over the web. And over time, Google developed what seemed like insanely smart algorithms which helped advertisers find their audiences, deliver their messaging, and optimize their results.

The government mostly stayed out of Google’s way during this period.

When Google went public in 2004, it was estimated that between 15 and 25 percent of advertising on its platform was fraudulent. But advertisers didn’t care — after all, that’s a lot less waste than over in Wanamaker land, right? Google’s IPO was, for a period of time, the most successful offering in the history of tech.

Facebook: People Based Marketing FTW

Then along came Facebook. Facebook was a social network where legions of users voluntarily offered personally identifying information in exchange for the right to poke each other, like each other, and share their baby pictures with each other.

Facebook’s founders knew their future lay in connecting that trove of user data to a massive ad platform. In 2008, they hired Sheryl Sandberg, who ran Google’s advertising operation, and within a few years, Facebook had built the foundation of what is now the most ruthlessly precise targeting engine on the planet.

Facebook took nearly all the world-beating characteristics of Google’s AdWords and added the crack cocaine of personal data. Its self service platform, which opened for business a year or so after Sandberg joined, was hailed as ‘ridiculously easy to use.’ Facebook began to grow by leaps and bounds. Not only did everyone in the industrialized world get a Facebook account, every advertiser in the industrialized world got themselves a Facebook advertising account. Google had already plowed the field, after all. All Facebook had to do was add the informational seed.

Both Google and Facebook’s systems were essentially open — as we established earlier, just about anyone could sign up and start buying algorithmically generated ads targeted to infinite numbers of “audiences.” By 2013 or so, Google had gotten into the personalization game as well; most folks would admit it wasn’t nearly as good as Facebook’s, but it was still way better than the offline world.

So how does Facebook’s ad system work? Well, just like Google, it’s accessed through a self-service platform that lets you target your audiences using Facebook data. And because Facebook knows an awful lot about its users, you can target those users with astounding precision. You want women, 30–34, with two kids who live in the suburbs? Piece of cake. Men, 18–21 with an interest in acid house music, cosplay, and scientology? Done! And just like Google, Facebook employed legions of algorithms which helped advertisers find their audiences, deliver their messaging, and optimize their results. A massive ecosystem of advertisers flocked to Facebook’s new platform, lured by what appeared to be the Holy Grail of their customer acquisition dreams: People Based Marketing!
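To make “ruthlessly precise” a little more concrete, here’s a hypothetical sketch of what audience targeting reduces to under the hood: filter a pool of user profiles against an advertiser-supplied spec. The field names and spec format here are my own invention for illustration, not Facebook’s actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Profile:
    user_id: str
    gender: str
    age: int
    interests: set = field(default_factory=set)

def matches(profile, spec):
    """True if the profile satisfies every constraint in the targeting spec.
    The spec keys (gender, age_range, interests_any) are illustrative only."""
    if "gender" in spec and profile.gender != spec["gender"]:
        return False
    if "age_range" in spec:
        lo, hi = spec["age_range"]
        if not lo <= profile.age <= hi:
            return False
    if "interests_any" in spec and not profile.interests & set(spec["interests_any"]):
        return False
    return True

def build_audience(profiles, spec):
    """Return the user ids an advertiser's spec would reach."""
    return [p.user_id for p in profiles if matches(p, spec)]

users = [
    Profile("u1", "m", 19, {"acid house", "cosplay"}),
    Profile("u2", "f", 32, {"gardening", "cooking"}),
]
spec = {"gender": "m", "age_range": (18, 21), "interests_any": ["cosplay", "scientology"]}
print(build_audience(users, spec))  # -> ['u1']
```

The point isn’t the code; it’s that when the platform holds the profiles, an advertiser only has to describe the people it wants.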

The government mostly stayed out of Facebook’s way during this period.

When Facebook went public in 2012, it estimated that only 1.5% of its nearly one billion accounts were fraudulent. A handful of advertisers begged to differ, but they were probably just using the system wrong. Sad!

Facebook’s IPO quickly became the most successful IPO in the history of tech. (Till Alibaba, of course. But that’s another story).

(Meanwhile, Programmatic.)

The programmatic Lumascape. Seems uncomplicated, right?

Stunned by the rise of the Google/Facebook duopoly, the tech industry responded with an open web answer: Programmatic advertising. Using cookies, mobile IDs, and tons of related data gathered from users as they surfed the web, hundreds of startups built an open-source version of Facebook and Google’s walled gardens. Programmatic was driven almost entirely by the concept of “audience buying” — the purchase of a specific audience segment regardless of the context in which that audience resided. The programmatic industry quickly scaled to billions of dollars — advertisers loved its price tag (open web ads were far cheaper), and its seemingly amazing return on investment (driven in large part by fraud and bad KPIs, but that’s yet another post).

Facebook and Google were unfazed by the rise of programmatic. In fact, they bought the best companies in the field, and incorporated their technologies into their ever advancing platforms.

The Storm Clouds Gather

But a funny thing happened as Google, Facebook and the programmatic industry rewrote advertising history. Now that advertisers could precisely identify and target audiences on Facebook, Google and across the web, they no longer needed to use media outlets as a proxy for those audiences. Media companies began to fall out of favor with advertisers and subsequently fail in large numbers. Google and Facebook became advertisers’ primary audience acquisition machines. Marketers poured the majority of their budgets into the duopoly — 70–85% of all digital advertising dollars go to one or the other of them, and nearly all growth in digital marketing spend is attributable to them as well.

By 2011, regulators began to wrap their heads around this burgeoning field. Up till then, Internet ads were exempt from political regulations governing television, print, and other non-digital outlets. In fact, Facebook and Google have both lobbied the FEC, at various times over the past decade or so, to exclude their platforms from the vagaries of regulatory oversight based on an exemption for, and I am not making this up, “bumper stickers, pins, buttons, pens and similar small items” where posting a disclaimer is impracticable (skywriting is also mentioned). AdWords and mobile feed ads were small, after all. And everyone knows the Internet has limited space for disclaimers, right?

Anyway, that was the state of play up until 2011, when Facebook submitted a request to the FEC to clear the issue up once and for all. With a huge election coming in 2012, it was both wise and proactive of Facebook to want to clarify the matter, lest they find themselves on the wrong end of a regulatory ruling with hundreds of millions of dollars on the line.

The FEC failed to clarify its position, but did request comment from industry and the public on the issue (PDF). In essence, things remained status quo, and nothing happened for several years.

That set the table for the election of 2016. In October of that year, perhaps realizing it had done nothing for half a decade while the most powerful advertising machine in the history of ever slowly marched toward its seemingly inevitable date with emergent super intelligence, the FEC re-opened its request for comments on whether or not political advertising on the Internet should have some trace of transparency. But that was far too late for the 2016 election.

The rest, as they inevitably say, is history in the making.

Time will tell, I suppose.

So Now What?

Most everyone I speak to tells me that last week’s revelations about Facebook, Russia, and political advertising are, in the words of Senator Mark Warner, “the tip of the iceberg.” Whether or not that’s true (and I for one am quite certain it is), it’s plenty enough to bring the issue directly to the forefront of our political and regulatory debate.

Now the news is coming fast and furious: At what was supposed to be a relatively quotidian meeting of the FEC this week, the commissioners voted unanimously to re-open (again) the comment period on Internet transparency. The Campaign Legal Center, launched in 2002 by a Republican ally of Senator John McCain (co-sponsor of the McCain-Feingold Bipartisan Campaign Reform Act of 2002), this week issued a release calling for Facebook to disclose any and all ads purchased by foreign agents. (Would that it were that simple, but we’ll get to that in the next installment.) One of the six FEC commissioners, a Democrat, subsequently penned an impassioned op-ed in the Washington Post, calling for a new regulatory framework that would protect American democracy from foreign meddling. The catch? The Republicans on the commission refuse to consider any regulations unless the commission receives “enough substantive written comments.”

Once the link for comments goes up in a week or two, I’m pretty sure they will.

But in the meantime, there’s plenty of chin stroking to be done over this issue. While this may seem like a dust-up limited to the transparency of political advertising on the internet, the real story is vastly larger and more complicated. The wheels of western capitalism are greased by paid speech, and online, much of that speech is protected by the First Amendment to our Constitution, as well as established policies enshrined in contract law between Facebook, Google, and their clients. There are innumerable scenarios where a company or organization demands opacity around its advertising efforts. So many, in fact, that if I were to go into them now, I’d extend this piece by another 2,500 words.

And given I’m now close to 3,000 words in what was supposed to be a 600-word column, I’m going to leave exploring those scenarios, and their impact, to next week’s columns. In the meantime, I’ll be speaking with as many experts and policy folks from tech, Washington, and media as I can find. Suffice to say, big regulation is coming for big tech. Never in the history of the tech industry has the 1996 CDA provision (Section 230 of the Communications Decency Act) granting tech platforms immunity from the consequences of speech on their own platforms been more germane. Whether it’s in jeopardy or not remains to be seen.

This is not a simple issue, and resolving it will require a level of rational discourse and debate that’s been starkly absent from our national dialog these past few years. At stake are not only the fundamental advertising models that built our most valuable tech companies, but also the essential forces and presumptions driving our system of democratic capitalism*. Not to mention the nascent but utterly critical debate around the role of algorithms in civil society. And as we explore solutions to what increasingly feels like an intractable set of questions, we’d do well to keep one word in mind: Context.


*Ask yourselves this: Are the advertising platforms behind Alibaba and Tencent worried about transparency?

The Data Deal Is Opaque. We Should Fix It.

By - August 23, 2017

I wrote this post over on NewCo Shift, but it’s germane to the topics here on Searchblog, so I’m cross posting here…

What Did You *Think* They Do With Your Data?

Admit it, you know your data is how you pay for free services. And you’re cool with it. So let’s get the value exchange right.

Topping the charts on TechMeme yesterday is this story:

To be clear, what’s going on here is this: AccuWeather was sharing its users’ anonymized data with a third-party company for profit, even after those same users seemingly opted out of location-based data collection.

But the actual story is more complicated.

Because….come on. Is anyone really still under the impression that your data isn’t what you’re trading for free weather, anywhere, anytime, by the hour? For free e-mail services? For free social media like Instagram or Facebook? For pretty much free everything?

All day long, you’re giving your data up. This is NOT NEW. Technically, what AccuWeather did is more than likely legal, but it violates the Spirit Of Customers Are Always Right, Even If They Don’t Know What They Are Talking About. It also fails the Front Page Test, and well, when that happens it’s time for a crucifixion!

Hold on, a reasonable person might argue, sensing I’m arguing a disagreeable case. The user opted out, right? In this instance the user (and we can’t call them a “customer,” because a customer traditionally pays money for something) did in fact explicitly tell the app to NOT access their location. Here’s the screenshot from that story:

But what does that really mean? Access for what? Under what circumstance? My guess is AccuWeather asked this question for a very specific reason: When an app uses your location to deliver you information, it can get super creepy, super fast. It’s best to ask permission, so the user gets comfortable with the app “knowing” so much about where the user is. This opt-out message has nothing to do with the use of location data for third party monetization. Nothing at all.

In fact, AccuWeather is not sharing location data, at least not in a way that contradicts what they’ve communicated. Once you ask it not to, the AccuWeather app most certainly does NOT use your location information to inform your experience within the app in any way.
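To illustrate the distinction I’m drawing (and this is a hypothetical model of the settings, not AccuWeather’s actual code), think of it as two entirely separate switches, only one of which the user ever sees:

```python
from dataclasses import dataclass

@dataclass
class PrivacySettings:
    # The switch the user sees: "let the app use my location to show local weather."
    use_location_in_app: bool = True
    # The switch the user never sees: "share anonymized data with partners
    # so the free service can pay its bills."
    share_anonymized_data_with_partners: bool = True

def location_for_forecast(settings, device_location):
    """In-app personalization honors the visible opt-out."""
    return device_location if settings.use_location_in_app else None

def record_for_partner(settings, anonymized_record):
    """Monetization is governed by a different switch entirely."""
    return anonymized_record if settings.share_anonymized_data_with_partners else None

settings = PrivacySettings(use_location_in_app=False)  # the user "opted out"
print(location_for_forecast(settings, (37.77, -122.42)))  # None: the opt-out is honored
print(record_for_partner(settings, {"region": "94110"}))  # still shared: different switch
```

The opt-out the user toggled flips the first switch. The business model lives behind the second.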

Here’s what AccuWeather should ask its users, if it wanted to be totally honest about the value exchange inherent in the use of free apps:

“Ban AccuWeather from using your anonymized data so AccuWeather, which really likes giving you free weather information, can stay in business?”

But nope, it surely doesn’t say that.

Yet if we want to get all huffy about use of data, well, that’s really what’s going on here. Because if you’re a publisher, in the past five years you’ve had your contextual advertising revenue* stripped from your P&L. And if you’re going to make it past next Thursday, you have to start monetizing the one thing you have left: Your audience data.

AccuWeather is a publisher. Publishers are under assault from a massive shift in value extraction, away from the point of audience value delivery (the weather, free, to your eyeballs!) and to the point of audience aggregation (Facebook, Google, Amazon). All of these massive platforms can sell an advertiser audiences who check the local weather, six ways to Sunday.** If you’re an advertiser, why buy those audiences on an actual weather site? It’s easier, cheaper, and far safer to just buy them from the Big Guys.

Publishers need revenue to replace those lost direct ads, so they sell our data — anonymized and triangulated, mind you — so they can stay in business. Because for publishers, advertising as a business sucks right about now.

Anyway. AccuWeather has already responded to the story. Scolded by an industry that fails to think deeply about what’s really going on in its own backyard, AccuWeather is now appropriately abject, and will “fix” the problem within 24 hours. But that really won’t fix the damn problem.***

  • * and that’s another post.
  • **and with a lot more detailed data!
  • ***and that’s probably a much longer post.

Walmart and Google: A Match Made By Amazon

The retail and online worlds collided late yesterday with the news that Google and Walmart are hooking up in a stunning e-commerce partnership. Walmart will make its impressive inventory and distribution network available to shoppers on Google’s Express e-commerce service. This marks the first time Walmart has leveraged its massive inventory and distribution assets outside its own e-commerce offerings. A few weeks ago I predicted in this space that Walmart would hook up with Facebook or Pinterest. I should have realized Google made more sense — though I’m sure there’s still room for more partnerships in this evolving retail landscape.

Those 1.3 million Records We Wanted? Never Mind.

Defenders of citizens’ rights briefly went on high alert when the Department of Justice subpoenaed the IP addresses (and much more) for every single visitor to an anti-Trump website. The web hosting company at the business end of that subpoena, DreamHost, went public with the request, which alerted the world to the government’s unreasonable demands. As the outcry grew, the DOJ relented, saying yesterday, in effect, “never mind, just kidding.” Here’s what chills me — and should chill you: What if DreamHost hadn’t stood up to the man?

No. Social Terrorists Will Not Win

By - August 10, 2017

Social Terrorist


A small group of social terrorists has hijacked the rational discourse led by society’s most accomplished, intelligent, and promising organizations.

(cross posted from NewCo Shift)

Let’s start with this: Google is not a perfect company. It’s easy to cast it as an omniscient and evil villain, the leader of a millennium-spanning illuminati hellbent on world subjugation. Google the oppressor. Google the silencer of debate. Google, satanic overlord predicted by the holy text!

But that narrative is bullshit, and all rational humans know it. Yes, we have to pay close attention — and keep our powder dry — when a company with the power and reach of Google (or Facebook, or Amazon, or Apple…) finds itself a leader in the dominant cultural conversation of our times.

But when a legitimate and fundamentally important debate breaks out, and the company’s employees try to come together to understand its nuances, to find a path forward… to threaten those engaged in that conversation with physical violence? That’s fucking terrorism, period. And it’s damn well time we called it that.

Have we lost all deference to the hard won lessons of the past few hundred years? Are we done with enlightenment, with scientific discourse, with fucking manners? Do we now believe progress can only be imposed? Have we abandoned debate? Can we no longer engage in rational discourse, or move forward by attempting to understand each other’s point of view?

I’m so fucking angry that the asshat trolls managed to force Google’s CEO Sundar Pichai to cancel his planned all hands meeting today, one half hour before it started, I’m finding it hard to even write. Before I can continue, I just need to say this. To scream it, and then I’m sure I’ll come to my senses: FUCK YOU. FUCK YOU, asshats, for hijacking the conversation, for using physical threats, implied or otherwise, as a weapon to shut down legitimate rational discourse. FUCK YOU for paralyzing one of our society’s most admired, intelligent, and successful engines of capitalism, FUCK YOU for your bullying, FUCK YOU for your rage and your anger, FUCK YOU for making me feel just like I am sure you feel about me: I want to fucking kick your fucking ass.

But now I will take a breath. And I will remember this: The emotions of that last paragraph never move us forward. Ever.

Google was gathering today to have an honest, difficult, and most likely emotional conversation about the most important idea in our society at present: How to allow all of us to have the right to our points of view, while at the same time insuring the application of those views don’t endanger or injure others. For its entire history, this company has had an open and transparent dialog about difficult issues. This is the first time that I’ve ever heard of where that dialog has been cancelled because of threats of violence.

This idea Google was preparing to debate is difficult. This idea, and the conflict it engenders, is not a finished product. It is a work in progress. It is not unique to Google. Nor is it unique to Apple, or to Facebook or Microsoft — it could have easily arisen and been leapt upon by social terrorists at any of those companies. That it happened at Google is not the point.

Because this idea is far bigger than any of those companies. This idea is at the center of our very understanding of reality. At the center of our American idea. Painstakingly, and not without failure, we have developed social institutions — governments, corporations, churches, universities, the press — to help us navigate this conflict. We have developed an approach to cultural dialog that honors respect, abjures violence, accepts truth. We haven’t figured it out entirely. But we can’t abandon the core principles that have allowed us to move so far forward. And that is exactly what the social terrorists want: For us to give up, for us to abandon rational discourse.

Google is a company comprised of tens of thousands of our finest minds. From conversations I’ve had tonight, many, if not most of those who work there are fearful for their safety and that of their loved ones. Two days ago, they were worried about their ability to speak freely and express their opinions. Today, because social terrorists have gone nuclear, those who disagree with those terrorists — the vast majority of Googlers, and by the way, the vast majority of the world — are fearful for their physical safety.

And because of that, open and transparent debate has been shut down.

What. The. Fuck.

If because of physical threat we can no longer discuss the nuanced points of a difficult issue, then America dies, and so does our democracy.

This cannot stand.

Google has promised to have its dialog, but now it will happen behind closed doors, in secrecy and cloaked in security that social terrorists will claim proves collusion. Well done, asshats. You’ve created your own reality.

It’s up to us to not let that reality become the world’s reality. It’s time to stand up to social terrorists. They cannot and must not win.

My New Column – Please Sign Up!

By - August 05, 2017

Hi Searchblog readers. I know it’s been a while. But I’m writing a new column over at NewCo Shift, and instead of posting it verbatim here every other day (it comes out three times a week), I figured I’d let you know, and if you’d like to read it (my musings are pretty Searchbloggy, to be honest), you can get it right in your inbox by signing up for the NewCo Daily newsletter right here.

Here are my columns so far:
Is Social Media The New Tobacco?

Dow 36,000?

Bears and Dragons Bite Tech Where It Hurts

Memo to Tech’s Titans: Please Remember What It Was Like to Be Small

Don’t Quite Grok Blockchain? We Got You Covered.

This Is How Walmart Will Defend Itself Against Amazon

Facebook’s Data Trove May Well Determine Trump’s Fate

Google and Amazon Hit the Feed Trough

A Trio of Tech Takedowns

Thanks for reading Searchblog. I’ll continue to post stuff here – but probably not every column, since they’re meant to be short takes on key news of the day.

Uber Does Not Equal The Valley

By - June 14, 2017

Uber Protest

Now that the other shoe has dropped, and Uber’s CEO has been (somewhat) restrained, it’s time for the schadenfreude. Given Uber’s remarkable string of screwups and controversies, it’s coming in thick, in particular from the East Coast. And while I believe Uber deserves the scrutiny — there are certainly critical lessons to be learned — the hot takes from many media outlets are starting to get lazy.

Here’s why. Uber does not reflect the entirety of the Valley, particularly when it comes to how companies are run. As I wrote in The Myth of the Valley Douchebag, there are far more companies here run by decent, earnest, well meaning people than there are Ubers. But of course, the Ubers get most of the attention, because they confirm an easy bias that all of tech is off the rails, and deserves to be taken down a notch.

Such is the case with this piece in Time — painting all of Uber’s failures broadly as the Valley’s failures. And to a point, the piece is correct — but only to a point. While the entire Valley (and let’s face it, Congress, the judiciary, the Fortune 500, nearly every public board in America, etc. etc.) has a major race and gender problem, Uber has far more troubles than just gender and race. Far more. And painting every company in the Valley with the tarred brush of Uber’s approach to business is simply unfair.

To that bias, I’d like to counter with Matt Mullenweg, from Automattic, or Jen Pahlka, from Code for America, or Ben Silbermann, from Pinterest, or Michelle Zatlyn, from CloudFlare, or Jeff Huber, from Grail Bio. Sure, their companies aren’t worth billions (on second thought, Pinterest, CloudFlare, and Automattic are, and Grail may be on its way), but they are excellent examples of game changing organizations run by good people who, while they may not be perfect, are driven by far more than arrogance, lucre, and winning at all costs.

It’s certainly a good thing that Uber has been chastened. There are still far too many frothy startups driven by immature, bro-tastic founders eager to “move fast and break things” and “ask for forgiveness, not for permission.” Kalanick and Uber’s fall from grace is visceral proof that they must change their ways. But the Silicon Valley trope is starting to wear thin. Let’s not forget the good as we excise the bad. We’ve got a lot of important work to do.

Is Humanity Obsolete?

By - May 31, 2017


Upon finishing Yuval Harari’s Homo Deus, I found an unwelcome kink in my otherwise comfortably adjusted frame of reference. It brought with it the slight nausea of a hangover, a lingering whiff of jet exhaust from a hard night, possibly involving rough psychedelics.

I’m usually content with my (admittedly incomplete) understanding of the role humanity plays in the universe, and in particular, with the role that technology plays as that narrative builds. And lately that technology story is getting pretty damn interesting — I’d argue that our society’s creation of and reaction to digital technologies is pretty much the most important narrative in the world at present.

But as you consider that phrase “digital technologies,” are you conjuring images of computers and iPhones? Of “the cloud” and Google? Facebook, Snapchat, Twitter, Netflix, Slack, Uber? I’ve always felt that this group of artifacts — the “things” that we claim as digital — the companies and the devices, the pained metaphors (cloud?!) and the juvenile apps — these are only the most prominent geographic features of a vaster and more tectonic landscape, one we’ve only begun to explore.

Harari would ask us to explore that landscape with a new state of mind — to abandon our human-centered biases — our Humanism — and consider what our embrace of technology may augur for our species. Yet through most of the book, he failed to push me from my easy chair. It was comforting to nod along as Harari argued that the devices — the computers, the platforms and the networks — are nothing more than the transit layer in humanity’s inevitable evolution to a more god-like species. And cognizant of the inescapable baggage of the “digital technologies” tag, Harari has gifted his new state of mind with a name: Dataism. More on that in a minute.

Homo Deus is the possibly too-clever-by-half continuation of the author’s masterstroke bestseller Sapiens, which the New York Times, despite crowning it as a runaway hit, acidly derided as “tailor-made for the thought-leader industrial complex.” If that made you snort the literary milk out your erudite nose, just wait for the other whiteshoe to drop: The same Times review charitably credited Homo Deus with having “the easy charms of potted history.”

Oh, Snap!

And look, the decidedly humanist Times is right to be offended by Harari’s assertions. For they are utterly unsettling, in particular to those most content in the warm embrace of Humanism, which Harari dismisses as a state of mind already past its prime. Dataism is its replacement — a reductive religion of algorithms, both biological and digital, driven by intelligence but decoupled from consciousness. It is therefore unconcerned with experience, the very bread which feeds humanist mythos. Net net: Let’s just say Dataism could really give a fuck about people in the long run. Harari’s money quote? “Homo sapiens is an obsolete algorithm.”

So yeah, the ideas prosecuted in the pages of these two works, which run collectively just under 900 pages, are unsettling. But unlike the Times reviewer, I’m not ready to dismiss them as so much armchair pottery. It’s not often a work of literary merit (and this is certainly that) forces our vaunted industry to consider itself.

And did our industry consider it? After all, this is the follow on to Sapiens, a book celebrated by Mark Zuckerberg, Bill Gates, and Barack Obama, for goodness sakes.

Turns out, our industry has pretty much ignored Homo Deus. Ezra Klein did have a thing or two to say about it in a podcast, but…crickets from most everywhere else.


Technology is having a crisis of self reflection. It’s understandable — we’re not the types to think too hard about the impact of our actions, because we’ve already anticipated them, after all. Creating new behaviors is the business we’re in, so we’re not surprised when they actually happen. We’ve developed a super-fast creative process on top of digital technologies — we come up with new plans as quickly as the old ones fail, and the act of doing this just proves our world view correct: We have a thesis, we prosecute it, and as we collect more data — including and especially data about our failure — we stare at it all, we rethink our approach, and we deftly devise a new algorithm to navigate around the damn problem. The better the acuity of our data, the more responsive our tools, the better the outcomes. Even when most of us lose, we’re always winning! Failure is just more data to fuel an eventual, inevitable victory.

This approach to life and business doesn’t reward deep reflection. And we know it. That’s why we’re so damn obsessed with meditation, with yoga (guilty), with flying to South America and doing strange psychedelic drugs. But so far all those reflections center on the me, and not on the us, on the society we are building. How often do we — the Royal Technology We — consider the butterfly effects of our work? And don’t tell me Zuck did it for us with that manifesto. That thing could have used a touch more psilocybin, amiright?

Perhaps Harari strikes us as a lecturing harridan — we know we have more homework to do. We understand we now rule the world, but we are reluctant leaders, because our industry has forever been in opposition, forever carrying a torch for a future state of humankind that the noobs and the squares and the company men didn’t get.

Only, we’ve won. So now what?


Well, that gets us to purpose. Why are we here? Why are you here? Why am I here? What are we here for?

Remember when you were a kid, in that kid-like state of mind, when you whispered to a friend, a confidante — “Where’s the wall at the end of the universe?” And if they bit, if they acknowledged there might be an end to it all, a place where the universe ebbs to finality, you asked them this: “Well, then, what’s on the other side of the wall!?”

Remember that little pre-adolescent mind hack? Yeah, we’re about at that point now, Technology Industry. It’s time for us to come up with a better answer.

My favorite response to this paradox is: “The unimaginable.” That’s what’s on the other side of the wall. The only boundary in the universe, for Homo sapiens anyway, is the fact that we need a boundary in the first place. We understand so much, but at the end of that understanding we face the unimaginable. In that dark gravity we first populated gods, then God Himself, then Science and its attendant Humanism, and now….well, Harari makes the case that our digital technologies have hastened our transition to a new era — one in which we “dissolve within the data torrent like a clump of earth within a gushing river.”

OK, I’m out of my armchair now. If all biology is algorithms, and science certainly believes this is so, then our fate is to join the church of pure information processing, driven by the inescapable end game of evolution.

Checkmate! Humanity exists because algorithms exist, algorithms that predate us, algorithms that will outlive us, and algorithms that exist for one reason: to solve problems. If we embrace this, then perhaps we stand at the cusp of solving our biggest problem ever: ourselves.

I’m not sure I buy all this — and even Harari, at the very end of his book, admits he’s not sure either (that felt like quite a hedge, to be honest). But the issues he raises are worthy of deeper debate — in particular inside our own industry, where self-reflection is far too absent.

The Internet Big Five Is Now The World’s Big Five

By - May 17, 2017

Back in December of 2011, I wrote a piece I called “The Internet Big Five,” in which I noted what seemed a significant trend: Apple, Microsoft, Google, Amazon, and Facebook were becoming the most important companies not only in the technology world, but in the world at large. At that point, Facebook had not yet gone public, but I thought it would be interesting to compare each of them by various metrics, including market cap (Facebook’s was private at the time, but widely reported). Here’s the original chart:

I called it “Draft 1” because I had a sense there was a franchise of sorts brewing. I had no idea. I started to chart out the various strengths and relative weaknesses of the Big Five, but work on NewCo shifted my focus for a spell.

Three years later, in 2014, I updated the chart. The growth in market cap was staggering:

Nearly a trillion dollars in net market cap growth in less than three years! My goodness!

But since 2014, the Big Five have rapidly accelerated their growth. Let’s look at the same chart, updated to today:

Ummm..HOLY SHIT! Almost two trillion dollars of market cap added in less than seven years. And the “Big Five” have become, with a few limited incursions by Berkshire Hathaway, the five largest public companies in the US. This has been noted by just about everyone lately, including The Atlantic, which just employed the very talented Alexis Madrigal to pay attention to them on a regular basis. In his maiden piece, Madrigal notes that the open, utopian world of the web just ten years ago (Web 2, remember that? I certainly do…) has lost, bigly, to a world of walled-garden market cap monsters.

I agree and disagree. Peter Thiel is fond of saying that the best companies are monopolists by nature, and his predictions seem to be coming true. But monopolies grow old, fray, and usually fail to benefit society over time. There’s a crisis of social responsibility and leadership looming for the Big Five — they’ve got all the power, now it’s time for them to face their responsibility. I’ll be writing much more about that in coming weeks and months. As I’ve said elsewhere, in a world where our politics has devolved to bomb throwing and sideshows, we must expect our businesses — in particular our most valuable ones — to lead.

Dear Facebook…Please Give Me Agency Over The Feed

By - May 07, 2017

(cross posted from NewCo Shift)

Like you, I am on Facebook. In two ways, actually. There’s this public page, which Facebook gives to people who are “public figures.” My story of becoming a Facebook public figure is tortured (years ago, I went Facebook bankrupt after reaching my “friend” limit), but the end result is a place that feels a bit like Twitter, but with more opportunities for me to buy ads that promote my posts (I’ve tried doing that, and while it certainly increases my exposure, I’m not entirely sure why that matters).

Then there’s my “personal” page. Facebook was kind enough to help me fix this up after my “bankruptcy.” On this personal page I try to keep my friends to people I actually know, with mixed success. But the same problems I’ve always had with Facebook are apparent here — some people I’m actually friends with, others I know, but not well enough to call true “friends.” But I don’t want to be an ass…so I click “confirm” and move on.

On my public page, I post stuff from my work. I readily admit I’m not very good at engaging with this page, and I feel shitty whenever I visit, mainly because I don’t like being bad at media (and Facebook is extremely good at surfacing metrics that prove you suck, then suggesting ways to spend money to fix that problem). But, if you want to follow what I’m up to — mostly stuff I write or stuff we post on NewCo Shift, well, it’s probably a pretty decent way to do that.

However, on my personal page, I’m utterly hopeless. Except for the very occasional random post (a picture of my drum kit? a photo of my kids here and there to appease my guilt?), I don’t view Facebook as a place to curate a “feed” of my life. The place kind of creeps me out, in ways I can’t exactly explain. It feels like work, like a responsibility, like a drug I should avoid, so I avoid it. I’ve had enough work (and drugs) in my life.

But unlike me, most of my true friends put a lot of care and feeding into their Facebook pages. It’s become a place where they announce important milestones, like births, graduations, separations, deaths, the works. These insanely important moments, alas, are all interspersed with random shots of pie, flowers, cocktails, sunsets, and endless, endless, endless advertisements for shit I really don’t care about.

Taken together, the Facebook newsfeed is a place that I’ve decided isn’t worth the time it demands to truly be useful. I know, I could invest the time to mute this and like that, and perhaps Facebook’s great algos would deliver me a better feed. But I don’t, and I feel alone in this determination. And lately it’s begun to seriously fuck up my relationships with important people in my life, namely, my…true friends.

I won’t go into details (it’s personal, after all), but suffice to say I’ve missed some pretty important events in my friends’ lives because everyone else is paying attention to Facebook, but I am not. As a result, I’ve come off looking like an asshole. No, wait, let me rephrase that. I have become an actual asshole, because the definition of an asshole is someone who puts themself above others, and by not paying attention to Facebook, that’s what I’ve become.

That kind of sucks.

It strikes me that this is entirely fixable. One way, of course, is for me to just swallow my pride and pick up the habit of perusing Facebook every day. I just tried that very thing again this weekend. It takes about half an hour or more each day to cull through the endless stream of posts from my 500+ friends, and the experience is just as terrible as it’s always been. For every one truly important detail I find, I have to endure a hundred things I’d really rather not see. Many of them are trivial, some are annoying, and at least ten or so are downright awful.

And guess what? I’m only seeing a minority of the posts that my friends have actually created! I know Facebook is doing its best to deliver to me the stuff I care about, but for me, it’s utterly failing.

Now, it’s fair to say that I’m an outlier — for most people, Facebook works just fine. The Feed seems to nourish most of its sucklers, and there’s no reason to change it just because one grumpy tech OG is complaining. BUT…my problem with my feed is in fact allegorical to what’s become a massive societal problem with the Feed overall: It’s simply untenable to have one company’s algorithms control the personalized feeds of billions of humans around the world. It’s untenable on so many axes, it’s almost not worth going into, but for a bit of background, read the work of Tristan Harris, who puts it in ethical terms, or Eli Pariser, who puts it in political terms, or danah boyd, who frames it in socio-cultural terms. Oh, and then there’s the whole Fake News, trolling, and abuse problem…which, despite its cheapening by our president, is actually a Really, Really Big Deal, and one that threatens Facebook in particular (did you see they’re hiring 3,000 people to address it? Does that scale? Really?!)

It’s time for the model to change. And I have a modest and probably far too simple proposal for you to consider.

This proposal breaks all manner of Silicon Valley product high holy-isms, but bear with me. I think at the end of the day, it’s what we need to get beyond the structural limitations of trusting one company with so much power over our informational diets.

The short form version of my solution is this: Give me filter control over my feed. I know — this probably breaks Facebook’s stranglehold on our attention, and therefore, impacts their business model in unacceptable ways. But I could argue the reverse is true (but this is already getting long, and that’s another post.)

So, when I come to Facebook, here’s what I’d love: Ask me what I’m looking for, and present me with simple ways to filter by the things I want to see. As far as I can tell, the only way to filter your Feed today is to toggle between “Top Stories” and “Most Recent.” That’s lame. Here are some possible additions:

  • Close Friends. Let me see just posts from folks I’m truly close to. Facebook already lets you tag people as “close friends,” but you can’t see only what they post and nothing else. You can “see first” people, but that feels like a half measure at best.
  • Key Moments. Let everyone tag posts they believe are truly important — the deaths, the births, the divorces, the new job, the graduations. Sure, there will be spammers, but hell, Facebook’s good at catching that shit. I know Facebook lets you tag your posts as “Life Events” (did you know that?! I just found out…), but… why can’t you filter the Feed so you only see the ones that matter?
  • Outrage. This is a kind of a joke, but with a purpose: let me see just posts that are political rants. This kind of content has overtaken Facebook, so why not give it a filter of its own so you can see it when you want, or filter it out if you don’t?
  • Kittens. This is the fluff setting. Users, posters, and Facebook’s own AI/Algos can identify this stuff and filter it into a category of its own. This is where the funny videos and pictures of pets go. This is where the endless stream of food porn goes. This is where most of the content from Buzzfeed goes.
  • Bubble Breaker. Show me posts that present views opposite my own, or that force me to engage with ideas I’ve not considered before. This could become an incredibly powerful feature, if it’s done right.

There are probably tons more, and most likely these examples aren’t even the best ones to focus on. And I am sure the smart folks at Facebook have considered this idea, and determined it’s a terrible one for all manner of fine reasons.
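For what it’s worth, here’s a bare-bones sketch of the kind of user-controlled filtering I’m imagining. The filter names mirror the list above; the post fields and tags are invented for illustration, since nothing like this exists in Facebook’s product today.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str
    tags: set = field(default_factory=set)  # e.g. {"life_event"}, {"politics"}, {"fluff"}

# Each filter is a simple predicate the user picks, instead of the platform picking for them.
FILTERS = {
    "close_friends": lambda post, ctx: post.author in ctx["close_friends"],
    "key_moments":   lambda post, ctx: "life_event" in post.tags,
    "outrage":       lambda post, ctx: "politics" in post.tags,
    "kittens":       lambda post, ctx: "fluff" in post.tags,
    # "bubble_breaker" would need a model of my existing views; left as a stub.
}

def filter_feed(feed, chosen_filter, ctx):
    """Return only the posts matching the filter the user chose."""
    predicate = FILTERS[chosen_filter]
    return [post for post in feed if predicate(post, ctx)]

ctx = {"close_friends": {"alice"}}
feed = [
    Post("alice", "We had the baby!", {"life_event"}),
    Post("bob", "Look at this pie.", {"fluff"}),
]
for post in filter_feed(feed, "key_moments", ctx):
    print(post.author, "->", post.text)  # alice -> We had the baby!
```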

But my point is this: Facebook does not really allow us to decide what the Feed is feeding us, and that’s a major problem. It leaves agency in the hands (digits?) of Facebook’s algorithms, and as much as I’d like to believe the company can create super intelligent AIs that nourish us all, I think the facts on the ground state the opposite. So give us back the power to determine what we want to see. We might just surprise you.