Joints After Midnight & Rants Archives | John Battelle’s Search Blog

Might Curators Be An Answer To Twitter’s Signal To Noise Problem?

By - March 28, 2014

My stats in 2008.


And at present: 10X the number of folks followed = a signal-to-noise problem.

Twitter’s lack of growth over the past few months has quickly become its defining narrative – witness Inside Twitter’s plan to fix itself from Quartz, which, despite the headline, fails to actually explain anything about said plan.

As with most things I write about Twitter, I have no particular inside knowledge of the company’s plans, but I’ve written over and over about its core failing – and promise. In 2008 (!) I suggested “TweetSense”, and in 2011, I wrote Twitter and the Ultimate Algorithm: Signal Over Noise (With Major Business Model Implications). It opens with this:

My goal in this post is to outline what I see as the biggest challenge/opportunity in the company’s path. And to my mind, it comes down to this: Can Twitter solve its signal to noise problem?

I go on to say that it most certainly has to, because solving the problem allows it to attach sponsored advertisements (promoted tweets in particular) to just the right timelines in just the right context. I called the solution “TweetWords” – because AdWords came before AdSense. Twitter’s promoted tweets product did in fact evolve toward interest-based targeting – alas, in one way only, as far as I can tell. Advertisers can target Twitter users based on their interests (as expressed by what they tweet, retweet, follow, etc.), but they can’t place their promoted tweets contextually into timelines (IE, in a manner that “fits” with the content around them). Update: Twitter has had keyword targeting – a key step toward contextual ad targeting – for a year now. I missed this. My apologies.

So far, there’s no such thing as TweetSense or TweetWords – where ads are contextual to the stream in which they appear. It seems Twitter has not focused on this particular problem – and it may not have to. Revenues are doing extremely well, and Twitter is clearly opening up new forms of advertising based on larger formats, video (Vine), and cards.

But if the core problem of understanding individual timelines as context is not going to be solved, it’d be a shame – because solving that problem will address Twitter’s core signal to noise issue as well. Here’s more from that 2011 post:

If Twitter can assign a rank, a bit of context, a “place in the world” for every Tweet as it relates to every other Tweet and to every account on Twitter, well, it can do the same job for every possible advertiser on the planet, as they relate to those Tweets, those accounts, and whatever messaging the advertiser might have to offer. In short, if Twitter can solve its signal to noise problem, it will also solve its revenue scale problem. It will have built an auction-driven marketplace where advertisers can bid across those hundreds of millions of tweets for the right to position relevant messaging in real time.

I still think this is a huge opportunity for Twitter, and not just for revenue reasons. I get a ton of value out of the Twitter platform, but I don’t turn to it for news and happenings anymore. I follow too many people, and managing multiple screens on Tweetdeck is just too much work. Instead, I depend on great curators like Jason Hirschorn and his team at MediaReDEF – essentially the morning newspaper for folks like me – and a number of machine-driven services that consume my feed and spit back the most popular shared stories (News.me, Percolate, etc.).

I find the machine services predictable, but Jason’s service is top notch – he’s an Editor’s Editor. His stuff, along with that of folks like Dave Pell, has become my go-to these days. But Twitter can’t get mass market users onto its system via human curation – or can it?

Back when Twitter was small and the signal was high, I found a lot of value in my Twitter feed. Individuals who were great curators were my favorite follows. Over time my feed clogged with too many other types of folks – and I’ve never found a tool that can help me get back to those halcyon days when the best stuff rose to the top. Twitter’s Discover tab is interesting, but lacks instrumentation. Wouldn’t it be cool if Twitter somehow elevated the best curators on its platform – promoting their work and helping them gain audience? Sure, it’d feel a lot like the “who to follow” of the old days (and there was much to criticize with that system), but given how much Twitter now knows about its own platform, it might be a pretty powerful half-step toward giving people a better handle on the richness the platform has to offer. It’d be a great, lightweight way to start using the service, and for power users who have bankrupted their feeds (IE, me), it could really change the game.

I’d love a service on Twitter that pointed out the best curators for any given topic where I’ve indicated a strong interest (and my interests have already been mapped by Twitter, for purposes of promoted tweets). Further – and this is important – I’d love for Twitter to break out those feeds for me as part of its core service – a sort of Headline News to its constant 24-hour barrage. It’d mean a break with the one-size-fits-all mentality of the main Twitter stream, but I think such a break is overdue.

Chances are, Twitter’s already explored and dismissed these ideas, but…are they crazy?


Why You Should Read The Circle, Even If You Don’t Buy It

By - March 24, 2014

Last month I finished Dave Eggers’ latest novel The Circle, the first work by a bona fide literary light that takes on our relationship with today’s Internet technology and, in particular, our relationship with corporations like Google.

It took me a while to start The Circle, mainly because of its poor word of mouth. Most of the folks I know who mentioned it did so in an unfavorable light. “Eggers doesn’t get our industry,” was one theme of the commentary. “He did zero research, and was proud of it!” was another. I wanted to let some time go by before I dove in, if only to let the criticism ebb a bit. It struck me that it’s not a novelist’s job to get an industry *right*, per se, but to tell a story and compel us to think about its consequences in a way that might change us a little bit. I wanted to be open to that magic that happens with a great book, and not read it with too much bias.

Once I began, I found the novel engaging and worthy, but in the end, not wholly fulfilling. I found myself wishing Eggers would reveal something new about our relationship to technology and to companies like Google, Facebook, Apple – but in that department the book felt predictable and often overdone.

But first, a bit of background. “The Circle” refers to a fictional company by the same name, a rather terrifying monolith that arises sometime in the near future. The Circle has the arrogance and design sensibilities of Apple, the ‘we can do it because we’re smarter (and richer) than everyone else’ mentality of Google, the always-be-connected-and-share-everything ethos of Facebook, with a dash of Twitter’s public square and plenty of Microsoft’s once-famed rapaciousness. The Circle is, in short, a mashup of every major tech-company cliche in the book, which to be fair kind of makes it fun. It’s run by the “Three Wise Men,” for example, a direct nod to Google’s ten year rule of the “triumvirate” – Page, Brin, and Schmidt.

The story revolves around Mae Holland, a young woman who jumps from a dull job at a local utility to the golden ticket that is an entry level gig at The Circle. Mae is overwhelmed by her luck and eager to please her new bosses. Early on, reading was a lot of fun, because the patter of the Circle employees feels so…familiar. Every problem has a logical and obvious solution, and nearly all of those solutions involve everyone using The Circle’s services. All employees of the Circle become citizens of the Circle, wittingly or not. They live, eat, sleep, fuck, and party with others from the Circle, because that’s how they get ahead. Mae is swept into this culture willingly, losing sight of her family, non-Circle friends, and most of the facets of her life that once defined her. And so the story is pushed along, as Mae slowly becomes a product of the Circle, even as she (unconvincingly) rebels from time to time.

This phenomenon is certainly not foreign to any young tech worker at Google or Facebook, but Eggers takes it to extremes. He nails the breathless “save the world” mentality that often accompanies the pitches of young tech wizards, but offers no counterpoints save perhaps the reader’s own sense of improbability. For example, one exec at The Circle is working on a plan to implant a chip into every newborn’s bones, so there’d be no more child abductions. Another ruse is the sweeping adoption of “Transparency” by elected officials – every public servant uses The Circle’s technology to be “always on” while attending to their duties, so that anyone can check on them at any time (Mae ultimately goes transparent as well). Toward the end, much of government is close to becoming privatized through The Circle, because it’s more efficient, transparent, and accountable. And various ridiculous mottos espoused by The Circle – “Privacy Is Theft,” “Secrets Are Lies,” “All That Happens Must Be Known” – are readily accepted by society. All of these examples are offered as matter-of-fact, logical ends serving greater social means, but as readers we smirk – they will likely never happen, due to issues the book fails to consider.

Then again… it may be that the lack of contrarian views is intentional, and if you can suspend disbelief, you find yourself in a place not unlike 1984 or Animal Farm – a twisted version of the near future where absolutists have taken over society. And it’s for the creation of that potential that I give The Circle the most credit – it litigates the idea of the corporation as Paternitas, the all-seeing, all-caring, all-nurturing force to which individuals have forsaken themselves so as to allow a greater good. It’s too early to say whether The Circle will stand with such classics, but certainly it does stand as a warning. I found myself disturbed by The Circle, even as I found it easy to dismiss. Because its predictions were too easily made – I couldn’t suspend disbelief.

But perhaps that’s Eggers’ point. The Circle forces us to think critically about the world we’re all busy making, and that’s never a waste of time. And besides, the story has all manner of enjoyable and outlandish contours – if you work in this industry, or just find it fascinating, you’ll leave the book entertained. A worthy read.

Thinking Out Loud: Potential Information

By - March 20, 2014

Plenty of potential at the top of this particular system.

If you took first-year physics in school, you’re familiar with the concepts of potential and kinetic energy. If you skipped physics, here’s a brief review: Kinetic energy is energy possessed by bodies in motion. Potential energy is energy stored inside a body that has the potential to create motion. It’s sort of kinetic energy’s twin – the two work in concert, defining how pretty much everything moves around in physical space.

I like to think of potential energy as a force that’s waiting to become kinetic. For example, if you climb up a slide, you have expressed kinetic energy to overcome the force of gravity and bring your “mass” (your body) to the top. Once you sit at the top of that slide, you are full of the potential energy created by your climb – which you may once again express as kinetic energy on your way back down. Gravity provides what is known as the field, or system, which drives all this energy transfer.
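The slide example can be put in numbers. A minimal sketch – the masses and heights here are illustrative values of my own, not from the post:

```python
# Toy illustration of the slide example: energy stored by the climb
# (potential) converts to motion (kinetic) on the way back down,
# ignoring friction.

G = 9.81  # gravitational acceleration at Earth's surface, m/s^2

def potential_energy(mass_kg: float, height_m: float) -> float:
    """Energy stored by lifting a mass against gravity: U = m * g * h."""
    return mass_kg * G * height_m

def speed_at_bottom(height_m: float) -> float:
    """If all potential energy becomes kinetic (1/2 * m * v^2 = m * g * h),
    the mass cancels and v = sqrt(2 * g * h)."""
    return (2 * G * height_m) ** 0.5

# A 30 kg child at the top of a 2.5 m slide (illustrative numbers):
stored = potential_energy(30.0, 2.5)
speed = speed_at_bottom(2.5)
print(f"Stored at the top: {stored:.0f} J")
print(f"Speed at the bottom: {speed:.1f} m/s")
```

The field (gravity) does the bookkeeping: whatever energy the climb banks, the descent pays back as motion.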

For whatever reason, these principles of kinetic and potential energy have always resonated with me. They are easily grasped, to be certain, but they’re also evocative. Everything around us is either in motion or it’s not – objects are either animated by kinetic energy (a rock flying through the air), or they are at rest, awaiting a kinetic event which might create action and possibly some narrative consequence (a rock lying on the street, picked up by an angry protestor…).

To me, kinetic and potential energy are the bedrock of narrative – there is energy all around us, and once that energy is set in motion, the human drama unfolds. The rock provides mass, the protestor brings energy, and gravity animates the consequence of a stone thrown…

Because we are physical beings, the principles of motion and force are hard wired into how we navigate the world – we understand gravity, even if we can’t run the equations to prove its cause and effect. But when it comes to the world of digital information, we struggle with a framework for understanding cause and effect – in particular with how information interacts with the physical world. We speak of “software eating the world,” “the Internet of Things,” and we massify “data” by declaring it “Big.” But these concepts remain for the most part abstract. It’s hard for many of us to grasp the impact of digital technology on the “real world” of things like rocks, homes, cars, and trees. We lack a metaphor that hits home.

But lately I’ve been using the basic principles of kinetic and potential energy as a metaphor in casual conversations, and it seems to have some resonance. Now, I’m not a physicist, and it’s entirely possible I’ve mangled the concepts as I think out loud here. Please pile on and help me express this as best I can. But in the meantime…

…allow me to introduce the idea of potential information. Like potential energy, the idea of potential information is that all physical objects contain the potential to release information if placed in the right system. In the physical world, we have a very large scale system already in place – it’s called gravity. Gravity provides a field of play, the animating system which allows physical objects (a rock, a child at the top of a slide) to become kinetic and create a narrative (a rock thrown in anger, a child whooping in delight as she slides toward the sand below).

It seems to me that if we were to push this potential information metaphor, then we need our gravity – our system that allows for potential information to become kinetic, and to create narratives that matter. To my mind, that system is digital technology, broadly, and the Internet, specifically. When objects enter the system of technology and the Internet, they are animated with the potential to become information objects. Before contact with the Internet, they contain potential information, but that information is repressed, because it has no system which allows for its expression.

In this framework, it strikes me that many of the most valuable companies in the world are in the business of unlocking potential information – of turning the physical into information. Amazon and eBay unlocked the value of merchandise’s potential information. Airbnb turns the potential information of spare bedrooms into kinetic information valued at nearly $10 billion and counting. Uber unlocked the potential information trapped inside transportation systems.  Nest is animating the potential information lurking in all of our homes. And Facebook leveraged the potential information lurking in our real world relationships.

I’d wager that the most valuable companies yet to be built will share this trait of animating potential information. One of the best ideas I’ve heard in the past few weeks was a pitch from an inmate at San Quentin (part of The Last Mile, an amazing program worthy of all your support). This particular entrepreneur, a former utilities worker, wanted to unlock all the potential information residing in underground gas, sewage, and other utilities. In fact, nearly every good idea I’ve come across over the past few years has had to do with animating potential information of some kind.

Which brings us to Google – and back to Nest. In its first decade, Google was most certainly in the business of animating potential information, but it wasn’t physical information. Instead, Google identified an underutilized class of potential information – the link – and transformed it into a new asset – search. A link is not a physical artifact, but Google treated it as if it were, “mapping” the Web and profiting from that new map’s extraordinary value.

Now the race is on to create a new map – a map of all the potential information in the real world. What’s the value of potential information coming off a jet engine, or a wind turbine? GE’s already on it. What about exploiting the potential information created by your body? Yep, that’d be Jawbone, FitBit, Nike, and scores of others. The potential information inside agriculture? Chris Anderson’s all over it. And with Nest, Google is becoming a company that unlocks not only the information potential of the Web, but of the physical world we inhabit (and yes, it’s already made huge and related moves via its Chauffeur, Earth, Maps, and other projects).

Of course, potential information can be leveraged for more than world-beating startups. The NSA understands the value of potential information – that’s why the agency has been storing as much of it as it possibly can. What does it mean when government has access to all that potential information? (At least we are having the dialog now – it seems if we didn’t have Edward Snowden, we’d have to create him, no?)

Our world is becoming information – but then again, it’s always had that potential. Alas, I’m just a layman when it comes to understanding information theory, and how information actually interacts with physical mass (and yes, there’s a lot of science here, far more than I can grok for the purposes of this post.) But the exciting thing is that we get to be present at the moment all this information is animated into narratives that will have dramatic consequences for our world. This is a story I plan to read deeply in over the coming year, and I hope you’ll join me as I write more about it here.

We Have Yet to Clothe Ourselves In Data. We Will.

By - March 12, 2014

We are all accustomed to the idea of software “Preferences” – that part of the program where you can personalize how a particular application looks, feels, and works. Nearly every application that matters to me on my computer – Word, Keynote, Garage Band, etc. – has preferences and settings.

On a Macintosh computer, for example, “System Preferences” is the control box of your most important interactions with the machine.

I use the System Preferences box at least five times a week, if not more.

And of course, on the Internet, there’s a yard sale’s worth of preferences: I’ve got settings for Twitter, Facebook, WordPress, Evernote, and of course Google – where I probably have a dozen different settings, given I have multiple identities there, and I use Google for mail, calendar, docs, YouTube, and the like.

Any service I find important has settings. It’s how I control my interactions with The Machine. But truth is, Preferences are no fun. And they should be.

The problem: I mainly access preferences when something is wrong. In the digital world, we’ve been trained to see “Preferences” as synonymous with “Dealing With Shit I Don’t Want To Deal With.” I use System Preferences, for example, almost exclusively to deal with problems: fixing the orientation of my monitors when moving from work to home, finding the right Wifi network, debugging a printer, re-connecting a mouse or keyboard to my computer. And I only check Facebook or Google preferences to fix things too – to opt out of ads, resolve an identity issue, or enable some new software feature. Hardly exciting stuff.

Put another way, Preferences is a “plumbing” brand – we only think about it when it breaks.

But what if we thought of it differently? What if managing your digital Preferences was more like….managing your wardrobe?

A few years back I wrote The Rise of Digital Plumage, in which I posited that sometime soon we’ll be wearing the equivalent of “digital clothing.” We’ll spend as much time deciding how we want to “look” in the public sphere of the Internet as we do getting dressed in the morning (and possibly more). We’ll “dress ourselves in data,” because it will become socially important – and personally rewarding –  to do so. We’ll have dashboards that help us instrument our wardrobe, and while their roots will most likely stem from the lowly Preference pane, they’ll soon evolve into something far more valuable.

This is a difficult idea to get your head around, because right now, data about ourselves is warehoused on huge platforms that live, in the main, outside our control. Sure, you can download a copy of your Facebook data, but what can you *do* with it? Not much. Platforms like Facebook are doing an awful lot with your data – that’s the trade for using the service. But do you know how Facebook models you to its partners and advertisers? Nope. Facebook (and nearly all other Internet services) keep us in the dark about that.

We lack an ecosystem that encourages innovation in data use, because the major platforms hoard our data.

This is retarded, in the nominal/verb sense of the word. Facebook’s picture of me is quite different from Google’s, Twitter’s, Apple’s, or Acxiom’s*. Imagine what might happen if I, as the co-creator of all that data, could share it all with various third parties that I trusted? Imagine further if I could mash it up with other data entities – be they friends of mine, bands I like, or even brands?

Our current model of data use, in which we outsource individual agency over our data to huge factory farms, will soon prove a passing phase. We are at once social and individual creatures, and we will embrace any technology that allows us to express who we are through deft weavings of our personal data – weavings that might include any number of clever bricolage with any number of related cohorts. Fashion has its tailors, its brands, its designers and its standards (think blue jeans or the white t-shirt). Data fashion will develop similar players.

Think of all the data that exists about you – all those Facebook likes and posts, your web browsing and search history, your location signal, your Instagrams, your supermarket loyalty card, your credit card and Square and PayPal purchases, your Amazon clickstream, your Fitbit output – think of each of these as threads which might be woven into a fabric, and that fabric then cut into a personalized wardrobe that describes who you are, in the context of how you’d like to be seen in any given situation.

Humans first started wearing clothing about 170,000 years ago. “Fashion” as we know it today is traced to the rise of European merchant classes in the 14th century. Well before that, clothing had become a social fact. A social fact is a stricture imposed by society – for example, if you don’t wear clothing, you are branded as something of a weirdo.

Clothing is an extremely social artifact –  *what* you wear, and how, are matters of social judgement and reciprocity. We obsess over what we wear, and we celebrate those “geniuses” who have managed to escape this fact (Einstein and Steve Jobs both famously wore the same thing nearly every day).

There’s another reason the data fabric of your life is not easily converted into clothing – because at the moment, digital clothing is not a social fact. There’s no social pressure for you to “look” a certain way, because thanks to our outsourcing of our digital identity to places like Facebook, Twitter, and Google+, we all pretty much look the same to each other online. As I wrote in Digital Plumage:

How strange is it that we as humans have created an elaborate, branded costume culture to declare who we are in the physical world, but online, we’re all pretty much wearing khakis and blue shirts?

As it relates to data, we are naked apes – but this is about to change. It’s far too huge an opportunity.

Consider: The global clothing industry grosses more than $1 trillion annually. We now spend more time online than we do watching television. And as software eats the world, it turns formerly inanimate physical surroundings into animated actors on our digital stage. As we interact with these data-lit spaces, we’ll increasingly want to declare our preferences inside them via digital plumage.

An example. Within a few years, nearly every “hip” retail store will be lit with wifi, sensors, and sophisticated apps. In other words, software will eat the store. Let’s say you’re going into an Athleta outlet. When you enter, the store will know you’ve arrived, and begin to communicate with your computing device – never mind if it’s Glass, a mobile phone, or some other wearable. As the consumer in this scenario, won’t you want to declare “who you are” to the retail brand’s sensing device? That’s what you do in the real world, no? And won’t you want to instrument your intent – provide signal that will allow the store to understand your intent? And wouldn’t the “you” at Athleta be quite different from, say, the “you” that you become when shopping at Whole Foods or attending a Lord Huron concert?

Then again, you could be content with whatever profile Facebook has on you (or Google, or … whoever). Good luck with that.

I believe we will embrace the idea of describing and declaring who we are through data, in social context. It’s wired into us. We’ve evolved as social creatures. So I believe we’re at the starting gun of a new industry. One where thousands of participants take our whole data cloth and stitch it into form, function, and fashion for each of us. Soon we’ll have a new kind of “Preferences” – social preferences that we wear, trade, customize, and buy and sell.

In a way, younger generations are already getting prepared for such a world – what is the selfie but a kind of digital dress up?

Lastly, as with real clothing, I believe brands will be the key driving force in the rise of this industry. As I’m already over 1,000 words, I’ll write more on that idea in another post. 

*(fwiw, I am on Acxiom’s board)

To Be Clear: Do Not Build Your Brand House On Land You Don’t Own

By - February 28, 2014

I took a rigorous walk early this morning, a new habit I’m trying to adopt – today was Day Two. Long walks force a certain meditative awareness. You’re not moving so fast that you miss the world’s details passing by – in fact, you can stop to inspect something that might catch your eye. Today I explored an abandoned log cabin set beside a lake, for example. I’ve sped by that cabin at least a thousand times on my mountain bike, but when you’re walking, discovery is far more of an affordance.

Besides the cabin, the most remarkable quality of today’s walk was the water – it’s (finally) been raining hard here in Northern California, and the hills and forests of Marin are again alive with the rush of water coursing its inevitable path toward the sea. White twisting ribbons cut through each topographic wrinkle, joining forces to form great streams at the base of any given canyon. The gathering roar of a swollen stream, rich with foam and brown earth – well, it’s certainly good for the soul.

I can’t say the same of my daily “walks” through the Internet. Each day I spend an hour or more reading industry news. I’m pretty sure you do too – that’s probably the impetus for your visit here – chances are you clicked on a link on Facebook, LinkedIn, Twitter, Google, or in email. Someone you know said “check this out,” or – and bless you if this is the case – you actually follow my musings and visit on a regular basis.

But the truth is, we now mostly find content via aggregated streams. Streams are the new distribution. We dip in and out of streams, we curate and search our streams, we abandon barren streams and pick up new streams, hoping they might prove more nourishing. Back before streams ruled the world, of course, we had a habit of visiting actual “pools” – sites that we found worthy because they did a good job of creating content that we valued. (Before that, I think we read actual publications. But that was a long, long time ago…)

Which got me thinking. What makes a stream? In the real world, streams are made from water, terrain, and gravity. To belabor the metaphor to the media business, content is the water, publishers are the terrain, and our thirst for good content is the gravity.

As publishers – and I include all marketing brands in this category – the question then becomes: “What terrain do we claim as ours?”

Deciding where to lay down roots as a publisher is an existential choice. Continuing the physical metaphor a bit further, it’s the equivalent of deciding what land to buy (or lease). If your intention is to build something permanent and lasting on that land, it’s generally a good idea to *own* the soil beneath your feet.

This is why I wrote Put Your Taproot Into the Independent Web two years ago. If you’re going to build something, don’t build on land someone else already owns. You want your own land, your own domain, your own sovereignty.

Trouble is, so much of the choice land – the land where all the *people* are – is already owned by someone else: by Google, Facebook, Twitter, LinkedIn, Yahoo, and Apple (in apps, anyway). These platforms are where all the people are, after all. It’s where the headwaters form for most of the powerful streams on the Internet. It’s tempting to build your brand on those lands – but my counsel is simple: Don’t. There’s plenty of land out there on the Rest of The Internet. In fact, there’s as much land as you want, and what you make of it is up to you as a publisher.

Quick: Name one successful publisher that built its brand on the back of a social platform? Can’t do it? Neither can I, unless you count sites like UpWorthy. And those flying near the social network sun risk getting seriously burned. There’s a reason publishers don’t build on top of social platforms: publishers are an independent lot, and they naturally understand the value of owning your own domain. Publishers don’t want to be beholden to the shifting sands of inscrutable platform policies. So why on earth would a brand?

Despite the fact that my once-revolutionary bromide “all brands are publishers” is now commonplace, most brands still don’t quite understand how to act like a publisher.

Which takes me to this piece, Facebook is not making friends on Madison Avenue (Digiday). Besides the quippy headline and the rather obvious storyline (a burgeoning Internet company failing to satisfy agencies? Pretty much Dog-Bites-Man), the thing that got me to perk up was this:

One point of frustration is Facebook’s ongoing squeezing of traffic to organic brand content. A digital agency exec described a recent meeting with Facebook that turned contentious. In what was meant to be a routine meeting, the exec said the Facebook rep told him the brands the agency works with would now have to pay Facebook for the same amount of reach they once enjoyed automatically. That position and Facebook’s perceived attitude have led to some disillusionment on Madison Avenue, where many bought into the dream peddled by Facebook that brands could set up shop on the platform as “publishers” and amass big audiences on their own….

…The cruel irony in all of this is that brands themselves greatly helped Facebook by giving it free advertising in their TV commercials and sites, urging their customers to “like” the brand — and paying Facebook to pile up likes. Facebook has returned the favor by choking off  brands’ access to those communities. That’s one expensive and frustrating lesson that it’s better to own than rent.

Put another way: “Wait, I did what you asked, Facebook, and set up a big content site on your platform that drew a fair number of visitors organically. Now you’ve changed the rules of the game, and you want me to pay to get their attention?!”

Yup. You leased your land, Mr. Brand Marketer, and the rent’s going up. If I were you, I’d get back to your own domain. Spend your money building something worthy, then spend to drive people there. Your agencies have entire creative and media departments that are good at just such practices. They might even spend a fair amount carefully purchasing distribution through Facebook’s streams. I’m guessing Facebook will be happy to take your money. But there’s no point in paying them twice.

 

LinkedIn Is Now A Publishing Platform. Cool. But First Get Your Own Site.

By - February 21, 2014

I’ve been a LinkedIn “Influencer” for a year or so, and while the honorific is flattering, I’m afraid I’ve fallen down in my duties to post there. The platform has proven it has significant reach, and for folks like me, who thrive on attention for words written, it’s certainly an attractive place to write. Of course, it pays nothing, and LinkedIn makes all the money on the page views my words drive, but … that’s the quid pro quo. We’ll put yer name in lights, kid, and you bring the paying customers.

One reason I don’t post on LinkedIn that often is my habit of writing here: there are very few times I come up with an idea that doesn’t feel like it belongs on my own site. And by the time I’ve posted it here, it seems like overkill to go ahead and repost it over on LinkedIn (even though they encourage exactly that kind of behavior). I mean, what kind of an egomaniac needs to post the same words on two different platforms? And from what I recall, Google tends to penalize duplicate content in search results if it thinks you’re posting the same piece in more than one place.

But this news, that LinkedIn is opening up its publishing platform to all comers, has changed my mind. From now on I’m going on record as a passionate advocate of posting to your own site first, then posting to LinkedIn (or any other place, such as Medium).

Why? Well, it comes down to owning your own domain. Building out a professional profile on LinkedIn certainly makes sense, and bolstering that cv with intelligent pieces of writing is also a great idea. But if you’re going to take the time to create content, you should also take the time to create a home for that content that is yours and yours alone. WordPress makes it drop dead easy to start a site. Take my advice, and go do it. Given the trendlines of digital publishing, where more and more large platforms are profiting from, and controlling, the works of individuals, I can’t stress enough: Put your taproot in the independent web. Use the platforms for free distribution (they’re using you for free content, after all). And make sure you link back to your own domain. That’s what I plan to do when I post this to LinkedIn.  Right after I post this here.

We Are Not Google, Therefore, We Are

By - February 06, 2014

If you read me regularly, you know I am a fan of programmatic adtech. In fact, I think it’s one of the most important developments of the 21st century. And over the past few quarters, adtech has gotten quite hot, thanks to the recent successes of Rocket Fuel (up to 50 and holding from its open at 29), Criteo (trading above its already inflated opening price of 31), and, by extension, Facebook and Twitter (don’t get me started, but both these companies should be understood as programmatic plays, in my opinion).

But while I like all those companies, I find Rubicon’s recent filing far more interesting. Why? Well, here’s the money shot of the S-1:

Independence. We believe our independent market position enables us to better serve buyers and sellers because we are not burdened with any structural conflicts arising from owning and operating digital media properties while offering advertising purchasing solutions to buyers.

Ah, there it is, in a nutshell: “We are not Google, therefore, we are.” Rubicon uses the words “independent” or “independence” more than half a dozen times in its S-1, about the same number of times the word “Google” is invoked.

I am in full support of an independent adtech ecosystem. It’s vitally important that the world have options when it comes to what flavor of programmatic infrastructure it uses to transact – and when I say the “world” I mean everybody, from publishers to advertisers, consumers to service providers. Criteo and Rocket Fuel are important companies, but they don’t directly compete with Google – their business leverages buying strategies to maximize profits. Rubicon, on the other hand, has a full adtech stack and is focused on publishers (and yes, that’s what sovrn is as well).

Over time, we won’t be talking about “publishers” and “advertisers,” we’ll be talking about “consumers” and “services.” And the infrastructure that connects those two parties should not be a default – it should be driven by competition between independent players.

So bravo, Rubicon, for making that statement so clearly in your S-1. I wish you luck.

How Facebook Changed Us, and How We Might Change Again

By - February 05, 2014

(image) If you weren’t under a rock yesterday, you know Facebook turned ten years old this week (that’s a link to a Zuckerberg interview on the Today Show, so yep, hard to miss). My favorite post on the matter (besides Sara’s musings here and here – she was at Harvard with Zuck when the service launched) is from former Facebook employee Blake Ross, who penned a beauty about the “Rules” that have fallen over the past ten years. Re/code covers it here, and emphasizes how much has changed in ten years – what was once sacred is now mundane. To wit:

- No, you can’t let moms join Facebook because Facebook is for students.

- No, you can’t put ads in newsfeed because newsfeed is sacred.

- No, you can’t allow people to follow strangers because Facebook is for real-world friends.

- No, you can’t launch a standalone app because integration is our wheelhouse.

- No, you can’t encourage public sharing because Facebook is for private sharing.

- No, you can’t encourage private sharing because Facebook is moving toward public sharing.

- No, you can’t encourage public sharing because Facebook is moving toward ultra-private sharing between small groups.

And this one’s a snapchat with about 3 seconds left, so hurry up and bludgeon someone with it:

- No, you can’t allow anonymity because Facebook is built on real identity.

None of these pillars came down quietly. They crashed with fury, scattering huddles of shellshocked employees across watering holes like dotted brush fires after a meteor strike.

Re/code ends its post with “makes you wonder what might change in the next 10 years.” Well yes, it certainly does.

A close read of Ross’ post leaves me wondering about “informational personhood.” He considers all the change at Facebook, and his role in it as a sometimes frustrated employee, concluding that what he got from the experience was perspective:

It took me probably half a dozen meteoric nothings before I learned how to stop worrying and love the bomb. A congenital pessimist, I gradually began to see the other side of risk. Now, when the interns wanted to mix blue and yellow, I could squint and see green; but I thought the sun might still rise if everything went black. I felt calmer at work. I began to mentor the newer hires who were still afraid of meteors. Today I watch Facebook from a distance with 1.2 billion other survivors, and my old fears charm like the monster under the bed: I couldn’t checkmate this thing in a single move even if I wanted to. But even now, I know someone over there is frantically getting the band back together.

Fortunately, this blossoming resilience followed me home from work:

My very chemistry has changed. In relationships, hobbies, and life, I find myself fidgeting in the safe smallness of the status quo. I want more from you now, and I want more from myself, and I’m less afraid of the risks it’ll take to get there because I have breathed through chaos before and I believe now—finally—that we’ll all still be here when the band stops playing.

This is, of course, just a staple of adulthood. It’s what we were missing that night when meteors left us crater-faced for senior prom and we all thought our lives were over. It’s called perspective, and it’s the best thing I got from growing up Facebook.

Hmmm. So many things to ponder here. The constant renegotiation of the rules at Facebook changed his “very chemistry.” A fascinating observation – heated debate about the rules of our social road made Ross a different person. Did this happen to us all? Is it happening now? For example, are we, as a culture, “getting used to” having the policies around our informational identities – our “infopersons” – routinely renegotiated by a corporate entity?

I think so far the answer is yes. I’m not claiming that’s wrong, per se, but rather, it is interesting and noteworthy. This perspective that Ross speaks of – this “growing up” – it bears more conversation, more exploration. What are the “Rules” right now, and will they change in ten years, or less? (And these “Rules” need not be only internal to Facebook – I mean “Rules” from the point of view of ourselves as informational people.)

Some that come to mind for me include:

- I don’t spend that much of my time thinking about the information I am becoming, but when I do, it makes me uneasy.

- I can always change the information that is known about me, if it’s wrong, but it’s a huge PITA.

- I can always access the information that is known about me, if I really want to do the work (but the truth is, I usually don’t).

- I know the information about me is valuable, but I don’t expect to derive any monetary value from it.

- It’s OK for the government to have access to all this information, because we trust the government. (Like it or not, this is in fact true by rule of law in the US).

- It’s OK for marketers to have information about me, because it allows for free Internet services and content. (Ditto)

- I understand that most of the information that makes up my own identity is controlled by large corporations, because in the end, I trust they have my best interests at heart (and if not, I can always leave).

What rules do you think much of our society currently operates under? And are they up for renegotiation, or are we starting to set them in stone?

Bill Gates Active Again At Microsoft? Bad Idea.

By - February 04, 2014

(image) This story reporting that Gates will return to Microsoft “one day a week” to focus on “product” has been lighting up the news this week. But while the idea of a founder returning to the mothership resonates widely in our industry (Jobs at Apple, Dorsey at Twitter), in Gates’ case I don’t think it makes much sense.

It’s no secret in our industry that Microsoft has struggled when it comes to product. It’s a very distant third in mobile (even though folks praise its offerings), its search engine Bing has struggled to win share against Google despite billions invested, and the same is true for Surface, which is well done but selling about one tablet for every 26 or so iPads (and that’s not counting Android). And then there’s past history – you know, when Gates was far more involved: the Zune (crushed by the iPod), that smart watch (way too early), and oh Lord, remember Clippy and Bob?

If anything, what Gates brought to the product party over the past two decades was a sense of what was going to be possible, rather than what was going to work right now. He’s been absolutely right on the trends, but wrong on the execution against those trends. And while his gravitas and brand would certainly help rally the troops in Redmond, counting on him to actually create product sounds like grasping at straws, and ultimately would prove a huge distraction.

Not to mention, a return to an active role at Microsoft would be a bad move for Gates’ personal brand, which, along with Bill Clinton’s, is one of the most remarkable transformation stories of our era. Lest we forget, Gates was perhaps the most demonized figure of our industry, pilloried and humbled by the US Justice Department and widely ostracized as an unethical, colleague-berating monopolist. The most famous corporate motto of our time – “Don’t be evil” – can thank Microsoft for its early resonance. In its formative years, Google was fervently anti-Microsoft, and it made hay on that positioning.

Bill Gates has become the patron saint of philanthropy and the poster child of rebirth, and from what I can tell, rightly so. Why tarnish that extraordinary legacy by coming back to Microsoft at this late date? Working one day a week at a company famous for its bureaucracy won’t change things much, and might in fact make things worse – if the product teams at Microsoft spend their time trying to get Gates’ blessing instead of creating product/market fit, that’s just adding unnecessary distraction in a market that rewards focus and execution.

If Gates really wants to make an impact at Microsoft, he’d have to throw himself entirely back into the company, focusing the majority of his intellect and passion on the company he founded nearly 40 years ago. And I’m guessing he doesn’t want to do that – it’s just too big a risk, and it’d mean he’d have to shift his focus from saving millions of lives to beating Google, Apple, and Samsung at making software and devices. That doesn’t sound like a very good trade.

 

Step One: Turn The World To Data. Step Two?

By - February 03, 2014

Is the public ready to accept the infinite glance of our own technology? That question springs up nearly everywhere I look these days, from the land rush in “deep learning” and AI companies (here, here, here) to the cultural stir that accompanied Spike Jonze’s Her. The relentless flow of Snowden NSA revelations, commercial data breaches, and our culture’s ongoing battle over personal data further frame the question.

But no single development made me sit up and ponder as much as the recent news that Google’s using neural networks to decode images of street addresses. On its face, the story isn’t that big a deal: Through its Street View program, Google collects a vast set of images, including pictures of actual addresses. This address data is very useful to Google, as the piece notes: “The company uses the images to read house numbers and match them to their geolocation. This physically locates the position of each building in its database.”

In the past, Google has used teams of humans to “read” its street address images – in essence, to render images into actionable data. But using neural network technology, the company has trained computers to extract that data automatically – and with a level of accuracy that meets or beats human operators. Not to mention, it’s a hell of a lot faster, cheaper, and more scalable.
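The core idea – training a classifier to map raw pixels to digit labels – can be sketched at toy scale. Here’s a minimal single-layer softmax classifier in pure Python, trained on made-up 3×5-pixel “digits”; the bitmaps and every name below are illustrative stand-ins, not Google’s actual system, which uses far deeper networks on real Street View photographs:

```python
import math

# Toy 3x5 binary bitmaps (row-major, 15 pixels) standing in for
# cropped images of house-number digits. Purely illustrative data.
DIGITS = {
    0: "111101101101111",
    1: "010110010010111",
    2: "111001111100111",
    3: "111001111001111",
    4: "101101111001001",
    5: "111100111001111",
    6: "111100111101111",
    7: "111001001001001",
    8: "111101111101111",
    9: "111101111001111",
}

X = [[float(c) for c in bitmap] for bitmap in DIGITS.values()]
y = list(DIGITS.keys())

# Single-layer softmax classifier: one weight vector and bias per class.
W = [[0.0] * 15 for _ in range(10)]
b = [0.0] * 10

def scores(x):
    return [sum(w * xi for w, xi in zip(W[c], x)) + b[c] for c in range(10)]

def softmax(z):
    m = max(z)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in z]
    total = sum(exps)
    return [e / total for e in exps]

# Plain stochastic gradient descent on the cross-entropy loss.
lr = 0.5
for _ in range(2000):
    for x, label in zip(X, y):
        p = softmax(scores(x))
        for c in range(10):
            grad = p[c] - (1.0 if c == label else 0.0)
            for j in range(15):
                W[c][j] -= lr * grad * x[j]
            b[c] -= lr * grad

def predict(x):
    s = scores(x)
    return s.index(max(s))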

Sure, this means Google doesn’t have to pay people to stare at pictures of house numbers all day, but to me, it means a lot more. When I read this piece, the first thing that popped into my mind was “anything that can be seen by a human, will soon be seen by a machine.” And if it’s of value, it will be turned into data, and that data will be leveraged by both humans and machines – in ways we don’t quite fathom given our analog roots.

I remember putting up my first street number, on a house in Marin my wife and I had just purchased that was in need of some repair. I went to the hardware store, purchased a classic “6” and “3”, and proudly hammered them onto a fence facing the street. It was a public declaration, to be sure – I wanted to be found by mailmen, housewarming partygoers, and future visitors. But when I put those numbers on my fence, I wasn’t wittingly creating a new entry in the database of intentions. Google Street View didn’t exist back then, and the act of placing a street number in public view was a far more “private” declaration. Sure, my address was a matter of record – with a bit of shoe leather, anyone could go down to public records and find out where I lived. But as the world becomes machine-readable data, we’re slowly realizing the full power of the word “public.”

In the US and many other places, the “public” has the right to view and record anything that is in sight from a public place – this is the basis for tools like Street View. Step one of Street View was to get the pictures in place – in a few short years, we’ve gotten used to the idea that nearly any place on earth can now be visited as a set of images on Google. But I don’t think we’ve quite thought through what happens when those images turn into data that is “understood” by machines. We’re on the cusp of that awakening. I imagine it’s going to be quite a story.

Update: Given the theme of “turning into data,” I was remiss not to mention the concept of “faceprints” in this piece. As addresses are to our homes, our faces are to our identities – see this NYT piece for an overview.