Reading over my picks from the past week, I noticed a strong theme – we’re using more and more apps, creating more and more data, but we’re not seeing the true value we might from connecting all the dots. Sure, the NSA is – and Facebook, Google, and other large platforms are as well. But imagine what happens when *we* get those insights?! A move from the center (big platforms) to the node (us) of the information ecosystem seems imminent…
Nearly three hours a day on our mobile phones (and we’re not talking). Most of that time we’re in “AppWorld” – not on “the open web.” That is a scary trend, to my mind. But I think it’s temporary. Or rather, I hope it is.
Turns out, as a service, you have to provide what people want. For the most part. Facebook is considering the impact of apps like Snapchat and Secret. Clearly, it’s not what the social networking giant *wants* – but perhaps this is a worm turning.
Indeed, if this outgoing NSA Director *missed* the big data revolution, he’d have been outgoing a long time ago…
Apple is still #1. I wonder how long this will last, given Google’s ambitious push into entirely new markets.
Yes, I want this. Please. And please make it work with my FuelBand?!
Startling to see how easy it is for someone with a few bits of digital information to figure out quite a lot more about us.
Yes. I’ve been on about this for some time. Because of AppWorld, all these silos of data have yet to get to second and third-order insights. But we are starting to, slowly…
Most likely, Google won’t do it the way the carriers are doing it. And I for one hope they go for it.
In light of the CEO controversy, worth remembering what it is about Mozilla that makes it unique.
Because no edition of Else is complete without some thinking about Bitcoin.
Reading this closely, and he’s talking about what I opened with – connecting all the dots…
As many of you know, each year I write a set of predictions about the industry – this year, however, I had a bit of a hard time getting going. The reason? A persistent sense of “existential anxiety” around climate change. In Predictions 2014: A Difficult Year To See, I wrote:
I’ve been mulling these predictions for months, yet one overwhelming storm cloud has been obscuring my otherwise consistent forecasting abilities. The subject of this cloud has nothing – directly – to do with digital media, marketing, technology or platform ecosystems – the places where I focus much of my writing. But while the topic is orthogonal at best, it’s weighing heavily on me.
So what’s making it harder than usual to predict what might happen over the coming year? In a phrase, it’s global warming. I know, that’s not remotely the topic of this site, nor is it in any way a subject I can claim even a modicum of expertise. But as I bend to the work of a new year in our industry, I can’t help but wonder if our efforts to create a better world through technology are made rather small when compared to the environmental alarm bells going off around the globe.
I’ve been worried about the effects of our increasingly technologized culture on the earth’s carefully balanced ecosystem for some time now. But, perhaps like you, I’ve kept it to myself, and assuaged my concerns with a vague sense that we’ll figure it out through a combination of policy, individual and social action, and technological solutions. Up until recently, I felt we had enough time to reverse the impact we’ve inflicted on our environment. It seemed we were figuring it out, slowly but surely.
But if this latest report from the UN is any indication, we’re not figuring it out fast enough. In fact, “the costs of inaction are catastrophic,” according to Sec. of State John Kerry.
So how can we take action? In my post, I noted:
As Ben Horowitz pointed out recently, one key meaning of technology is “a better way of doing things.” So if we believe that, shouldn’t we bend our technologic infrastructure to the world’s greatest problem? If not – why not? Are the climate deniers right? I for one don’t believe they are. But I can’t prove they aren’t. So this constant existential anxiety grows within me – and if conversations with many others in our industry are any indication, I’m not alone.
Indeed, I am not alone, and today, a stellar group of people voted with their reputation and joined the #Climate movement. Sure, a hashtag isn’t going to change the world alone, but it’s a start – and it’s more than just posting on social networks. Created by my friend Josh Felser and a dedicated team, #Climate is “leveraging the social media reach of several dozen ‘influencers’ to spread the word about concrete actions that citizens can take to confront the challenges of global warming. The tech-heavy class of inaugural influencers, who have a combined reach of 80 million people on Facebook and Twitter, include: Al Gore, Twitter CEO Dick Costolo, Medium founder Evan Williams, California Lieutenant Governor Gavin Newsom, actor Mark Ruffalo and the NBA.” (Re/Code)
I’m honored to be included in the list and will be using the app from now on. If you follow me on Twitter, I hope you’ll find my calls to action worthy of your time. Who knows, we might just be starting something….
Lost in the latest Facebook kerfuffle (if you’ve missed it, read this cheeky Eat24 post, and the hundreds of articles it prompted) is the fact that we all seemed quite confused about what Facebook’s newsfeed is supposed to be. Is it an intimate channel for peer-to-peer communication, where you stay in touch with people who matter to you? Is it a place you go to find out what’s happening in the world at large, a watercooler of sorts, a newspaper, as Zuckerberg has said? Is it a marketing channel, where any brand can pay for the right to pitch you things based on your stated or inferred interests? Is it all of these things? Can it be?
We’re in the midst of finding out. Of course, I have an opinion. It boils down to this: Facebook’s newsfeed should be what I tell it to be, not what Facebook – or anyone else – tells me it should be. If I want to fill my newsfeed with Eat24 sushi porn, then it should be brimming with it. If I tell it to only show musings from Dwight Schrute and Marc Cuban, then that’s what I want to see. If I love what Mickey D’s is posting and want to see the best of their posts as determined by engagement, then Big Mac me. And if I prefer to keep it to my immediate family, then damnit, show me that.
If the cost of giving me that kind of control is that I have to see a marketer’s post every five or six entries, I’m cool with that. That’s what Twitter does, and it doesn’t bother me, it’s table stakes, I get it. But what I think Facebook’s got wrong is where they’ve instrumented the controls. Facebook spends an inordinate amount of time and energy tweaking a black box set of algorithms to figure out what it thinks I want in my feed, boiling an ever-larger ocean of content into a stream of stuff it believes I want. For reasons I can’t fathom, it doesn’t give me the chance to truly curate my feed, beyond some clunky lists and filters which, from what I can tell, are only good for blocking people or indicating preference for a particular feed (but not saying, for example, “show me everything from this source.”)
Facebook is therefore viewed as paternalistic – it has a vibe of “we’ll figure out what’s best to show you.” You have *some* input into the feed, but you are not encouraged to actively curate it the way you can curate friends or brands on Instagram or Twitter (and I think both have a long way to go as well). I think Facebook could trump all this debate once and for all by putting the end-user of its service in charge, and iterating the newsfeed based on that feedback. Scary, perhaps, but ultimately liberating and, more importantly, truly authentic. Over time, the value will accrue back. As we say around the office at NewCo, give (control) to get (benefit back).
(image) If you’re a reader of this newsletter, you’re in elite company. Each week I choose ten or so stories from the score or so that I save to Evernote, and I annotate them after about three glasses of wine on a Sunday night. I make no pretense to be Jason or Dave; instead, this is a way to remember the most important stories of the past week through the filter of “the book.” And when I say “the book,” I mean That Project That Has Haunted Me For More Than Five Years But Is Increasingly Becoming Real. In other words, if you read this newsletter (or post), you’re a true fan of my work. And for that, I am thankful.
This past week was full of gems. The New Yorker reminded us how poignant digital life can be. We struggled with the ethics of 3D printing, even as we reveled in its power to save lives. Oh, and then there’s the singularity, and protecting us from the same. An epic Facebook rant, more Bitcoin, more brain-twisters about who’s a person, alive, dead, or corporate, in our increasingly mashed up world. To the links…
What happens if you die in your car, alone in your garage, and your bill pay (and some well-intentioned neighbors) keep the world thinking you’re alive? You still “exist” – and this certainly makes one think about what the word “exist” really means.
There are thousands of high-end PhDs at Google and many other places who want to create AI capable of thinking like a human. Then there’s this small group – seven in total – worrying about what happens if it actually comes to pass. Yikes.
What happens to religion when we can play God?
Never mind, we already are. And thank God for that.
Or, put another way, we have more money than we need for our current business, so let’s play the odds on where it might all go.
Long read. Worthy.
Short read. Topical.
Hard to disagree with the logic here. If Snowden has forced a President to change policy, he’s a whistleblower, not a traitor.
Are corporations “people” with “religious beliefs”? This is a VERY BIG QUESTION now before the Supreme Court. It matters, because, you know, these “people” are making…machines that ACT LIKE PEOPLE. Does your head hurt yet? Why not?!
Like this newsletter? Sign up!
Twitter’s lack of growth over the past few months has quickly become its defining narrative – witness Inside Twitter’s plan to fix itself from Quartz, which despite the headline, fails to actually explain anything about said plan.
As with most things I write about Twitter, I have no particular inside knowledge of the company’s plans, but I’ve written over and over about its core failing, and promise. In 2008 (!) I suggested “TweetSense“, and in 2011, I wrote Twitter and the Ultimate Algorithm: Signal Over Noise (With Major Business Model Implications). It opens with this:
My goal in this post is to outline what I see as the biggest challenge/opportunity in the company’s path. And to my mind, it comes down to this: Can Twitter solve its signal to noise problem?
I go on to say that it most certainly has to, because solving the problem allows it to attach sponsored advertisements (promoted tweets in particular) to just the right timelines in just the right context. I called the solution “TweetWords” – because AdWords came before AdSense. Twitter’s promoted tweets product did in fact evolve toward interest-based targeting – alas, in one way only, as far as I can tell. Advertisers can target Twitter users based on their interests (as expressed by what they tweet, retweet, follow, etc.), but they can’t place their promoted tweets contextually into timelines (IE, in a manner that “fits” with the content around them). **Update:** Twitter has had keyword targeting – a key step in contextual ad targeting – for a year now. I missed this. My apologies.
So far, there’s no such thing as TweetSense or TweetWords – where ads are contextual to the stream in which they appear. It seems Twitter has not focused on this particular problem – and it may not have to. Revenues are doing extremely well, and Twitter is clearly opening up new forms of advertising based on larger formats, video (Vine), and cards.
But if the core problem of understanding individual timelines as context is not going to be solved, it’d be a shame – because solving that problem will address Twitter’s core signal to noise issue as well. Here’s more from that 2011 post:
If Twitter can assign a rank, a bit of context, a “place in the world” for every Tweet as it relates to every other Tweet and to every account on Twitter, well, it can do the same job for every possible advertiser on the planet, as they relate to those Tweets, those accounts, and whatever messaging the advertiser might have to offer. In short, if Twitter can solve its signal to noise problem, it will also solve its revenue scale problem. It will have built an auction-driven marketplace where advertisers can bid across those hundreds of millions of tweets for the right to position relevant messaging in real time.
I still think this is a huge opportunity for Twitter, and not just for revenue reasons. I get a ton of value out of the Twitter platform, but I don’t turn to it for news and happenings anymore. I follow too many people, and managing multiple screens on Tweetdeck is just too much work. Instead, I depend on great curators like Jason Hirschorn and his team at MediaReDEF – essentially the morning newspaper for folks like me – and a number of machine-driven services that consume my feed and spit back the most popular shared stories (News.me, Percolate, etc).
I find the machine services are predictable, but Jason’s service is top notch – he’s an Editor’s Editor. His stuff, along with that of folks like Dave Pell, has become my go-to these days. But Twitter can’t get mass-market users onto its system via human curation – or can it?
Back when Twitter was small and the signal was high, I found a lot of value in my Twitter feed. Individuals who were great curators were my favorite follows. Over time my feed clogged with too many other types of folks – and I’ve never found a tool that can help me get back to those halcyon days where the best stuff rose to the top. Twitter’s Discover tab is interesting, but lacks instrumentation. Wouldn’t it be cool if Twitter somehow elevated the best curators on its platform – promoting their work and helping them gain audience? Sure, it’d feel a lot like the “who to follow” feature of the old days (and there was much to criticize with that system), but given how much Twitter now knows about its own platform, it might be a pretty powerful half-step toward giving people a better handle on the richness the platform has to offer. It’d be a great, lightweight way to start using the service, and for power users who have bankrupted their feeds (IE, me), it could really change the game.
I’d love a service on Twitter that pointed out the best curators for any given topic where I’ve indicated a strong interest (and my interests have already been mapped by Twitter, for purposes of promoted tweets). Further – and this is important – I’d love for Twitter to break out those feeds for me as part of its core service – a sort of Headline News to its constant 24-Hour barrage. It’d mean a break with the one-size-fits-all mentality of the main Twitter stream, but I think such a break is overdue.
Chances are, Twitter’s already explored and dismissed these ideas, but…are they crazy?
Last month I finished Dave Eggers’ latest novel The Circle, the first work by a bona fide literary light that takes on our relationship with today’s Internet technology and, in particular, our relationship with corporations like Google.
It took me a while to start The Circle, mainly because of its poor word of mouth. Most of the folks I know who mentioned it did so in an unfavorable light. “Eggers doesn’t get our industry,” was one theme of the commentary. “He did zero research, and was proud of it!” was another. I wanted to let some time go by before I dove in, if only to let the criticism ebb a bit. It struck me that it’s not a novelist’s job to get an industry *right*, per se, but to tell a story and compel us to think about its consequences in a way that might change us a little bit. I wanted to be open to that magic that happens with a great book, and not read it with too much bias.
Once I began, I found the novel engaging and worthy, but in the end, not wholly fulfilling. I found myself wishing Eggers would reveal something new about our relationship to technology and to companies like Google, Facebook, Apple – but in that department the book felt predictable and often overdone.
But first, a bit of background. “The Circle” refers to a fictional company by the same name, a rather terrifying monolith that arises sometime in the near future. The Circle has the arrogance and design sensibilities of Apple, the ‘we can do it because we’re smarter (and richer) than everyone else’ mentality of Google, the always-be-connected-and-share-everything ethos of Facebook, with a dash of Twitter’s public square and plenty of Microsoft’s once-famed rapaciousness. The Circle is, in short, a mashup of every major tech-company cliche in the book, which to be fair kind of makes it fun. It’s run by the “Three Wise Men,” for example, a direct nod to Google’s ten year rule of the “triumvirate” – Page, Brin, and Schmidt.
The story revolves around Mae Holland, a young woman who jumps from a dull job at a local utility to the golden ticket that is an entry level gig at The Circle. Mae is overwhelmed by her luck and eager to please her new bosses. Early on, reading was a lot of fun, because the patter of the Circle employees feels so…familiar. Every problem has a logical and obvious solution, and nearly all of those solutions involve everyone using The Circle’s services. All employees of the Circle become citizens of the Circle, wittingly or not. They live, eat, sleep, fuck, and party with others from the Circle, because that’s how they get ahead. Mae is swept into this culture willingly, losing sight of her family, non-Circle friends, and most of the facets of her life that once defined her. And so the story is pushed along, as Mae slowly becomes a product of the Circle, even as she (unconvincingly) rebels from time to time.
This phenomenon is certainly not foreign to any young tech worker at Google or Facebook, but Eggers takes it to extremes. He nails the breathless “save the world” mentality that often accompanies the pitches of young tech wizards, but offers no counterpoints save perhaps the reader’s own sense of improbability. For example, one exec at The Circle is working on a plan to implant a chip into every newborn’s bones, so there’d be no more child abductions. Another ruse is the sweeping adoption of “Transparency” by elected officials – every public servant uses The Circle’s technology to be “always on” while attending to their duties, so that anyone can check on them at any time (Mae ultimately goes transparent as well). Toward the end, much of government is close to becoming privatized through The Circle, because it’s more efficient, transparent, and accountable. And various ridiculous mottos espoused by The Circle – “Privacy Is Theft,” “Secrets Are Lies,” “All That Happens Must Be Known” – are readily accepted by society. All of these examples are offered as matter-of-fact, logical ends serving greater social means, but as readers we smirk – they are likely never to happen, due to issues the book fails to consider.
Then again… it may be that the lack of contrarian views is intentional, and if you can suspend disbelief, you find yourself in a place not unlike 1984 or Animal Farm – a twisted version of the near future where absolutists have taken over society. And it’s for the creation of that potential that I give The Circle the most credit – it litigates the idea of the corporation as Paternitas, the all-seeing, all-caring, all-nurturing force to which individuals have forsaken themselves so as to allow a greater good. It’s too early to say whether The Circle will stand with such classics, but certainly it does stand as a warning. I found myself disturbed by The Circle, even as I found it easy to dismiss. Because its predictions were too easily made – I couldn’t suspend disbelief.
But perhaps that’s Eggers’ point. The Circle forces us to think critically about the world we’re all busy making, and that’s never a waste of time. And besides, the story has all manner of enjoyable and outlandish contours – if you work in this industry, or just find it fascinating, you’ll leave the book entertained. A worthy read.
Back in the saddle after missing a week of Else (sorry about that). The best stories from the past two weeks are below, and you’ll note a bit of TED, which ran last week, as well as a fair amount of Google, which is hard to avoid given the focus of this newsletter: If you’re going to cover “becoming data” it’s best you get used to hearing about Google.
Page does not do public speaking events very often, both because of his voice condition, and because it’s just not who he is. But this TED conversation with Charlie Rose offers insights into Page’s thinking on a range of issues, in particular, on privacy, where he moved the needle, in my estimation.
Ledgett is the Deputy Director of the NSA. He is responding to Snowden’s much-covered video conference with TED curator Chris Anderson. It’s rare to have someone like Ledgett respond so quickly; it’s a worthy half hour, despite the predictable bromides.
A video and short article, taking up the fact that decisioning by machines is simply winning for most complex markets, in particular finance and marketing. Featuring Quid founder and CTO Sean Gourley.
As I read about this “smart” AC from GE, I thought to myself “Huh, now Google and GE are competing (via Google’s acquisition of Nest).” Interesting.
Thinking Out Loud: Potential Information - Searchblog
In which I muddle through an idea that’s been pulling at my brainstrings for quite some time.
Jeremy Rifkin is back with an essay arguing that many information-based goods are approaching the cost of “free” – raising the question of whether capitalism will continue as we know it.
How Google Does Fundamental Research Without a Separate Research Lab - MIT Technology Review
Step one: Tie research to actual product groups. Step Two: Bring in the academics, lots of them. Step Three: Add (piles of) money.
A Missing Jet in a World Where No One Gets Lost — Daily Intelligencer
A meditation on why the lost aircraft disturbs us so – in a world where data about our every move seems ubiquitous, how can something so “large” get lost?
The era of Facebook is an anomaly - The Verge
A profile of Microsoft researcher (and teen social expert) danah boyd, whose new book It’s Complicated recently came out.
“It’s time for us to make a big communal decision,” says Berners-Lee. “In front of us are two roads – which way are we going to go?”
(image) If you took first-year physics in school, you’re familiar with the concepts of potential and kinetic energy. If you skipped Physics, here’s a brief review: Kinetic energy is energy possessed by bodies in motion. Potential energy is energy stored inside a body that has the potential to create motion. It’s sort of kinetic energy’s twin – the two work in concert, defining how pretty much everything moves around in physical space.
I like to think of potential energy as a force that’s waiting to become kinetic. For example, if you climb up a slide, you have expressed kinetic energy to overcome the force of gravity and bring your “mass” (your body) to the top. Once you sit at the top of that slide, you are full of the potential energy created by your climb – which you may once again express as kinetic energy on your way back down. Gravity provides what is known as the field, or system, which drives all this energy transfer.
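For the numerically inclined, the slide example works out neatly with the two standard formulas, PE = m·g·h and KE = ½·m·v². Here’s a minimal sketch of the round trip – the mass and height are made-up illustrative values, not anything from the post:

```python
# Potential energy stored by climbing, and the kinetic energy it becomes
# on the way back down. PE = m*g*h; KE = 0.5*m*v**2.
g = 9.81  # gravitational acceleration near Earth's surface, m/s^2

def potential_energy(mass_kg, height_m):
    """Energy stored by lifting a mass to a given height (joules)."""
    return mass_kg * g * height_m

def kinetic_energy(mass_kg, speed_mps):
    """Energy of a mass moving at a given speed (joules)."""
    return 0.5 * mass_kg * speed_mps ** 2

# A 30 kg child atop a 2 m slide has stored (potential) energy:
pe_top = potential_energy(30, 2.0)  # 588.6 joules

# Ignoring friction, all of it converts to motion at the bottom, so
# setting PE at the top equal to KE at the bottom gives the exit speed:
speed = (2 * pe_top / 30) ** 0.5  # roughly 6.3 m/s
```

Note that the exit speed depends only on the height, not the mass – gravity, the “system,” treats every climber the same.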
For whatever reason, these principles of kinetic and potential energy have always resonated with me. They are easily grasped, to be certain, but it’s their evocative quality that strikes me most. Everything around us is either in motion or it’s not – objects are either animated by kinetic energy (a rock flying through the air), or they are at rest, awaiting a kinetic event which might create action and possibly some narrative consequence (a rock lying on the street, picked up by an angry protestor…).
To me, kinetic and potential energy are the bedrock of narrative – there is energy all around us, and once that energy is set in motion, the human drama unfolds. The rock provides mass, the protestor brings energy, and gravity animates the consequence of a stone thrown…
Because we are physical beings, the principles of motion and force are hard wired into how we navigate the world – we understand gravity, even if we can’t run the equations to prove its cause and effect. But when it comes to the world of digital information, we struggle with a framework for understanding cause and effect – in particular with how information interacts with the physical world. We speak of “software eating the world,” “the Internet of Things,” and we massify “data” by declaring it “Big.” But these concepts remain for the most part abstract. It’s hard for many of us to grasp the impact of digital technology on the “real world” of things like rocks, homes, cars, and trees. We lack a metaphor that hits home.
But lately I’ve been using the basic principles of kinetic and potential energy as a metaphor in casual conversations, and it seems to have some resonance. Now, I’m not a physicist, and it’s entirely possible I’ve mangled the concepts as I think out loud here. Please pile on and help me express this as best I can. But in the meantime…
…allow me to introduce the idea of potential information. Like potential energy, the idea of potential information is that all physical objects contain the potential to release information if placed in the right system. In the physical world, we have a very large scale system already in place – it’s called gravity. Gravity provides a field of play, the animating system which allows physical objects (a rock, a child at the top of a slide) to become kinetic and create a narrative (a rock thrown in anger, a child whooping in delight as she slides toward the sand below).
It seems to me that if we were to push this potential information metaphor, then we need our gravity – our system that allows for potential information to become kinetic, and to create narratives that matter. To my mind, that system is digital technology, broadly, and the Internet, specifically. When objects enter the system of technology and the Internet, they are animated with the potential to become information objects. Before contact with the Internet, they contain potential information, but that information is repressed, because it has no system which allows for its expression.
In this framework, it strikes me that many of the most valuable companies in the world are in the business of unlocking potential information – of turning the physical into information. Amazon and eBay unlocked the value of merchandise’s potential information. Airbnb turns the potential information of spare bedrooms into kinetic information valued at nearly $10 billion and counting. Uber unlocked the potential information trapped inside transportation systems. Nest is animating the potential information lurking in all of our homes. And Facebook leveraged the potential information lurking in our real world relationships.
I’d wager that the most valuable companies yet to be built will share this trait of animating potential information. One of the best ideas I’ve heard in the past few weeks was a pitch from an inmate at San Quentin (part of The Last Mile, an amazing program worthy of all your support). This particular entrepreneur, a former utilities worker, wanted to unlock all the potential information residing in underground gas, sewage, and other utilities. In fact, nearly every good idea I’ve come across over the past few years has had to do with animating potential information of some kind.
Which brings us to Google – and back to Nest. In its first decade, Google was most certainly in the business of animating potential information, but it wasn’t physical information. Instead, Google identified an underutilized class of potential information – the link – and transformed it into a new asset – search. A link is not a physical artifact, but Google treated it as if it were, “mapping” the Web and profiting from that new map’s extraordinary value.
Now the race is on to create a new map – a map of all the potential information in the real world. What’s the value of potential information coming off a jet engine, or a wind turbine? GE’s already on it. What about exploiting the potential information created by your body? Yep, that’d be Jawbone, FitBit, Nike, and scores of others. The potential information inside agriculture? Chris Anderson’s all over it. And with Nest, Google is becoming a company that unlocks not only the information potential of the Web, but of the physical world we inhabit (and yes, it’s already made huge and related moves via its Chauffeur, Earth, Maps, and other projects).
Of course, potential information can be leveraged for more than world-beating startups. The NSA understands the value of potential information – that’s why the agency has been storing as much of it as it possibly can. What does it mean when government has access to all that potential information? (At least we are having the dialog now – it seems if we didn’t have Edward Snowden, we’d have to create him, no?)
Our world is becoming information – but then again, it’s always had that potential. Alas, I’m just a layman when it comes to understanding information theory, and how information actually interacts with physical mass (and yes, there’s a lot of science here, far more than I can grok for the purposes of this post.) But the exciting thing is that we get to be present at the moment all this information is animated into narratives that will have dramatic consequences for our world. This is a story I plan to read deeply in over the coming year, and I hope you’ll join me as I write more about it here.
We are all accustomed to the idea of software “Preferences” – that part of the program where you can personalize how a particular application looks, feels, and works. Nearly every application that matters to me on my computer – Word, Keynote, GarageBand, etc. – has preferences and settings.
On a Macintosh computer, for example, “System Preferences” is the control box of your most important interactions with the machine.
I use the System Preferences box at least five times a week, if not more.
And of course, on the Internet, there’s a yard sale’s worth of preferences: I’ve got settings for Twitter, Facebook, WordPress, Evernote, and of course Google – where I probably have a dozen different settings, given I have multiple identities there, and I use Google for mail, calendar, docs, YouTube, and the like.
Any service I find important has settings. It’s how I control my interactions with The Machine. But truth is, Preferences are no fun. And they should be.
The problem: I mainly access preferences when something is wrong. In the digital world, we’ve been trained to see “Preferences” as synonymous with “Dealing With Shit I Don’t Want To Deal With.” I use System Preferences, for example, almost exclusively to deal with problems: Fixing the orientation of my monitors when moving from work to home, finding the right Wifi network, debugging a printer, re-connecting a mouse or keyboard to my computer. And I only check Facebook or Google preferences to fix things too – to opt out of ads, resolve an identity issue, or enable some new software feature. Hardly exciting stuff.
Put another way, Preferences is a “plumbing” brand – we only think about it when it breaks.
But what if we thought of it differently? What if managing your digital Preferences was more like….managing your wardrobe?
A few years back I wrote The Rise of Digital Plumage, in which I posited that sometime soon we’ll be wearing the equivalent of “digital clothing.” We’ll spend as much time deciding how we want to “look” in the public sphere of the Internet as we do getting dressed in the morning (and possibly more). We’ll “dress ourselves in data,” because it will become socially important – and personally rewarding – to do so. We’ll have dashboards that help us instrument our wardrobe, and while their roots will most likely stem from the lowly Preference pane, they’ll soon evolve into something far more valuable.
This is a difficult idea to get your head around, because right now, data about ourselves is warehoused on huge platforms that live, in the main, outside our control. Sure, you can download a copy of your Facebook data, but what can you *do* with it? Not much. Platforms like Facebook are doing an awful lot with your data – that’s the trade for using the service. But do you know how Facebook models you to its partners and advertisers? Nope. Facebook (and nearly all other Internet services) keep us in the dark about that.
We lack an ecosystem that encourages innovation in data use, because the major platforms hoard our data.
This retards progress, in the literal sense of the verb. Facebook’s picture of me is quite different from Google’s, Twitter’s, Apple’s, or Acxiom’s*. Imagine what might happen if I, as the co-creator of all that data, could share it all with various third parties that I trusted? Imagine further if I could mash it up with other data entities – be they friends of mine, bands I like, or even brands?
Our current model of data use, in which we outsource individual agency over our data to huge factory farms, will soon prove a passing phase. We are at once social and individual creatures, and we will embrace any technology that allows us to express who we are through deft weavings of our personal data – weavings that might include clever bricolage with any number of related cohorts. Fashion has its tailors, its brands, its designers, and its standards (think blue jeans or the white t-shirt). Data fashion will develop similar players.
Think of all the data that exists about you – all those Facebook likes and posts, your web browsing and search history, your location signal, your Instagrams, your supermarket loyalty card, your credit card and Square and PayPal purchases, your Amazon clickstream, your Fitbit output – think of each of these as threads which might be woven into a fabric, and that fabric then cut into a personalized wardrobe that describes who you are, in the context of how you’d like to be seen in any given situation.
Humans first started wearing clothing about 170,000 years ago. “Fashion” as we know it today is traced to the rise of European merchant classes in the 14th century. Well before that, clothing had become a social fact. A social fact is a stricture imposed by society – for example, if you don’t wear clothing, you are branded as something of a weirdo.
Clothing is an extremely social artifact – *what* you wear, and how, are matters of social judgement and reciprocity. We obsess over what we wear, and we celebrate those “geniuses” who have managed to escape this fact (Einstein and Steve Jobs both famously wore the same thing nearly every day).
There’s another reason the data fabric of your life is not easily converted into clothing: at the moment, digital clothing is not a social fact. There’s no social pressure for you to “look” a certain way, because thanks to our outsourcing of our digital identity to places like Facebook, Twitter, and Google+, we all pretty much look the same to each other online. As I wrote in Digital Plumage:
How strange is it that we as humans have created an elaborate, branded costume culture to declare who we are in the physical world, but online, we’re all pretty much wearing khakis and blue shirts?
As it relates to data, we are naked apes, but this is about to change. The opportunity is far too huge.
Consider: The global clothing industry grosses more than $1 trillion annually. We now spend more time online than we do watching television. And as software eats the world, it turns formerly inanimate physical surroundings into animated actors on our digital stage. As we interact with these data-lit spaces, we’ll increasingly want to declare our preferences inside them via digital plumage.
An example. Within a few years, nearly every “hip” retail store will be lit with WiFi, sensors, and sophisticated apps. In other words, software will eat the store. Let’s say you’re going into an Athleta outlet. When you enter, the store will know you’ve arrived, and begin to communicate with your computing device – never mind whether it’s Glass, a mobile phone, or some other wearable. As the consumer in this scenario, won’t you want to declare “who you are” to the retail brand’s sensing device? That’s what you do in the real world, no? And won’t you want to instrument your intent – provide signal that allows the store to understand what you’re after? And wouldn’t the “you” at Athleta be quite different from, say, the “you” that you become when shopping at Whole Foods or attending a Lord Huron concert?
Then again, you could be content with whatever profile Facebook has on you (or Google, or… whoever). Good luck with that.
I believe we will embrace the idea of describing and declaring who we are through data, in social context. It’s wired into us. We’ve evolved as social creatures. So I believe we’re at the starting gun of a new industry. One where thousands of participants take our whole data cloth and stitch it into form, function, and fashion for each of us. Soon we’ll have a new kind of “Preferences” – social preferences that we wear, trade, customize, and buy and sell.
In a way, younger generations are already preparing for such a world – what is the selfie but a kind of digital dress-up?
Lastly, as with real clothing, I believe brands will be the key driving force in the rise of this industry. As I’m already over 1,000 words, I’ll write more on that idea in another post.
*(fwiw, I am on Acxiom’s board)