Thanks to Brian Solis for taking the time to sit down with me and talk, both about my upcoming book and about a range of more general topics.
Earlier this week I participated in Google’s partner conference, entitled Zeitgeist after the company’s annual summary of trending topics. Deep readers of this site know I have a particular affection for the original Zeitgeist, first published in 2001. When I stumbled across that link, I realized I had to write The Search.
The conference reminds me of TED, full of presentations and interviews meant to inspire and challenge the audience’s thinking. I participated in a few of the onstage discussions, and was honored to do so.
I’d been noodling a post about the meaning of Google’s brand*, in particular with respect to Google+, for some time, and I’d planned to write it before heading to the conference, if for no other reason than it might provide fodder for conversations with various Google executives and partners. But I ran out of time (I wrote about Facebook instead), and perhaps that’s for the good. While at the conference, I got a chance to talk with a number of sources and round out my thinking.
I also got the chance to ask Larry Page a question (video is embedded above; the question is at 19:30). In essence, my query was this: for most of Google’s history, when people thought about Google, they’d think about search. That was the brand: Google = search. For the next phase of Google’s life, what does Google equal?
I asked this question with an answer in mind (as I said, I’d been thinking about this for some time), but I didn’t get the answer I had hoped for. What Page did say was this:
“I’d like the brand to represent the things I just spoke about (for that, see the video) … it’s important that people trust the brand…that we’re trustworthy…and I think also it should stand for a beauty and technological purity…innovation, and things that are important to people, driving technology forward.”
The text above doesn’t really do Page’s answer justice, because somehow when he said “beauty” – a word I was surprised to hear – he delivered it with a sincerity that I and others at the conference found…almost Apple-like.
Then again, Page didn’t directly answer the question, at least from a marketing standpoint. In 2009, Google’s brand = search. That kind of clarity and consistency is what every marketer seeks to define in their brand.
At the moment, Google’s brand is a bit confusing. Google equals Chrome. And YouTube. And Android. And Google Docs. And Gmail. And Maps, Places, Voice, Calendar…and self-driving cars, and investments in energy research, and antitrust hearings, and AdWords, and of course search. Not to mention Google+.
Oh, and Motorola.
One can forgive the average consumer if he or she is a bit confused about what Google really means.
In conversations with various Google executives over the past few weeks, including leaders in product, marketing, and search, it’s clear that the company is well aware of this problem, and is focused on finding a solution. And while most have seen Google+ as the company’s answer to Facebook’s social graph, I now see it as something far bigger.
In short, Google+ = Google.
Google VP of Product Bradley Horowitz, whom I know well enough to know he doesn’t say things without thinking about them a bit, recently told Wired as much, but the context was missing. To wit:
Wired: How was working on Google+ different from working on the company’s previous offerings?
Horowitz: Until now, every single Google property acted like a separate company. Due to the way we grew, through various acquisitions and the fierce independence of each division within Google, each product sort of veered off in its own direction. That was dizzying. But Google+ is Google itself. We’re extending it across all that we do—search, ads, Chrome, Android, Maps, YouTube—so that each of those services contributes to our understanding of who you are.
Horowitz is making an important point, but the interview moved on. It should have lingered. In those conversations with Googlers over the past month, I’ve heard one consistent theme: Larry Page is obsessed with Google+, and not just for its value as a competitor to Facebook. Rather, as I wrote earlier this month, Google+ is the digital mortar between all of Google’s offerings, creating a new sense of what the brand *means*.
So what is that meaning? I’d like to venture a guess: one seamless platform for extending and leveraging your life through technology. In short, Google = the operating system of your life.
At the moment, there are really only three serious players who have the technological, capital, and brand resources to stake such an audacious claim. Of course, they are Apple, Microsoft, and Google (Amazon seems on the precipice of becoming the fourth). Of the three, Apple has the best handle on its brand. And Microsoft made its brand in the operating system world, so it has at least pitched its tent in the right part of our collective mindspace.
But Google? Well, Google’s got some brand work to do. Google’s products don’t all work together in a seamless way, and at first glance, don’t seem to all speak to the same brand experience. Google+ is the company’s attempt to address that problem, such that every experience with Google “makes sense” from a brand perspective. Which is to say, from the customer’s point of view. As a very senior Google marketing executive recently told me: “There’s a reason it’s called Google….plus!”
If this is correct, then the stakes of ensuring that Google+ succeeds are raised, significantly. Google has twice tried to out-social Facebook (Buzz, Orkut), and neither quite worked. But this time, Google’s not just trying to beat Facebook. It’s being far more ambitious – it’s trying to redefine what happens inside your brain when you consider the concept of “Google.” Part of that is social, sure. But far more of it has to do with being the brand to which you entrust nearly every technology-leveraged part of your life.
If that indeed is what the company is trying to do, I’m more certain that Google+ will succeed. Why? Because it means the company is committed in a new way to a singular purpose. It means it will cut new kinds of deals so as to compete (like bringing Cityville to Google+, or undermining Facebook’s Skype partnership through Hangouts, or, soon, bringing media and marketing into Google+). It means tying Google+ to its core promotion engine of search (which it most certainly has). And it means, as Horowitz told Wired, “extending (Google+) across all that we do.” I recently asked Google’s head of local, Marissa Mayer, what percentage of her products were integrated with Google+. Five or so percent, she told me. But she quickly added: That’s going to change, and fast.
At Zeitgeist, when Page answered my question about the brand, he answered mostly with meaning – innovation, trust, beauty. But Larry spoke for twenty or so minutes prior to my asking him that question, and he mentioned Google+ over and over, pressing how important the project was, and how excited he was about it. So come to think of it, maybe his first response to me – “I’d like the brand to represent the things I just spoke about” – was all the answer we really needed.
Recently I was in conversation with a senior executive at a major Internet company, discussing the role of the news cycle in our industry. We were both bemoaning the loss of consistent “second day” story telling – where a smart journalist steps back, does some reporting, asks a few intelligent questions of the right sources, and writes a longer form piece about what a particular piece of news really means.
Instead, we have a scrum of sites relentlessly engaged in an instant news cycle, pouncing on every tidbit of news in a race to be first with the story. And sure, each of these sites also publishes smart second-day analysis, but it gets lost in the thirty to fifty new stories posted each day. I bet if someone created a Venn diagram of the major industry news sites by topic, the overlap would far outweigh the unique on any given day (or even hour).
This is all throat clearing to say that with the Facebook story last week, I am sensing a bit more of a “pause and consider” cycle developing. Sure, everyone jumped on the new Timeline and Open Graph news, but by day two I noticed a lot more thought pieces, and most of them were either negative in tone or sarcastic (or both). Examples include:
Can Facebook Become the Web? (Fortune)
Analysis of F8, Timeline, Ticker and Open Graph (Chris Saad)
All of life has been utterly… (Dan Lyons)
Now, I am not endorsing all these pieces as perfect second day posts, but collectively, they do give us a fairly good sense of the issues raised by Facebook’s big news.
I’d like to add one more thought. Perhaps this might be called a “second week” post, given it’s been four or five days since the big news. In any case, the thing I find most interesting about the new approach to sharing and publishing on Facebook lies in what Mark Zuckerberg said his new product would deliver: “The story of your life.”
Now, long time readers know where I stand when it comes to telling the “story of your life.” I’m firmly in the camp that believes that story belongs to you, and should be told on your own domain, on your own terms, and with a very, very clear understanding of who owns that story (that’d be you). And this applies to brands as well: your brand story should not be located on, or dependent on, any third-party platform. That’s the point of the web – anyone can publish, and no one has rights over what you publish (unless, of course, you break established law).
It was our inherent desire to tell “stories of our lives” that led to the explosion of blogging ten or so years ago. And crafting a rich narrative is just that, a craft (some elevate it to art). Yet Facebook’s new timeline, combined with the promiscuous sharing features of the Open Graph and some clever algorithms, promises to build a rich narrative timeline of your life, one that is rife with personal pictures, shared media objects (music, movies, publications), and lord knows what else (meals, trips, hookups – anything that might be recorded and shared digitally).
Now, I don’t find much wrong with this – most folks won’t spend their days obsessing over their timelines so as to present a perfectly crafted media experience. I’m guessing Facebook is counting on the vast majority of its users continuing to do what they’ve always done with Facebook’s curation of their data – ignore it, for the most part, and let the company’s internal algorithms manage the flow.
But our culture has always had a small percentage of folks who are native storytellers, people who do, in fact, obsess over each narrative they find worthy of relating. And to those people (which include media companies and brands falling over themselves to integrate with Open Graph), I once again make this recommendation: Don’t invest your time, or your narrative exertions, building your stories on top of the Facebook platform. Make them elsewhere, and then, sure, import them in if that’s what works for you. But individual stories, and brand stories, should be born and nurtured out in the Independent Web.
I’ve got plenty of philosophical reasons for saying this, which I won’t get into in this post (some are here). But allow me to relate a more economic argument: at present, there’s no way for our storytellers to make money directly from Facebook for the favor of crafting engaging narratives on top of the company’s platform. And from what I can divine, Facebook plans to make a fair amount of money selling advertising next to these new timeline profiles. As they get richer and more multimedia, so will the advertisements. Do you think Facebook intends to cut its 800 million narrative agents into those advertising dollars? I didn’t think so.
Which is just fine, for most folks – for people who don’t see the “stories of their lives” as a way to make a living. But if crafting narrative is your business, or even just a hobby that brings in grocery money, I’d counsel staying on the open web. (BTW, crafting narratives is *every* brand’s business.) For you, Facebook is a wonderful distribution and community building platform. But it shouldn’t be where you build your house.
Thomas J. Watson, legendary chief of IBM during its early decades and the Bill Gates of his time, has oft been quoted – and derided – for stating, in 1943, that “I think there is a world market for maybe five computers.” Whether he actually said this quote is in dispute, but it’s been used in hundreds of articles and books as proof that even the richest men in the world (which is what Watson was for a spell) can get things utterly wrong.
After all, there are now hundreds of millions of computers, thanks to Bill Gates and Andy Grove.
But staring at how things are shaping up in our marketplace, maybe Watson was right, in a way. The march to cloud computing and the rush of companies building brands and services where both enterprises and consumers can park their compute needs is palpable. And over the next ten or so years, I wonder if perhaps the market won’t shake out in such a way that we have just a handful of “computers” – brands we trust to manage our personal and our work storage, processing, and creation tasks. We may access these brands through any number of interfaces, but the computation, in the manner Watson would have understood it, happens on massively parallel grids which are managed, competitively, by just a few companies.*
It seems that is how Watson, or others like him, saw it back in the 1950s. According to sources quoted on Wikipedia, Professor Douglas Hartree, a Cambridge mathematician, estimated that all the calculations required in England could be handled by about three “computers,” distributed in distinct geographic locations around the country. The reasoning was pretty defensible: computers were maddeningly complex, extraordinarily expensive, and nearly impossible to run.
Right now, I’d wager that the handful of brands leading the charge to win in this market might be Google, Amazon, Microsoft, Apple, and….IBM. About five or so. Maybe Watson will be proven right, even if he never was wrong in the first place.
* Among other things, it is this move to the cloud, with its attendant consequences of loss of generativity and control at the edges, which worries Zittrain, Lanier, and others. But more on that later.
In the comments on this previous post, I promised I’d respond with another post, as my commenting system is archaic (something I’m fixing soon). The comments were varied and interesting, and fell into a few buckets. I also have a few more of my own thoughts to toss out there, given what I’ve heard from you all, as well as some thinking I’ve done in the past day or so.
First, a few of my own thoughts. I wrote the post quickly, but have been thinking about the signal to noise problem, and how solving it addresses Twitter’s advertising scale issues, for a long, long time. More than a year, in fact. I’m not sure why I finally got around to writing that piece on Friday, but I’m glad I did.
What I didn’t get into is just how massive solving this problem really is. Twitter is more than the sum of its 200 million daily tweets; it’s also a massive consumer of the web itself. Many of those tweets have within them URLs pointing to the “rest of the web” (an old figure put the percentage at 25; I’d wager it’s higher now). Even if it were just 25%, that’s 50 million URLs a day to process, and growing. It’s a very important signal, but it means that Twitter is, in essence, also a web search engine, a directory, and a massive discovery engine. It’s not trivial to unpack, dedupe, analyze, contextualize, crawl, and digest 50 million URLs a day. But if Twitter is going to really exploit its potential, that’s exactly what it has to do.
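To make that pipeline concrete, here’s a minimal, entirely hypothetical sketch of just one of those steps: canonicalizing and deduplicating shared URLs. The normalization rules below are toy assumptions of mine, not anything Twitter has described; a production system would also expand shorteners, follow redirects, and shard its seen-set across many machines.

```python
import hashlib
from urllib.parse import urlsplit, urlunsplit

def canonicalize(url):
    """Normalize a URL so trivially different forms dedupe together (toy rules)."""
    parts = urlsplit(url.strip())
    # Lowercase the host, drop fragments and common tracking parameters.
    query = "&".join(p for p in parts.query.split("&")
                     if p and not p.startswith("utm_"))
    return urlunsplit((parts.scheme.lower(), parts.netloc.lower(),
                       parts.path.rstrip("/"), query, ""))

def dedupe(urls):
    """Yield each URL once, keyed by a hash of its canonical form to bound memory."""
    seen = set()
    for url in urls:
        key = hashlib.sha1(canonicalize(url).encode()).hexdigest()
        if key not in seen:
            seen.add(key)
            yield url

sample = [
    "http://Example.com/story/?utm_source=tw",  # same story, tracking param added
    "http://example.com/story",
    "http://example.com/other",
]
print(list(dedupe(sample)))  # two unique stories survive
```

At 50 million URLs a day, even this trivial step forces real engineering choices (hashing to save memory, distributing the seen-set); the analyze/contextualize/digest steps are harder still.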
The same is true of Twitter’s semantic challenge/opportunity. As I said in my last post, tweets express meaning. It’s not enough to “crawl” tweets for keywords and associate them with other related tweets. The point is to associate them based on meaning, intent, semantics, and – this is important – narrative continuity over time. No one that I know of does this at scale, yet. Twitter can and should.
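To illustrate why this is hard, here’s the naive approach – ranking tweets by keyword overlap with a query – in a toy sketch of my own (all tweets and terms invented). It works on surface vocabulary only, which is exactly the gap: it has no notion of intent, sentiment, or narrative continuity, the things I’m arguing Twitter must capture.

```python
import math
from collections import Counter

def vectorize(text):
    """Toy bag-of-words vector; a real system would use models trained on meaning."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

tweets = [
    "the new iphone launch looks amazing",
    "apple announces iphone launch event",
    "my cat knocked over the coffee again",
]
query = "iphone launch"
qv = vectorize(query)
ranked = sorted(tweets, key=lambda t: cosine(qv, vectorize(t)), reverse=True)
print(ranked[0])  # the most keyword-similar tweet
```

Note what keyword overlap can never do: connect “Cupertino unveils its next handset” to that same query, even though the meaning is identical. That leap from vocabulary to semantics, at 200 million tweets a day, is the unsolved part.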
Which gets me to all of your comments. I heard, in written comments, on Twitter, and in extensive emails offline, from developers who are working on parts of the problems/opportunities I outlined in my initial post. And it’s true, there’s really quite a robust ecosystem out there. Trendspottr, OneRiot, Roundtable, Percolate, Evri, InfiniGraph, The Shared Web, Seesmic, Scoopit, Kosmix, Summify, and many others were mentioned to me. I am sure there are many more. But while I am certain Twitter not only benefits from its ecosystem of developers but actually *needs* them, I am not so sure any of them can or should solve this core issue for the company.
Several commentators noted, as did Suamil, “Twitter’s firehose is licensed out to at least publicly disclosed 10 companies (my former employer Kosmix being one of them and Google/Bing being the others) and presumably now more people have their hands on it. Of course, those cos don’t see user passwords but have access to just about every other piece of data and can build, from a systems standpoint, just about everything Twitter can/could. No?”
Well, in fact, I don’t know about that. For one, I’m pretty sure Twitter isn’t going to export the growing database around how its advertising system interacts with the rest of Twitter, right? On “everything else,” I’d like to know for certain, but it strikes me that there’s got to be more data that Twitter holds back from the firehose. Data about the data, for example. I’m not sure, and I’d love a clear answer. Anyone have one? I suppose at this point I could ask the company….I’ll let you know if I find out anything. Let me know the same. And thanks for reading.
As I prepare for writing my next book (#WWHW), I’ve been reading a lot. You’ve seen my reviews of The Information, In the Plex, and The Next 100 Years. I’ve been reading more than that, but those are the books that have made it into posts so far.
I’m almost done with Sherry Turkle’s Alone Together, with which I have an itch to quibble, not to mention some fiction that I think informs the work I’m doing. I expect the pace of my reading to pick up considerably through the Fall, so expect more posts like this one.
Last week I finished The Future of The Internet (And How to Stop It), by Harvard scholar Jonathan Zittrain. Though written in 2008, it is an ever more important book, for many reasons: it makes a central argument about what we’ve built so far, and about where we might be going if we ignore the lessons we’ve learned while enjoying this E-ticket ride we call the Internet industry.
The book’s core argument has to do with a concept Zittrain calls “generativity” – the ability of a product or service to generate innovation, new ideas, and new services independent of centralized, authoritative control. It is, of course, very difficult to create generative technologies on a grand scale – it’s a statement of faith and shared values to do such a thing, and it tends to rub governments and powerful interests the wrong way over time. Zittrain goes on to point out that truly open, generative systems are inherently subject to the tragedy of the commons – practices such as malware, bad marketing tactics, hacking, and so on. These threats are only growing, and they provide a ready reason to shut down generativity in the name of safety and order.
The Internet, as it turned out for the first ten or fifteen years, is one of the greatest generative technologies we’ve ever produced. And yes, I mean ever – as in, since we all figured out fire, or the wheel, or … well, forgive me for getting all Wired Manifesto on you, but it’s a very big deal.
But like Lessig before him, Zittrain is very worried that the essence of what has made the Internet special is changing, in particular, as the mainstream public falls deeper in love with services like Facebook and Apple’s iPhone.
His book is a meditation and a lecture, of sorts, on the history, meaning, and implications of this idea. After I read it, I was inspired to email Jonathan. I sent him this note:
“Hi Jonathan -
Wondering if, to start off an interview process (for my book), you might want to do a back and forth email interview that I’d publish on my site. It’d be mostly related to your book and some questions about how you view things have progressed since it came out. That would be both a good way for me to “review” the book on my site as well as to delve into some of the issues it raises in a fresh light. You game?”
To which he responded in the affirmative.
And my questions, and his response, in lightly edited form, are below. I think you’ll enjoy his thoughts updating his thesis over the past three years. Really good stuff. I have bolded what I, as a magazine editor, might turn into a “pullquote” were I laying this out on a printed page.
- You wrote The Future of the Internet three years ago. It warned of a lack of awareness with regard to what we’re building, and the consequences of that lack of attention. It also warned of data silos and early lockdown. Three years later, how are we doing? Are things better, worse, the same?
And a follow up. On a scale of one to ten, where one is “actively helping” and ten is “pretty much evil,” how do the following companies rate in terms of the debate you frame in the book?
- Google (you can break this down into Android, Search, Apps, etc)
- Facebook (which was really not at full scale when you published)
- Microsoft (again break it down if you wish)
Sorry this took me so long! I got a little carried away in answering –
- You wrote The Future of the Internet three years ago. It warned of a lack of awareness with regard to what we’re building, and the consequences of that lack of attention. It also warned of data silos and early lockdown. Three years later, how are we doing? Are things better, worse, the same?
It’s the best of times and the worst of times: the digital world offers us more every day, while we continue to set ourselves up for levels of surveillance and control that will be hard to escape as they gel.
That’s because the plus is also the minus: more and more of our activities are mediated by gatekeepers who make life easier, but who also can watch what we do and set boundaries on it — either for their own purposes, or under pressure from government authorities.
On the book’s specific predictions, Apple’s ethos remains a terrific bellwether. The iPhone — released in ’07 — has proved not only a runaway success, but the principles of its iOS have infused themselves across the spectrum. There’s less reason than ever to need a traditional PC, and by that I mean one that lets you run whatever code you want. OS X Lion points the way to a much more controlled PC zone, anyway, as it more and more funnels its software through a single company’s app store rather than from anywhere. I’d be surprised if Microsoft weren’t thinking along similar lines for Windows.
Google has offered a counterpoint, since the Android platform, while including an app store, allows outside code to be run. In part that’s because Google’s play is through the cloud. Google seeks to make our key apps based somewhere within the google.com archipelago, and to offer infrastructure that outside apps can’t resist, such as easy APIs for geographic mapping or user location. It’s important to realize that a cloud-based setup like Google Docs or APIs, or Facebook’s platform, offers control similar to that of a managed device like an iPhone or a Kindle. All represent the movement of technology from product to service. Providers of a product have little to say about it after it changes hands. Providers of services are different: they don’t go away, and a choice of one over another can have lingering implications for months and even years.
At the time of the book’s drafting, the alternatives seemed stark: the “sterile” iPhone that ran only Apple’s software on the one hand, and the chaotic PC that ran anything ending in .exe on the other. The iPhone’s openness to outside code beginning in ’08 changed all that. It became what I call “contingently generative” — it runs outside code after approval (and then until it doesn’t). The upside is that the vast creativity of outside coders has led to a software renaissance on mobile devices, including iPhones, from the sublime to the ridiculous. And Apple’s gatekeeping has seemed to be with a light touch; apps not allowed in the store pale in comparison to the torrents of stuff let through. But that masks entire categories of applications that aren’t allowed — namely anything disruptive to Apple’s business model or that of its partners or regulators. No p2p, no alternate email clients, browsers with limited functionality.
More important, the ability to limit code is what makes for the ability to control content. More and more we see content, whether a book, or a magazine subscription, represented in and through an app. It’s sheer genius for a platform maker to demand a cut of in-app purchases. Can you imagine if, back in the day, the only browser allowed on Windows was IE, and further, all commerce conducted through that browser — say, buying a book through Amazon — constituted an “in-app purchase” for which Microsoft was due 30%?
A natural question is why competition isn’t the answer here — or at least reason to not worry about the question. If people thought the iPhone made for a bad deal, why would they want one? The reason they want one is the same thing that made the Mac so appealing when it first came on the scene: it was elegant and intuitive and it just worked. No blue screen of death. Consistency across apps. And, as viruses and worms naturally were designed for the most common platform, Windows, those 5% with Macs weren’t worth the trouble of corrupting.
We’ve seen a new generation of Mac malware as its numbers grow, and in the meantime a first defense is that of curation: the app store provides a rough filter for bad code, and accountability against its makers if something goes wrong even after it’s been approved. So that’s why the market likes these architectures. I’ll bet few Android users actually go “off-roading” with apps not obtained through the official Android app channels. But the fact that they can provides a key safety valve: if Google were to try the same deal as Apple with content providers for in-app content, the content providers could always offer their wares directly to Android users. I’m worried that a piece of malware could emerge on Android that would cause the safety valve of outside code to be changed, either formally by Google, or in practice as people become unwilling to drive outside the lanes.
So how about competition between platforms? Doesn’t that keep each competitor honest, even if all the platforms are curated? I suppose: the way that Prodigy and CompuServe and AOL competed with one another to offer different services as each chased subscribers. (Remember the day when AOL members couldn’t email CompuServe users and vice versa?) That was competition of a sort, but the Internet and the Web put them all to shame — even as the Internet arose from no business plan at all.
Here’s another way to think about it. Suppose you were going to buy a new house. There are lots of choices. It’s just that each house is “curated” by its seller. Once you move in, that seller will get to say what furnishings can go in, and collects 30% of the purchase price of whatever you buy for the house. That seller has every reason to want a reputation for being generous about what goes in – but it still doesn’t feel very free when, two years after you’re living in the house, a particular coffee table or paint color is denied. There is competition in this situation – just not the full freedom that we rightly associate with inhabiting our dwellings. A small percentage of people might elect to join gated communities with strict rules about what can go inside and outside each house – but most people don’t want to have to consult their condo association by-laws before making choices that affect only themselves.
[I guess the Qs below (about each company) are answered above!]
I guess now my question is, what kind of place are we going to build next?
Thanks for your thoughts, Jonathan! What do you all think?
Note: I wrote this post without contacting anyone at Twitter. I do know a lot of folks there, and as regular readers know, have a lot of respect for them and the company. But I wanted to write this as a “Thinking Out Loud” post, rather than a reported article. There’s a big difference – in this piece, I am positing an idea. It’s entirely possible my lack of reporting will make me look like an uninformed boob. In the reported piece I’d posit the idea privately, get a response, and then report what I was told. Given I’m supposedly on a break this week, and I’ve wanted to get this idea out there for some time, I figured I’d just do so. I honestly have no idea if Twitter is actually working on the ideas I posit below. If you have more knowledge than me, please post in the comments, or ping me privately. Thanks!
I find Twitter to be one of the most interesting companies in our industry, and not simply because of its meteoric growth, celebrity usage, founder drama, or mind-blowing financings. To me what makes Twitter fascinating is the data the company sits atop, and the dramatic tension of whether the company can figure out how to leverage that data in a way that will ensure it a place in the pantheon of long-term winners – companies like Microsoft, Google, and Facebook. I don’t have enough knowledge to make that call, but I can say this: Twitter certainly has a good shot at it.
My goal in this post is to outline what I see as the biggest challenge/opportunity in the company’s path. And to my mind, it comes down to this: Can Twitter solve its signal to noise problem?
Many observers have commented on how noisy Twitter is: that once you follow more than about fifty folks, your feed becomes unmanageable. If you follow hundreds, like I do, it’s simply impossible to extract value from your stream in any structured or consistent fashion (see image from my stream at left). Twitter’s answers to this issue have been anemic. One product manager even insisted that your Twitter feed should be viewed as a stream you dip into from time to time, using it as a thirsty person might use a nearby water source. I disagree entirely. I have chosen nearly 1,000 folks who I feel are interesting enough to follow. On average, my feed gets a few hundred new tweets every ten minutes. No way can I make sense of that unassisted. But I know there’s great stuff in there, if only the service could surface it in a way that made sense to me.
You know – in a way that feels magic, the way Google was the first time I used it.
I want Twitter to figure out how to present that stream in a way that adds value to my life. It’s about the visual display of information, sure, but it’s more than that. It requires some Really F*ing Hard Math, crossed with some Really Really Hard Semantic Search, mixed with more Super Ridiculous Difficult Math. Because we’re talking about some super big numbers here: 200 million tweets a day across hundreds of millions of accounts. And that’s growing bigger by the hour.
A mini industry has evolved to address this issue – I use News.me, Paper.li, TweetDeck (recently purchased by Twitter), Percolate and others, but the truth is, they are not fully integrated, systemic solutions to the problem. Only Twitter has access to all of Twitter. Only Twitter can see the patterns of usage and interest and turn meaningful insights and connections into algorithms which feed the entire service. In short, it’s Twitter that has to address this problem. Because, of course, this is not just Twitter’s great problem, it is also Twitter’s great opportunity.
Why? Because if Twitter can provide me a tool that makes my feed really valuable, imagine what it can do for advertisers. As with every major player that has scaled to the land of long-term platform winners (as I said, Google, Microsoft, Facebook), product comes first, and business model follows naturally (with Microsoft, the model was software sales of its OS and apps, not advertising).
If Twitter can assign a rank, a bit of context, a “place in the world” for every Tweet as it relates to every other Tweet and to every account on Twitter, well, it can do the same job for every possible advertiser on the planet, as they relate to those Tweets, those accounts, and whatever messaging the advertiser might have to offer. In short, if Twitter can solve its signal to noise problem, it will also solve its revenue scale problem. It will have built the foundation for a real time “TweetWords” – an auction driven marketplace where advertisers can bid across those hundreds of millions of tweets for the right to position relevant messaging in real time. If this sounds familiar, it should – this is essentially what Google did when it first cracked truly relevant search, and then tied it to AdWords.
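The AdWords model I’m invoking here is, at its core, a second-price auction: the highest bidder wins the placement but pays roughly the runner-up’s bid. A minimal sketch of that mechanism applied to a hypothetical tweet-context auction (the advertiser names and the simplification of ignoring reserve prices and quality scores are mine):

```python
def second_price_auction(bids: dict[str, float]) -> tuple[str, float]:
    """Winner is the highest bidder; price is the second-highest bid.

    `bids` maps advertiser -> bid for placement against a given tweet
    context. With only one bidder the price falls to zero; a real
    system would enforce a reserve price and factor in ad quality.
    """
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, price
```

The hard part isn’t the auction – it’s generating the relevance signal that tells advertisers which tweet contexts are worth bidding on in the first place.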
Now, I do know that Twitter sees this issue as core to its future, and that it’s madly working on solving it. What I don’t know is how the company is attacking the problem, whether it has the right people to succeed, and, honestly, whether the problem is even soluble regardless of all those variables. After all, Google solved the problem, in part, by using the web’s database of words as commodity fodder, and its graph of links as a guide to value. Tweets are more than words, they comprise sentiments, semantics, and they have a far shorter shelf life (and far less structure) than an HTML document.
In short, it’s a really, really, really hard problem. But it’s a terribly exciting one. If Twitter is going to succeed at scale, it has to totally reinvent search, in real time, with algorithms that understand (or at least replicate patterns of) human meaning. It then has to take that work and productize it in real time to its hundreds of millions of users (because while the core problem/opportunity behind Twitter is search, the product is not a search product per se. It’s a media product.)
To my mind, that’s just a very cool problem on which to work. And I sense that Twitter has the solution to the problem within its grasp. One way to help solve it is to throw open the doors to its data, and let the developer community help (a recent move seems to point in that direction). That might prove too dangerous (it’s not like Google is letting anyone know how it ranks pages). But it could help in certain ways.
Earlier in the week I was on the phone with someone who works very closely in this field (search, large scale ad monetization, media), and he said this of Twitter: “There’s definitely a $100 billion company in there.”
The question is, can it be built?
What do you think? Am I off the reservation here? And who do you know who’s working on this?
Even before I was a few pages into The Information, a deep, sometimes frustrating but nonetheless superb book by James Gleick, I knew I had to ask him to speak at Web 2 this year. Not only did The Information speak to the theme of the conference this year (the Data Frame), I also knew Gleick, one of science’s foremost historians and storytellers, would have a lot to say to our industry.
Now that I’ve finished the book (and by no means will it be the last time I read it) I can say I’m positively brimming with questions I’d like to ask the author. And perhaps most vexing is this: “What is Information, anyway?”
If you read The Information for the answer to this question, you may leave the work a bit perplexed. It may be in there, somewhere, but it’s not stated as such. And somehow, that’s OK, because you leave the book far more ready to think about the question than when you started. And to me, that’s the point.
When I was a kid, and fancied myself smarter than someone who might be in the room at the time, I’d ask them to explain to me where space ended. How far out? Often, and this was the trick, a youngster (we were six or seven, after all) would posit that there must be a wall at some point, an ending, a place where the universe no longer existed. “Oh yeah?!” I’d say, exultant that my trick had worked. “Then what’s on the other side?!”
I think the answer is information. Perhaps others would say God, but if that be true, then both are, and the truth is that both understanding God and understanding information are quests that are more about the narrative than the ending. At least, I think so.
Gleick’s book tells the story of how, over the past five thousand or so years, mankind has managed to create symbols which abstract meaning and intent into forms that are communicable beyond time and space. I too am fascinated with this (hence the focus and title of the new book I just announced – What We Hath Wrought.) While my book will attempt to be a narrative history of the next 30 or so years of information’s impact on our culture, Gleick’s is a history of the past 5,000 or more years – and it manages, for the most part, to stay focused just on the theory of information itself, rather than its political or social impacts. It’s ambitious, it’s heady, and at times, it’s nearly impossible to understand for a lay person such as myself.
Gleick traces the narrative of information from the first stirrings of alphabet-based communication to the explosion of academic excitement that accompanied the rise of “Information Science” and “Information Theory” in the mid to late 20th century. Nearly all the geek heroes take a star turn in this work, from Ada Lovelace and Charles Babbage to Lord Kelvin, Claude Shannon, and Marshall McLuhan (Wired’s patron saint, in case you younger readers have forgotten…). Einstein, Borges, and scores of other folks who make you feel smart just for reading the book also make cameos.
The work really picks up speed as it describes the rise of early telecommunications, the role of information in mid century warfare, and the birth of both genetic sciences and the computing industry. In the end, Gleick seems to be arguing, it’s all bits – and I think most of us in this industry would agree. But I think Gleick’s definition of “bit” may differ from ours, and while it may be esoteric, it’s there I want to really focus when he visits Web 2 in October.
Reviews of The Information are mostly raves, and I have to add mine to the pile. But as with his earlier work (Chaos cemented my desire to be a technology journalist, for example, and may as well be viewed as a precursor to The Information), this most recent book is sometimes a rather dry tick-tock of various academics’ journeys through difficult problems, often accompanied by descriptions of insights that, I must admit, escaped me the first two or three times I read them*. Though I thought I knew it, I had to look up the definition of “logarithm” at least twice, and honestly, as it’s used in some passages, I had to just give up and hope I didn’t miss too much for my ignorance of Gleick’s nuanced use. (Given his larger point, that the core information is that which can be reduced to its essence, I think I got the point. I think.)
I guess what I’m saying is that I had to work hard through parts of this book – for example, in understanding how randomness relates to the essence and amount of information in any given object. But I find the work worth it. I’m also still getting my head around the relationship of randomness to entropy (Maxwell’s Demons help…)
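That link between randomness and information can actually be made concrete with Shannon’s formula, which Gleick spends much of the book building up to: a message’s information content is the average number of bits needed per symbol, and it is highest when the symbols are most unpredictable. A quick illustration (the function is a textbook definition, not anything from the book itself):

```python
from collections import Counter
from math import log2


def entropy_bits(message: str) -> float:
    """Shannon entropy in bits per symbol: H = -sum(p * log2(p)).

    A perfectly repetitive message carries 0 bits per symbol;
    maximum randomness carries the most.
    """
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * log2(c / n) for c in counts.values())
```

Run it on “aaaa” and you get 0 bits per symbol — total predictability, no information. Run it on a string where every character is different and the number climbs. That is the counterintuitive heart of the book: more randomness means more information, which is exactly where entropy enters the picture.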
But isn’t that the point of a great book? In the end, I feel far more prepared to be a participant in what we’re making together in this industry, more rooted in the history that got us here, and more… yeah, I’ll say it, more reverent about the implications of our work moving forward. For that, I thank Gleick and The Information.
*This, for example, is a typical footnote: “The finite binary sequence S with the first proof that S cannot be described by a Turing machine with n states or less is a (log2 n+cp)-state description of S.” My blogging software doesn’t even have the right scientific notation capabilities to do that phrase justice, but I think you get the point I’m making….
It’s hard not to voice at least one note into the Mormon Tabernacle of commentary coming out of Google’s first two weeks as a focused player in the social media space.
I haven’t read all the commentary, but one observation that seems undervoiced is this: If Google+ really works, Google will be creating a massive amount of new “conversational media” inventory, the very kind of marketing territory currently under development over at Tumblr and Twitter. Sure, the same could be said of Facebook, but I think that story has been well told. Google+ is a threat to Facebook, but for other reasons. The threat to Tumblr and Twitter feels more existential in nature. (Ian remarks on how Google+ feels like content here, for example).
Let’s look at a typical flow for Tumblr, for example. Most of the action on Tumblr is in the creator’s “dashboard.” Mine looks like this:
As you can see, this is a flow of posts from folks that I follow, with added features and information on the right rail. I can take action on these posts in the dashboard, including reblogging them on my own Tumblr, which is, for the most part, a blog. A blog, like…Blogger.
Now let’s look at what my flow looks like in Twitter. I use the web app for the most part:
Again, flow on the left, info and services (and ads) on the right. However, Twitter has no integrated blog-like function, though I love using it as a platform to promote my blog posts (as many of you undoubtedly have noticed). Also, Twitter recently bought TweetDeck, which organizes flow more along the lines of “Circles” in Google+, but more on that later.
Now, let’s look at my flow for my “Colleagues” circle on Google+. I choose “Colleagues” because it’s really the only one with content in it. My “Friends” and “Family” circles are not really using Google+ yet. If those streams start getting traction, well, then we can talk about Facebook’s existential threats. But already, I am finding this stream useful:
Look familiar? Yeah, it sure does. Just like Tumblr’s dashboard, and Twitter’s main stream. Both those companies are focused now on how best to monetize this key “conversational media” content, and just as they are getting traction, Google comes along with a product that is nearly identical. However, there are important differences, and of course, Google has a massive advantage: Google+ is integrated into everything the company owns and operates.
I’ll be adding more to this post later tonight, but I wanted to get this idea out there. Later, I’ll go into the key differences, and also map out the advantages Twitter and Tumblr maintain compared to Google+. My one thought to keep you going while I’m away: If Google+ works, and Google integrates all that conversational media inventory with its extraordinary advertising sales machine, there’s even more of a need for what I’ve come to call a truly “independent” and “conversational” media company. Twitter and Tumblr are not playing the same game as Google, and they’ll need to tack into the advantage that *not* being Google provides them.
(image) Last night I got to throw a party, and from time to time, that’s a pretty fun thing to do. To help us think through the program and theme of the Web 2 Summit this Fall, we invited a small group of influential folks in the Bay area to a restaurant in San Francisco, fed them drinks and snacks, and invited their input. (Here are some pics if you want to see the crowd.)
Nothing beats face to face, semi-serendipitous conversation. You always learn something new, and the amount of knowledge that can be shared in even a few minutes of face time simply cannot be replicated with technology, social media, or even a long form post like this one. I always find myself reinvigorated after spending an evening in a room full of smart folks, and last night was certainly no exception. In fact, about halfway through, as I watched several of my close friends from my home turf of Marin mingling with the crowd, I realized something: The whole world is an Internet startup now.
Let me try to explain.
Back even five years ago, our industry was dominated by people who considered themselves a select breed of financier and entrepreneur – they were Internet startup folk. I considered myself one of them, of course, but I also kept a bit apart – it’s one reason I live up in Marin, and not down in the Silicon Valley. Why did I do that? I am not entirely sure, other than I wasn’t certain I wanted to be fully immersed in the neck-deep culture of the Valley, which can at times be a bit incestuous. I wanted to be part of the “rest of the world” even as I reveled in the extraordinary culture of Internet startup land.
Part of living up here in Marin is meeting and befriending smart folks who have pretty much nothing to do with my business. In the past ten years, I’ve become good friends with real estate developers, investment bankers (and not ones who take Internet companies public), musicians, artists, and doctors. When we first connected, I was always “the Internet guy” in the room. And that was that.
But as I scanned the room last night and watched those friends of mine, I realized that each of them was now involved in an Internet startup in some way or another. I then thought about the rest of my Marin pals, and realized that nearly every one of them is either running or considering running an Internet startup. Only thing is, to them it’s not about “starting an Internet company.” Instead, it’s about innovating in their chosen field. And to do so, they of course are leveraging the Internet as platform. The world is pivoting, and the axis is the industry we’ve built. This is what we meant when we chose “Web Meets World” for the theme of the 2008 Web 2 Summit, but it’s really happening now, at least in my world. I’m curious if it’s happening in yours.
A few examples – though I have to keep the details cloudy, as I can’t breach my friends’ confidence. One of my pals, let’s call him Jack, is a highly successful banker specializing in buying and selling other banks. But he’s an artist in his soul, and has a friend who is a talented photographer. Together they’ve cooked up a startlingly new approach to commercial consumer photography, including a retail concept and, of course, a fully integrated digital and social media component. Jack is now an Internet startup guy.
Another pal is a doctor. We’ll call him Dr. Smith. Smith is a true leader in his field, redefining standards of medical practice. He often gives speeches on what’s broken in the medical world, and holds salons where some of the most interesting minds in medicine hold forth on any number of mind bending topics. For the past year or so, Smith has been working on a major problem: How to get people to understand the basics of nutrition, and engage with their own diets in ways that might break the cycle of disease driven by poor eating habits. He’s got a genius answer to that question, and now, Smith is an Internet startup guy as well.
Dan, another anonymized pal of mine, made his name in real estate. Two years ago he effectively retired, having made enough money several times over to live a very good life and never have to work again. But Dan is a restless soul, and he’s also a bit haunted by the loss of his father to a poorly understood but quite well known neurological disease. He’s dedicated his life to supporting new approaches to research in the field, and the work he’s funded is tantalizingly close to a breakthrough. It’s an entirely new framework for understanding the illness, one that isn’t easy to grok if you’re a layman (as he was when he started). As I listened to him explain the work, I had a very strong sense of deja vu. Dan was an Internet startup guy now, pitching me his new approach to disrupting a sclerotic industry (in this case, the foundation-driven research institutes and their kissing cousins, the pharmaceutical companies). It may work, it may not, but he’s going to go for it. To raise funds for his new approach, Dan is talking to angels and VCs, and developing a new model for profiting from drug compounds that may come out of the research he’s funded. In short, Dan’s appropriated the Internet’s core funding process to try to solve for one of the most obstinate problems in health.
I could go on. There’s the award winning filmmaker and his musician/producer partner who are creating mind-blowing next generation online games. The agency creative who’s won every traditional advertising prize on the planet, and is now obsessed with digital. And on and on and on….
I guess my point is this: The Internet no longer belongs to the young tech genius with a great idea and the means to execute it online. Innovation on the Internet now belongs to the world, and that is perhaps the most exciting thing about this space. It’s attracting not just the “next Mark Zuckerberg,” but also thousands of super smart innovators from every field imaginable, each of whom brings extraordinary insights and drive to play. And that’s another reason I love this industry, because, in the end, it’s not a singular business. It now encapsulates the human narrative, writ very large.
What a great story. Does it resonate with you? Do you have examples like mine? I’d love to hear them.