August 2011 | John Battelle's Search Blog

Physics of the Future, by Michio Kaku

By - August 31, 2011


As part of my ongoing self-education – so as to not be a total moron while writing “What We Hath Wrought” – this past weekend I read “Physics of the Future” by Michio Kaku.

I was excited to read the book, because Kaku is a well regarded physicist, and that’s a field that I know will inform what’s possible, technologically, thirty or so years from now. I will admit I did not read the reviews of the book before hitting “purchase” on my Kindle. The topic alone made it worth my time, and the book was on the NYT bestseller list for five weeks, after all. Turns out, the book was worth the time….but perhaps I should have read the reviews so my expectations were more properly set.

I thought I was going to learn some fundamentals about what’s possible in the next few generations, and if you work hard enough, you will learn some of that. But the book reads more like a string of popular science articles meant for a *very* broad audience, and far less like a serious investigation of how physics might inform our world in the coming decades.

The New York Times’ review might sum it up best (and I know, it’s a cliche to depend on the Times, but…it pretty much captures my thoughts on the book):

“…Mr. Kaku thinks in numbers better than he thinks in words, which is a problem only in that he’s written a book and not a series of equations….This is not boring stuff, and it all somewhat makes me wish that I (born in 1965) were going to be around to witness it all. In terms of data delivery, “Physics of the Future” gets the job done. But airplane food gets the job done, too, and airplane food — bland and damp — is what Mr. Kaku’s prose too often resembles.”

Ouch. I hope I never get a review like that. But then again, I rarely think in numbers.

Kaku does not lack for ambition – he sets out to explain, in great detail, how we will live in 2100, and how we’ll get there. But he often writes in the conditional, swapping between “this might happen” and “this will certainly happen,” sometimes on the same page. That inconsistency breeds a general lack of trust when it comes to whether or not you want to buy into his proclamations. He also leans heavily on a thesis he calls “The Cave Man Principle,” which, to simplify, says that we are still pretty much driven by the same impulses we had when we lived in caves. The idea gets old, and is often used as a salve when the future starts to get hard to predict. He also loves to drop pop culture references – in particular to sci-fi movies – as a way to explain how things might shape up. I’m not sure I’d want Hollywood to loom that large in *my* future…

Still, if you are looking for a relatively fast read that covers a lot of ground – flying cars, the Internet on contact lenses, palm-sized MRI machines – this book is worth a look, despite the sometimes artless prose. Better yet, I’d recommend watching a few of Kaku’s television shows (he’s made a number of them, for the Discovery Channel among others); we’ve enjoyed watching those as a family.

I’m still reading Kevin’s “What Technology Wants,” which so far I’m really enjoying. Hope to write about that shortly.

Other books I’ve reviewed recently:

Alone Together by Sherry Turkle

The Information by James Gleick

In the Plex by Steven Levy

The Future of the Internet (And How to Stop It) by Jonathan Zittrain

The Next 100 Years by George Friedman


We Need An Identity Re-Aggregator (That We Control)

By - August 29, 2011


The subject of “owning your own domain” has been covered to death in our industry, with excellent posts from Anil Dash and others (Fred) explaining the importance of having your own place on the web. I’ve also weighed in on the importance of “The Independent Web,” where creators have control, as opposed to the Dependent Web, where platforms ultimately control how your words, data, and expression are leveraged.

But not everyone gravitates toward having their own, independent site – at least not initially. Even those who do have sites don’t necessarily see those sites as the best place to express themselves. I was reminded of this reading a Quora thread over the weekend entitled “What’s it like to have your film flop at the box office?” (The subtitle of the thread is hilarious: “Don’t they know how bad it is before it comes out?”)

The question elicited a well written, funny, and informative post by one Sean Hood, a professional “fixer” of scripts who had worked on the recent “Conan the Barbarian” movie – apparently a big-time summer flop.

It’s clear that Hood was inspired to write a wonderful post not because he wanted to muse out loud on his own blog (he does, it turns out, have one), but because of something particularly social in nature about Quora.

The same, I’d wager, can be said of Google+, where a lot of folks, including well-known “traditional bloggers” like Robert Scoble, are content to post at length, regardless of the fact that Google+, unlike blogging software like WordPress, is not a platform that they “control.” Ditto places like the Huffington Post, Facebook, ePinions, Amazon Reviews – you get the picture. The web is full of places where the value is created by authors, but control and monetization accrue, in the main, to the company, not the individual.

Scoble, who is paid by the hosting company Rackspace to be nearly omnipresent, is clearly an edge case. He’s a professional blogger, but he doesn’t really care where his words live, as long as they get a lot of attention. Traditional authors – like, for example, the folks behind Dooce or The Awl – are far less likely to leave their core value – their words – all over the web, and in particular, they don’t see the point of giving that value away for free, when their own sites provide their economic lifeblood (both sites are FM partners, but there are tens of thousands of others as well).

The downsides of not owning your own words, on your own platform, are not limited simply to money. Over time, the words and opinions one leaves all over the web form a web of identity – your identity – and controlling that identity feels, to me, like a human right. But unless you are a sophisticated netizen, you’re never going to spend the time and effort required to gather all your utterances in one place, in a fashion that best reflects who you are in the world.

Every site has different terms of service – rules which govern what rights you have when you post on the site. I haven’t read them all (most of us don’t), but I’d imagine most of them would allow you to take your own words and paste them onto your own site, should you be so inspired. On his personal blog, Sean Hood, the film writer, has linked to many of his past answers on Quora. But he hasn’t “re-posted” them – which I think is a shame. Because while Quora is a great service, should it go dark, Sean’s words will be lost.

Earlier in the year I wrote a piece called “The Rise of Meta Services,” in which I posited that we need a new class of services that help us make sense of the fractured nature of all the sites, apps, and platforms we’re using. I’d wager there’s a great opportunity to create such a service that follows individuals around the web, noting, indexing, and reposting everything he or she writes back to his or her own domain.
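For what it’s worth, the core loop of such a re-aggregator is simple to sketch: poll the feeds of the services you post to, extract each entry, and archive it back to your own domain. A minimal illustration in Python – the sample feed below is entirely made up, and a real version would of course need each service’s actual RSS/Atom feed or API:

```python
import xml.etree.ElementTree as ET

def extract_entries(feed_xml):
    """Pull (title, link, body) tuples out of a simple RSS 2.0 feed."""
    root = ET.fromstring(feed_xml)
    entries = []
    for item in root.iter("item"):
        entries.append((
            item.findtext("title", default=""),
            item.findtext("link", default=""),
            item.findtext("description", default=""),
        ))
    return entries

# A tiny inline sample standing in for a real service's feed.
SAMPLE = """<rss version="2.0"><channel>
  <item>
    <title>What is it like to have your film flop?</title>
    <link>http://example.com/answer/1</link>
    <description>A candid post-mortem of a summer release...</description>
  </item>
</channel></rss>"""

for title, link, body in extract_entries(SAMPLE):
    # A real re-aggregator would repost each entry to your own domain
    # (say, via your blog software's posting API) instead of printing.
    print(title, "->", link)
```

The hard part isn’t the parsing – it’s doing this continuously, across every service you use, with your permission and on your behalf.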

Or maybe there’s already a WordPress plugin for that?!

With Tech, We Are Not Where We Want To Be (Or, This Cake Ain't Baked)

By - August 21, 2011


Last week I finished reading Sherry Turkle’s “Alone Together“, and while I have various disagreements with the work (I typed in more than 70 notes on my Kindle, even with that terribly tiny keyboard), I still found myself nodding in agreement more than I thought I would.

In her book, Turkle explores our relationship with technology, in particular what she calls “sociable robots” (toys like the Aibo or My Real Baby), as well as with email, IM, and shared virtual spaces like Second Life and Facebook.

Turkle is one of the most important sociologists of technology working today, and her new book reads like a personal field notebook, rife with anecdotes about how children and teens, in particular, are responding to these new technological artifacts.

I came to “Alone Together” a skeptic – it was clear from almost the first page that Turkle is troubled by what her field work yielded. Children projecting “life force” onto robots, adults fretting about the morality of robots caring for their failing elders, parents so distracted by their smart phones that they lose connection with their kids. That’s the framing of most of her reviews, and it was the framing I’ll admit I had when I began reading.

The book has a clear posture about our collective abilities to fend off seductive but ultimately damaging technological crutches – and that posture is that as we engage with machines, we’re losing important parts of our humanity. And Turkle is clearly worried about that.

When it comes to our relationship to technology, I tend not to be a worrier. My early marginalia on Turkle’s pages included “false premise!” and “what is the problem here?” and “so is a damn teddy bear!” (that last one in response to Turkle’s fretting about a child’s conception of whether a Tamagotchi is “alive.”)

The reason for my skepticism is simple: As Turkle describes page after page of people losing “true connectedness” in their lives and falling instead for the false thrill of tech, I keep thinking: Our tools have not caught up with our brains, and vice versa. We have shaped technology, and now it is shaping us – sure – but we can keep shaping it till we get the feedback loop right. So far, we simply have not – the music ain’t flowing, so to speak. In our relationship to what Kevin Kelly* calls the technium, we’re awkward pre-teens.

Or put another way, this cake ain’t baked. I mean, think about it. Facebook: Not quite right. Smart phones? Not quite right. Desktop computing? Even though we’ve had nearly three decades of interaction, it’s still not quite right.

One of “Alone Together’s” greatest failings for me – or perhaps it’s a lesson of sorts – was the parade of examples based on technological products which, after an initial period of cultural uptake, have been discarded or marginalized over time. Tamagotchis, Teddy Ruxpins, My Real Babies, Second Life, Blackberries, even, dare I say it, Facebook, are ephemeral in the sweep of a generation or two (or in some cases, in a year or two). We shouldn’t draw stern conclusions from our pre-teen love affairs, so to speak. We are learning, failing, trying again.

However, as the book unfolded, and I thought more personally about the issues Turkle raises, I began to agree with some of her concerns. We’re only on this earth for a short time, and the time we lose to poor relationships with technology is time we can’t get back. When Turkle describes a young parent pushing his child on a swing while checking email on his Blackberry, I saw myself as the CEO of The Standard back in the 1990s, and I winced. I think I was a pretty good Dad to my kids when they were young, but I do mourn the time I lost to my obsession with …. well, not technology, to be honest. But my work, and my career. Then again, for me, anyway, that career has been about technology…

So as I get a bit older, I do feel a need to reflect on how and whether my impulse to connect is impairing my most important relationships. And that’s a fair and good reflection to take.

After reading “Alone Together,” I happened to be watching television late one night with my wife, and while flipping around, we found the last half hour or so of Blade Runner. This 1982 film, based (loosely) on Philip K. Dick’s “Do Androids Dream of Electric Sheep?”, is both lovely and a bit over the top. Turkle cites it near the end of her book. Its core question is simply this: What is it to be human, and when might machines reach that threshold? And…then what?

As I watched the film for what must have been the hundredth time, I found myself certain that this question will be central to our experience over the next generation or two (not a new thought, of course, but still…). I feel better prepared to debate the answer having read Turkle’s book.

—-

* NB: I am reading Kevin’s excellent “What Technology Wants” right now. I got about a third of the way through it when it came out, but was not in “deep reading” mode then, and wanted to do it justice. With an extraordinary crew of thinkers, dreamers, and makers, Kevin and I worked together to bring Wired to life from its inception in 1992 to 1997, when I left to start The Standard. Turkle devotes the conclusion of her book to what amounts to an argument with Kevin’s premise in “What Technology Wants.” I emailed Kevin and asked him about it. Turns out, the two are close friends, which, of course, I should have known. By disagreeing, debating, conversing, and resolving, we become better people, more connected. More on Kevin’s book soon.

Other books I’ve reviewed recently:

The Information by James Gleick

In the Plex by Steven Levy

The Future of the Internet (And How to Stop It) by Jonathan Zittrain

The Next 100 Years by George Friedman

More on Twitter's Great Opportunity/Problem

By - August 10, 2011

In the comments on this previous post, I promised I’d respond with another post, as my commenting system is archaic (something I’m fixing soon). The comments were varied and interesting, and fell into a few buckets. I also have a few more of my own thoughts to toss out there, given what I’ve heard from you all, as well as some thinking I’ve done in the past day or so.

First, a few of my own thoughts. I wrote the post quickly, but have been thinking about the signal to noise problem, and how solving it addresses Twitter’s advertising scale issues, for a long, long time. More than a year, in fact. I’m not sure why I finally got around to writing that piece on Friday, but I’m glad I did.

What I didn’t get into is just how massive a problem this really is to solve. Twitter is more than the sum of its 200 million daily tweets; it’s also a massive consumer of the web itself. Many of those tweets have within them URLs pointing to the “rest of the web” (an old figure put the percentage at 25; I’d wager it’s higher now). Even if it were just 25%, that’s 50 million URLs a day to process, and growing. It’s a very important signal, but it means that Twitter is, in essence, also a web search engine, a directory, and a massive discovery engine. It’s not trivial to unpack, dedupe, analyze, contextualize, crawl, and digest 50 million URLs a day. But if Twitter is going to really exploit its potential, that’s exactly what it has to do.
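Even the first step of that pipeline – deduping – hides real work, because the same story arrives as dozens of trivially different URLs. A sketch of URL canonicalization in Python, with the caveat that the tracking-parameter list and normalization rules here are my own illustration, not anything I know about Twitter’s actual pipeline:

```python
from urllib.parse import urlsplit, parse_qsl, urlencode, urlunsplit

# Common analytics parameters that don't change what page a URL points to.
TRACKING = {"utm_source", "utm_medium", "utm_campaign", "utm_term", "utm_content"}

def canonicalize(url: str) -> str:
    """Normalize a URL so trivially different forms dedupe to one key."""
    parts = urlsplit(url)
    # Drop tracking params and sort the rest, so parameter order is irrelevant.
    query = urlencode(sorted(
        (k, v) for k, v in parse_qsl(parts.query) if k not in TRACKING
    ))
    # Treat a trailing slash as insignificant; lowercase scheme and host.
    path = parts.path.rstrip("/") or "/"
    return urlunsplit((parts.scheme.lower(), parts.netloc.lower(), path, query, ""))

urls = [
    "http://Example.com/story?utm_source=twitter",
    "http://example.com/story/",
    "http://example.com/story?b=2&a=1",
]
# The first two collapse to the same key; the third is genuinely different.
print({canonicalize(u) for u in urls})
```

And that’s before you’ve followed a single shortened link, fetched a single page, or decided what any of it means.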

The same is true of Twitter’s semantic challenge/opportunity. As I said in my last post, tweets express meaning. It’s not enough to “crawl” tweets for keywords and associate them with other related tweets. The point is to associate them based on meaning, intent, semantics, and – this is important – narrative continuity over time. No one that I know of does this at scale, yet. Twitter can and should.

Which gets me to all of your comments. I heard in the written comments, on Twitter, and in extensive emails offline from developers who are working on parts of the problems/opportunities I outlined in my initial post. And it’s true, there’s really quite a robust ecosystem out there. Trendspottr, OneRiot, Roundtable, Percolate, Evri, InfiniGraph, The Shared Web, Seesmic, Scoopit, Kosmix, Summify, and many others were mentioned to me. I am sure there are many more. But while I am certain Twitter not only benefits from its ecosystem of developers but actually *needs* them, I am not so sure any of them can or should solve this core issue for the company.

Several commentators noted, as did Suamil, “Twitter’s firehose is licensed out to at least publicly disclosed 10 companies (my former employer Kosmix being one of them and Google/Bing being the others) and presumably now more people have their hands on it. Of course, those cos don’t see user passwords but have access to just about every other piece of data and can build, from a systems standpoint, just about everything Twitter can/could. No?”

Well, in fact, I don’t know about that. For one, I’m pretty sure Twitter isn’t going to export the growing database around how its advertising system interacts with the rest of Twitter, right? On “everything else,” I’d like to know for certain, but it strikes me that there’s got to be more data that Twitter holds back from the firehose. Data about the data, for example. I’m not sure, and I’d love a clear answer. Anyone have one? I suppose at this point I could ask the company….I’ll let you know if I find out anything. Let me know the same. And thanks for reading.

The Future of The Internet (And How to Stop It) – A Dialog with Jonathan Zittrain Updating His 2008 Book

By - August 06, 2011


(image charlie rose) As I prepare to write my next book (#WWHW), I’ve been reading a lot. You’ve seen my reviews of The Information, In the Plex, and The Next 100 Years. I’ve been reading more than that, but only those have made it into posts so far.

I’m almost done with Sherry Turkle’s Alone Together, with which I have an itch to quibble, not to mention some fiction that I think informs the work I’m doing. I expect the pace of my reading to pick up considerably through the Fall, so expect more posts like this one.

Last week I finished The Future of The Internet (And How to Stop It), by Harvard scholar Jonathan Zittrain. Though written in 2008, it’s an ever-more important book, for many reasons: it makes a central argument about what we’ve built so far, and where we might be going if we ignore the lessons we’ve learned as we’ve all enjoyed this E-ticket ride we call the Internet industry.

The book’s core argument has to do with a concept Zittrain calls “generativity” – the ability of a product or service to generate innovation, new ideas, new services, independent of centralized, authoritative control. It is, of course, very difficult to create generative technologies on a grand scale – it’s a statement of faith and shared values to do such a thing, and it really rubs governments and powerful interests the wrong way over time. Jonathan goes on to point out that truly open, generative systems are inherently subject to the tragedy of the commons – malware, bad marketing tactics, hacking, and so on. These threats are only growing, and they provide a ready rationale for shutting down generativity in the name of safety and order.

The Internet, as it turned out for the first ten or fifteen years, is one of the greatest generative technologies we’ve ever produced. And yes, I mean ever – as in, since we all figured out fire, or the wheel, or … well, forgive me for getting all Wired Manifesto on you, but it’s a very big deal.

But like Lessig before him, Zittrain is very worried that the essence of what has made the Internet special is changing, in particular, as the mainstream public falls deeper in love with services like Facebook and Apple’s iPhone.

His book is a meditation and a lecture, of sorts, on the history, meaning, and implications of this idea. After I read it, I was inspired to email Jonathan. I sent him this note:

“Hi Jonathan -

Wondering if, to start off an interview process (for my book), you might want to do a back and forth email interview that I’d publish on my site. It’d be mostly related to your book, with some questions about how you think things have progressed since it came out. That would be both a good way for me to “review” the book on my site as well as to delve into some of the issues it raises in a fresh light. You game?”

To which he responded:

“Sure!”

And my questions, and his response, in lightly edited form, are below. I think you’ll enjoy his thoughts updating his thesis over the past three years. Really good stuff. I have bolded what I, as a magazine editor, might turn into a “pullquote” were I laying this out on a printed page.

JBAT:

- You wrote the Future of the Internet three years ago. It warned of a lack of awareness with regard to what we’re building, and the consequences of that lack of attention. It also warned of data silos and early lockdown. Three years later, how are we doing? Are things better, worse, the same?

And a follow up. On a scale of one to ten, where one is “actively helping” and ten is “pretty much evil,” how do the following companies rate in terms of the debate you frame in the book?

- Google (you can break this down into Android, Search, Apps, etc)

- Facebook (which was really not at full scale when you published)

- Apple

- Twitter

- Microsoft (again break it down if you wish)

Thanks!

JONATHAN ZITTRAIN:

Sorry this took me so long! I got a little carried away in answering –

- You wrote the Future of the Internet three years ago. It warned of a lack of awareness with regard to what we’re building, and the consequences of that lack of attention. It also warned of data silos and early lockdown. Three years later, how are we doing? Are things better, worse, the same?

It’s the best of times and the worst of times: the digital world offers us more every day, while we continue to set ourselves up for levels of surveillance and control that will be hard to escape as they gel.

That’s because the plus is also the minus: more and more of our activities are mediated by gatekeepers who make life easier, but who also can watch what we do and set boundaries on it — either for their own purposes, or under pressure from government authorities.

On the book’s specific predictions, Apple’s ethos remains a terrific bellwether. The iPhone — released in ’07 — has proved not only a runaway success, but the principles of its iOS have infused themselves across the spectrum. There’s less reason than ever to need a traditional PC, and by that I mean one that lets you run whatever code you want. OS X Lion points the way to a much more controlled PC zone, anyway, as it more and more funnels its software through a single company’s app store rather than from anywhere. I’d be surprised if Microsoft weren’t thinking along similar lines for Windows.

Google has offered a counterpoint, since the Android platform, while including an app store, allows outside code to be run. In part that’s because Google’s play is through the cloud. Google seeks to base our key apps somewhere within the google.com archipelago, and to offer infrastructure that outside apps can’t resist, such as easy APIs for geographic mapping or user location. It’s important to realize that a cloud-based setup like Google Docs or APIs, or Facebook’s platform, offers control similar to that of a managed device like an iPhone or a Kindle. All represent the movement of technology from product to service. Providers of a product have little to say about it after it changes hands. Providers of services are different: they don’t go away, and a choice of one over another can have lingering implications for months and even years.

At the time of the book’s drafting, the alternatives seemed stark: the “sterile” iPhone that ran only Apple’s software on the one hand, and the chaotic PC that ran anything ending in .exe on the other. The iPhone’s openness to outside code beginning in ’08 changed all that. It became what I call “contingently generative” — it runs outside code after approval (and then until it doesn’t). The upside is that the vast creativity of outside coders has led to a software renaissance on mobile devices, including iPhones, from the sublime to the ridiculous. And Apple’s gatekeeping has seemed to be with a light touch; apps not allowed in the store pale in comparison to the torrents of stuff let through. But that masks entire categories of applications that aren’t allowed — namely anything disruptive to Apple’s business model or that of its partners or regulators. No p2p, no alternate email clients, browsers with limited functionality.

More important, the ability to limit code is what makes for the ability to control content. More and more we see content, whether a book, or a magazine subscription, represented in and through an app. It’s sheer genius for a platform maker to demand a cut of in-app purchases. Can you imagine if, back in the day, the only browser allowed on Windows was IE, and further, all commerce conducted through that browser — say, buying a book through Amazon — constituted an “in-app purchase” for which Microsoft was due 30%?

A natural question is why competition isn’t the answer here — or at least reason to not worry about the question. If people thought the iPhone made for a bad deal, why would they want one? The reason they want one is the same thing that made the Mac so appealing when it first came on the scene: it was elegant and intuitive and it just worked. No blue screen of death. Consistency across apps. And, as viruses and worms naturally were designed for the most common platform, Windows, those 5% with Macs weren’t worth the trouble of corrupting.

We’ve seen a new generation of Mac malware as its numbers grow, and in the meantime a first defense is that of curation: the app store provides a rough filter for bad code, and accountability against its makers if something goes wrong even after it’s been approved. So that’s why the market likes these architectures. I’ll bet few Android users actually go “off-roading” with apps not obtained through the official Android app channels. But the fact that they can provides a key safety valve: if Google were to try the same deal as Apple with content providers for in-app content, the content providers could always offer their wares directly to Android users. I’m worried that a piece of malware could emerge on Android that would cause the safety valve of outside code to be changed, either formally by Google, or in practice as people become unwilling to drive outside the lanes.

So how about competition between platforms? Doesn’t that keep each competitor honest, even if all the platforms are curated? I suppose: the way that Prodigy and CompuServe and AOL competed with one another to offer different services as each chased subscribers. (Remember the day when AOL members couldn’t email CompuServe users and vice versa?) That was competition of a sort, but the Internet and the Web put them all to shame — even as the Internet arose from no business plan at all.

Here’s another way to think about it. Suppose you were going to buy a new house. There are lots of choices. It’s just that each house is “curated” by its seller. Once you move in, that seller will get to say what furnishings can go in, and will collect 30% of the purchase price of whatever you buy for the house. That seller has every reason to want a reputation for being generous about what goes in – but it still doesn’t feel very free when, two years after you’re living in the house, a particular coffee table or paint color is denied. There is competition in this situation – just not the full freedom that we rightly associate with inhabiting our dwellings. A small percentage of people might elect to join gated communities with strict rules about what can go inside and outside each house – but most people don’t want to have to consult their condo association by-laws before making choices that affect only themselves.

[I guess the Qs below (about each company) are answered above!]

—-####—-

I guess now my question is, what kind of place are we going to build next?

Thanks for your thoughts, Jonathan! What do you all think?

Twitter and the Ultimate Algorithm: Signal Over Noise (With Major Business Model Implications)

By - August 05, 2011

Note: I wrote this post without contacting anyone at Twitter. I do know a lot of folks there, and as regular readers know, have a lot of respect for them and the company. But I wanted to write this as a “Thinking Out Loud” post, rather than a reported article. There’s a big difference – in this piece, I am positing an idea. It’s entirely possible my lack of reporting will make me look like an uninformed boob. In the reported piece I’d posit the idea privately, get a response, and then report what I was told. Given I’m supposedly on a break this week, and I’ve wanted to get this idea out there for some time, I figured I’d just do so. I honestly have no idea if Twitter is actually working on the ideas I posit below. If you have more knowledge than me, please post in the comments, or ping me privately. Thanks!

—-

I find Twitter to be one of the most interesting companies in our industry, and not simply because of its meteoric growth, celebrity usage, founder drama, or mind-blowing financings. To me what makes Twitter fascinating is the data the company sits atop, and the dramatic tension of whether the company can figure out how to leverage that data in a way that will insure it a place in the pantheon of long-term winners – companies like Microsoft, Google, and Facebook. I don’t have enough knowledge to make that call, but I can say this: Twitter certainly has a good shot at it.

My goal in this post is to outline what I see as the biggest challenge/opportunity in the company’s path. And to my mind, it comes down to this: Can Twitter solve its signal to noise problem?

Many observers have commented on how noisy Twitter is: once you follow more than fifty or so folks, your feed becomes unmanageable. If you follow hundreds, like I do, it’s simply impossible to extract value from your stream in any structured or consistent fashion (see image from my stream at left). Twitter’s answers to this issue have been anemic. One product manager even insisted that your Twitter feed should be viewed as a stream you dip into from time to time, the way a thirsty person might use a nearby water source. I disagree entirely. I have chosen nearly 1,000 folks who I feel are interesting enough to follow. On average, my feed gets a few hundred new tweets every ten minutes. There’s no way I can make sense of that unassisted. But I know there’s great stuff in there, if only the service could surface it in a way that made sense to me.

You know – in a way that feels magic, the way Google was the first time I used it.

I want Twitter to figure out how to present that stream in a way that adds value to my life. It’s about the visual display of information, sure, but it’s more than that. It requires some Really F*ing Hard Math, crossed with some Really Really Hard Semantic Search, mixed with more Super Ridiculous Difficult Math. Because we’re talking about some super big numbers here: 200 million tweets a day across hundreds of millions of accounts. And that’s growing bigger by the hour.
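To make the shape of the problem concrete, here’s a deliberately naive tweet scorer in Python. The features and weights are my own toy illustration – nothing like what Twitter would actually need, which is rather the point: the gap between heuristics like this and real semantic, narrative-aware ranking is exactly the hard math in question.

```python
from dataclasses import dataclass

@dataclass
class Tweet:
    text: str
    retweets: int
    author_followers: int
    has_url: bool

def score(t: Tweet) -> float:
    # Naive "signal" heuristic: amplification, reach, and whether the
    # tweet points somewhere (a rough proxy for substance). A real system
    # would fold in semantics, recency, and your own interaction history.
    s = t.retweets * 2.0
    s += t.author_followers ** 0.5   # dampen raw follower counts
    if t.has_url:
        s += 5.0
    return s

feed = [
    Tweet("lunch was great", 0, 120, False),
    Tweet("Our new paper on real-time search", 40, 9000, True),
    Tweet("conference keynote starting now", 8, 3000, False),
]
for t in sorted(feed, key=score, reverse=True):
    print(round(score(t), 1), t.text)
```

Scoring three tweets is trivial; scoring 200 million a day, against hundreds of millions of accounts, each with a different notion of what matters, is not.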

A mini industry has evolved to address this issue – I use News.me, Paper.li, TweetDeck (recently purchased by Twitter), Percolate and others, but the truth is, they are not fully integrated, systemic solutions to the problem. Only Twitter has access to all of Twitter. Only Twitter can see the patterns of usage and interest and turn meaningful insights and connections into algorithms which feed the entire service. In short, it’s Twitter that has to address this problem. Because, of course, this is not just Twitter’s great problem, it is also Twitter’s great opportunity.

Why? Because if Twitter can provide me a tool that makes my feed really valuable, imagine what it can do for advertisers. As with every major player that has scaled to the land of long-term platform winners (as I said, Google, Microsoft, Facebook), product comes first, and business model follows naturally (with Microsoft, the model was software sales of its OS and apps, not advertising).

If Twitter can assign a rank, a bit of context, a “place in the world” for every Tweet as it relates to every other Tweet and to every account on Twitter, well, it can do the same job for every possible advertiser on the planet, as they relate to those Tweets, those accounts, and whatever messaging the advertiser might have to offer. In short, if Twitter can solve its signal to noise problem, it will also solve its revenue scale problem. It will have built the foundation for a real time “TweetWords” – an auction driven marketplace where advertisers can bid across those hundreds of millions of tweets for the right to position relevant messaging in real time. If this sounds familiar, it should – this is essentially what Google did when it first cracked truly relevant search, and then tied it to AdWords.
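A “TweetWords” marketplace would presumably rest on the same mechanism AdWords made famous: a sealed-bid, second-price auction. A toy single-slot version in Python (the advertiser names and bids are invented for illustration; real ad auctions also weight bids by relevance and quality, which I’ve left out):

```python
def second_price_auction(bids):
    """Run a simple sealed-bid second-price auction for one ad slot:
    the highest bidder wins but pays the runner-up's bid, which gives
    bidders an incentive to bid what the slot is truly worth to them."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

# Hypothetical advertisers bidding on placement against a stream of tweets.
bids = {"advertiser_a": 2.50, "advertiser_b": 1.75, "advertiser_c": 0.90}
print(second_price_auction(bids))
```

The auction itself is the easy half. Deciding *which* tweets, accounts, and moments an advertiser’s bid should match – in real time – is the signal to noise problem all over again.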

Now, I do know that Twitter sees this issue as core to its future, and that it’s madly working on solving it. What I don’t know is how the company is attacking the problem, whether it has the right people to succeed, and, honestly, whether the problem is even soluble regardless of all those variables. After all, Google solved the problem, in part, by using the web’s database of words as commodity fodder, and its graph of links as a guide to value. Tweets are more than words, they comprise sentiments, semantics, and they have a far shorter shelf life (and far less structure) than an HTML document.
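For readers who haven't seen how a "graph of links as a guide to value" actually works, here's a minimal power-iteration PageRank on a toy three-page web. The graph and damping factor are invented for illustration; Twitter's analogue would presumably be graphs of follows, replies, and retweets rather than hyperlinks.

```python
# Illustrative only: minimal power-iteration PageRank on a toy link graph,
# showing the "graph of links as a guide to value" idea. Graph is invented.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
            else:  # dangling page: spread its rank evenly
                for target in pages:
                    new_rank[target] += damping * rank[page] / len(pages)
        rank = new_rank
    return rank

# "c" is linked to by both other pages, so it accumulates the most rank.
toy_web = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(toy_web)
```

The point of the sketch is the contrast Battelle draws: links are durable, structured endorsements, while a tweet's "value" decays in minutes and carries sentiment that no link graph captures.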

In short, it’s a really, really, really hard problem. But it’s a terribly exciting one. If Twitter is going to succeed at scale, it has to totally reinvent search, in real time, with algorithms that understand (or at least replicate patterns of) human meaning. It then has to take that work and productize it in real time to its hundreds of millions of users (because while the core problem/opportunity behind Twitter is search, the product is not a search product per se. It’s a media product.)
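One crude way to picture "search, in real time" is a relevance score that decays with a tweet's age. The sketch below combines simple term overlap with an exponential recency decay; the half-life, the tweets, and the query are all invented for illustration, and a real system would obviously use far richer signals than word overlap.

```python
import math  # noqa: F401  (kept for readers extending the decay math)

# Illustrative only: a toy real-time relevance score for tweets, combining
# term overlap with exponential recency decay. All inputs are invented.

def score_tweet(query_terms, tweet_text, age_minutes, half_life_minutes=60):
    terms = set(tweet_text.lower().split())
    overlap = len(set(query_terms) & terms) / max(len(query_terms), 1)
    decay = 0.5 ** (age_minutes / half_life_minutes)  # halves every hour
    return overlap * decay

tweets = [
    ("fresh but off-topic", "good morning world", 5),
    ("older but on-topic", "breaking news earthquake downtown", 30),
]
query = ["breaking", "earthquake"]
ranked = sorted(tweets, key=lambda t: score_tweet(query, t[1], t[2]), reverse=True)
```

Even this toy shows the tension: recency and relevance pull in opposite directions, and the product decision about how fast "value" decays is as much editorial as mathematical – which is part of why the end product is a media product, not a search product per se.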

To my mind, that’s just a very cool problem on which to work. And I sense that Twitter has the solution to the problem within its grasp. One way to help solve it is to throw open the doors to its data, and let the developer community help (a recent move seems to point in that direction). That might prove too dangerous (it’s not like Google is letting anyone know how it ranks pages). But it could help in certain ways.

Earlier in the week I was on the phone with someone who works very closely in this field (search, large scale ad monetization, media), and he said this of Twitter: “There’s definitely a $100 billion company in there.”

The question is, can it be built?

What do you think? Am I off the reservation here? And who do you know who’s working on this?

Who Am I, According to Google Ads? Who Am I, According to the Web? Who Do I Want to Be?

By - August 03, 2011

Over on Hacker News, I noticed this headline: See what Google knows about you. Now that’s a pretty compelling promise, so I clicked. It took me to this page:

Goog Ad pref main.png

Ah, the Google ad preferences page. It’s been a while since I’ve visited this place. It gives you a limited but nonetheless interesting overview of the various categories and demographic information Google believes reflect your interests (and in a way, your identity, or “who you are” in the eyes of an advertising client). This is all based on a cookie Google places on your browser.

I was hoping for more – because Google has a lot more information about us than just our advertising preferences (think of how you use Google apps like Docs, or Gmail, or Google+, or Search, or….). But it’s an interesting start. I certainly hope that someday soon, Google will pull all of this together in one place, and let us edit/export/correct/leverage it. I sense it probably will. If it does, expect some pretty big shifts in how our culture understands identity. But more on that later.

Anyway, I thought it’d be interesting to see who and what Google thought I was. I use three browsers primarily, and I use them in different ways. My main browser has been Apple’s Safari, but lately it’s become slow and a bit of a pain to use. I have my suspicions as to why (iWorld, anyone?), but it’s led me to a gradual move over to Google Chrome, which is far faster and more feature-rich. I’d say over the past few months, I’ve used Safari about 60% of the time, and Chrome about 30% of the time. The other 10%? I use Firefox. Why? Well, that’s the browser I use when I want anonymity. I have it set to “do not record my history” and I delete cookies on it from time to time. For this reason, it’s not very useful, but I do like having a “clean” browser to try out new services without the baggage of those services sniffing out my past identity in some way. Increasingly, I think this ability will become second nature to us all – after all, we are not the same person everywhere we go in the physical world, and our identity is something we want to manage and control ourselves (for more on that, read my piece Identity and The Independent Web). We just haven’t come to this realization culturally. We will.

There’s currently a pretty hotly contested identity debate in the ouroborosphere, and I find myself aligning with the Freds and Anils of the world. I’m glad this debate is happening, but the real shift will come from the bottom up, as more and more people realize they want to more carefully instrument “who” they are online, and start to realize the implications of not paying attention to this. And entrepreneurs will see opportunities to catch this coming wave, as the time comes for services that help us manage all this identity data in a way that feels natural and appropriate. Sure, there have already been attempts, but they came before our society was ready. It soon will be.

Meanwhile, it’s interesting to see who Google thinks I am in the three browsers I use. In Safari, where I have the longest history, here’s my profile:

my safari google data.png

I find it interesting to note that Google gets my age wrong (I’ve been 45 for nearly a year), and that it thinks I am so into Law & Government, but that’s probably because I read so much policy stuff for my book, my work with FM and the IAB, and my writing here. Otherwise, it’s a pretty decent picture of me, though it misses a lot as well. I love that I can add categories – I am tempted to do just that and see if the ads change noticeably, but I don’t like that I can’t correct my identity information (for example, tell Google how old I really am). In short, this is a great start, but it’s pretty poorly instrumented. I’d be very interested in how it changes if and when I really start using Google+ (I am on it, but not really active. This is typical of me with new services.)

Now, let’s take a look at my Google Chrome “identity” as it relates to Google Ads:

my chrome data Google.png

Not much there. Odd, given I’ve used it a lot. Seems either Google is holding some info back, or is pretty slow to gather data on me in Chrome. I find that hard to believe, but there you have it. It’s not like I only use Chrome to look for books or read long articles, though I think I have used it for my limited interaction with Google+, because I figured it’d work best in a Google browser. Hmmm.

Now, on to Firefox, which as you recall is the one I keep “clean,” or, put another way, my identity is “anonymous.”

Firefox data Google.png

Just as I would have expected it.

I’ll be watching for more dashboards like this one to pop up over the coming years, and I expect more tools will help us manage them – across non-federated services like Google, Facebook, Twitter, etc. It’s going to be a very interesting evolution.

Who Will Be Here One Generation From Now?

By - August 01, 2011

crystal-ball-2.jpg

I just re-read my post explaining What We Hath Wrought, the book I am currently working on. (Yes, I know that’s a dangling participle, Mom). And it strikes me I might ask you all this question: Which company do you think will be around – and let me add, around and thriving – one generation from now?

I could install a widget and let you vote for a company, but that’s the easy way out. I’m looking for folks willing to take the time to name a company in the comments or maybe on Twitter (#wwhw), and defend why you think, when my kids grow up, that company will still be a dominant force in our culture. In a month or so, I’ll have redone the site, and added Disqus, but for now, it’s hard to comment, and hard to follow them. Sorry about that. But stay old school with me for a minute, and help me with this, will ya?

I’ll throw out a few names to get you started. And after you all answer, I’ll give you my gut feel:

- Apple

- Amazon

- AT&T

- eBay

- Facebook

- Foursquare

- Google

- Groupon

- HP

- Intel

- LinkedIn

- Microsoft

- Twitter

- Verizon

- Yahoo

- Zynga

I’m sure I missed any number of companies, so tell me what you think. Who will be a dominant force one generation from now? And why?