
The Future of Twitter Ads

By - September 14, 2011


As I posted earlier, last week I had a chance to sit down with Twitter CEO Dick Costolo. We had a pretty focused chat on Twitter’s news of the week, but I also got a number of questions in about Twitter’s next generation of ad products.

As usual, Dick was frank where he could be, and demurred when I pushed too hard. (I’ll be talking to him at length at Web 2 Summit next month.) However, a clear-enough picture emerged such that I might do some “thinking out loud” about where Twitter’s ad platform is going. That, combined with some very well-placed sources who are in a position to know about Twitter’s ad plans, gives me a chance to outline what, to the best of my knowledge, will be the next generation of Twitter’s ad offerings.

I have to say, if Twitter pulls it off, the company is sitting on a Very Big Play. But if you read my post Twitter and the Ultimate Algorithm, you already knew that.

In that post, I laid out what I thought to be Twitter’s biggest problem/opportunity: surfacing the right content, in the right context, to the right person at the right time. It’s one of the largest computer science and social engineering problems on the web today, a fascinating opportunity to leverage what is becoming a real time database of folks’ implicit and explicitly declared interests.

I also noted that should Twitter crack this code, its ad products would follow. As I wrote: “If Twitter can assign a rank, a bit of context, a “place in the world” for every Tweet as it relates to every other Tweet and to every account on Twitter, well, it can do the same job for every possible advertiser on the planet, as they relate to those Tweets, those accounts, and whatever messaging the advertiser might have to offer. In short, if Twitter can solve its signal to noise problem, it will also solve its revenue scale problem.”

Well, I’ve got some insights on how Twitter plans to make its first moves toward these ends.

First, Dick made it clear last week that Twitter will be widening the rollout of its “Promoted Tweets” product, which pushes Tweets from advertisers up to the top of a logged-in user’s timeline (coverage). Previously, brands could promote tweets only to people who followed those brands. (This of course drove advertisers to use Twitter’s “Promoted Accounts” product, which encouraged users to follow a brand’s Twitter handle. After all, if Promoted Tweets are only seen by your followers, you better have a lot of them).

Just recently, Twitter began to allow brands to push their Promoted Tweets to non-followers. This adds a ton of scale to a product that previously had limited reach. Remember, Twitter announced some pretty big numbers last week: more than 100 million “logged in” users, and nearly 400 million users a month on its website alone. Not to mention around 230 million tweets generated a day. All of these metrics are growing at a very strong clip, Twitter tells me.

All this demands we step back and ask an important question: Now that advertisers can push their Tweets to non-followers, how might they be able to target these ads?

Twitter’s answer, in short, is this: We’ll handle that, at least for now. The first iteration of the product does not allow the advertiser to determine who sees the promoted tweet. Instead, Twitter will find “lookalikes” – people who are similar in interests to folks who follow the brand. Characteristically, Twitter is going slow with this launch – as I understand it, initially just ten percent of its users will see this product.

(The implication of Twitter finding “lookalikes” should not be ignored – it means Twitter is confident in its ability to relate the interest graphs of its users one to another, at scale. This is part of the issue I wrote about in the “Ultimate Algorithm” post, a major and important development that is worth noting).
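To make the “lookalikes” idea concrete, here’s a minimal sketch of one way such matching could work, using overlap between follow graphs as a stand-in for shared interests. To be clear, Twitter hasn’t disclosed its method – the function names, data shapes, and scoring below are my own assumptions, purely for illustration.

```python
from collections import Counter

def lookalike_scores(brand_followers, candidate_follows, top_k=10):
    """Score non-followers by how closely their follow graph matches
    the accounts that a brand's existing followers tend to follow.

    brand_followers: dict of user_id -> set of accounts that user follows
                     (the brand's current follower base)
    candidate_follows: dict of user_id -> set of accounts that user follows
                       (users who do NOT yet follow the brand)
    Returns the top_k candidates with the highest overlap scores.
    """
    # Build an "interest profile" for the brand: how often each account
    # is followed by the brand's existing followers.
    profile = Counter()
    for follows in brand_followers.values():
        profile.update(follows)

    scores = {}
    for user, follows in candidate_follows.items():
        if not follows:
            continue
        # Weight overlap by how common each shared account is in the profile,
        # normalized by the candidate's own follow count.
        overlap = sum(profile[acct] for acct in follows)
        scores[user] = overlap / len(follows)

    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]


# Toy usage: two followers of the brand, three candidate non-followers.
brand = {"f1": {"@espn", "@nba", "@nike"}, "f2": {"@nba", "@nike", "@adidas"}}
candidates = {"c1": {"@nba", "@nike"}, "c2": {"@cooking", "@travel"}, "c3": {"@nike"}}
print(lookalike_scores(brand, candidates))
```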

Now, I’ve spent many years working with marketers, and even if Twitter’s lookalike approach has scale, I know brands won’t be satisfied with a pure “black box” answer from the service. They’ll want some control over how they target, whom they target, and when their ads show up, among other things. Google, for example, gives advertisers an almost overwhelming number of data points as input to their AdWords and AdSense products. Facebook, of course, has extremely rich demographic and interest-based targeting.

So how will Twitter execute targeting? Here are my thoughts:

- Interest targeting. Twitter will expose a dashboard that allows advertisers to target users based on a set of interests. I’d expect, for example, that a movie studio launching a summer action film might want to target Twitter users who have shown interest in celebrities, Hollywood, and, of course, action movies.

How might that interest be known? There are plenty of clear signals: what a user posts, of course, but also what he or she retweets, replies to, or clicks on in someone else’s tweet, and whom they follow (and whom that followed person follows, and so on). A rough sketch of how these signals might be combined into an interest score follows this list.

- Geotargeting. Say that movie is premiering in just ten cities across the country. Clearly, that movie studio will want to target its ads just in those regions. Nearly every major advertiser demands this capability – consumer packaged goods companies like P&G, for example, will want to compare their geotargeted ads to “shelf lift” in a particular region.

Twitter has told me it will have geotargeting capabilities shortly.

- Audience targeting. I’d expect that at some point, Twitter will expose various audience “buckets” to the marketer for targeting based on unique signals that Twitter alone has views into. These might include “active retweeters,” “influencers,” or “tastemakers” – folks who tend to find things first.

- Demographic targeting. This one I’m less certain of – Twitter doesn’t have a clear demographic dataset, the way Facebook does. However, neither does Google, and it figured out a way to include demos in its product line.

- Device/location targeting. Do you want your Promoted Tweets only on the web, or only on Windows? Maybe just iPads, or iOS more broadly? Perhaps just mobile, or only Android? And would you like location with that? You get the picture….
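As promised under interest targeting above, here’s a minimal sketch of how those signals – posts, retweets, replies, clicks, follows – might be folded into a per-user interest score that a campaign could target against. The signal weights, topic labels, and threshold are my own assumptions, not anything Twitter has published.

```python
from collections import defaultdict

# Assumed relative strengths of each signal type; purely illustrative.
SIGNAL_WEIGHTS = {
    "post": 3.0,      # user wrote about the topic
    "retweet": 2.0,   # user amplified someone else's tweet on it
    "reply": 1.5,     # user engaged in conversation about it
    "click": 1.0,     # user clicked a link in a tweet about it
    "follow": 2.5,    # user follows an account associated with the topic
}

def interest_profile(events):
    """events: iterable of (signal_type, topic) pairs observed for one user.
    Returns a dict of topic -> accumulated interest score."""
    profile = defaultdict(float)
    for signal, topic in events:
        profile[topic] += SIGNAL_WEIGHTS.get(signal, 0.0)
    return dict(profile)

def matches_campaign(profile, campaign_topics, threshold=4.0):
    """True if the user's combined score across the campaign's target
    topics clears a (made-up) threshold."""
    return sum(profile.get(t, 0.0) for t in campaign_topics) >= threshold

# Toy usage: a user who retweets celebrity news and follows action-movie accounts.
user_events = [("retweet", "celebrities"), ("follow", "action movies"),
               ("click", "hollywood"), ("post", "action movies")]
profile = interest_profile(user_events)
print(profile)
print(matches_campaign(profile, ["action movies", "hollywood"]))
```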

Given all this targeting and scale, the next question is: How will advertisers actually buy from Twitter? I think it’s clear that Twitter will adopt a model based on two familiar features: a cost-per-engagement pricing model (the company already uses engagement as a signal to rank an ad’s efficacy) and a real-time, second-price auction. The company already exposes dashboards to its marketing partners on no fewer than five metrics, allowing them to manage their marketing presence on Twitter in real time. And its recently announced analytics product only adds to that suite. Twitter has also said a self-serve platform will be open for business shortly, one that will allow smaller businesses to play on the service.

Next up? APIs that allow third parties to run Promoted Tweets, as well as help marketers manage their Twitter presence. Just as with Facebook and Google, expect a robust “SEO/SEM” ecosystem to develop around these APIs.

The cost-per-engagement model is worth a few more lines. If an ad does not resonate – is not engaged with in some way by users – it will fall off the page, an approach that has clearly worked well for Google. The company is very pleased with its early tests on engagement, which one source tells me is running one to two orders of magnitude above traditional banner ads.
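To see how cost-per-engagement and a second-price auction fit together, here’s a minimal sketch of one plausible mechanic: rank ads by bid times predicted engagement rate, and charge the winner – only when an engagement actually happens – the lowest price that would still have beaten the runner-up. That’s the generalized second-price logic familiar from search advertising, not a description of Twitter’s actual implementation; the numbers below are invented.

```python
def run_auction(ads):
    """ads: list of dicts with 'name', 'bid' (max cost per engagement, $)
    and 'predicted_engagement_rate' (0..1).
    Returns (winning ad, price charged per engagement)."""
    # Rank by expected value per impression: bid * predicted engagement rate.
    ranked = sorted(ads, key=lambda a: a["bid"] * a["predicted_engagement_rate"],
                    reverse=True)
    winner, runner_up = ranked[0], ranked[1]

    # Second-price logic: the winner pays just enough per engagement to keep
    # its rank score above the runner-up's, never more than its own bid.
    price = (runner_up["bid"] * runner_up["predicted_engagement_rate"]
             / winner["predicted_engagement_rate"])
    return winner, round(min(price, winner["bid"]), 2)

# Toy usage with three hypothetical advertisers.
ads = [
    {"name": "studio_a", "bid": 2.00, "predicted_engagement_rate": 0.05},
    {"name": "brand_b",  "bid": 3.00, "predicted_engagement_rate": 0.02},
    {"name": "cpg_c",    "bid": 1.50, "predicted_engagement_rate": 0.04},
]
winner, cpe = run_auction(ads)
print(winner["name"], "pays about $%.2f per engagement" % cpe)
```

The key design point is that a high bid alone doesn’t buy reach: an ad users ignore gets a low predicted engagement rate and falls off the page, exactly the dynamic described above.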


Finally, recall that Twitter also announced, and couched as very good news, that a large percentage of its users are “not logged in,” but rather consume Twitter content just as you or I might read a blog post. Fred writes about this in his post The Logged Out User. In that post, he estimates that nearly three in four folks on Twitter.com are “logged out.” That’s a huge audience. Expect ad products for those folks shortly, including – yes – display ads driven by cookies and/or other modeling parameters.

In short, after staring at this beast for many years, I think Twitter is well on its way to cracking the code for revenue. But let’s not forget the key part of this equation: The product itself. Ad product development is nearly always in lockstep with user product development.

Twitter recently surfaced a new tab for some of its users called “Activity”, and I was lucky enough to get it in my stream. It makes my timeline far better than it was. The “Mentions” tab (which now appears as your own handle) is also far richer, showing follows, retweets, and favorites as well as replies and mentions. But there’s much, much more to do. My sense of the company now, however, is that it’s going to deliver on the opportunity we’ve all known it has ahead. It has mostly addressed its infrastructure issues, Costolo told me, and is now focused on delivering product improvements through rapid iteration, testing, and deployment. I look forward to seeing how it all plays out.


On Location, Brand, and Enterprise

By - September 11, 2011

From time to time I have the honor of contributing to a content series underwritten by one of FM’s marketing partners. It’s been a while since I’ve done it, but I was pleased to be asked by HP to contribute to their Input Output site. I wrote on the impact of location – you know I’ve been on about this topic for nearly two years now. Here’s my piece. From it:

Given the public face of location services as seemingly lightweight consumer applications, it’s easy to dismiss their usefulness to business, in particular large enterprises. Don’t make that mistake. …

Location isn’t just about offering a deal when a customer is near a retail outlet. It’s about understanding the tapestry of data that customers create over time, as they move through space, ask questions of their environment, and engage in any number of ways with your stores, your channel, and your competitors. Thanks to those smartphones in their pockets, your customers are telling you what they want – explicitly and implicitly – and what they expect from you as a brand. Fail to listen (and respond) at your own peril.

More on the Input Output site.

Twitter Makes a Statement

By - September 08, 2011


I could not make Twitter’s press event today, but I did get a chance to sit with CEO Dick Costolo (the Web 2 Summit dinner speaker this year) yesterday afternoon and do a deep dive on today’s news. I’ll write up more on that as soon as I can, but here’s the recap:

100 million active users around the globe turn to Twitter to share their thoughts and find out what’s happening in the world right now. More than half of these people log in to Twitter each day to follow their interests. For many, getting the most out of Twitter isn’t only about tweeting: 40 percent of our active users simply sign in to listen to what’s happening in their world.

This is from their blog post, but there are a lot more stats to share (400 million people visit Twitter.com each month, for example), as well as insights and thoughts from our conversation yesterday. Stay tuned.

Google As Content Company – A Trend Worth Watching

By -


It’s been a while since I’ve said this, but I’ll say it again: Google is a media company, and at some point, most media companies get pretty deep into the Making Original Content business. With the acquisition of Zagat*, Google has clearly indicated it’s going to play in a space it once left to the millions of partners who drove value in its search and advertising business. Google is walking a thin line here – media partners are critical to its success, but if it’s seen as favoring its “owned and operated” content over those who operate on the open or independent web, well, lines may be redrawn in the media business.

Now, it’s easy to argue that this was a small, strategic buy to support Google’s local offering. Then again…Blogger, YouTube, and GoogleTV are not small efforts at Google. And if I were an independent publisher who focused on the travel and entertainment category, I’d be more than a bit concerned about how my content might rank in Google compared to Zagat. Just ask Yelp.

So…what other content-driven categories might Google find the need to get into? Well, ask yourself this question: What other content-driven business categories are important to Google?

Answering that question falls into the category of “things that make you go….huh…”

I’ll have more thinking on this soon, I hope. But I wanted to note the sale as indicative of a larger trend worth watching.

*Zagat has had a commercial relationship with FM, a company I founded, but not a material one on either side as I understand it.

The 2011 Web 2 Summit Program Is Live; My Highlights

By - September 07, 2011


August is a month of vacation, of beaches, reading, and leisure….unless you happen to work with me creating the program for the eighth annual Web 2 Summit this October. Each year, my “summer vacation” turns into a “working vacation” as my team and I spend hours massaging more than 50 speakers into a tightly choreographed program running over what always turns out to be an extraordinary three days. I must be a masochist. Because I always love how it turns out.

This year, as I wrote earlier, our theme is “The Data Frame.” And this year’s program hews more tightly to our theme than any before it. Just about every speaker will be presenting on some aspect of how data changes the game in our industry. From policy to tech, art to retail, we’ve got one of the most varied lineups ever. You can see it here, but remember, these are extremely volatile times. In other words, the lineup might change a bit in the next six weeks. I’m just glad I didn’t ask Carol Bartz to come back, but then again, that would have been fun, no?

Web 2 is a yearbook of sorts, a stake in the ground where our industry has some of its most important conversations. This year we are taking a new tack – eliminating panels altogether and focusing on our trademark conversations, as well as short, high-impact presentations.


Here are a few I’m really looking forward to.

We’ll start day one with Mark Pincus, CEO of Zynga. Mark has been busy, in particular given both the growth of Zynga and the recent turmoil in the financial markets, which are expected to welcome his company to public status at some point in the near future. But Mark is just the starting gun of an amazing opening session, one that will include John Donahoe, CEO of eBay, Marc Benioff, CEO of Salesforce, Paul Otellini, CEO of Intel, Dennis Crowley, co-founder of foursquare, Ross Levinsohn, EVP Americas at Yahoo!, and Reid Hoffman, founder and Chair of LinkedIn, the public market’s current darling.

Of the group, I’m particularly pleased to welcome Ron Wyden, Senator from Oregon. This will mark Web 2’s first-ever visit from a sitting senator, and our industry will have plenty to discuss with him – he’s the man who has taken stands on COICA and its cousin Protect IP, controversial (and many would say flawed) pieces of legislation that may have significant impacts on how the Internet works.


After cocktails we’ll sit down to dinner, and I’m very pleased to announce that our dinner conversation will be with Twitter CEO Dick Costolo, a man who would win any “funniest CEO” competition, running away. Be prepared to snort wine through your nose.

Day two opens with Dell CEO Michael Dell, who will have plenty to say about the moves of his competitors HP, Apple, and Samsung. We’ll get our first taste of a new program element – “Pivot” – short presentations tailored to shift your thinking in five minutes or less. You’ll hear Pivots from Tony Conrad (about.me), Chris Poole (Canv.as, 4chan), Bill Gross (UberMedia), Aileen Lee (KPCB), David Hornik (August Capital), and many more.

We’ll also hear from two data and privacy policy experts – Dr. Ann Cavoukian, of the Office of the Information and Privacy Commissioner of Ontario, and David Vladeck, of the FTC. Ben Horowitz (of Andreessen Horowitz) will sit for a conversation, as will John Partridge, President of Visa, and Dan Schulman, Group President, American Express – together. That’s sort of like getting Coke and Pepsi in the same room, which, it turns out, we did. Over the three days, we’ll hear from both Alison Lewis, CMO of Coca-Cola, as well as Frank Cooper, CMO of PepsiCo Beverages.


This brings me to another important point – with data, all companies must become Internet companies. John, Dan, Alison, and Frank will bring that point home. As will Michael Roth, CEO of IPG, one of the largest advertising holding companies on the planet.

And of course we’ll hear from Mary Meeker, in her eighth appearance at Web 2. But this time, I’ve given her enough time to both do her “capital markets roundup,” as well as sit down with us and discuss her new role as partner at Kleiner Perkins.

A highlight of Day Two will be Thomas Drake, who used to work at the NSA on a forward-looking data surveillance program called ThinThread. While there, he uncovered facts about how the NSA was conducting surveillance which he believed was illegal. He blew the whistle, was charged with espionage, and lived to tell the tale.


Rounding out Day Two will be Jack Tretton, President and CEO of Sony Computer Entertainment America, Tim Westergren, founder of Pandora, and Steve Ballmer, CEO of Microsoft.

But wait…there’s more! Sprinkled throughout the three days will be our trademark “High Order Bits” – shortform presentations designed to amaze, inspire, and even perplex. We’ll hear from voices as varied as Genevieve Bell, in-house anthropologist at Intel, Peter Vesterbacka, the “Mighty Eagle” of Rovio, Alex Rampell, CEO of TrialPay, Mike McCue, CEO of Flipboard, Bret Taylor, CTO of Facebook, Salman Khan, founder of Khan Academy, Susan Wojcicki, SVP at Google, Deb Roy, founder of Bluefin, Richard Rosenblatt, CEO of Demand Media, Mike Olson, CEO of Cloudera, and even MC Hammer.


That’s a lot of names, and we’re not close to being done. Highlights of day three include James Gleick, who has written one of the most important books about data in recent years (“The Information”), and a passel of Facebook alums: Sean Parker, who has yet another startup to discuss, Dave Morin, of Path, and Charlie Cheever together with his co-founder Adam D’Angelo, of Quora. More High Order Bits will come from Hilary Mason, of bit.ly, Jeremie Miller, of Singly, and Josh James, of Domo.

Rounding out the day are Andrew Mason, of Groupon fame, and Vic Gundotra, the man behind Google+.

Whew. And that’s not even all the great folks who are coming. It’s going to be a spectacular three days. I hope you’ll join us!

My deepest thanks go out to my Web 2 Advisory Board, which gave me a lot of great input on the program, and to the teams at O’Reilly, Techweb, and FM. As well as all our amazing sponsors, of course, and my producer extraordinaire, Janetti Chon. It’s almost showtime!

PS – Look for our announcement next week about the new “Data Layer” on our “Points of Control” map. It’s going to rock!

Maybe There Really Will Only Be Five Computers…

By - September 01, 2011


Thomas J. Watson, legendary chief of IBM during its early decades and the Bill Gates of his time, has oft been quoted – and derided – for stating, in 1943, that “I think there is a world market for maybe five computers.” Whether he actually said this quote is in dispute, but it’s been used in hundreds of articles and books as proof that even the richest men in the world (which is what Watson was for a spell) can get things utterly wrong.

After all, there are now hundreds of millions of computers, thanks to Bill Gates and Andy Grove.

But staring at how things are shaping up in our marketplace, maybe Watson was right, in a way. The march to cloud computing and the rush of companies building brands and services where both enterprises and consumers can park their compute needs is palpable. And over the next ten or so years, I wonder if perhaps the market won’t shake out in such a way that we have just a handful of “computers” – brands we trust to manage our personal and our work storage, processing, and creation tasks. We may access these brands through any number of interfaces, but the computation, in the manner Watson would have understood it, happens on massively parallel grids which are managed, competitively, by just a few companies.*

It seems that is how Watson, or others like him, saw it back in the 1950s. According to sources quoted on Wikipedia, Professor Douglas Hartree, a Cambridge mathematician, estimated that all the calculations required in England could be handled by about three “computers,” distributed in distinct geographical locations around the country. The reasoning was pretty defensible: computers were maddeningly complex, extraordinarily expensive, and nearly impossible to run.

Now, that’s not true for a Mac, an iPhone, or even a PC. But very few of us would want to own and operate EC2 or S3.

Right now, I’d wager that the handful of brands leading the charge to win in this market might be Google, Amazon, Microsoft, Apple, and….IBM. About five or so. Maybe Watson will be proven right, even if he never was wrong in the first place.

* Among other things, it is this move to the cloud, with its attendant consequences of loss of generativity and control at the edges, which worries Zittrain, Lanier, and others. But more on that later.

Physics of the Future, by Michio Kaku

By - August 31, 2011


As part of my ongoing self-education – so as to not be a total moron while writing “What We Hath Wrought” – this past weekend I read “Physics of the Future” by Michio Kaku.

I was excited to read the book, because Kaku is a well regarded physicist, and that’s a field that I know will inform what’s possible, technologically, thirty or so years from now. I will admit I did not read the reviews of the book before hitting “purchase” on my Kindle. The topic alone made it worth my time, and the book was on the NYT bestseller list for five weeks, after all. Turns out, the book was worth the time….but perhaps I should have read the reviews so my expectations were more properly set.

I thought I was going to learn some fundamentals about what’s possible in the next few generations, and if you work hard enough, you will learn some of that. But the book reads more like a string of popular science articles meant for a *very* broad audience, and far less like a serious investigation of how physics might inform our world in the coming decades.

The New York Times’ review might sum it up best (and I know, it’s a cliche to depend on the Times, but…it pretty much captures my thoughts on the book):

“…Mr. Kaku thinks in numbers better than he thinks in words, which is a problem only in that he’s written a book and not a series of equations….This is not boring stuff, and it all somewhat makes me wish that I (born in 1965) were going to be around to witness it all. In terms of data delivery, “Physics of the Future” gets the job done. But airplane food gets the job done, too, and airplane food — bland and damp — is what Mr. Kaku’s prose too often resembles.”

Ouch. I hope I never get a review like that. But then again, I rarely think in numbers.

Kaku does not lack for ambition – he sets out to explain, in great detail, how we will live in 2100, and how we’ll get there. But he often uses the conditional tense, swinging from “this might happen” to “this will certainly happen,” sometimes on the same page. It makes for a general lack of trust when it comes to whether you want to buy into his proclamations. He also leans heavily on a thesis he calls “The Cave Man Principle,” which, to simplify, says that we are still pretty much driven by the same impulses we had when we lived in caves. The idea gets old, and is often used as a salve when the future starts to get hard to predict. He also loves to drop pop culture references – in particular to sci-fi movies – as a way to explain how things might shape up. I’m not sure I’d want Hollywood to loom that large in *my* future…

Still, if you are looking for a relatively fast read that covers a lot of ground around flying cars, the Internet on contact lenses, and palm-sized MRI machines, this book is worth a look, despite the sometimes artless prose. Better yet, I’d recommend you watch a few of Kaku’s television shows (he’s made a number of them for the Discovery Channel, among others); we’ve enjoyed watching those as a family.

I’m still reading Kevin’s “What Technology Wants,” which so far I’m really enjoying. Hope to write about that shortly.

Other books I’ve reviewed recently:

Alone Together by Sherry Turkle

The Information by James Gleick

In the Plex by Steven Levy

The Future of the Internet (And How to Stop It) by Jonathan Zittrain

The Next 100 Years by George Friedman

We Need An Identity Re-Aggregator (That We Control)

By - August 29, 2011


The subject of “owning your own domain” has been covered to death in our industry, with excellent posts from Anil Dash and others (Fred) explaining the importance of having your own place on the web. I’ve also weighed in on the importance of “The Independent Web,” where creators have control, as opposed to the Dependent Web, where platforms ultimately control how your words, data, and expression are leveraged.

But not everyone gravitates toward having their own, independent site – at least not initially. Even those who do have sites don’t necessarily see those sites as the best place to express themselves. I was reminded of this reading a Quora thread over the weekend entitled “What’s it like to have your film flop at the box office?” (The subtitle of the thread is hilarious: “Don’t they know how bad it is before it comes out?”)

The question elicited a well written, funny, and informative post by one Sean Hood, a professional “fixer” of scripts who had worked on the recent “Conan the Barbarian” movie – apparently a big-time summer flop.

It’s clear that Hood was inspired to write a wonderful post not because he wanted to muse out loud on his own blog (he does, it turns out, have one), but because of something particularly social in nature about Quora.

The same, I’d wager, can be said of Google+, where a lot of folks, including well-known “traditional bloggers” like Robert Scoble, are content to post at length, regardless of the fact that Google+, unlike blogging software like WordPress, is not a platform that they “control.” Ditto places like the Huffington Post, Facebook, ePinions, Amazon Reviews – you get the picture. The web is full of places where the value is created by authors, but control and monetization accrue, in the main, to the company, not the individual.

Scoble, who is paid by the hosting company Rackspace to be nearly omnipresent, is clearly an edge case. He’s a professional blogger, but he doesn’t really care where his words live, as long as they get a lot of attention. Traditional authors, like, for example, the folks behind Dooce or The Awl, are far less likely to leave their core value – their words – all over the web, and in particular, they don’t see the point of giving that value away for free, when their own sites provide their economic lifeblood (both sites are FM partners, but there are tens of thousands of others as well).

The downsides of not owning your own words, on your own platform, are not limited simply to money. Over time, the words and opinions one leaves all over the web form a web of identity – your identity – and controlling that identity feels, to me, like a human right. But unless you are a sophisticated netizen, you’re never going to spend the time and effort required to gather all your utterances in one place, in a fashion that best reflects who you are in the world.

Every site has different terms of service – rules that govern what rights you have when you post on the site. I haven’t read them all (most of us don’t), but I’d imagine most of them would allow you to take your own words and cut and paste them onto your own site, should you be so inspired. On his personal blog, Sean Hood, the film writer, has linked to many of his past answers on Quora. But he hasn’t “reposted” them – which I think is a shame. Because while Quora is a great service, should it go dark, Sean’s words will be lost.

Earlier in the year I wrote a piece called “The Rise of Meta Services,” in which I posited that we need a new class of services that help us make sense of the fractured nature of all the sites, apps, and platforms we’re using. I’d wager there’s a great opportunity to create such a service that follows individuals around the web, noting, indexing, and reposting everything he or she writes back to his or her own domain.

Or maybe there’s already a WordPress plugin for that?!
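Here’s a minimal sketch of what such a re-aggregator might look like in practice: poll the RSS/Atom feeds of the services you post to, and archive each entry as a Markdown file you can publish on your own domain. It assumes the feedparser library; the feed URLs and output layout are placeholders of mine, and services without public feeds would need authenticated APIs instead.

```python
import os
import re
import feedparser  # pip install feedparser

# Placeholder feed URLs for services where you post; swap in your own.
FEEDS = [
    "https://example.com/my-quora-answers.rss",
    "https://example.com/my-google-plus-posts.atom",
]
ARCHIVE_DIR = "my-words"  # lands in a folder you publish on your own domain

def slugify(title):
    """Turn an entry title into a safe filename."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-") or "untitled"

def archive_entry(entry):
    """Write one feed entry to a local Markdown file, skipping duplicates."""
    path = os.path.join(ARCHIVE_DIR, slugify(entry.get("title", "")) + ".md")
    if os.path.exists(path):
        return  # already archived
    with open(path, "w", encoding="utf-8") as f:
        f.write("# %s\n\n" % entry.get("title", "Untitled"))
        f.write("Original: %s\n\n" % entry.get("link", ""))
        f.write(entry.get("summary", ""))

def run():
    os.makedirs(ARCHIVE_DIR, exist_ok=True)
    for url in FEEDS:
        for entry in feedparser.parse(url).entries:
            archive_entry(entry)

if __name__ == "__main__":
    run()
```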

With Tech, We Are Not Where We Want To Be (Or, This Cake Ain't Baked)

By - August 21, 2011


Last week I finished reading Sherry Turkle’s “Alone Together“, and while I have various disagreements with the work (I typed in more than 70 notes on my Kindle, even with that terribly tiny keyboard), I still found myself nodding in agreement more than I thought I would.

In her book, Turkle explores our relationship with technology, in particular what she calls “sociable robots” (toys like the Aibo or My Real Baby), as well as with email, IM, and shared virtual spaces like Second Life and Facebook.

Turkle is one of the most important sociologists of technology working today, and her new book reads like a personal field notebook, rife with anecdotes about how children and teens, in particular, are responding to these new technological artifacts.

I came to “Alone Together” a skeptic – it was clear from almost the first page that Turkle is troubled by what her field work yielded. Children projecting “life force” onto robots, adults fretting about the morality of robots caring for their failing elders, parents so distracted by their smart phones that they lose connection with their kids. That’s the framing of most reviews of the book, and it was the framing I’ll admit I had when I began reading.

The book has a clear posture about our collective abilities to fend off seductive but ultimately damaging technological crutches – and that posture is that as we engage with machines, we’re losing important parts of our humanity. And Turkle is clearly worried about that.

When it comes to our relationship to technology, I tend not to be a worrier. My early marginalia on Turkle’s pages included “false premise!” and “what is the problem here?” and “so is a damn teddy bear!” (that last one in response to Turkle’s fretting about a child’s conception of whether a Tamagotchi is “alive.”)

The reason for my skepticism is simple: As Turkle describes page after page of people losing “true connectedness” in their lives and falling instead for the false thrill of tech, I keep thinking: Our tools have not caught up with our brains, and vice versa. We have shaped technology, and now it is shaping us – sure – but we can keep shaping it till we get the feedback loop right. So far, we simply have not – the music ain’t flowing, so to speak. In our relationship to what Kevin Kelly* calls the technium, we’re awkward pre-teens.

Or put another way, this cake ain’t baked. I mean, think about it. Facebook: Not quite right. Smart phones? Not quite right. Desktop computing? Even though we’ve had nearly three decades of interaction, it’s still not quite right.

One of “Alone Together’s” greatest failings for me – or perhaps it’s a lesson of sorts – was the parade of examples based on technological products which, after an initial period of cultural uptake, have been discarded or marginalized over time. Tamagotchis, Teddy Ruxpins, My Real Babies, Second Life, Blackberries, even, dare I say it, Facebook, are ephemeral in the sweep of a generation or two (or in some cases, in a year or two). We shouldn’t draw stern conclusions from our pre-teen love affairs, so to speak. We are learning, failing, trying again.

However, as the book unfolded, and I thought more personally about the issues Turkle raises, I began to agree with some of her concerns. We’re only on this earth for a short time, and the time we lose to poor relationships with technology is time we can’t get back. When Turkle describes a young parent pushing his child on a swing while checking email on his Blackberry, I saw myself as the CEO of The Standard back in the 1990s, and I winced. I think I was a pretty good Dad to my kids when they were young, but I do mourn the time I lost to my obsession with …. well, not technology, to be honest. But my work, and my career. Then again, for me, anyway, that career has been about technology…

So as I get a bit older, I do feel a need to reflect on how and whether my impulse to connect is impairing my most important relationships. And that’s a fair and good reflection to take.

After reading “Alone Together,” I happened to be watching television late one night with my wife, and while flipping around, we found the last half hour or so of Blade Runner. This 1982 film, based (loosely) on Philip K. Dick’s “Do Androids Dream of Electric Sheep?”, is both lovely and a bit over the top. Turkle cites it near the end of her book. Its core question is simply this: What is it to be human, and when might machines reach that threshold? And…then what?

As I watched the film for what must have been the hundredth time, I found myself certain that this question will be central to our experience over the next generation or two (not a new thought, of course, but still…). I feel better prepared to debate the answer having read Turkle’s book.

—-

* NB: I am reading Kevin’s excellent “What Technology Wants” right now. I got about a third of the way through it when it came out, but was not in “deep reading” mode then, and wanted to do it justice. With an extraordinary crew of thinkers, dreamers, and makers, Kevin and I worked together to bring Wired to life from its inception in 1992 to 1997, when I left to start The Standard. Turkle devotes the conclusion of her book to what amounts to an argument with Kevin’s premise in “What Technology Wants.” I emailed Kevin and asked him about it. Turns out, the two are close friends, which, of course, I should have known. By disagreeing, debating, conversing, and resolving, we become better people, more connected. More on Kevin’s book soon.

Other books I’ve reviewed recently:

The Information by James Gleick

In the Plex by Steven Levy

The Future of the Internet (And How to Stop It) by Jonathan Zittrain

The Next 100 Years by George Friedman

More on Twitter's Great Opportunity/Problem

By - August 10, 2011

In the comments on this previous post, I promised I’d respond with another post, as my commenting system is archaic (something I’m fixing soon). The comments were varied and interesting, and fell into a few buckets. I also have a few more of my own thoughts to toss out there, given what I’ve heard from you all, as well as some thinking I’ve done in the past day or so.

First, a few of my own thoughts. I wrote the post quickly, but have been thinking about the signal to noise problem, and how solving it addresses Twitter’s advertising scale issues, for a long, long time. More than a year, in fact. I’m not sure why I finally got around to writing that piece on Friday, but I’m glad I did.

What I didn’t get into is some detail about how massive the solving of this problem really is. Twitter is more than the sum of its 200 million tweets a day; it’s also a massive consumer of the web itself. Many of those tweets have within them URLs pointing to the “rest of the web” (an old figure put the share at 25 percent; I’d wager it’s higher now). Even if it were just 25 percent, that’s 50 million URLs a day to process, and growing. It’s a very important signal, but it means that Twitter is, in essence, also a web search engine, a directory, and a massive discovery engine. It’s not trivial to unpack, dedupe, analyze, contextualize, crawl, and digest 50 million URLs a day. But if Twitter is going to really exploit its potential, that’s exactly what it has to do.
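For a sense of what “unpack, dedupe, analyze” means in practice, here’s a minimal sketch of the first stage of such a pipeline: pull URLs out of tweet text, follow shortener redirects to their final destinations, normalize, and dedupe. It assumes the requests library; the canonicalization rules are my own simplifications, and a system handling 50 million URLs a day would obviously do this asynchronously and at vastly greater scale.

```python
import re
from urllib.parse import urlparse, urlunparse, parse_qsl, urlencode

import requests  # pip install requests

URL_RE = re.compile(r"https?://\S+")
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "utm_term", "utm_content"}

def extract_urls(tweet_text):
    """Pull raw URLs out of a tweet's text."""
    return URL_RE.findall(tweet_text)

def resolve(url):
    """Follow shortener redirects (t.co, bit.ly, etc.) to the final URL."""
    try:
        return requests.head(url, allow_redirects=True, timeout=5).url
    except requests.RequestException:
        return url  # keep the original if resolution fails

def canonicalize(url):
    """Normalize a URL so duplicates collapse: lowercase the host,
    drop fragments and common tracking parameters."""
    parts = urlparse(url)
    query = urlencode([(k, v) for k, v in parse_qsl(parts.query)
                       if k not in TRACKING_PARAMS])
    return urlunparse((parts.scheme, parts.netloc.lower(),
                       parts.path.rstrip("/"), "", query, ""))

def unique_links(tweets):
    """Dedupe the links shared across a batch of tweets."""
    seen = set()
    for text in tweets:
        for raw in extract_urls(text):
            canon = canonicalize(resolve(raw))
            if canon not in seen:
                seen.add(canon)
                yield canon

# Toy usage: the same article shared twice in slightly different forms.
tweets = ["Great read: https://example.com/post?utm_source=twitter",
          "Same link again https://example.com/post/"]
print(list(unique_links(tweets)))
```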

The same is true of Twitter’s semantic challenge/opportunity. As I said in my last post, tweets express meaning. It’s not enough to “crawl” tweets for keywords and associate them with other related tweets. The point is to associate them based on meaning, intent, semantics, and – this is important – narrative continuity over time. No one that I know of does this at scale, yet. Twitter can and should.

Which gets me to all of your comments. I heard – in the written comments, on Twitter, and in extensive emails offline – from developers who are working on parts of the problems/opportunities I outlined in my initial post. And it’s true, there’s really quite a robust ecosystem out there. Trendspottr, OneRiot, Roundtable, Percolate, Evri, InfiniGraph, The Shared Web, Seesmic, Scoopit, Kosmix, Summify, and many others were mentioned to me. I am sure there are many more. But while I am certain Twitter not only benefits from its ecosystem of developers but actually *needs* them, I am not so sure any of them can or should solve this core issue for the company.

Several commentators noted, as did Suamil, “Twitter’s firehose is licensed out to at least publicly disclosed 10 companies (my former employer Kosmix being one of them and Google/Bing being the others) and presumably now more people have their hands on it. Of course, those cos don’t see user passwords but have access to just about every other piece of data and can build, from a systems standpoint, just about everything Twitter can/could. No?”

Well, in fact, I don’t know about that. For one, I’m pretty sure Twitter isn’t going to export the growing database around how its advertising system interacts with the rest of Twitter, right? On “everything else,” I’d like to know for certain, but it strikes me that there’s got to be more data that Twitter holds back from the firehose. Data about the data, for example. I’m not sure, and I’d love a clear answer. Anyone have one? I suppose at this point I could ask the company….I’ll let you know if I find out anything. Let me know the same. And thanks for reading.