As we’re working on the book, Sara and I are planning on sharing some of the news items and blog posts that catch our attention each week. We’ll also plan on talking through some of the things we’re reading and working on in this space. In keeping with boolean condition logic of the if/then working title for the book, we’ll be tagging these posts as “else.” Links aren’t necessarily endorsements, but they do point to ideas that got us thinking this week.
This week we look at challenges in using quantified self data, developments in the NSA surveillance coverage, and round out with a few throwbacks to the Victorian age of technology. On to the links:
A TechStars accelerator is working with Nike+ to build innovation on the Nike platform. The article talks a lot about the importance of opening up the Nike+ API for development innovation (right now it’s only open to these ten accelerator companies).
Why The Quantified Self Needs A Monopoly – ReadWrite
Highlights one of the big barriers to consumer adoption right now: correlating all these quantified self data sources into one meaningful view. To do that you need to a) be able to get the data into one place, b) have the sources speak to each other, and c) know what you are looking at once you can see it all in one place. We might argue that you don’t necessarily need an Apple or Microsoft monopoly for that. But we will need tools that pull this together; maybe something more along the lines of Mint.
The Public-Private Surveillance Partnership – Bloomberg
Bruce Schneier walks us through the implicit business models that got us into the current surveillance state: “Imagine the government passed a law requiring all citizens to carry a tracking device. Such a law would immediately be found unconstitutional. Yet we all carry mobile phones.” [Incidentally, Schneier is also a Fellow at the Berkman Center this year along with Sara].
This Recycling Bin Is Stalking You – The Atlantic Cities
Recycling bins in London are tracking MAC addresses from passing smartphones and Wifi-enabled devices, essentially bringing tracking cookies from the internet into the physical world. Turns out this might actually be illegal.
Security researcher Brendan O’Connor uses cheap Raspberry Pi devices to monitor Wifi signals, proving that conducting surveillance is becoming easier, no matter who you are.
Once the data is in a format where it can be activated, others will find new uses for it.
The United Nations Global Pulse team is using sentiment analysis and mobile data to catch early signals for global economic trends to develop faster, more adaptive and responsive aid programs.
Car sharing dropoffs at airports are starting to see a crackdown at SFO. Policies still protect the taxi and limo service domain here, and new regulations requiring insurance coverage could increase operating costs. This could slow down the markets where consumers are taking underused assets and making them liquid. John recently wrote about how Uber saved the day in a recent travel snafu.
3-D Printing the 19th Century – NYTimes
Martin Galese is bringing back patents from a bygone era, 3-D printing them in all their beautifully-designed glory. Some of these designs might not have been easily manufactured in their time.
Last week Sara was reading Rebecca Solnit’s River of Shadows: Eadweard Muybridge and the Technological Wild West. The book looks at the historical context around Muybridge’s photographic technology developments that increased shutter speeds and introduced the ability to almost slow down time into smaller knowable bits. These developments paved the way for modern cinema, but also ran parallel to Victorian explorations of scientific discovery. Sara wrote about some interesting parallels with Muybridge’s body movement studies and the Quantified Self movement; film allowed us to slow down and dissect the body’s gait; sensors like the Fitbit allow us to track a walking gait all day long.
Earlier this year I sat down with a videographer at the Bazaarvoice Summit in Austin. He asked me about the future of marketing, in particular as it related to data and consumer behavior. Given what I announced earlier this morning, I thought you might find this short video worth a view. Thanks to Ian Greenleigh for doing all the work!
Today on the Federated site, I’ve posted a preview of something we’re working on for a Fall release. I’m cross posting a portion of it here, as I know many of you are interested in media and data-driven marketing.
It’s no secret that Federated Media has deep roots in content marketing: We re-imagined CM for the modern web eight years ago, and since then have executed thousands of content-driven programs with hundreds of awesome publishers, services, and brands. “All Brands Are Publishers” has been one of our core mantras since our founding. And each year we run the CM Summit, where the topic of native, content, and conversation-driven marketing across all digital platforms is dissected.
Back when I was first studying the intersection of brand marketing and technology – about the same time as The Search and the founding of FM – I started talking and writing about “The Conversation Economy.” Its core theme is this: “In the future, all companies must learn how to have 1-1 conversations with their customers at scale, leveraging digital technologies.”
Back then, actually executing on such an idea seemed a pipe dream. Recall, this was before Twitter, before Facebook, and before the Lumascapes. But one reason I love this industry is that we can dream big, and a few short years later, those dreams can become reality.
With the proliferation of “native” platforms like Twitter, Google, Facebook, Tumblr, and blogs, the idea of “branded publishing” has truly caught on. Every major agency (and publisher) has a brand storytelling shop; some have gone so far as to declare publishing to be central to their future. This is a very good thing – the massive infrastructure of media and marketing is slowly reshaping itself to become more nimble and responsive to how the world actually communicates.
But storytelling alone isn’t enough to get the job done. As an industry we need a platform that allows us to distribute those stories to just the right people, at just the right time, in just the right context. Up until recently, the only platform that allowed that kind of precision was search – hardly a great storytelling medium for marketers, and driven by direct response dollars, in the main.
In the past few years, programmatic adtech has erupted onto the scene, but again, this technology platform has been used primarily for direct response. Programmatic’s rise has in large part been driven by “retargeting” – the practice of identifying a customer who visits your site, then finding him or her across the web and serving ads related to what they saw during their visit. Retargeting is now a core conversion tool for sophisticated direct marketers. It’s why that pair of shoes you looked at on Zappos keeps following you around the web.
Two years ago, we developed a thesis at FM: Programmatic adtech was going to drive brand marketing, and the bridge between the two would be content marketing. That’s why we bought Lijit Networks, one of the largest independent adtech companies in the United States. We believed then, and even more so now, that programmatic + content marketing = brand building.
While direct response is important, building brand awareness, preference, and loyalty remains a fundamental need. Brands need a scaled way to tell their stories to the right people in the right context. In the past 18 months, “scaled walled gardens” like Facebook and Twitter began to offer native advertising suites promising just that (Tumblr offers a similar promise, one Yahoo! believes it can deliver upon).
But what about the “rest of the Internet”? While it’s fun to try out new “native” sites like Buzzfeed, the web wants a scaled play in “content marketing” that also checks the boxes of efficiency and highly evolved targeting.
Well, we’d like to introduce you to FM’s newest product suite, which (for now) we’re calling “Content Reachtargeting.” Internally, we like to refer to this effort as the “Reese’s Peanut Butter Cup” of marketing – you have your chocolate of high-quality content mixed with the peanut butter of programmatic retargeting. A perfect combination.
(To read more about it, head over to the FM site….)
One of the key themes in our upcoming book has to do with the interaction of information and the physical world – in particular, how all things physical become “liquid” when activated by just the right information. But when you’re writing (and thinking out loud) about this topic, it’s easy to fall into an academic cadence, because information theory is a thicket – just try reading “The Information” in one sitting, for example.
So I’ve found it’s best to just tell stories instead (and to be honest, I’d wager that nearly all information theory should be reduced to narrative, because narrative is how we as humans make sense of information, but I digress). Here’s a story that happened just this past weekend.
If you’ve been reading for more than a year, you know that I spend a good part of August working on an island off the coast of Massachusetts. It’s a special place where my great grandmother settled in the 19th century, the kind of place where you visit graveyards with your kids to remind them of their own history, then hit a carousel and ice cream shop in the afternoon.
Anyway, this year my time on the island is unfortunately brief, what with my coming back to FM and various other entanglements. So every day “on island” is a precious one. Last week my son and I went on an East Coast college tour, driving from Washington DC (Georgetown, American) through Pennsylvania (Bucknell, etc) to Boston (Northeastern, BU, etc). After touring MIT on Friday, we decided to come to the Island a day early, on Sat. That way we could open up the rental house, get the car, and prepare for the rest of our family – my wife and two daughters – who were flying in on Sunday.
On Sunday my son and I were settled into our rental, eagerly awaiting Mom and sisters’ arrival. They landed in Boston just fine, but hit a major hitch with their connection on a tiny airline called Cape Air. Now Cape Air’s largest plane seats about 8 people, as they specialize in one thing – hopping from Boston down to the Islands. Apparently some cross winds and rain fouled up the routes, and long story short, my family’s flight was delayed for two hours, then cancelled altogether (oh, and they lost my wife’s luggage too). By then, it was almost 9pm, and too late to drive the 80 miles down from Boston to the ferry in Falmouth, which is the only other way to get to the island. (The last ferry leaves at 9.45 pm).
We were all distraught – there they were, just 80 miles away, but with no way to get to us. We were going to lose one of our precious days on the island, and it just stunk.
Then a fellow stranded passenger mentioned a possible solution to my wife: There was a service in Falmouth that aggregated private commercial boats for use as water taxis. Maybe they’d be able to help?
My wife mentioned this to me, and sure enough, a quick Google search found them. At 8.30 pm I called the service and “Captain Jim” hooked me up with another lobster boat captain ready to take my family across the sound, even late at night. Awesome!
But we still needed to get the girls that 80 miles down to Falmouth, and it was past “business hours.” I’ve used a lot of car services in my business life, and I know that they are not exactly very flexible – you have to make a reservation well in advance, and they cost a lot of money for long drives. I wasn’t expecting much, but I started calling as many as I could find, asking if they had any cars near Boston’s Logan airport *right now*.
The car services I called acted exactly as one might expect. Two put me on interminable hold while they “checked to see if they had a car near the airport.” A third flat out refused to try. A fourth asked me to put the girls into a taxi and send them back into downtown Boston, where they might have a car. And so on.
That’s when I tweeted this out:
Hey @uber you need the ability for people to drop pins for their friends/family in other locations. I’d do that ALOT
— John Battelle (@johnbattelle) July 29, 2013
And this is when the story starts to get interesting, in terms of liquid, information-driven markets interacting with the physical world. It turns out, Uber *does* have the ability for someone to drop a pin remotely, I just didn’t know it – I didn’t have *the information* I needed. Twitter solved that in an instant, as one of my followers quickly clued me in about how to do it. In two minutes I was called by an Uber driver at the airport, who was ready to whisk my family down to the waiting lobster boat. An hour and a half later, I picked them up on a private dock near the house. What the private boat service and Uber did was take inactive, physical objects, in this case a Lincoln towncar and a fishing boat, and turn them into kinetic, liquid, real-time addressable assets. And the main reason this was possible? Information cycling through a digital foundation of cell phone towers, Google, and apps like Uber.
What’s even more interesting about this story are the economics: The cost to fly three people via Cape Air to the Island was nearly 25% *more* than taking Uber and the chartered water taxi. Add to this the fact that the Uber driver was far more friendly and eager to please than your typical car service guy, and the water taxi was both fun and nearly twice as fast as the ferry.
In short, a liquid, information-driven market of cars and boats created a cheaper, faster, and way more enjoyable experience for my family. That’s a great thing, to my mind, and it makes me optimistic about the coming liquid economy. But if I operated Cape Air or Carey Limousine, I’d be more than nervous right about now….
Yesterday I took my son to the MIT Media Lab, hallowed ground for me, as reading Stewart Brand’s 1988 “The Media Lab” propelled me toward helping to create Wired magazine, where I edited the founding Director of the Lab, Nicholas Negroponte, for five years (he wrote the back column of the magazine).
For this visit, I met up with David Kong, one of the lab’s alumni wizards, who took us on a whirlwind tour of the place (David’s work on microfluidics is, I believe, some of the most important stuff being done today, but more on that in another post). I spent a day there last summer with Director Joi Ito, and it’s amazing to see how much progress can be made in a year.
Instead of describing everything, I think I’ll let video do the work – one of the Lab’s core values is to always be demo’ing, and my son and I saw half a dozen incredible projects, all demo’d by the people who created them.
First up was the Opera of the Future group, which evolved out of the HyperInstruments lab. Akito Van Troyer gave us a tour of the components used in the recently staged “Death and the Powers” piece, which was a finalist for the Pulitzer in music. Here’s some video of that performance:
Akito also turned us onto a very cool beat machine he built as a side project, a hack based on actuators and tempo that turns anything into a percussion instrument. Here’s a video of that I found on YouTube:
After that we went to see Xiao Xiao, who works in the Lab’s Tangible Media group (this is the part of the Lab most directly connected to the themes of the upcoming book). She showed us the MirrorFugue, which is just amazing, in particular, to sit down at the keyboard as it’s playing. It’s magical, which is pretty much the goal of the entire Lab. Here’s a video of that:
You can probably sense a theme by now – all this work is about blurring the lines between physical and digital, atoms and bits. An extraordinary world is soon to be settled by pioneers in this space, and we’re all of us fascinated by it – it’s why we love the idea (if not necessarily the look) of Google Glass, or 3D printing (I met the co-founder of FormLabs while at the Lab), or cool gadgets like the NFC ring.
The Media Lab is a place where folks are actively creating the future. Over and over, I heard this refrain: “I took some off the shelf parts, hacked them together, and wrote some code.” Simple, right?
One example: Makey Makey, which went viral earlier this year with the “banana piano.” The idea is bigger than turning fruit into keyboards, however. It’s about making nearly anything physical a portal into the digital world, and bringing the digital right back into the physical. I met with Eric Rosenbaum, one of the creators, in his lab, which is called “Lifelong Kindergarten” (yeah, I know.) Here’s a short video about Makey Makey:
As the border between physical and digital gets more permeable, a new kind of literacy emerges. And that literacy is built on a foundation of code – whether it’s the codes of letters and words, or the code of bits and algorithms. Rosenbaum showed me Scratch, a graphical programming language used by hundreds of thousands of kids across the world. I’m determined to learn how to code, at least enough to be dangerous (I took classes in Pascal about 30 years ago…). Maybe Scratch is where I’ll start.
Next up I met Dan Novy, from the Lab’s Object-Based Media Group. He showed us a number of great projects he’s working on, including holo-presence (with a sense of humor, see photo at top) and new forms of augmented experience. Check out this video about redefining the home entertainment experience:
Dan also took us into a small room with a voice aware projection device in the center. Using his voice, Dan told a children’s story, and the four walls of the room lit up with visual images related to the storybook. It’s early days, but we discussed what might happen when this device is miniaturized and connected to consumer “narrative catchers” like Facebook, Path, Google Glass, and the like. Also next to the projector was an object – what it is doesn’t matter, but for this example it was a bottle of perfume – and when you pick up that object, “memories” related to that object are projected onto the walls. So imagine what might happen when you pick up that ornament from Christmas three years ago and hang it on the tree, and images from that Christmas past flash onto your home’s walls….
The last demo we saw was perhaps the most well known of Dan’s group’s work. In essence, they turned a basketball net into a data collection device, so as to measure the force of a slam dunk. The technology is amazing, watch Dan talk about it here:
The Media Lab is truly an extraordinary place, and seeing it with my son made it even more magical. I’ve toured it a few times now, but I’ll never tire of coming back. The work happening there is helping to define the world all our kids will be living in soon.
It’s been pretty obvious from the stock price, but LinkedIn, which I’ve written about every so often, is really on a roll lately. The influencer content play (which I will admit I’ve been part of, in a small way) is a clear winner, the company is enjoying very positive press, and its premium services are getting really interesting as well.
Just today I got an email from the company titled “What’s new with people you know?” I found it compelling in a way that emails from nearly every other service I use – Twitter, Facebook, or Google – are not. CEO Jeff Weiner tells me that this email has been sent out every six months for the past three years, but it’s clearly been redesigned as more of a media product. I care about my network on LinkedIn, and the email was full of pictures of people who really matter to me, all of whom have gotten new jobs. It’s one of the most engaging messages I’ve ever gotten from a “social network.” (In case you want some history, I called LinkedIn out as a media company more than a year ago here.)
I also found the focus on numbers very interesting. 10% of my network – which is pretty big – have gotten new jobs in the past six months. That’s quite an intriguing lens on how things are changing in our industry. LinkedIn has always been a data-centric company, and each time I speak with Weiner, he’ll cite engaging statistics his team has culled from the network’s servers. This rolls up to Weiner’s big hairy audacious goal (BHAG) for the company – to map the global “economic graph.” As he puts it:
…we want to digitally map the global economy, identifying the connections between people, jobs, skills, companies, and professional knowledge — and spot in real-time the trends pointing to economic opportunities. It’s a big vision, but we believe we’re in a unique position to make it happen.
It seems Wall Street agrees. I’ll be watching LinkedIn more closely over the coming year, and I bet Google and Facebook will be too. Hoffman, Weiner and team have built something that both those companies, and many more, must find quite enviable.
Sometimes when you aren’t sure what you have to say about something, you should just start talking about it. That’s how I feel about the evolving PRISM story – it’s so damn big, I don’t feel like I’ve quite gotten my head around it. Then again, I realize I’ve been thinking about this stuff for more than two decades – I assigned and edited a story about massive government data overreach in the first issue of Wired, for God’s sake, and we’re having our 20th anniversary party this Saturday. Shit howdy, back then I felt like I was pissing into the wind – was I just a 27-year-old conspiracy theorist?
Um, no. We were just a bit ahead of ourselves at Wired back in the day. Now, it feels like we’re in the middle of a hurricane. Just today I spoke to a senior executive at a Very Large Internet Company who complained about spending way too much time dealing with PRISM. Microsoft just posted a missive which said, in essence, “We think this sucks and we sure wish the US government would get its shit together.” I can only imagine the war rooms at Facebook, Amazon, Google, Twitter, and other major Internet companies – PRISM is putting them directly at odds with the very currency of their business: Consumer trust.
And I’m fucking thrilled about this all. Because finally, the core issue of data rights is coming to the fore of societal conversation. Here’s what I wrote about the issue back in 2005, in The Search:
The fact is, massive storehouses of personally identifiable information now exist. But our culture has yet to truly grasp the implications of all that information, much less protect itself from potential misuse….
Do you trust the companies you interact with to never read your mail, or never to examine your clickstream without your permission? More to the point, do you trust them to never turn that information over to someone else who might want it—for example, the government? If your answer is yes (and certainly, given the trade-offs of not using the service at all, it’s a reasonable answer), you owe it to yourself to at least read up on the USA PATRIOT Act, a federal law enacted in the wake of the 9/11 tragedy.
I then go into the details of PATRIOT, which has only strengthened since 2005, and conclude:
One might argue that while the PATRIOT Act is scary, in times of war citizens must always be willing to balance civil liberties with national security. Most of us might be willing to agree to such a framework in a presearch world, but the implications of such broad government authority are chilling given the world in which we now live—a world where our every digital track, once lost in the blowing dust of a presearch world, can now be tagged, recorded, and held in the amber of a perpetual index.
So here we are, having the conversation at long last. I plan to start posting about it more, in particular now that my co-author Sara M. Watson is about to graduate from Oxford and join the Berkman Center at Harvard (damn, I keep good company.).
I’ve got so many posts brewing in me about all of this. But I wanted to end this one with another longish excerpt from my last book, one I think encapsulates the issues major Internet platforms are facing now that programs like PRISM have become the focal point of a contentious global conversation.
In early 2005, I sat down with Sergey Brin and asked what he thinks of the PATRIOT Act, and whether Google has a stance on its implications. His response: “I have not read the PATRIOT Act.” I explain the various issues at hand, and Brin listens carefully. “I think some of these concerns are overstated,” he begins. “There has never been an incident that I am aware of where any search company, or Google for that matter, has somehow divulged information about a searcher.” I remind him that had there been such a case, he would be legally required to answer in just this way. That stops him for a moment, as he realizes that his very answer, which I believe was in earnest, could be taken as evasive. If Google had indeed been required to give information over to the government, certainly he would not be able to tell either the suspect or an inquiring journalist. He then continues. “At the very least, [the government] ought to give you a sense of the nature of the request,” he said. “But I don’t view this as a realistic issue, personally. If it became a problem, we could change our policy on it.”
It’s Officially Now A Problem, Sergey. But it turns out, it’s not so easy to just change policy.
I can’t wait to watch this unfold. It’s about time we leaned in, so to speak.
I had a chance to be interviewed with Fred Wilson by Dave Morgan of Simulmedia (and Tacoda and and and…). The video is fun and ranges around from OpenCo to the future of the Web, so I thought I’d share it here:
I was interested to read today that Esquire is currently experimenting with a per-article paywall. For $1.99, you can read a 10,000-word piece about a neurosurgeon who claims to have visited heaven. Esquire’s EIC on the experiment: “…great journalism—and the months that go into creating it—isn’t free. So, besides providing the story to readers of our print and digital-tablet versions of the August issue, we are offering it to online readers as a stand-alone purchase.”
I predicted that payment systems and paid services/content were going to take off this year (see here), but this isn’t what I had in mind. But it did get me thinking. What if you added social and elastic elements to the price? For example, the article would initially cost, say, $1.99, but if enough people decided to buy it, the price goes down for everyone. The more people who buy, the cheaper the price gets. It’d never go to zero, of course, but there’d be some kind of a demand/price curve that satisfies the two most important things publishers care about: readership (the more, the better) and revenue (ideally, enough to cover the costs of creation and make a fair profit).
The tools to do this already exist. There are plenty of sites that crowdsource demand to create pricing leverage, and sites like Kickstarter have gotten all of us used to the idea of hitting funding goals. And the social sharing behaviors already exist as well: Nearly all content has social sharing widgets attached these days. Why not combine the two? Those who initially paid the highest price – $1.99 say – would be motivated to share a summary of the article with friends and encourage them to buy it as well. They are economically incented to do so – the more friends who buy, the greater the chance that their initial $1.99 charge will decrease. And they’re socially incented to do so – perhaps they could get credit for being one of the early advocates or tastemakers who recognized and surfaced a great piece of content before anyone else did.
Let’s break down the economics to see how it might work. A really great piece of long form journalism in a magazine like Esquire pays around $15,000 (sometimes more, sometimes less, depending on the author, subject, length, and title). But for this model, let’s say the payment to the journalist is $15K. Then you need to factor in the cost of the editor, copy editor, production, sales and design, as well as general overhead of the publication per piece. Let’s call that another $5K per piece (I’m spitballing here but probably not too far off). So for this article to make a profit, it needs to make $20,000 – or sell roughly 10,000 copies. Of course, the article is also monetized through the regular magazine and tablet editions, so the real number it has to hit is probably far less – let’s cut it in half and say it’s $10,000. Now to clear a profit, the article really just needs to sell 5,000 copies at $1.99.
Let’s not forget that Esquire also shows advertising against its articles. If it maintains a healthy $25 CPM, and shows two “spread” (two-page) ads between those 10,000 words, that’s roughly $100 per 1000 readers that Esquire can make. If it indeed does sell 5,000 copies of that article, that’s $500 of advertising revenue earned. And if it gets more readers, it can earn more advertising revenue – and decrease the paid content price in some correlated fashion. (No matter what, Esquire wants more readers – both to increase its advertising revenue, but also to accomplish its journalistic mission – all authors want more readers).
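The ad side pencils out the same way. One way to arrive at the rough $100-per-1,000-readers figure above is to count each two-page spread as two ad impressions at the $25 CPM; that assumption, and the function itself, are illustrative only:

```python
def ad_revenue(readers, cpm=25.0, spreads=2, pages_per_spread=2):
    """Estimated ad revenue for one article: two two-page 'spread' ads
    at a $25 CPM works out to roughly $100 per 1,000 readers, if each
    page of a spread counts as an impression (an assumption)."""
    impressions_per_reader = spreads * pages_per_spread
    return (readers / 1000) * cpm * impressions_per_reader

print(ad_revenue(5000))  # 500.0 -- the $500 figure for 5,000 copies sold
```

Ad revenue scales linearly with readership, which is exactly why the publisher wants the paid price to fall as more readers commit.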
Perhaps a model could work like this: The piece costs $1.99 for the first 5,000 copies sold, garnering $10,000 in revenue (OK, $9,950 for you sticklers). Once that threshold is hit, the price adjusts dynamically to maintain at least $10,000 in overall revenue, but adjusting downward against the paying population as more and more readers commit (which also earns Esquire additional advertising revenue). A “clearing price” is set, perhaps at 50 cents, after which all profits go to Esquire. In this case, the clearing price kicks in at 20,000 copies sold – everyone would pay $0.50 at that point, and it’s a win win win for all.
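The elastic model itself fits in a few lines. This is a minimal sketch of the idea, assuming early buyers are rebated down so everyone ends up paying the same current price; the function name and parameter values are purely illustrative, not a real product:

```python
def price_per_copy(copies_sold, launch_price=1.99, launch_copies=5000,
                   revenue_target=10_000.0, floor=0.50):
    """Elastic pricing sketch: the price every buyer ends up paying once
    `copies_sold` copies have been sold (early buyers rebated down).
    The first 5,000 copies sell at $1.99; beyond that, the price floats
    down so total revenue holds near $10,000, until the 50-cent clearing
    price kicks in ($10,000 / 20,000 copies = $0.50)."""
    if copies_sold <= launch_copies:
        return launch_price
    return max(revenue_target / copies_sold, floor)

print(price_per_copy(10_000))  # 1.0 -- price halves as readership doubles
print(price_per_copy(25_000))  # 0.5 -- clearing price reached
```

Note that once the floor is hit, total revenue (copies × price) climbs past the $10,000 target, which is where the publisher’s profit beyond break-even comes from.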
Just spitballing, as I said, but I think it’s a pretty cool idea. What do you think?