I have been thinking about this a lot: how we are finally taking technology and making it serve our evolution through the two major breakthroughs of being human. Our fingers offer finely tuned gesticulation as a reflection of our minds, and our voices, again, finely tuned expression of our minds.
Pattie is on to something here.
This post is really a bookmark of sorts, for more thinking that I’ve been doing about how this relates to search, real time search, and interface.
Props to Jeff Kravitz (among others), who reminded me how important this is. Jeff is a wonderful photographer; check out his work here.
Great thinking on all the possibilities of this stuff.
I don’t know John. I’d put my money on VR eyewear. This Vuzix model is coming out later this year.
http://blog.wired.com/gadgets/2009/01/new-vuzix-vr-gl.html
As well as the MyVu Shades, also in the next few months.
http://www.myvu.com/Myvyu-Shades-C25.aspx
Someone will do for VR Glasses what Apple did for mp3 players. These will hit the shelves this year. Two or three product generations later and they’ll hit the mainstream.
Singularity Hub recently did a post on the next generation of human computer interfaces which included sixth sense, but also much more:
http://singularityhub.com/2009/03/04/the-next-generation-in-human-computer-interfaces-awesome-videos/
Interesting stuff. The interface still seems rather kludgy. I think the ultimate goal is a chemically based synaptic interface. But that makes one wonder about the future of malware and viruses.
Who will be the Norton Utilities of the future?
It’s great to see substantive work coming out of the Media Lab. This technology raises more questions for me than it answers; the automatic inference of what is contextually useful seems very tailored to a demo path. Still, it’s an impressive interface, and I could see it evolving into something useful.
That’s just awesome stuff. I’ve seen videos around of companies playing with gesture control attached to stands and displays, but to take it with you is just brilliant, isn’t it? Did they say what sort of mobile it would be attached to? It would have massive educational implications too: teaching you to cook, operate, build, play sports, etc. Can’t wait… but you can keep it out of my brain, thanks.