Speculation on What’s Next For Google Desktop

From Boing Boing:

BoingBoing reader Adam says,
There’s some idle suspicion that Google intends to expand their functionality to include sharing of desktop files. This seems pretty likely given their acquisition of Picasa, which included something called “Hello” – an IM-like application for chatting and sharing pictures. Moreover, if they decide to merge this with orkut, to allow file sharing just with your friends network, then that’s a pretty compelling offering.

HOWEVER… The orkut terms of service are still extremely unfavorable to the end user. This is not too bad when it just applies to your profile and to chats on their message boards. It is REALLY bad when it applies to other forms of personal content that may be shared using the system.

For some reason I found myself thinking late last night about what it means when there are millions of local Google HTTP servers running on millions of individual PCs, yearning to be connected. What kind of innovation might spring from such an ecology? Uploading your local content – just the content you choose to share, of course – to Google’s index is one possibility. If you could do that, you could pretty much wipe out hosted solutions for micropublishing (i.e., blogs), for example. Your machine – given advances in broadband and computing power – would become your web server, just as it was in the beginning, when there was so little traffic on the web (before the hosting business took over…). GDS could also lay down the framework for some killer distributed computing hacks, the kind of thing Google has demonstrated an affinity for in the past (Google Compute is built into the Toolbar).
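GDS really does run a small web server bound to the local machine; the “your machine becomes your web server” idea can be sketched with nothing but Python’s standard library. Everything below – the index format, the JSON endpoint, the function names – is illustrative guesswork, not GDS’s actual interface:

```python
import json
import os
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

def build_index(root):
    """Walk a directory and record each file's path and size --
    the kind of local index a desktop search tool might build."""
    index = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            index[path] = os.path.getsize(path)
    return index

class IndexHandler(BaseHTTPRequestHandler):
    """Serve the index as JSON, bound to 127.0.0.1 only, mirroring
    the way GDS keeps its results pages on the local machine."""
    index = {}

    def do_GET(self):
        body = json.dumps(self.index).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

def serve(root, port=0):
    """Index `root` and serve it on localhost; port 0 picks a free
    port. Returns the running server object."""
    IndexHandler.index = build_index(root)
    server = HTTPServer(("127.0.0.1", port), IndexHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

The interesting step – pushing a selected slice of that index up to a central service – is exactly what the speculation above is about; the sketch stops at the local half.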

But thoughts fail me. What comes to mind from you all when you think of a world where all our hard drives can seamlessly be connected to the Mother Index/Platform?

10 thoughts on “Speculation on What’s Next For Google Desktop”

  1. Well, closing the feedback loop for one thing: http://www.brendonwilson.com/ideas/p2psearch/

    This would get especially interesting when crossed with social networking – after all, who cares what some random stranger thought might be relevant? People don’t pick their friends in a vacuum, so there’s a strong likelihood that the things I’ve searched for (and found) might also serve the searching needs of my friends.
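Brendon’s idea of closing the loop through a friends network can be sketched in a few lines – assuming, purely for illustration, that each friend’s client could share the set of URLs they searched for and kept:

```python
def rerank_with_friends(results, friend_finds, boost=1.0):
    """Re-rank a global result list by how many friends have
    previously searched for -- and kept -- each URL.

    results: URLs in the engine's original order.
    friend_finds: dict of friend name -> set of URLs they found useful.
    boost: how many rank positions one friend endorsement is worth.
    (The data structures are illustrative; no real P2P protocol is implied.)
    """
    def score(pair):
        rank, url = pair
        endorsements = sum(1 for finds in friend_finds.values() if url in finds)
        # Lower scores sort first: original rank, discounted per endorsement.
        return rank - boost * endorsements

    ranked = sorted(enumerate(results), key=score)
    return [url for _rank, url in ranked]
```

With a high enough boost, a result two friends kept climbs past everything the engine ranked above it; with boost near zero, the global order wins.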

  2. Unique personalised search results. Why wouldn’t this be possible if Google knows what kind of information I (a) have searched for in the past, and (b) have stored on my PC (documents, emails, etc.)? It could then use this as one of the ranking/relevance/theme factors to overlay standard SERPs to provide me with a set of unique and personalised results.
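A minimal sketch of that overlay, assuming – as a simplification – that local documents reduce to a bag of terms and each result carries a snippet (Google’s actual ranking factors are of course unknown here):

```python
import re
from collections import Counter

def local_profile(documents):
    """Build a crude interest profile: term frequencies across the
    user's local documents (files, mail, etc.)."""
    counts = Counter()
    for text in documents:
        counts.update(re.findall(r"[a-z]+", text.lower()))
    return counts

def personalize(serp, profile):
    """Overlay personalization on a standard SERP: results whose
    snippets share more terms with the local profile rise.

    serp: list of (url, snippet) pairs in the engine's original order.
    """
    def score(item):
        rank, (_url, snippet) = item
        terms = set(re.findall(r"[a-z]+", snippet.lower()))
        overlap = sum(profile[t] for t in terms)
        return rank - overlap  # lower sorts first

    ordered = sorted(enumerate(serp), key=score)
    return [url for _rank, (url, _snippet) in ordered]
```

The point of the sketch is where the profile lives: computed on the desktop, it could re-rank results without the raw documents ever leaving the machine.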

  4. For some reason I don’t personally see Google merging Orkut with desktop search – but the idea of Google setting up network file-sharing and hosting for microsites – now that’s a different idea, and one that would make a lot of sense.

  5. The New Moore’s Law

    “The affordability and ever increasing capacity of data storage is affecting all aspects of business and daily life. Peter Cochrane predicts what’s coming next and what sort of changes will occur once the petabyte is commonplace.”

    Peter Cochrane has some wonderful observations:

    “…TB storage systems anywhere. It seems that Moore’s Law has migrated to storage. We are now within an ace of the TB PC and laptop…”

    “Only four years ago my son built and installed our first terabyte (TB), or about 1,000GB, home storage server at a cost of $4,000. A month ago we installed the second TB for only $600. What a drop in price and physical size – two parallel drives instead of five.”

    “We are now within an ace of the TB PC and laptop. My guess is that by 2015 we will also be using around 10GB of RAM. You may think this ought to be enough for most individuals and businesses but when you imagine movies may soon be distributed digitally at around 6 to 12GB each, you can see it all being eaten away. And how about all 26 million books in the Library of Congress? It will need around 1TB too.

    The technology to realise hard drives up to and beyond 100TB is emerging in the R&D labs whilst petabyte (PB), or about 1,000TB, systems have already been engineered at great expense for specialised applications. I see no limit on the horizon to the growth of information storage as we have yet to enter the quantum world of the really small.”

    “But I think there is a more fundamental question we should be asking. We will be faced with an infinite and distributed sea of bits – much of which will not be itemised, catalogued, categorised or labelled in any way. Moreover, there will be duplication and storage corruption on a global scale. The really big question is: how the heck are we going to identify and locate anything in this mostly uncharted sea?

    Will today’s search techniques fit the bill? I think not – they are far too crude. We are going to need much higher degrees of sophistication to the point of machine cognition in recognising scenes, contexts and relationships in our data. The first machine capable of realising such a capability will probably arrive around 2010 in the form of an adaptive supercomputer and, given our rate of progress, on a PC scale by 2030 or earlier.

    Now for the kicker: I cannot see how a fully centralised or fully distributed system spanning the global network will satisfy our future needs. Seems to me an agent-based hybrid scheme is the only contender that will do. Right now I don’t see how we are going to do that but artificial life plus artificial intelligence in an evolvable form seem to be strong contenders.”

    http://hardware.silicon.com/storage/0,39024649,39125132,00.htm
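Cochrane’s own figures make the trend easy to quantify – $4,000 for the first terabyte, $600 for the second, four years apart:

```python
# Cochrane's figures: a 1 TB home server cost $4,000 four years
# before the article, and $600 at the time of writing.
old_price, new_price, years = 4000.0, 600.0, 4

cost_per_gb_old = old_price / 1000  # roughly $4.00 per GB then
cost_per_gb_new = new_price / 1000  # roughly $0.60 per GB now

# Implied compound annual rate of price decline over the four years.
annual_decline = 1 - (new_price / old_price) ** (1 / years)
```

That works out to prices falling by a bit under 40% a year – which is the arithmetic behind his guess that terabyte PCs, and eventually petabyte systems, arrive on schedule.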

  6. Google Broadband

    “But looking at Google Desktop and its local web server comes a more intriguing thought. How about partnering with or acquiring a large ISP/WISP (say, Earthlink) to deliver an affordable service bundle with symmetrical bandwidth, static IPs, reliable DNS, and self-publishing with Blogger, Picasa and Hello. Let millions of personal web servers bloom and piggy back on that big wave of user-generated content.

    Google would basically reindex their customers’ sites (just a directory on their desktop really) on the fly, and share the results with the rest of the world (or not) based on user settings (do not confuse the wedding pictures and the honeymoon video, ok). And now it makes sense to give software for free because you have other ways to bill consumers and learn about them. How’s that for increasing targeted ad inventory while diversifying your revenue sources, and wiring yourself into people’s life as well as within the fabric of the internet?”

    http://www.oliviertravers.com/archives/2004/10/18/outofleftfield-idea-of-the-day-google-broadband/

  7. To follow up on Brendon’s comment – it would be useful to be able to filter/cross-reference Google search results against your P2P search results. That way the result set is pruned to the stuff others had something to say about.
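That cross-referencing step is essentially an ordered intersection – a sketch, assuming each peer can expose the set of URLs it has surfaced:

```python
def xref(global_results, peer_results):
    """Prune a global result list to entries at least one peer has
    also surfaced, preserving the engine's original order.

    global_results: URLs in the engine's original order.
    peer_results: list of sets of URLs, one set per peer.
    (Illustrative only; how peers would actually publish these sets
    is exactly the open question in the thread above.)
    """
    seen_by_peers = set().union(*peer_results) if peer_results else set()
    return [url for url in global_results if url in seen_by_peers]
```

Unlike the boosting approach in comment 1, this is a hard filter: anything no peer has touched disappears entirely, so it trades recall for the guarantee that someone had something to say about every hit.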
