Larry Page on AI

You know I trend toward the mystic when it comes to the emergence of AI, and in the book I explored the idea of Google using brute computation and comprehensiveness to allow AI to emerge in its network. Here (Cnet video) Larry Page discusses this very idea, ending with “it’s not as far off as many people think.” Thanks KK.

16 thoughts on “Larry Page on AI”

  1. This was the best net news in some time. It’s reasonable to assume that human-quality machine intelligences will be developed within a decade, and it’s reasonable to assume these will change society in profound and fundamental ways. Page notes that most human advances have come from technological improvements, so it’s hard to imagine that the advent of “better than human” intelligences and technologies will be anything but the biggest shift in human history.
    (enter wildly optimistic violin music here)

  2. Unfortunately, it’s not at all reasonable to assume that human-quality AI can be developed within a decade. That is, “strong AI” involving self-awareness/consciousness is not likely to happen in the current timeframe.

    Weak AI, on the other hand, may be the best bet, and may provide Google the practical applications that will be useful to their business.

  3. Oh, such nonsense. Having a huge amount of information does not make intelligence. In particular, it does not make human intelligence. AI has *always* been coming in the next 5 or 10 years, and it always will be, so long as people keep wishing machines would exhibit human-like intelligence. Machines are not humans, so why should they exhibit the same kind of intelligence? The Turing test is also nonsensical: something is intelligent because I can have a conversation with it? Please. I have conversations with *people* I know who are not intelligent!

    And please don’t think from this that I am against the kind of work that AI people do – they introduce many useful and interesting programming techniques. I just wish they wouldn’t wrap it in the human baggage. (AI has a tendency to be a religion rather than a science.)

  4. Soren, I’m on your side. As one of the many people clamouring to achieve semantic search, I’ve found that true AI is simply not a fast enough route to producing search results yet.

    Using statistical analysis of vast amounts of data, though, gives an impression of understanding, and it’s something I use to pretty good effect.
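
    To give a flavour of what I mean (a toy sketch only, assuming nothing more than word counting – not the system I actually use), even a plain bag-of-words cosine similarity, with no model of meaning at all, ranks documents in a way that can feel “semantic”:

    ```python
    # Toy sketch: rank documents against a query by cosine similarity over
    # raw term-frequency vectors. There is no understanding here, only
    # counting words, yet the ordering can look surprisingly "semantic".
    from collections import Counter
    import math

    def vectorize(text):
        """Lowercased bag-of-words term frequencies."""
        return Counter(text.lower().split())

    def cosine(a, b):
        """Cosine similarity between two term-frequency vectors."""
        dot = sum(a[t] * b[t] for t in a if t in b)
        norm = math.sqrt(sum(v * v for v in a.values())) * \
               math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    docs = [
        "machines that exhibit human like intelligence",
        "statistical analysis of vast amounts of data",
        "the turing test measures conversation not intelligence",
    ]
    query = vectorize("can statistics give an impression of intelligence")

    # Rank purely by word-overlap statistics.
    for doc in sorted(docs, key=lambda d: cosine(query, vectorize(d)), reverse=True):
        print(round(cosine(query, vectorize(doc)), 3), doc)
    ```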

  5. Silver – he was obviously talking about strong AI. Page noted that it’s unlikely the human intelligence “algorithm” is as unfathomably complex as many here seem to think.

    Certainly the human *ego* is incomprehensibly large, but the ego is not a critical component of intelligence. Machines exceed our abilities in many realms already – it takes no great leap of faith to assume that machines will eventually become conscious – perhaps even as soon as the number of interconnected processes approaches the number found in a human brain.

    More relevant than *when* we’ll have conscious computing is what we and the conscious machines will do as they begin to provide us with the means to optimize resources and manage things.

    Luis – I don’t know, but I assume he’s looking at the size of a file that would store the “information blueprint,” which is about 3 billion nucleotide pairs, or roughly 26,000 genes. Since it’s becoming increasingly clear that most of our DNA is not of informational significance, he’s probably too high if this is the approach (rough numbers at the end of this comment).

    I’m curious – do those who think human quality intelligence is such a big deal also think artificial mouse quality intelligence is unattainable?
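
    For what it’s worth, here’s the back-of-the-envelope version of that file-size estimate – a rough sketch that assumes 2 bits per base and ignores compression, the non-coding question above, and everything epigenetic:

    ```python
    # Back-of-the-envelope estimate of the raw information content of the
    # human genome: ~3 billion base pairs at 2 bits per base (log2 of the
    # four nucleotides A, C, G, T).
    base_pairs = 3_000_000_000   # approximate human genome length
    bits_per_base = 2            # 4 possible nucleotides

    total_bits = base_pairs * bits_per_base
    total_bytes = total_bits / 8

    print(f"{total_bytes / 1e6:.0f} MB uncompressed")  # ~750 MB
    ```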

  6. Harish, I just read your blog post but did not understand why you think Page is “inaccurate”, and I *am* a biology guy. He was speaking figuratively more than literally. Of course the brain is a biological processor, but many believe there are enough similarities to mechanical processes that a synthetic, human-quality intelligence is attainable. What’s the big deal here – do people think humans have some sort of copyright protection on our feeble intellects?

  7. Joseph, Sir Roger Penrose, the esteemed mathematician/physicist, outlined the complexities of intelligence in his book The Emperor’s New Mind. It’s not at all as simple as Page’s brief mentions might lead people to believe – Penrose’s philosophical and scientific exploration makes that clear.

    I think that getting a machine to pass a Turing Test might be a relatively simple hurdle – but I don’t think that would really be a test for strong AI.

    But it would be easy for them to say that they’ve attained AI if they don’t first tell people what definition of AI they’re using, and what metrics they might use to measure whether they’ve achieved it…

    I think Harish makes some good points. Page sounded as if he confused the amount of data stored in DNA with the amount of memory or processing power available in the human brain. While he was giving an informal speech and made other good points about scientists needing to better promote and communicate the value of their work, he was speaking to an audience of scientists and probably ought to have been a bit more accurate and used fewer mixed metaphors.

    Perhaps he meant that DNA is relatively small, yet packs enough data to program a human into existence. Unfortunately, that misses the fact that DNA alone is not the complete picture of a software program for bringing a human into being. There are a whole lot of other developmental cues and instructions which go along with the DNA in making a human. DNA is certainly also not the program that’s booted up to run the human mind. If that’s what was being implied, it’s a gross oversimplification and underestimation of how consciousness comes into existence.

  8. Right, but the relative simplicity of DNA and brain structures means they should be able to reverse-engineer the human brain relatively soon.

    Kurzweil reasonably suggests this is the most likely path to creating a conscious machine. Penrose is brilliant in physics, but his views about AI are not generally accepted by leaders in the field.

    It sounds like you are suggesting there is some magical, impossibly complex element to animal intelligence. Where is there ANY evidence for that? Sure, it’s possible we are magically brilliant, but it seems *very* unlikely. My money is on the machines.
