The Singularity Is Weird

It’s been a while since I’ve posted a book review, but that doesn’t mean I haven’t been reading. I finished two tomes over the past couple of weeks: Ray Kurzweil’s The Singularity Is Near and Steven Johnson’s Where Good Ideas Come From. I’ll focus on Kurzweil’s opus in this post.

Given what I hope to do in What We Hath Wrought, I simply had to read Singularity. I’ll admit I’ve been avoiding doing so (it’s nearly six years old now) mainly for one reason: The premise (as I understood it) kind of turns me off, and I’d heard from various folks in the industry that the book’s author was a bit, er, strident when it came to his points of view. I had read many reviews of the book (some mixed), and I figured I knew enough to get by.

I was wrong. The Singularity Is Near is not an easy book to read (it’s got a lot of deep and loosely connected science, and the writing could really use a few more passes by a structural editor), but it is an important one to read. As Kevin Kelly said in What Technology Wants, Kurzweil has written a book that will be cited over and over again as our culture attempts to sort out its future relationship to technology, policy, and yes, to God.

I think perhaps the “weirdness” vibe of Kurzweil’s work relates, in the end, to his rather messianic tone – he’s not afraid to call himself a “Singularitarian” and to claim this philosophy as his religion. I don’t know about you, but I’m wary of anyone who invents a new religion and then proclaims himself its leader.

That’s not to say Kurzweil doesn’t have a point or two. The main argument of the book is that technology is moving far faster than we realize, and its exponential progress will surprise us all – within about thirty years, we’ll have the ability not only to solve most of humanity’s intractable problems, but to transcend our bodies, upload our minds, and achieve immortality.

Or, a Christian might argue, we could just wait for the rapture. My problem with this book is that it feels about the same in terms of faith.

But then again, faith is one of those Very Hard Topics that most of us struggle with. And if you take this book at face value, it will force you to address that question. Which to me, makes it a worthy read.

For example, Kurzweil has faith that, as machines get smarter than humans, we’ll essentially merge with machines, creating a new form of humanity. Our current form is merely a step along the way to the next level of evolution – a level where we merge our technos with our bios, so to speak. Put another way, compared to what we’re about to become, we’re the equivalent of Homo erectus right about now.

It’s a rather compelling argument, but a bit hard to swallow, for many reasons. We’re rather used to evolution taking a very long time – hundreds of generations, at the very least. But Kurzweil is predicting all this will happen in the next one or two generations – and should that occur, I’m pretty sure far more minds will be blown than merged.

And Kurzweil has a knack for taking the proven tropes of technology – Moore’s Law, for example – and applying them to all manner of things: human intelligence, biology, and, well, rocks (Kurzweil calculates the computing power of a rock in one passage). I’m in no way qualified to say whether it’s fair to directly apply lessons learned from technology’s rise to all things human, but I can say it feels a bit off. Like perhaps he’s missing a high-order bit along the way.
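A quick bit of compounding arithmetic shows why extrapolations like Kurzweil’s feel so dramatic. Here’s a minimal sketch – the 18-month, Moore’s-Law-style doubling period is my illustrative assumption, not a figure from the book:

```python
# Sketch of the compounding arithmetic behind exponential-growth claims.
# The 18-month doubling period is an illustrative assumption, not a
# number taken from Kurzweil.

DOUBLING_PERIOD_YEARS = 1.5  # assumed Moore's-Law-style doubling time

def growth_factor(years: float) -> float:
    """Total capability multiplier after `years` of steady doubling."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

# Thirty years at this pace is twenty doublings -- roughly a millionfold.
print(f"{growth_factor(30):,.0f}x")  # -> 1,048,576x
```

Twenty doublings in thirty years is about a millionfold increase – that’s the curve Kurzweil rides well beyond chip fabrication, into biology and minds.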

Of course, that could just be me clinging to my narrow-minded and entitled sense of Humanity As It’s Currently Understood. Now that I’ve read Kurzweil’s book, I’m far more aware of my own limitations, philosophically as well as computationally. And for that, I’m genuinely grateful.

Other works I’ve reviewed:

The Corporation (film – my review).

What Technology Wants by Kevin Kelly (my review)

Alone Together: Why We Expect More from Technology and Less from Each Other by Sherry Turkle (my review)

The Information: A History, a Theory, a Flood by James Gleick (my review)

In The Plex: How Google Thinks, Works, and Shapes Our Lives by Steven Levy (my review)

The Future of the Internet–And How to Stop It by Jonathan Zittrain (my review)

The Next 100 Years: A Forecast for the 21st Century by George Friedman (my review)

Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100 by Michio Kaku (my review)

28 thoughts on “The Singularity Is Weird”

  1. I haven’t read the book, but I have read the arguments for the Singularity. And the main argument – more computing power will solve everything – reminds me of John von Neumann’s prediction in the ’50s.

    Von Neumann indeed thought we would not only be able to predict the weather but that we could *control* it. Sixty years later, we’re not even close to accurately predicting the weather. Turns out the world around us is more complex than we expected.

    And it might be the same for the human mind. A computer can easily simulate a network of a hundred neurons. But put billions of neurons together to make a brain and you end up with a consciousness, a mind, a soul, or whatever you want to call it. Where does that come from? Is it a side effect of electricity? Of a chemical reaction in the brain? Something else? Nobody knows.

    I’m not saying Singularity won’t happen. But I’m always skeptical of any grand prediction.

    1. And Ray has an answer for every single objection, I promise you. Whether his answers are convincing is another matter – I just don’t know.

      1. And it’s OK not to know. The wise man acknowledges his ignorance and doesn’t try to come up with “predictions” to fill the void.

  2. Penrose-Hameroff Orch OR model:

    “Most explanations portray the brain as a computer, with nerve cells (“neurons”) and their synaptic connections acting as simple switches. However computation alone cannot explain why we have feelings and awareness, an “inner life.” We also don’t know if our conscious perceptions accurately portray the external world. At its base, the universe follows the seemingly bizarre and paradoxical laws of quantum mechanics, with particles being in multiple places simultaneously, connected over distance, and with time not existing. But the “classical” world we perceive is definite, with a flow of time. The boundary or edge (quantum state reduction, or “collapse of the wave function”) between the quantum and classical worlds somehow involves …

      1. “Dumb” as in silence? You might appreciate John Muir’s description of singularity.

        “When we try to pick out anything by itself, we find it hitched to everything else in the Universe.

        “Plants, animals, and stars are all kept in place, bridled along appointed ways, with one another, and through the midst of one another — killing and being killed, eating and being eaten, in harmonious proportions and quantities.”

        And it’s another way to apply the “Eat the Meat and Spit Out the Bones” principle.

      1. People I’ve spoken with have the tendency to hear the most fantastical of the scenarios that the singularity enthusiasts lay out and are overwhelmed.  

        They have a (reasonable) reaction: “You’re crazy. You’re telling me that you think that robots are actually going to go all Decepticon on us and begin creating super-intelligent, human-eating robots in 20 years, and thus take over the world? All because of Siri? You’re completely nuts, and so I’m going to dismiss pretty much anything you’re saying.”

        I like to eat the meat…trying to pick apart those fantastical visions and look for where there are hard truths emerging. 
        As an example, I put all of my calendar events into Google Calendar and all of my phone numbers into my iPhone.  I archive all of my conversations in Gmail and tag them for searching later and I maintain all of my relationships in Facebook.  I write down everything I want to remember in Google Docs, and all of my critical documents are stored on DropBox.  I know I don’t have to remember formulas, definitions, or concepts, because I can look them up via Search.  

        Thus, I have basically uploaded a huge portion of my brain functioning to the cloud.  In a sense, I am storing my memory in the cloud.  

        Moreover, all of my tasks are stored in my cloud-based task management services, which, I have often joked, helps relieve my brain’s RAM requirements. I don’t have to have a jumble of tasks rolling around and constantly re-prioritizing themselves in my brain… so I can focus on the task at hand. Thus, a huge % of what I need to remember is now on the cloud & I have already trained myself to stop worrying about remembering this stuff.

        Stressing about everything I have to do tomorrow?  Just put it in my task manager & forget about it.  Get a new email from a friend?  Quickly connect with them on LinkedIn and forget about it.  Propose a meeting?  Send out that calendar invite & I’ll think about it when my iPhone pings me three hours in advance.

        So the fanciful Singularity talk that’s a huge turnoff to some says “someday some type of embedded thing will download our brain to the web…and we will all become super-computers because we will be talking to computers through that magic brain chip!!”.  That seems unrealistic.  

        But the reality is, I have already manually input (or now dictated, or copy-pasted) much of both the things I used to remember (Memory) and things I used to worry about (RAM), and they are now sitting on a server somewhere.  

        And that’s me. I was 6 when we got our first Apple IIe. What about the 11-year-old who learned to search before they could read?

        So…I spit out the bones:  threatening humanoid robots.  And I eat the meat:  medical and technological advances are doing very interesting things to the way we live our lives and are slowly affecting the essence of what it means to be human.

      2. I would argue we store a very SMALL percentage of our brain on the cloud. What stays in our heads is the part we notice the least: how to walk, speak, get dressed, open a door, or drive a car. How to turn on a computer and where the ‘W’ key is on the keyboard. Which kitchen cabinet the cereal is in, what not to say in polite company. Memories from our childhood, lessons we’ve learnt, etc. And that’s just the memory part. The cognitive part is a whole other matter and is not on the cloud.

        And yes, technological advances are doing very interesting things. But technological evolution is anything but linear, and it tends to lead us to unexpected places. In the ’60s people had great hopes for the space age. Sci-fi up to the ’80s imagined the year 2000 with robots living among us and everybody having their own private space ship. But the walkie-talkies would still be huge and there would be no Internet. Turns out that space travel wasn’t the expected boom. On the other hand, satellites are helping us in unexpected ways, like GPS.

        Right now there is a lot of buzz around nanorobots, but they are science fiction at this point, and we don’t know what actual use we’ll have for them. Maybe they will help us get a copy of our brain, but maybe they will be used for an entirely different purpose.

      3. I agree with your points.  Especially the part about the patterns of discovery being unpredictable.  

        We don’t all have robots in our homes (Roomba be darned). But robots are starting to do interesting things in even more sensitive arenas, like surgery. They have indeed become an important part of manufacturing/industrial processes… and remember the robots that were used at Fukushima?

        You’re probably right that it’s technically a very small % of the brain that is on the cloud.  But it’s also the portion of the brain that I’m most aware of.  I certainly need my brain to remember how to tie my shoes and chew…and you’re right, that’s probably computationally much more difficult than remembering all of my friends’ phone numbers.  

        However, if the stuff that I am not really aware of happening (my brain working to help me tie my shoe) is still done in my brain, while the stuff I am REALLY aware of (like what to do, and who to do it with), is on the cloud…then a materially significant % of my brain function is on the cloud.  

        Take, for example, who we interact with.  Now, through Facebook algorithms, a non-zero % of our decision making about who we stay in touch with is auto-controlled through Facebook.  We may still use our own brains to remember how to type, how to smile, how to feel, etc.  But if Facebook has a huge impact on WHO we stay in touch with, then that small % of the part of our functioning which is exported to the cloud is very materially impactful on our lives.

      4. I’d say it’s a pretty minor portion of your brain, but it’s interesting and it’s growing…

  3. The Singularity is an old idea, especially in science fiction.  But Kurzweil articulates it very well.

    It is one of those ideas that you will now gnaw on for the rest of your life, John.  “Eat the meat and spit out the bones” is a great metaphor.

    Some random observations from my thoughts about this idea:

    – William Gibson, in his BBC interview last year, said that cyberspace and reality are merging. I agree.

    – There is lots of great sci-fi written about this.  Ask one of your sci-fi addict friends to suggest a few if you want to read more.  Much modern sci-fi is more accurately called speculative fiction – really smart authors think about social or technological concepts and extrapolate them to the future, so the rest of us can think and discuss them.  (Much advanced, modern speculative fiction just assumes self-aware computers and moves on.)

    – Most arguments against intelligent design advanced by scientists don’t disprove it – they just try to demonstrate that it isn’t necessary.  But that’s not the same as disproving it. So maybe there is some special sauce in humans.  But “if a tree falls in the woods and no one is around, does it make a sound?”  The answer of course is it doesn’t matter.  If something is not discerned, ever, it doesn’t exist.  We discern a tiny amount of the world around us. (Remember, Buddha taught that God is too big an idea for people with minds our size to understand, so we shouldn’t waste time speculating about it.)

    – The AI guys I worked with in the ’80s always reminded me that the question is not whether machines think, but more importantly, do humans think?

    – This highlights the biggest problem I see w/ the Singularity.  What does it mean to think?  What does it mean to be human?  We don’t understand either of those questions very well.  Are we going to make a self-aware silicon life-form that is a Hitler or a Gandhi?  How would we tell the difference while we are building it?

    –  I think the problem above is the same problem we have for the 21st century: what kind of a future are we going to build out of our increasingly connected world? So by solving that problem, we’ll be better equipped to tackle the task of building a Gandhi, rather than a Skynet.

    – In line w/ Gibson’s observation, the notion of augmented thought has been around in sci-fi for a long time.  That is, we will someday build a high-bandwidth interface between a human and a computer.  The work in direct neural interfaces for prosthetics moves us in this direction.  An example of the UI is that you would look at a math equation, the computer picks it up in your visual field, recognizes it, solves it, and injects a floating image of the answer back into your visual field.  Obviously this could get much more sophisticated.  If you’ve ever taken psychedelic drugs, you have some feeling for how this might seem.

    – I am convinced that we will evolve into silicon-based life forms, just not in 2043.

    – Space travel will be infinitely easier when we have evolved to silicon.  That is why I no longer support the manned space program, centered around the carbon version of humans.  The money is better spent on robotic instruments.

    – Having said all that above, I am glad that I read Kurzweil.  I admire him tremendously and rank him up there with Buckminster Fuller and Kevin Kelly.

    1. Oh yes, indeed. I was well aware of it before, and have read just enough sci-fi (in my Wired days) to appreciate it. Time to read more!

  4. Let me grouse a bit: we’ve had 2,000 years of bright and not-so-bright people of faith making up the Christian tradition. The rapture folks are an easy dig, because the concept is so farcical, but few throughout religious history would even be aware of the silly idea. I’m not sure why the rapture idea originated in the late 1800s in Britain and found a life in modern and postmodern America. Most thoughtful people of faith are embarrassed by the association.

      1. No offense taken… and you’re probably right to compare the emotion of the two. Just saying …

  5. Is “Singularity” a religion? It seems more like a collection of graphs all pointing, hockey-stick-like, into an inevitable future. A future where our biology collides with our technology, one that will create experiences indistinguishable from reality, but in controlled, programmed settings. Reality will still be reality, but co-generated by our technologies. All within a generation or two. My view is that this future is inevitable, and that if we are just a few years away from it, how do we know we aren’t already living in that “post”-Singularity future, and have been for some time? 10,000 years ago we were still living in caves, on the brink of discovering agriculture and creating civilizations, and science, etc. That’s a remarkably short time to make such amazing progress.

  6. It’s been a while since I read Singularity, but I think the thing that bothered me most about it is Kurzweil’s fix on death as being tragic and that the Singularity solves that ‘problem’ by giving human beings a shot at life ever after.  Hm. Maybe.  What this idea does hint at tho, and what technology is forcing us more and more to consider, is the definition of a self.  The prevailing assumption is that the self is some kind of personality inside a body. When it dies, so do we. Not to say that isn’t totally true, but maybe instead of being just inside our bodies, our bodies are also inside of us.

    Currently our bodies command most of the attention, but more and more our selves are becoming distributed over media – asynchronously, in real time, in a tweet, a post, on a tablet, etc., you name it. Not that that wasn’t true pre-internet. Even long before electronic media, a self was defined as much by family, culture, language, etc. then as it is now. It’s just that technology is forcing us to confront this notion of a distributed self much more profoundly than ever before, and we seem to be pretty eager to dive right in, given the ease with which we’re willing to surrender our private lives to the public web.

    So if a self isn’t body bound (only), what the heck is it? And where will it live 40, 140 or 1040 years from now? In Kurzweil’s utopia, in a machine. Personally, I think that’s just barely scratching the surface…
