The Singularity


I recently watched “Transcendent Man” and found it entertaining as well as thought-provoking. I don’t know that it gave me much new insight or taught me something I didn’t know, but it definitely forced me to appreciate the complexity of the world we live in, and how easily we can develop systems that slip out of our control. Some people, like Sam Harris, have gone on to say that AI risk is the most important question of our time. I don’t know if I would go that far, but at this pace, it may become that soon enough.

http://www.imdb.com/title/tt1117394/

Here’s a description.

Ray Kurzweil uses the “singularity” analogy to illustrate a fundamental point: that it will mark the beginning of an entirely new paradigm of human existence, one that is infinitely more complex than ours today, and one in which human beings will merge with AI to become immortal.

The documentary constantly switches between two points of view. One advocates Kurzweil’s hypothesis and reinforces his authority on the subject by referencing his past achievements and successful predictions. The other criticizes him for being too optimistic, and brings up his father’s death to suggest the underlying motivation for that optimism.

The basic premise, according to Kurzweil, is this. And from a purely armchair-philosophical point of view, it makes perfect sense. Scientific change has been happening for a few hundred years. For the majority of that period, progress was fairly slow. Today, things are starting to pick up, and the time it takes for breakthrough innovations to occur in any given field is shrinking exponentially. This builds on Moore’s observation from decades ago.

Moore’s law refers to an observation made by Intel co-founder Gordon Moore in 1965. He noticed that the number of transistors per square inch on integrated circuits had doubled every year since their invention (a doubling period he later revised to roughly two years). Moore’s law predicts that this trend will continue into the foreseeable future. (Source: Investopedia)
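To make the doubling concrete, here is a minimal sketch of the exponential curve Kurzweil leans on. The starting point (roughly the Intel 4004’s 2,300 transistors in 1971) and the two-year doubling period are illustrative assumptions on my part, not figures from the documentary.

```python
def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Projected transistor count under a simple 'double every N years' model."""
    doublings = (year - base_year) / doubling_years
    return base_count * 2 ** doublings

# Under this model, each decade multiplies the count by about 2^5 = 32.
for year in (1971, 1981, 1991, 2001, 2011):
    print(year, f"{transistors(year):,.0f}")
```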

Back to the premise. Things will become so complex that we will no longer be able to control them. Artificial intelligence would be infinitely better than us at every conceivable task and would outsmart and outthink us. Our relationship with AI would essentially flip: instead of us controlling AIs to advance our interests, AI would control us to advance its own. This is analogous to the Terminator series, as well as a number of other pop-culture references to this morbid, yet inevitable, destiny.

A future in which the master-slave relationship we have maintained with AI completely reverses is certainly one to fear. But it isn’t the only thing to fear, and it isn’t entirely in the future.

I would argue that, out of an infinite series of doomsday possibilities, this is only one. Any kind of scientific advancement whatsoever requires us to expose ourselves to the externalities of the unknown. And since these externalities are unknown (obviously), we have no idea how things could go awry.

While it is interesting and important to consider where we’re heading, it’s probably futile to do so. To illustrate this point, think of the world we know today. How much of it do we control? To what extent are we already dependent on AI? What about in five years?

We rely on technology for basic life sustenance. Future generations will find it easier to operate in a world of AI, but almost impossible to operate in a world without it. Another way to think about it is that we have effectively switched our constraints.

In the past, the binding constraint was the natural world: our bodies would get sick and die of disease. Today, our risks have shifted somewhat. Yes, we are still susceptible to disease, but far less so. We are more susceptible to the constraints imposed by the rules and structures created by man, within the context of the free market or, in some areas, government.

In the future, we can imagine that this pattern will continue. Nature would become more controlled, not less, and the structures created by man would become more powerful, including structures that have AI as their primary component. To a large extent, this is already happening today.

The way we interact with each other, whether socially, for business, or even for entertainment, is confined to a manufactured hierarchy of virtual rules and structures that are constantly changing and mostly controlled by a very small number of players.

One can then make the argument that a future in which AI controls our lives and makes us subservient to it, owing to its superior intelligence along multiple dimensions, is not a dystopian dream; it’s life in 2017.

 
