Glenn Reynolds recently interviewed Ray Kurzweil, the futurist thinker who has just come out with a new book, The Singularity is Near: When Humans Transcend Biology. I first came into contact with Kurzweil’s ideas when I read his earlier book, The Age of Spiritual Machines. In that book, he expounds his idea of the Law of Accelerating Returns, which holds that technological progress grows at an exponential rate.
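To make that claim concrete, here is a minimal sketch of what a fixed exponential rate implies (my own illustration, with an assumed growth figure rather than anything from Kurzweil): a quantity that improves by a constant fraction each year doubles on a fixed schedule, which is why such curves look flat for decades and then suddenly explosive.

```python
# Minimal sketch of exponential progress. The 41% annual rate is an
# assumption chosen so that capability doubles roughly every 2 years;
# it is an illustration, not Kurzweil's actual data.
growth_per_year = 0.41

capability = 1.0
for year in range(0, 21, 2):
    print(f"year {year:2d}: capability ~{capability:7.1f}x")
    capability *= (1 + growth_per_year) ** 2  # two years of compounding
```

Run it and capability climbs from 1x to roughly 1000x over twenty years, with the final doubling adding more than all the previous ones combined; that is the intuition behind ‘accelerating returns’.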
If you are not familiar with the idea of the Singularity, the Wikipedia page is a good place to start.
It must be emphasised that the ideas and predictions made by Singularity enthusiasts should be examined with caution. I myself am hugely optimistic about the possibilities; however, I should point out that futurists have a patchy record. This is not because they are bad at their craft; rather, it reflects the fact that our technological society is now so complex that understanding its various trends is becoming too much for any single individual. Singularity enthusiasts concede this, and part of the reason the term ‘singularity’ is used at all is that beyond the ‘singularity’ we cannot really comprehend what will happen. (Just as an event horizon clouds everything within a black hole.)
However, there are some features that most futurists agree will occur as progress nears the Singularity, and my purpose is to ask some questions about how they will affect issues dear to the heart of this blog. The first question concerns artificial intelligence. As Kurzweil says:
The implications include the merger of biological and nonbiological intelligence, immortal software-based humans, and ultra-high levels of intelligence that expand outward in the universe at the speed of light.
Basically, the point is that we are going to build machines that have artificial intelligence, that can learn and become self-aware. This idea has been kicking around Hollywood for a long time and, indeed, is screening at the moment (Stealth). While Hollywood invariably presents such technology as being hostile to its creators, there seems no logical reason for a self-aware machine to be hostile. However, it might well decide that it is alive and demand rights, as in Bicentennial Man. The issues that movie raises about the definition of ‘life’ and ‘rights’ could provide enormous headaches in this century, and could keep lawyers busy for decades.
The prospect of this may cause governments to interfere to prevent the development of self-aware technologies.
Complicating this is the prospect not only of machines becoming like humans, but of humans becoming more like machines. Geeky technophiles are likely to fall in love with the idea of using technology to enhance themselves; I would love to have memory implants and the like myself. So the definitions of ‘human’ and ‘robot’ may well get blurred; fear, uncertainty and doubt from the wider community will be another huge issue.
Another issue that the Singularity might well bring into sharp political focus is healthcare. Life extension technology is one of the promises of the Singularity; ageing is likely to be reversible. The prospect of immortality, a human dream since the dawn of time, could be realisable within the lifetime of many of us.
This is sure to attract more political attention. The temptation for governments to interfere, ‘socialising’ medical progress in an attempt to spread the benefits, could well be irresistible. It is the natural way of things that wealthy people are best placed to take advantage of new technologies, and this may cause considerable political stress in democracies and give new powers to the State in non-democracies. Consider the prospect of an immortal Robert Mugabe.
There are bound to be many more problems and questions that the technologies of the Singularity will bring to us. I have not even touched on the huge privacy implications. The sooner friends of liberty start to ponder these questions, the better the chance that free societies will thrive rather than suffer.
Like all past technologies, the question of whether they are a blessing or a burden is likely to be decided by what humans choose to do with them, rather than by the technologies themselves. Although, with self-aware machines, this might be the last time we have to answer such questions alone.
I think they have their technologies all wrong. AI is going nowhere and never will. Nanotech is a materials science. Biotech is tied up in red tape.
When folks look back on the singularity, they’ll name its founders thus: Amazon, Google, Blogger, Wikipedia.
“When folks look back on the singularity, they’ll name its founders thus: Amazon, Google, Blogger, Wikipedia.”
Perhaps when they look back to the start of the Singularity.
I’m not sure why you say A.I. is a dead end. We managed to get here without our own intervention.
I say it’s a dead end because I have seen zero evidence that human intellect factors into Turing computations. So far, computers and minds seem as dissimilar as roses and the number 2. About the only progress in AI has been to (a) disprove all the naive ideas about how we think, and (b) find uses for these various sorts of not-minds in augmenting real intelligence, or as simplistic mind-simulations within very finite systems (such as games).
I have to echo Julian’s comments. I’m a philosophy grad student at U of Arizona – we host one of the biggest consciousness conferences in the world every two years (Kurzweil is a frequent attendee) – and while many of the cognitive scientists, AI researchers, and philosophers there are still computationalists (holding that the mind is to the brain as software is to hardware), their number is shrinking. There are strong philosophical challenges to such views, and we are nowhere even close to understanding the relationship between the mental and the physical – and thus Strong, mental-property-bearing AI is not only a long way off, but we don’t even know if it is possible. In fact, an increasingly popular view is called mysterianism – the view that the mind-body problem is *in principle* unsolvable.
For my part, I am inclined to agree with John Searle that a computer cannot in virtue of its material parts give rise to consciousness, because a program is a social object that requires consciousness for its creation. Now, don’t misread me: I am not claiming that conscious beings can’t produce other conscious beings – that’s clearly false. I’m saying that programs are like contracts or promises – they’re social objects – they are logically subsequent to conscious, intentional beings. Thus, no social object could possibly be the *explanation* for mentality, as they are mentality’s products. It is as Searle says: “Syntax is not intrinsic to physics.” We read the program into reality. 1s and 0s are simply social conventions for how to count ons and offs when transistors exhibit current flow above or below socially assigned limits. Computers don’t run programs; we run programs with computers.
This is not to say that the concept of “Singularity” is totally bogus. We can certainly enhance our intelligence with machinery outside of the skull – there is no reason to think that we cannot do so with machinery inside of it. Many amazing (and horrifying) things do indeed lie ahead of us. But I do not think part of it will be self-aware machines, at least not in the foreseeable future.
And I have to say, I think Ray Kurzweil is more the spokesman of a new-age religion than anything else. A lot of what I’ve read of his sounds an awful lot like the end-times mythology of many world religions: eternal life, the elimination of suffering, disease, etc. I suppose any sensible, rational person ought to be skeptical of such claims.
I’m more of a hopeful skeptic when it comes to the Singularity. For me it comes down to 3 issues:
1) I’m unaware of any objective test of sentience, so I see no logical basis for assuming that we’ll have an undefinable problem solved within a given period of time. Eventually we may be able to create a sentient machine, but there is no guarantee until we know what sentience truly is.
2) Increasing computational power does not necessarily relate directly to knowledge gained. In the near future it is almost certain that our knowledge will increase in proportion to calculation speed, but it is not an absolute. Efficiency and proper application of the computation are factors that are typically ignored.
3) Increasing knowledge does not relate directly to increased capacity. A specific future technology could take an extreme amount of time to develop, regardless of how much we learn in the process.
There is no doubt that life in 2100 will be significantly changed from today, but a true singularity is not a guarantee. Certainly possible, but not certain. With luck (and Aubrey de Grey’s help) I hope to see one in the next century or so.
I still think a Singularity is possible. (Indeed, already running.) I just think that the outcome will look a whole lot more like Dune than like The Matrix.
Tech that isn’t bogus:
– Life extension (not immortality, but lifespan limited by luck, not by age).
– Biotech (when it gets untangled from the luddites and NIMBYs).
– Nanotech as a very useful advanced materials science.
– This unprecedented cultural, knowledge and communication expansion we are living through right now.
I think it needs to be emphasised that no life extension/age regression technique can offer immortality to anyone. Using the word in such dismissive fashion simply undermines your otherwise thought-provoking post, Mr. Wickstein. Similarly, the question of Human Rights simply doesn’t apply to any non-human entity, pretty much by definition I would think. Such hypothetical species may inhere (if that word means what it seems to in this usage) some such rights to themselves, but they would remain a separate species from the human. The more interesting question, I think (as it seems more likely to arise sooner), is: at what degree of physical/intellectual enhancement does a human obviate the applicability of human rights as currently understood? A simple example: a detector that “sees” the radiation all objects naturally emit has recently been invented. If such a device is implanted into a human’s eye, to what extent do rights regarding privacy and search then apply to him?
The physical expansion of where people can live will positively serve to protect human liberty. The moon and Mars, and the entire Solar System, are within the reach of modern technology. Logistics is the only thing stopping the whole bunch of us from founding Libertopia on Olympus Mons, getting armed to the teeth, and daring anyone to come bother us. Look at what the New World colonies did with our tax revolution.
I can only imagine what will happen with a space-age steam engine.
Re: human rights, those are likely to get complicated in the future, particularly because the definition of “human” is going to smear and cease to cleanly overlap with “person”. Is a creature that looks human but was engineered with the mind of an animal legal to keep as a pet? What are the rights of, say, chimps “uplifted” to human-level sapience? What if they are only partially “uplifted” – at what level do rights kick in? Stuff like that.
A statement like “AI is going nowhere and never will” just means that you haven’t a clue about the quite spectacular progress in this field. Just look at machine learning and many other related fields. By spectacular, I don’t mean that you can expect the final results in a few years, but it is certainly spectacular in comparison to other young sciences.
There is a long list of “I can’t imagine it, therefore it is impossible” arguments which turned out to be wrong. Yours is just one in that line. Another, less polite, way of saying it is that it is arrogantly stupid.
Peter: you misunderstand. AI is nifty and does many good things. It’s just that they all amount to “useful automation”. None of them amount to “artificial people”, and they never will.
Kevin Vallier:
“In fact, an increasingly popular view is called mysterianism – the view that the mind-body problem is *in principle* unsolvable.”
In fact, an increasingly popular view is life is in principle unsolvable, springing from ineffable élan vital.
In fact, an increasingly popular view is chemistry is in principle unsolvable, consisting of irreducible emergent properties.
You philosophise; we’ll figure out mind.
Julian Morrisson says we’ll never make artificial people, but I can’t think of any natural law that makes it impossible. It might be very difficult, but it’s not impossible in the way that going faster than the speed of light is.
Here’s a brute force approach: take a human brain; measure the positions of all the atoms therein; enter them into a computer; run some software that simulates atoms interacting.
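Something like this toy loop, in spirit (my sketch only, and wildly simplified: a thousand particles stand in for the brain’s roughly 10^26 atoms, and a made-up inverse-square force stands in for real chemistry):

```python
# Toy sketch of the brute-force idea: treat the brain as a cloud of
# point particles and integrate their interactions forward in time.
# Particle count, timestep, and force law are all stand-in assumptions.
import numpy as np

N = 1000        # toy particle count; a real brain has ~1e26 atoms
DT = 1e-15      # femtosecond-scale timestep

rng = np.random.default_rng(0)
pos = rng.normal(size=(N, 3))   # steps 1-2: the measured atomic positions
vel = np.zeros((N, 3))          # initial velocities unknown, assumed zero

def forces(pos):
    """Pairwise inverse-square toy force; a stand-in for real physics."""
    diff = pos[:, None, :] - pos[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(dist, np.inf)          # no self-interaction
    return (diff / dist[..., None] ** 3).sum(axis=1)

for step in range(100):         # step 3: simulate the atoms interacting
    vel += forces(pos) * DT
    pos += vel * DT
```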
Why won’t this work?
Heisenberg’s Uncertainty Principle.
EG
I suspect it will fail because you will soon discover that you have an elegant simulation of a brain without a mind, i.e. it will simulate a persistent vegetative state.
My hypothesis is basically: the persistent failure of biology, biochemistry, experimental psychology and AI to locate mind and consciousness in the functioning of the brain (in whole or in part), indicates they’re looking in the wrong place. The “brain equals mind” theory is mistaken.
But why the insistence on mimicking a human mind? This is just making an electronic replication of an organic thing.
Might it not be that a machine intelligence would be simply a different form of intelligence, without the limitations and constraints of the organic structure but with (different) limitations and constraints of its own? For example, it is not necessary for a machine intelligence to experience emotion (which is a glandular response evolved to maximise the survival potential of both the individual and more importantly the species collectively). A machine intelligence lacking emotion would not be any less intelligent because of this, it would just be different.
In any case, since it is not possible to prove objectively that a human has a conscious mind, how can it be expected that it could be proven that a machine has such a thing? If a machine performs in such a way during a series of Turing tests that the judge cannot decide whether it actually is a machine or a human, one has to conclude that it is intelligent. However, it does not need to arrive at this display of what we would call intelligence in the same way as the human mind does.
EG
There is a feeling that an AI is going to be a “person”, a human-like intelligence. I’m not convinced by that. I do suspect we will have some kind of “uploading” technology, probably sooner rather than later. After all, we don’t need to measure atoms (unless Penrose is right); we just need to be able to map and emulate in real time the connection matrix of a brain. New scanning techniques are getting pretty close to this already.
We’re also likely to have AI (Augmented Intelligence); we’re already getting it, it’ll continue to get better, and there’ll be an issue between “standard” people and augmented ones.
Finally, we might get true AGI (artificial general intelligence). The problem I see with that is that it’s going to be alien – really alien – and consequently working out whether it is sentient is probably going to be even harder than working out whether an uploaded person is (which is also likely to be practically impossible).
I think we are doomed to live in some very interesting times.
bwanadik wrote:
“In fact, an increasingly popular view is life is in principle unsolvable, springing from ineffable élan vital.
In fact, an increasingly popular view is chemistry is in principle unsolvable, consisting of irreducible emergent properties.
You philosophise; we’ll figure out mind.”
I had hoped for a thoughtful response to what I said – like a response to my argument or something of that nature. Oh well.
Furthermore, I don’t think you realized in your urge to arrogantly and unjustifiably dismiss my position that I am *not* a mysterianist. I think the mind-body problem can be solved. So your crude sarcasm *doesn’t even apply to me*. Good job looking like *both* a jackass and a dumbass.
Now how about a real response?
Fascinating. I never thought I should see such a metaphysical debate on Samizdata. And that’s what the “essence of personhood” (sapience and sentience) part of this debate is about.
Question, will we ever build something with the data processing capacity and complexity of the human mind? Uh, yeah. Will it do things we don’t expect and develop a “mind” of its own? Probably. Especially if Mandelbrot influences its early phases.
On the other parallel topic, the speed of acquisition of knowledge, I’ve been living in my philosophical iso-bubble so I didn’t realize that there was a whole school of philosophy focused on this.
I’ve built my own mental model of it without knowing that others existed. Here goes.
I use nuclear fission as a metaphor: mass and energy equate to knowledge and intelligence. Early on, knowledge and intelligence were widely distributed in space and time across civilization. Primitive peoples learned things from each other over generations or even eons.
At a later phase, researching monks may have tucked knowledge into pilgrims’ pockets for other monks somewhere else to read.
Knowledge and intelligence are compressed closer and closer in time and space which accelerates their growth and compresses them even tighter.
Each new technology enhances our capacities, and information processing power is no exception. Perhaps computers will remain simple machines or become biodroids. In any case, they will achieve and exceed our mental processing capacity.
At some point, a system running in a chaotic mode will be able to posit and test any number of theories faster than an un-enhanced human could imagine. If the system is connected to a means of manufacture…? Think of it as knowledge critical mass.
At this point, the knowledge of the universe could be released. Think V’Ger to the nth.
I think maybe I’ll go study metaphorical Chinese butterflies.
Kevin:
It was an attempt to illustrate the historical precedent in science for ‘mysterianism’. Philosophers always say ‘ahh, but this problem is really mysterious’, right up until it’s solved. My closing sentence was a tad gratuitous, sorry if you took offence.
Completely agree with Euan Gray and Daveon; the descriptor ‘intelligence’ is itself anthropomorphic, since it carries connotations that hold true in humans but that there’s no reason to think would hold true of mind generally, in all forms permissible by physical law. We need a general theory of mind. Alas, we have but one data point, with lots of weird properties due to the perverse ends of its designer (natural selection).
The human brain’s storage capacity is estimated at between 1 and 1000 TB. My desktop has 1 TB, and 1000 TB in the 4-6 year time frame is probably reasonable. Google, of course, already has way more than that.
Human senses (sight, hearing, touch, etc.) transmit data to the brain at a rate of 30–100 Mb/sec. Computer networking already far exceeds that. HDTV cameras are better than eyes, microphones better than ears, etc.
As for computational speed… well, neurons operate in the kilohertz range. Megahertz and gigahertz clock speeds make neurons look like the tortoises they are.
Where the brain still has modern IT hardware trumped is in computational power. The brain is ridiculously parallel; it computes far more operations per second than the most powerful computers in the world.
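As a rough back-of-envelope (my own arithmetic, using commonly cited order-of-magnitude estimates rather than measured figures):

```python
# Back-of-envelope comparison; every constant here is a rough,
# commonly cited estimate, not a measurement.
NEURONS = 1e11          # ~100 billion neurons
SYNAPSES_PER = 1e4      # ~10,000 synapses per neuron
RATE_HZ = 1e2           # effective firing rate, well below 1 kHz

brain_ops = NEURONS * SYNAPSES_PER * RATE_HZ    # ~1e17 synaptic events/sec

CPU_HZ = 3e9            # a ~3 GHz Pentium-class chip
OPS_PER_CYCLE = 4       # generous superscalar estimate

cpu_ops = CPU_HZ * OPS_PER_CYCLE                # ~1e10 operations/sec

print(f"brain ~{brain_ops:.0e} ops/s, single CPU ~{cpu_ops:.0e} ops/s")
print(f"parallelism gap: ~{brain_ops / cpu_ops:.0e}x")
```

Slow, kilohertz-at-best units, but a hundred trillion of them working at once: that is the gap the chipmakers are now chasing.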
Of course, AMD, IBM, NEC, Intel, etc. are just now really accepting what mother nature proved a long time ago – brute-force serial computation doesn’t scale. Ergo, dual-core and multi-core processors. Intel’s next round of processors is going to be quite small and computationally challenged compared to today’s Pentium IV, but each desktop will have 4, 8, 16, … cores. Eventually we’ll get closer and closer to the conclusions evolution has already arrived at.
This shouldn’t be news to anyone, but considering the tone of the conversation I thought it instructive to go “back to basics.” That modern technology will exceed the capacity, speed, and computational power of our biological hardware isn’t even a question. It’s pretty much inevitable, and will happen roughly on the time frame that Kurzweil sketches.
The question some people are trying to grasp is “Will it be exactly like a human?”, and of course the answer is “No.” Especially not at first. Everyone knows that simulating hardware in software cuts down performance significantly, so a computer modeling a brain would actually have to be significantly more powerful than a brain. Long before a model of an active brain can be computed in real time computers will be able to think in “computery” ways that exceed and are different from human thought patterns. That will sure be interesting.
In the last 25 years, three billion minds and working hands have been added to the globalized workforce, essentially tripling its size. Look at what that has done to the price of ‘value added commodities’ like DVD players and t-shirts. Can we begin to imagine what it will be like when you can add a mind and pair of hands to the economy for $1000? What about 10 years after that, when we can add 500-IQ super-geniuses for the same cost? That’s why the singularity is so vexing; we just can’t imagine what it will be like.
Oh, and anyone who doesn’t think that machines will be able to learn and adapt as quickly as humans should read this article:
http://www.newscientist.com/article.ns?id=dn7561
Those Aibo pups are cute, but the hardware will get better. A few years before the Turing Test is defeated, a Lassie Test will succumb.
Very interesting. I hadn’t realised there was a whole school of this sort of thought: I recently read Michel Houellebecq’s 1999 novel ‘Atomised’, which describes a similar post-human future.
Houellebecq, for them what’s interested, doesn’t seem to be mentioned on the Wikipedia page.