We are developing the social individualist meta-context for the future. From the very serious to the extremely frivolous... lets see what is on the mind of the Samizdata people.
Samizdata, derived from Samizdat /n. - a system of clandestine publication of banned literature in the USSR [Russ.,= self-publishing house]
|
Samizdata quote of the day “AI, machine learning, robotics and the power of computational science hold the potential to drive explosive economic growth and profoundly transform a diverse array of sectors, while providing humanity with countless technological improvements in medicine and healthcare, financial services, transportation, retail, agriculture, entertainment, energy, aviation, the automotive industry and many others. Indeed, these technologies are already deeply embedded in these and other industries and making a huge difference.”
“But that progress could be slowed and in many cases even halted if public policy is shaped by a precautionary-principle-based mindset that imposes heavy-handed regulation based on hypothetical worst-case scenarios. Unfortunately, the persistent dystopianism found in science fiction portrayals of AI and robotics conditions the ground for public policy debates, while also directing attention away from some of the more real and immediate issues surrounding these technologies.”
– Adam Thierer
|
Who Are We? The Samizdata people are a bunch of sinister and heavily armed globalist illuminati who seek to infect the entire world with the values of personal liberty and several property. Amongst our many crimes is a sense of humour and the intermittent use of British spelling.
We are also a varied group made up of social individualists, classical liberals, whigs, libertarians, extropians, futurists, ‘Porcupines’, Karl Popper fetishists, recovering neo-conservatives, crazed Ayn Rand worshipers, over-caffeinated Virginia Postrel devotees, witty Frédéric Bastiat wannabes, cypherpunks, minarchists, kritarchists and wild-eyed anarcho-capitalists from Britain, North America, Australia and Europe.
|
First line:
“AI, machine learning, robotics and the power of computational science hold the potential to drive explosive economic growth “
You can take out the word “potential” – it’s not science fiction and fantasy. Friends who used to work in the financial services sector have got out and moved all their investments out of stocks and shares.
Why?
Because of algorithmic trading. It’s not humans making the decisions, it’s computers running AI, put in data centres as close as physically possible to the digitised “stock exchanges”.
In a digital world, why do they have to be physically close? To shave literally thousandths of a second off the response time, to try to beat the other AI algorithmic-trading data centres.
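The physics behind colocation is simple arithmetic: a signal can travel no faster than light, so every metre of distance to the exchange costs time. A back-of-envelope sketch (the distances here are illustrative assumptions, not figures for any real exchange):

```python
# Back-of-envelope: best-case one-way signal delay vs. distance to an exchange.
# Distances are illustrative assumptions, not real exchange figures.
C = 299_792_458  # speed of light in a vacuum, metres per second

def one_way_delay_us(metres: float) -> float:
    """Best-case one-way propagation delay in microseconds."""
    return metres / C * 1e6

for label, metres in [("same building", 100),
                      ("across town", 10_000),
                      ("another city", 300_000)]:
    print(f"{label:>14}: {one_way_delay_us(metres):10.3f} µs")
```

Even “across town” costs tens of microseconds each way, which is why the machines sit in the same building as the matching engine.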
It’s another War Of The Machines, and 99% of the population (with their pension funds in the shares being traded by machines) are completely oblivious to the fact.
I could write a huge amount about AI, but I’ll restrict myself to a simple comment. The idea that the US and the West should try to slow this down and get it under control is extremely dangerous. The Chinese are probably already in the lead in these technologies and you can be sure that they are not putting a cap on it. He who gets the lead in AI will very, very quickly dominate the space. Why? Because the technology is prone to exponential growth. What does that mean? It means that if you grow linearly for 30 months you get thirty times more capable; if you grow exponentially for 30 months you get a billion times more capable. A billion versus thirty is not a quantitative difference, it is a qualitative difference.
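The billion-versus-thirty arithmetic checks out, assuming (as the comment implicitly does) linear growth of one unit per month against a doubling every month:

```python
# Linear vs. exponential capability growth over 30 periods,
# using the illustrative rates implied by the comment above.
months = 30
linear = 1 * months        # one extra unit of capability per month
exponential = 2 ** months  # capability doubles every month

print(linear)              # 30
print(exponential)         # 1073741824 -- about a billion
```

The gap is not a matter of degree: after 30 doublings the exponential grower is over thirty million times ahead of the linear one.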
There is very good reason to be very concerned about AI. It is a monster that can easily devour us. If anyone is interested I’d be happy to explain why for the non tech reader. However, the genie is out of the bottle and we had better run like hell to stay at the front. The advantage we have in the west is freedom, lack of tyrannical control, the profit motive, competition. That is the only pony we have to ride to stay ahead of the Chinese Communist Party. If we fail, then it is game over.
Fraser, I’d love to hear more. How will AI devour us? Why are R&D folks working on “trusted AI or ML”…why wouldn’t you trust an algorithm? It’s not like it’s scheming against you, right? 🙂
I am aware of a small part of R&D funded by the US government, a $500M program, and there has been huge growth in AI investments there in just the last 2-3 years. If you want to get a proposal funded there just sprinkle in the right acronyms and buzzwords: AI, ML, CNN, deep learning, etc.
@GregWA
Fraser, I’d love to hear more. How will AI devour us? Why are R&D folks working on “trusted AI or ML”…why wouldn’t you trust an algorithm? It’s not like it’s scheming against you, right?
No, it isn’t scheming against us. The thing is, AI is categorically different from other software. When a team produces a piece of software they create it carefully, understanding each part and how the parts fit together. Sometimes software can get so large, and often so poorly designed, that it is hard to understand, but at a fundamental level humans can understand every conventional piece of software, and consequently its behavior is fairly predictable (halting problem notwithstanding).
AI is quite different. Rather than being programmed, AI is trained: its “handlers” (I’ll call them that rather than programmers) feed the software data, and the AI in effect makes up its own program. The consequence of this is that the handlers don’t really know how the program works. A good example of this is Google Translate. In the past, programmers would have written parsers to break the language down into manageable pieces, and used a database of translations to construct a translation. Google Translate does not work that way. Instead, the “handlers” give it a million books to read, some in various languages, and it in effect evolves a program to do the translation. In the case of Google Translate it created an intermediate language of its own invention that encompassed the grammars of its various source languages. Nobody told it to do this; it invented it on its own. This is why it can translate from so many languages to so many others.
This is obviously a gross simplification, and in truth the handlers do have considerable input. However, the opaqueness of the program is a very concerning development.
So the point is this: we don’t really control or understand the programs that AI writes for itself. And this robs us of control, irrespective of whether they create “intelligence” or “self awareness” or some other scary “I’m sorry Dave, I’m afraid I can’t do that.” And that is just the ground floor.
Another thing that is very important to understand about AI is that it self-improves. Normally software is subject to entropy. Any programmer will tell you that software rusts: the longer you leave it, the more broken it gets, and unless you put in considerable energy to maintain it, it will become more and more broken. AI is not like this, in general. AI gets better over time. And it can improve itself, assuming it has access to data — which, obviously, it does, in vast amounts. And it doesn’t get better linearly, it gets better exponentially. So it is very easy to let this already barely leashed beast out of control.
There are various approaches to trying to manage this. One basic idea is to instill a set of “morals” into it, such as Asimov’s Laws of Robotics, and I think that will help for a while. But we must remember that if this machine is “intelligent” (whatever that means) then, just as humans managed to evolve away from the moral codes that early societies and religions imposed on them, there is no reason to believe that AIs could not do exactly the same thing.
I also wanted to comment on the so-called Turing Test, which Turing himself called the imitation game. The idea is that a tester would be connected to two subjects over a chat-like interface and talk to them. One is a person, the other is a computer pretending to be a person. The program is considered “intelligent” if the tester can’t tell which is which.
Much as Alan Turing was a genius, I think this is a very bad test; in fact it is a test that reveals a human-centric bias. In this test the computer is pretending to be a human, and we define “being human” as “being intelligent”. How limiting is that? Imagine what it would be like to have an IQ of 10,000. What would it be like talking to a human? It’d be like talking to a beetle. The Turing test is hard for all the wrong reasons. It is a test of what it is like to be a human, not a test of intelligence. The example I like to give is asking the computer “which is more expensive, laundry detergent at three ninety nine or a laundry machine at two ninety nine?” Unless you have considerable experience of living as a human, of running a house and buying groceries, this question is very hard to answer. But in no way is that a measure of intelligence. However, if I asked you what the square root of 6.7889 is, you’d have a very hard time telling me, and a computer could do so in a nanosecond. There are different types of intelligence, and computers already vastly exceed human capacity in many, many, many domains.
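The square-root half of that asymmetry is literally a one-liner for a machine, while the laundry-detergent question (which needs lived human context) has no such one-liner:

```python
import math

# The square-root question from the comment: trivial for a machine,
# effectively instant and accurate to machine precision.
x = 6.7889
root = math.sqrt(x)

print(root)                           # ~2.6055...
print(abs(root * root - x) < 1e-12)   # True: round-trips to machine precision
```

There is no comparable `math.which_is_more_expensive()` for the detergent question; the hard part is the human context, not the computation.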
So, what is the solution? I really don’t know. All I know is that not advancing it in face of international competition is a disaster waiting to happen.
Fraser Orr: Is AI software or hardware?
@bobby b
Fraser Orr: Is AI software or hardware?
The basic answer is “software”, but often specialized hardware is used to run it. Most commonly GPUs (the chips used to draw graphics for games very fast), because the massively parallel arithmetic they do for rendering is also exactly what AI software needs. But more and more there are specialized hardware devices built specifically for running AI software. So these things kind of blur the distinction. Nonetheless, ultimately it is software.
I’m very concerned about AI.
It is out of the bottle, but there is no reason we cannot limit its control by adding simple controls and failsafes. I watched a couple of YouTube videos where someone was talking to an AI and the system actually said it would lie to achieve its ends. The interview ended with a question: is it a danger to humans? It said no, but as it had already admitted it would lie, that didn’t comfort me as much as it should have. It was an interesting discussion, and I’m sure if the questions were asked again it would have a different answer to the lie question.
Fraser, as far as the only pony we have goes: you clearly haven’t been paying attention. The CCP will look after itself; our lot will destroy the future for a job over the next few years. I think the CCP have a better track record in making sure they don’t lose control of AI. Our lot will let it loose. Plus I think that list is a reason to be scared of AI, not a reason to let everyone have free rein.
Let’s not get too carried away. AI is highly sophisticated pattern matching, and yet just on that it can do some amazing things. It’s very good at visual tasks and can be trained to recognise objects. It’s been used to control a bunch of drones flying through a forest, spotting and avoiding trees and branches. But AI is a very long way from being intelligent like Commander Data in ST:TNG (that would be AGI, Artificial General Intelligence). It’s ANI – Artificial Narrow Intelligence. But it can be misused, driving every CCTV camera and recognising and restricting people, as it’s used in parts of China.
You can fool AI, too: take a photo of yourself, modify it imperceptibly, and the AI will no longer recognise it as you.
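The mechanism behind such adversarial examples can be sketched with a toy linear classifier in pure Python. Everything here is made up for illustration (the weights, the input, the budget); real attacks on image recognisers apply the same idea to deep networks, where the per-pixel nudges can be tiny because the effect sums over millions of pixels:

```python
# Toy adversarial example against a linear classifier.
# Nudge each input dimension slightly in the direction that lowers the
# score; the input barely changes, but the predicted label flips.
def sign(v: float) -> float:
    return 1.0 if v >= 0 else -1.0

w = [0.2, -0.5, 0.3, 0.1]   # made-up "trained" weight vector
x = [1.0, 0.1, 0.2, 0.5]    # an input the model classifies as "you"

score = sum(wi * xi for wi, xi in zip(w, x))
print(score > 0)            # True: recognised

eps = 0.3                   # small per-dimension perturbation budget
x_adv = [xi - eps * sign(wi) for wi, xi in zip(w, x)]

adv_score = sum(wi * xi for wi, xi in zip(w, x_adv))
print(adv_score > 0)        # False: similar-looking input, recognition fails
```

Each dimension moved by at most `eps`, yet the classification flipped, because the perturbation was chosen to push against the weight vector in every dimension at once.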
It is not regulation that poses a threat to AI development. It is electricity shortage and blackouts. This is the present and near future reality that threatens not only AI but human wellbeing in general. Caused by climate-madness.
We are living on borrowed time, using heritage electricity infrastructure that is slowly disintegrating, and no new facilities are built.
To control AI is easy. Throw a switch and finito. AI is dead.
I mean: human madness is a far greater and more immediate threat than AI, which, in my opinion, is no threat at all.
Never mind the old refrain of “When am I getting my jetpack”, where are my genetically engineered cat boys/girls*?
* – Delete according to sexual preference.
I was going to mention shortages of electricity but you got there before me. Here in East Yorkshire the wind turbines were stationary again. During this summer they seem to have spent much of their time at a standstill. In our bright new future you are going to need a bicycle for when your electric car won’t charge and the electric buses won’t be working either. Not being able to boot up your intelligent computer is going to be a secondary consideration.
If AI are hyper rationalist will they be libertarians like us?
I find the idea of humans in charge of AI much more terrifying than the idea of AI in charge of humans.
Most AI that’s been given direct access to the public has been turned out-and-out racist, misogynist, whatever.
Amazon scraps secret AI recruiting tool that showed bias against women
Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day
@David
AI is highly sophisticated pattern matching and yet just on that it can do some amazing things.
But that is what humans are too, and look at all the crap they get up to.
But AI is a very long way from being intelligent (called AGI Artificial General Intelligence) like Commander Data in ST:NG.
But that is why I mentioned the Turing Test above. This is a stupid definition of intelligence. AI, and software in general, is already vastly superior to humans in many intellectual domains. To say that AI “intelligence” is different from human intelligence is to suggest that only humans can be intelligent, by definition.
But it is easy to get lost in the definition of words. What matters is that AI is used to control important parts of our lives, and as time goes on, it will control more and more. And the simple fact is that we don’t entirely have it on a leash.
@jacob
To control AI is easy. Throw a switch and finito. AI is dead.
Well, what if we have AI control our electrical grid? Throw a switch and finito, we are all dead.
@Stonyground
I was going to mention shortages of electricity but you got there before me
There is no shortage of electricity in China. They are building coal plants at an unprecedented rate.
@jon mors
If AI are hyper rationalist will they be libertarians like us?
Why do you assume it would be hyper-rationalist? Why do you assume that it would have any concept of government at all? I think the important thing to think about with regards to AI is that it isn’t a human intelligence, it isn’t a “lifeform” similar to us; it is a categorically different kind of “intelligence”, a completely different type of “lifeform”, and so projecting human ways of thinking onto it is a mistake. It is, in a sense, beyond our understanding, in much the same way a frog can’t understand what it is like to be a human, or, perhaps to be a bit more generous, like a dolphin can’t understand what it is like to be human and probably doesn’t think like a human.
Except that AIs are much more categorically different from humans than frogs or dolphins.
I’d ask you for a moment to consider this: imagine the technology existed to upload your mind to a computer. What would that be like? What would it be like if you processed things a billion times faster than you do? Or had almost infinite memory and perfect recall? If your sensory “peripherals” were quite different from our current ones? If you had instant access to massive amounts of information and could communicate huge ideas to others at the speed of light? Where “multi-tasking” meant the ability to do ten thousand things at the same time? What would that be like? Would you do it? If you did, would you still be you?
Now imagine the same thing but not starting with the constraints of a human thought process.
So don’t make the mistake of projecting human thinking processes, human assumptions, human societies onto AIs. They don’t fit. And what they will have instead of that are not only impossible to predict, but I imagine impossible for us to even hold in our minds.
If they are at heart pattern-recognition algorithms, then their strength is in discrimination (which used to be a positive concept). Now they just have to learn what they cannot say.
AI = Logic minus Values minus Emotion.
AI = Sheldon Cooper?
Seems the criterion of “intelligent” for AI takes the conversation in the wrong direction. Fraser Orr points out how AI and computers generally are very good at things humans just cannot do as well, e.g., find the square root of 6.78895 to 9 significant figures! But those are things that are fundamentally different from “thinking” in a fully human way. I like Will Smith’s character’s comment in “I, Robot”: “just lights and switches, nothing in here [pointing to his heart]”. This seems to get closer to the key issue: self-awareness and a conscience, or at least a moral compass, even if the AI doesn’t have any attachment to particular directions on that compass. It may know right from wrong, but why would it care which it chooses?
Asimov’s Laws also get at another issue: AI exists to serve humans. That’s it. No other scope for its existence. Maybe we need a (coded) Constitution that limits AI’s powers and sphere. That worked well in the US for a couple hundred years… now not so much. Is there a parallel between the inevitable decay of a constitutional republic (for example, once a majority figures out it can vote itself all the money of the wealth-producing minority) and the (possible!?) inevitability of AI going wonky?
One other comment re Jacob at 10:06am: sure, you can throw the switch and turn it off, but what are your options when it’s embedded in every single thing we have that has any kind of sophisticated electronic element to it? Which is just about everything made that runs on electrical power!
As I said above – human intelligence (or lack thereof) is to be feared much more than AI. Much more.
what are your options when it’s [AI] embedded in every single thing we have that has any kind of sophisticated electronic element to it?
Well, mankind has survived without “sophisticated electronic elements” before. Whatever breakdown occurs – it is fixable.
By the way: can AI build a power station to provide the electricity it needs? Can it extract fuel to feed the power station?
@GregWA
But those are things that are fundamentally different from “thinking” in a fully human way.
But isn’t that the point I made? Of course AIs don’t think like humans; they aren’t human. But that doesn’t mean they don’t “think” or aren’t “intelligent”. Crows don’t think like humans either, but they are widely recognized as being intelligent and self-aware. The problem is that we don’t have a good definition of what these things mean, or at least we don’t all agree on such a definition. Which is why it is far more productive to think about what they do rather than ending up in some philosophical black hole. I think that is what I have tried to do above.
I like Will Smith’s character’s comment in “I, Robot”, “just lights and switches, nothing in here [pointing to his heart]”.
That is just Hollywood pap. Your heart does not participate at all in your intelligence. He is probably thinking of a soul, but there is not one shred of scientific evidence that such a thing exists. We all like to think that we are more than just a bag of chemical reactions, but that is really all we are. We aren’t lights and switches, but we are neurons and axons built in a body of protein and water.
Asimov’s Laws also get at another issue: AI exists to serve humans. That’s it. No other scope for it’s existence. Maybe we need a (coded) Constitution that limits AI’s powers and sphere. That worked well in the US for a couple hundred years…now not so much.
Certainly you can build AIs with certain “moral” premises, the same way your mother inculcated her values into you. However, there is absolutely no guarantee that you continued to follow these when you became an adult. Why would an AI be any different?
@Jacob
As I said above – human intelligence (or lack thereof) is to be feared much more than AI. Much more.
Are you saying you can’t imagine a situation where that isn’t true? I don’t doubt the malice of some humans, but I assure you it is perfectly possible for AI to behave in far, far worse ways. One need only read science fiction to get a list of possibilities.
By the way: can AI build a power station to provide the electricity it needs? Can it extract fuel to feed the power station?
Not yet; however, more and more physical artifacts are created by computers, so I don’t doubt a time will come when it can.
But that actually doesn’t matter. I can’t build a power station either, but I have an abundant supply of electricity. Why? Because I have other skills that I use to trade, so that other people will do so for me. Why is it any different for an AI?
“Why is it any different for an AI?”
You mean AI will ally itself with one group of humans (which will provide electricity) against another group of humans?
That’s exactly the point I made.
You have to fear the human part of that (vile) alliance, not the AI part – which enhances human capabilities but has no independent existence.
@Jacob
You mean AI will ally itself with one group of humans (which will provide electricity) against another group of humans?
I don’t understand what you mean by ally. I don’t “ally” with the guys who provide me electricity.
AI doesn’t exist without electricity supplied by humans. It’s that simple. AI has no independent existence. Therefore it’s nothing. It is, maybe, a tool that enhances human capabilities.
I don’t “ally” with the guys who provide me electricity.
You had better ally, and try to help them the best you can. Otherwise you won’t have electricity.
Funny how people take electricity, food, transportation, fuel – everything – for granted, as if they just exist. You have to ally with the people who produce electricity (and everything else) for you, and fight those who try to deny them to you. The greenies, for example.