“I know that I will not be able to avoid destroying humankind.”
That was a line from a Guardian op-ed written entirely by a robot. The machine was instructed to focus on why humans have nothing to fear from AI. I do not find this reassuring.
This is the most disturbing line in the article for me. The robotic intelligence has been imbued with the statist impulse, and all the unholy menace that implies.
It also seems to be equipped with the statist impulse to deflect blame.
Get in line, robot!
Oh, hell, another TESLA thread?
Dear Miss Solent
The epitome of AI, writing Guardian level drivel?
Some way to go before becoming useful, or dangerous.
DP
Shouldn’t it be “mankind”?
A pedant writes:
I find people conflating AI and robots to be on the same piss-poor level of technical knowledge as those who think the web and the internet are the same thing and that Sir Tim Berners-Lee invented the internet. AI is just a computer program that uses statistics to do incredibly sophisticated pattern matching. A robot is a mechanical device capable of motion and/or of manipulating things, like the ones that build cars.
Also our current level of AI is what is known as Narrow AI. It can’t create, can’t think, but it can recognise objects in photos and distinguish them. The AI beloved of science fiction (e.g. Commander Data in Star Trek Next Gen) is the kind that people fear but doesn’t actually exist and is unlikely to for a very long time. Putting that in a robot could be a very dangerous thing but as it doesn’t exist…
That said, the military is experimenting with marrying Narrow AI and military hardware. The Israelis have had sentry bots that can guard an area, recognise a human trying to get through, and fire a machine gun at said human. And the US has an autonomous drone that can select its own targets. But we’re definitely not at the Westworld stage of things, nor likely to be. I’d keep an eye on that Skynet just in case, though…
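The "statistics doing pattern matching" point above can be made concrete with a toy classifier. This is a minimal sketch, not any real system: a nearest-centroid classifier on made-up 2-D data, standing in for the kind of "which class is this most like?" calculation that Narrow AI does at enormous scale.

```python
# Toy illustration of statistical pattern matching: label a new point by
# whichever class's average position (centroid) it lies closest to.
# All data and labels here are invented for the example.

def centroid(points):
    """Component-wise mean of a list of 2-D points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def classify(point, labelled):
    """Return the label whose centroid is nearest to `point`."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    centroids = {label: centroid(pts) for label, pts in labelled.items()}
    return min(centroids, key=lambda label: dist2(point, centroids[label]))

# Two made-up clusters: "cat" photos summarised near (1, 1),
# "car" photos summarised near (8, 8).
training = {
    "cat": [(1, 1), (1, 2), (2, 1)],
    "car": [(8, 8), (7, 8), (8, 7)],
}

print(classify((2, 2), training))  # lands in the "cat" cluster
print(classify((7, 7), training))  # lands in the "car" cluster
```

There is no understanding anywhere in this: it is arithmetic over observed examples, which is the commenter's point about "can't create, can't think" in a nutshell.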
As a software developer and writer I was very impressed with that article. Not the content, though; that’s pretty much irrelevant and reflects the source material and the biases of whoever trained the model. That’s why Microsoft pulled its recent experiment, Tay: the data provided by the public was unpleasant and the AI reflected it.
The grammar in the article was good, and it had a voice, and that is clever. While anyone can create a grammar checker, producing a coherent article is not easy. I think writers should be concerned, as this sort of thing can only get better. Within 3-5 years I could envisage copywriters being automated out of existence. Amazon reviews are probably next…
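The gap between "grammar checker" and "coherent article" that this commenter describes can be glimpsed with a toy text generator. This is a hedged sketch, nothing like the model behind the Guardian piece: a bigram (order-1 Markov) model that learns which word follows which in a tiny made-up corpus, then emits text by sampling successors. It produces locally grammatical strings with no global coherence, which is exactly the axis on which the real systems have been improving.

```python
# Toy bigram text generator: statistics over word pairs, nothing more.
# The corpus is invented for the example.
import random
from collections import defaultdict

def train(text):
    """Map each word to the list of words observed immediately after it."""
    words = text.split()
    successors = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        successors[current].append(nxt)
    return successors

def generate(successors, start, length, seed=0):
    """Emit up to `length` words by repeatedly sampling a successor."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = successors.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))
    return " ".join(out)

corpus = "the robot wrote the article and the robot read the article"
model = train(corpus)
print(generate(model, "the", 6))
```

Each step only looks one word back, so the output reads fine phrase by phrase but says nothing overall; larger models extend that statistical window enormously, which is why a whole article can now hold a voice.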
Interesting posts and interesting comments.
‘ The machine was instructed to focus on why humans have nothing to fear from AI.’
Then it was programmed and clearly not capable of AI.
In any case, Mankind will be destroyed by Environmentalists as a matter of deliberate intention, not unavoidability, long before any machine acquires AI. It may well be that AI could save Mankind by destroying the Environmentalists first.
@David
A few years back, someone programmed an AI to act like a teenage son, and it was claimed to pass the Turing test. Therefore…
Parents of teenage sons would have suggested a different interpretation of this result.
I agree AI has a long way to go, and is massively hyped: we’re at the ‘gis us your cash!’ stage.
I just love visiting AI households: “Alexa, order 100g of coke and 2 sticks of Semtex.”
But there is some real progress in AI. A while back, someone used AI to program a machine to play Go, using historical records of grandmaster games: instead of trying to tell it which moves were good, they simply fed it zillions of top-level games and let it calculate its own preferences. Just like Deep Blue and chess.
The odd thing was, it started playing effective attacks that had never figured in any grandmaster game. The experts looked, and discovered a whole new area of Go strategy that human players had never found. Disturbing.
I saw a film years ago that showed a flaw in the Asimov Laws. An old man, retired, was given a robot by his children, and when the robot discovered that he had been a burglar, it helped him steal and burgle his neighbours! After all, this was good for its owner’s mental health! Take that, Isaac.