Artificial intelligence, or at least the kinds of algorithms that are, perhaps erroneously, so named, has many useful applications that will doubtless generate much wealth, freeing us from mundane tasks. As with anything, there are risks. Private criminals will of course find ways to use any technology. But, as a libertarian, I think it is more interesting to consider how it might be used by the state.
It’s not too early to ask whether the US, or the West in general, is at risk of being run by a totalitarian technocracy. Many people are alarmed by the emergence of a Silicon Valley elite that is becoming richer and more powerful than even the great industrial powers of the 20th century. Others believe that, rather than a threat, this kind of tech innovation is an opportunity to create a better world.
I am an optimist about humanity. I believe that technology can be used to create the means of self-organisation and freedom. So why should we care about a new wave of technology that, if used
The first paragraph above was hastily typed by me. The next two paragraphs (ending in a truncated sentence) were generated using Talk to Transformer, a web interface to an instance of OpenAI’s GPT-2 language model. The model you can play with on the web site is larger than the one that was released back in February. It can generate much longer tracts of coherent text, but I think this instance of it is limited to conserve computing resources.
Its output is rather uncanny, but it can follow the style of its input and stay on topic. By trying different opening sentences or paragraphs, I have made it write stories, newspaper articles and Amazon reviews. At times it looks suspiciously like it is copying directly from whatever material it was trained with (presumably text scraped from the web). However, I tried Googling some of its output and could not find anything identical. At the very least, details are changed, such as when I gave it the text, “Who do you think you are? Lewis Hamilton?” and it generated a newspaper interview with Nico Rosberg talking about how proud he is to be a female racing driver.
Have fun with it, but do remember to get some work done.
Update: It seems you can ask it questions by prepending your question with “Q:”. For example:
Q: who is Perry de Havilland?
Perry de Havilland was an American aviation pioneer who worked on both aircraft design and aeronautics research. He was instrumental in developing both the Bristol Blenheim and the North American F-86 Sabre jet aircraft.
You’re saying that was generated by machine?
Wow.
I tried it. Cool. Artificial Inanity.
Pretty sure China is demonstrating how it’s used by the State, as we speak.
Natalie, yes. Similar phrases must appear in the training material, but it does seem able to select them and appears to be able to substitute words while still making sense.
For example, if I do a Google search for the phrase “It’s not too early to ask whether the US, or the West in general, is at risk of being run by a totalitarian technocracy” I get one result: this article. If I remove the quotes I don’t see anything very similar.
“at risk of being run by a totalitarian technocracy” — still only finds this article.
“run by a totalitarian technocracy” — finds this article and one other page which seems to be someone’s ideas for movies.
Yet another tool to make us that much lazier as a society. Pretty soon we will all be peeing in the bed again because we won’t even get up to go to the bathroom anymore…. Oh yeah, btw there’s an app that can summon a drone to clean up your bowel movements and pee in case you have an “accident” so you’re good!
Silicon Valley’s wealth seems mostly phony/illusory to me. Apple’s might actually be justified by what they produce. The idea that Facebook is “worth more” than 10 of our largest industrial firms building actual hardware combined is stock-market-driven fantasy. The monetary input for Silicon valley is ad revenue and the difference between VC input and crazed stock valuations. Every time these firms try to get into real manufacturing of real products in the real world, they fail hard. In addition, the actual silicon part of silicon valley vanished to South china a decade or more ago.
Yet these days I hear a lot of arrogant pretense about how SV is somehow upending our industrial world via “automation”. They haven’t automated anything outside of their websites and web-services. To the extent that industrial automation exists, it exists independently of the SV tech titans and their firms. The sort of skilled-trades manufacturing jobs that have been destroyed haven’t been destroyed by NC controlled machine tools (which date to the 60s, 70s, 80s at their most sophisticated) – but have been destroyed by a completely hostile political/administrative climate driving deindustrialization, financialization, and looting of the economy. GM isn’t in Mexico because the Mexicans are world-renowned roboticists! Keithley Instruments wasn’t sold off to south china by Harvard corporate raiders because they were technologically unsophisticated!
So SV and their imaginary robots haven’t had the effect on the economy that the business FUD press claims.
Totalitarian technocracy would be a hell of a lot more interesting, if not desirable, if it involved people who had anything to do with actual technology, instead of business/financial illusion. We could get chased around by Terminator instead of ejected from the banking system due to a low social credit score.
As to AI, what we actually have (as opposed to what we might potentially build): that’s a whole separate rant. I’ll rant later. Have to work.
Aaand, I’m ranting at a chatbot. Need to read more carefully.
Q What color is a green orange?
A A green orange is the color of a plant or vegetable that is commonly used as food and fertilizer.
Q Is it possible to get a green orange color if I eat food that is cooked at a low temperature?
A Yes, it is possible to get green orange color in food that is cooked at a low temperature.
Q How long does it take for a green orange color to form?
A Green orange color can form up to 3 days after cooking food.
—–
Chatbots make for a strange demo of artificial intelligence. I imagine people go for them because they are easy to program (just like violence makes for an easy-to-program videogame mechanic), and it’s easy to fool people into reading meaning into words that don’t actually have any (my rant above, which is mostly a reaction to a whole bag of techno-triumphalist articles, finely blended). Chatbots will never be able to produce meaning (regardless of whether the backend is a dumb curve-fitter or a godlike alien intelligence) for structural reasons: They can’t know what any of the words they are fed or trained on actually mean. None of the nouns have any referents – they don’t know what a “cow” is or a “jacket” is beyond frequency counting. For that matter, they cannot know what nouns or verbs are because they have no experience of what space, time, or objects are, much less actions or continuity. Game playing AIs and control agents are far more interesting to me than Bayesian spam filters, or things trained on vast bags of words.
The old project to use machines to generate formal mathematical proofs is slightly less silly, since you can (if you are a formalist) sort of make an argument that there’s no content in mathematics beyond the semantic rules: What was produced were “valid” proofs that were exactly as devoid of concrete meaning as the symbols to the automaton. (Not a formalist mathematician, so I’m of the opinion that we do mean things beyond a particular way to shuffle symbols when we’re doing mathematics. We certainly do when we’re communicating via text!)
“Blogging”: Now I’m imagining a dystopian library of babel future where most of the internet is machine generated gibberish. LOL:
Q Who is Grant, Ulysses S?
Ulysses S. Grant was a Civil War soldier who was the leader of the Confederate forces.
Ulysses S. Grant served in the Confederate army as a Major General. After the war, he went into business in Atlanta, Georgia and eventually became a successful plantation owner.
During the Civil War, Grant commanded a combined Union and Confederate army and was awarded the Medal of Honor for his actions during the Battle of Antietam in 1862.
LOL
This is me from a parallel universe! How cool is that?
Q: Who is Geoffrey de Havilland?
Geoffrey de Havilland was a UK-based writer of we-blogs, as well as being a philosophical auteur.
Not only that, but Perry gave birth to Otters — two of them at once in fact. :>)))
MadRocketSci, it’s not a chatbot but a text generator. The Q & A thing is just an emergent property of it continuing in the same style, I think.
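The style-continuation idea can be illustrated with something far cruder than GPT-2: a word-level Markov chain that only ever emits a word it has seen following the previous one. Everything below (the toy corpus, the function names) is invented for illustration; the real model is a neural network doing vastly more than frequency counting, but the “continue in the same style” behaviour is recognisable even in this sketch:

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record, for each word, every word observed to follow it."""
    words = text.split()
    table = defaultdict(list)
    for a, b in zip(words, words[1:]):
        table[a].append(b)
    return table

def generate(table, start, n, rng):
    """Continue from `start` by repeatedly sampling a recorded successor."""
    out = [start]
    for _ in range(n):
        successors = table.get(out[-1])
        if not successors:
            break  # dead end: no word was ever seen after this one
        out.append(rng.choice(successors))
    return " ".join(out)

# A deliberately tiny, made-up training corpus.
corpus = ("the cat sat on the mat and the dog sat on the rug "
          "and the cat saw the dog")
table = train_bigrams(corpus)
print(generate(table, "the", 8, random.Random(0)))
```

Every output word was seen in the training text, yet the particular sentence need never have appeared there, which is roughly why Googling the model’s output finds nothing identical.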
MadRocketSci,
You say “semantics”. Don’t you mean “syntax” in the context of mathematical formalism? BTW, have you read any Borges? Especially “The Library of Babel”?
I’m vaguely surprised nobody has brought up the Turing Test. I guess MadRocketSci hints at the key issue with the test but there is (I reckon) a strong case to address here as to how close we are to getting a computer to pass – whatever that really means.
Just out of interest I have written this on my Kindle using a lot of predictive text. It’s very good.
NickM (November 14, 2019 at 10:39 am), if all the examples above are indeed from a program – then I’d say it’s got a way to go yet.
(Actually I’d say even less of it than that, but, as the OP says, “do remember to get some work done.” However I do wonder what the program would say about its chances if asked.)
Technology has destroyed many decently paid jobs in the last 40 years or so.
But they have mainly been working class jobs such as factory work and mundane office administration.
They have been replaced by call centre work, food delivery jobs, courier work for internet mail, taxi driving, work in care homes and other very low paid work with dreadful terms and conditions, and the middle classes have been very pleased about this new supply of cheap labour.
But when AI gets good enough to replace the jobs of those middle class people, lawyers, accountants etc, we will get demands that something is done about its threat to income levels.
NickM: Yeah, I did mean syntax. I think I fail the Turing test today.
I think I did read Borges’ Library of Babel book at one point in time. It’s interesting in its own right as a thought experiment. What do we mean by order/disorder? Where does the information in a block of text really reside? That sort of thing.
Pete: I’d contend that “technology” didn’t do anything of the sort. Cheap and greedy management did. How many times have you found yourself shouting at a phone robot to connect you to a (bleeping) *person* who has a hope of understanding what you’re actually trying to call them about? The technology is an *excuse* – for shabbiness, lack of service, and a race to the bottom.
I’ll have you know that the Twin Otter was a very fine airplane.
“Chatbots will never be able to produce meaning (regardless of whether the backend is a dumb curve-fitter or a godlike alien intelligence) for structural reasons: They can’t know what any of the words they are fed or trained on actually mean. None of the nouns have any referents – they don’t know what a “cow” is or a “jacket” is beyond frequency counting. For that matter, they cannot know what nouns or verbs are because they have no experience of what space, time, or objects are, much less actions or continuity.”
That’s because they’re only studying a part of the problem. They’re mainly interested in language, so the input needed from other aspects of the intelligence isn’t implemented; they just use random numbers to represent it instead.
A better example would be SHRDLU, which dates back to the late 1960s. That program modelled a mini-world of blocks and cubes and balls and so on that could be stacked up. The program interpreted input as instructions to manipulate the world, and could then answer questions about it.
So you need some sort of world for the AI to talk about, and to interact with. You can do that by giving it sensors and motors. Or you can do it internally with things like files and documents. When you tell Alexa or Siri to play a particular music file, it does know what you are referring to. But there are several parts to the general problem, all of them difficult. So someone working on the question of interpreting and manipulating the world will not want to spend a lot of time making the command language flexible, while someone working on languages will not be interested in the intricacies of modelling a complex world. The components are developed piecemeal.
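A minimal sketch in the spirit of SHRDLU’s blocks world, with an invented command set (the real SHRDLU parsed natural English; this toy just exposes two methods). The point is only that the program’s words have referents: “red” names a thing in its world, and questions are answered by inspecting that world, not by frequency counting:

```python
class BlocksWorld:
    """A mini-world of named blocks that commands manipulate
    and questions interrogate."""

    def __init__(self, blocks):
        # Every block starts on the table (supported by None).
        self.on = {b: None for b in blocks}

    def put(self, block, support):
        """Place `block` on `support` (another block, or 'table')."""
        self.on[block] = None if support == "table" else support

    def what_is_on(self, support):
        """Answer: which blocks rest directly on `support`?"""
        target = None if support == "table" else support
        return sorted(b for b, s in self.on.items() if s == target)

world = BlocksWorld(["red", "green", "blue"])
world.put("red", "green")   # "put the red block on the green block"
world.put("blue", "red")    # "put the blue block on the red block"
print(world.what_is_on("red"))  # → ['blue']
```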
Programming languages are another case, a sort of half-way house – something close enough to human communications enough for humans to use, but constrained enough for computers to understand. You could certainly imagine applying natural language AI to developing new more flexible and ambiguity-tolerant programming languages, for example. They generally don’t, because language AIs are far from perfect, and programmers have better things to do with their time than correct silly misunderstandings by some dumb AI. The more flexible and ambiguity-tolerant the language, the more ambiguous the code ends up, and so the harder it is to debug.
But if you can do it, you then go on from that to anything you can program computers/robots to do.
“The old project to use machines to generate formal mathematical proofs is slightly less silly, since you can (if you are a formalist) sort of make an argument that there’s no content in mathematics beyond the [syntax] rules”
Oh, there’s certainly more content – but in a sense the mathematicians are coming at exactly the same problem from the other end. They’ve got a mysterious black-box neural net that they’ve trained on the real world to build up an intuitive mental model of space and lines and angles and physics and so on. The problem is, it’s generally approximate, highly ambiguous, and very possibly inconsistent, and mathematicians don’t trust it. The point of using formal rules is the same as the reason programmers use formal programming languages – to avoid the errors that arise from the approximate statistical curve-fitting done by neural nets. The mathematician is always involved in a dual process – to get the formal syntax to prove what their neural net intuition is telling them ought to be so, and to train their neural net to reproduce what the formal syntax proves to be true (i.e. understand it intuitively). And even for humans that is hard to do. Even most humans don’t grok advanced maths.
“But when AI gets good enough to replace the jobs of those middle class people, lawyers, accountants etc, we will get demands that something is done about its threat to income levels.”
No doubt. But perhaps by then, the AIs will have got good enough to explain basic economics to those people.
It’s interesting. We laugh at the ridiculous errors that AIs make, trying to understand how the world works. We’ve had decades to work it out. But when it comes to subjects like economics – how money and production and trade work – humans persistently make the same sort of mistakes. It has been generally observed that if you survey the general public at large on virtually any scientific question that’s been widely taught in schools and in the media, even stuff like whether the sun orbits the Earth or vice versa, huge numbers of people get it wrong. But we don’t notice, we don’t consider it surprising or significant, and we generally don’t care.
We’re better than the AIs, still, but it’s a difference of degree, not kind. And the AIs are learning a hell of a lot faster than we humans are.
Well, I’m a bit of a Borges nut. Borges was a rather small “c” conservative of the finest sort but much of his work is strikingly postmodern. The central idea of “The Library of Babel” is that a total library contains no information whatsoever other than how it is (selectively) interpreted (death of the author?). A total library is mathematically identical to a printer selecting characters at random. Obviously that will be mainly gibberish but there will of course be truths generated by sheer chance (there will be the sentence “two plus two equals four”) though no way of knowing its truth value from within the system. In Borges’ library, for example, the librarians get stuck pondering a phrase, “O time thy pyramids” found in one of the books. What does it mean? It means nothing more than the letters you pull out of a Scrabble bag until of course the reader (or Scrabble player) gives it meaning.
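The “printer selecting characters at random” framing can be made quantitative. Assuming an alphabet of 27 symbols (26 letters plus a space; the figure is illustrative, and Borges’ library actually uses 25 characters), a given phrase of length n appears at any particular position with probability 27⁻ⁿ, so the expected number of positions to scan before it turns up is 27ⁿ:

```python
# Chance of a random printer producing a given truth.
phrase = "two plus two equals four"
alphabet_size = 27  # 26 letters + space (an illustrative assumption)

# Probability the phrase occupies any one given position.
p = alphabet_size ** -len(phrase)

# Expected positions to scan before one occurrence.
expected_tries = alphabet_size ** len(phrase)

print(len(phrase), p)
```

For this 24-character sentence that is 27²⁴ positions, a number around 10³⁴, which is why the library is “mainly gibberish” and why the truths in it are unfindable from within.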
Yup, a conservative Argentinian Prof invented postmodernism (and picked all the finest plums) before the beret-wearing Left Bank mob had started secondary school.
For me, Borges is the most important writer of the C20th.
I crossed in commenting with NiV. I shall give his interesting comment some thought… I also have to get a pizza.
Some of the recent comments (certainly not the OP) on Uygurs in China almost made me give up on Samizdata. This thread has from the start renewed my faith!
But I still have to have a pizza.
“The central idea of “The Library of Babel” is that a total library contains no information whatsoever other than how it is (selectively) interpreted (death of the author?).”
Each book contains lots of information, but considered on its own, no meaning.
Semantic meaning arises when you build a model/simulation of the world – a mapping between the states of a dynamic system and the states of the dynamic world, such that the behaviour of the system starting from that state matches the behaviour of the world starting from the mapped state. The dynamics of the simulating system are as important as the mapping.
Thus, a Turing tape filled with symbols has no meaning. You can map its states onto states of the world arbitrarily; still no meaning. The tape can only have meaning in conjunction with a particular Turing machine to interpret it. Then if the action of the Turing machine on the tape states corresponds to the behaviour of the world with regard to the mapped states, then the tape state can be said to ‘mean’ the world state. Run the simulation on the tape, and it tells you something interesting about the world.
To get meaning, you have to build a model of the world. Suppose we get a bunch of electrons, or a bunch of neurons, connected up in such a way that if you feed in the coordinates of the Earth, moon and sun at a particular point in time, using a certain mapping, let the electrons move around as they will, and then reliably find that the state now maps back to the positions of Earth, moon and sun a hundred years later, then the symbols ‘mean’ something about the outside world. They constitute a predictive model, a virtual representation of reality.
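That notion of a predictive model can be sketched concretely. This is a deliberately tiny illustration, assuming a perfectly circular one-body orbit rather than any real ephemeris: the internal state is a single angle, the mapping takes it to a position in the “world”, and the dynamics advance it. The ‘meaning’ of the state lies in the combination of mapping and dynamics, just as described above:

```python
import math

# Internal dynamics: one orbital angle, advanced at Earth's mean rate.
OMEGA = 2 * math.pi / 365.25  # radians per day

def step(angle, days):
    """Advance the model's internal state by `days`."""
    return (angle + OMEGA * days) % (2 * math.pi)

def to_world(angle, radius=1.0):
    """Mapping from internal state to world state: a position in AU."""
    return (radius * math.cos(angle), radius * math.sin(angle))

# Run the model forward a hundred years and read off a prediction.
state = 0.0
predicted = to_world(step(state, 36525))
print(predicted)
```

If the predicted position reliably matches where the planet actually turns out to be, the angle ‘means’ something about the outside world; if not, it is pepperoni-reading.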
And once you’ve got one of them, you can use it to run simulations of possible actions you might take until you find a chain of actions that gets you the banana! Survival!
Writing is another mapping, from marks on paper to internal brain states. Again, it can only have meaning in conjunction with the interpreter – a machine to first map a particular pattern of symbols to a particular brain state, and then to manipulate the brain state to make reliable predictions about the world. A book needs a reader, and a world for it to speak about.
We can also use our inbuilt interpreter to read the future in the patterns of pepperoni on a pizza, like chicken entrails. It seems like meaning. But it only has *real* meaning if its predictions are reliable.
Of course, there is an interesting question about where the meaning in Egyptian Hieroglyphics resided for all the years there was nobody who could read them. I leave that as an exercise for the student.
Perry: you’ve been outed, mate.
NiV,
You are right on “meaning” v “information”. Me done bad. As to the rest… I broadly agree. As to the hieroglyphs… Well, I think there is a difference between something that is clearly a text and something which is random by definition.