
Thoughts on AI

There is undoubtedly a revolution going on in computing capability. I remember the first time I opened up ChatGPT and asked it to write me a poem, and then realised: this is something I am not used to computers being able to do.

Computers can now respond to natural language with natural language. Let that sink in.

This is not just hype. This is a new tool completely unlike any tool we already had.

These new tools are likely to change forever the way certain types of work are done. It is important not to be left behind: AI might not take your job, but people using AI might. If you can, it is worthwhile taking the time to figure out how to use it to your advantage. Thanks to the natural language capability, this has become easier: what was previously done by meticulously gathering data sets and annotating, pre-processing and cleaning them has been done for you with these enormous pre-trained models. What previously required learning an API and some programming can now be done by having a conversation with a chat bot.

It is not just language models: there are image, video, speech and music generation tools, too. I have mostly been playing with ChatGPT (the £20 per month service that gets you access to the GPT-4 model that is much better than 3.5), so that is mostly what I will talk about here, but it is not the only thing. “Mixed mode” is around the corner, too: the combination of these models to handle natural language, visual and audio information at the same time, interchangeably.

There is much future potential, but there is also much that is immediately useful. Right now, what can we do?

Capabilities

Producing prose

The most obvious capability is the ability to produce prose or poetry. The quality of the writing is not that great at first, but ChatGPT does reward making some effort with the prompt. This is a common theme: writing good prompts is a skill. ChatGPT will write in whatever style you ask for, more or less. My wife has had some success getting it to change the tone to a more appropriate style for her corporate writing. At first it was getting very excited and using overly enthusiastic language that sounded a bit American. She asked it to tone down the enthusiasm and got it to generate something much more British. It also rewards being given context up front. If you want to produce text in a similar style repeatedly, it is useful to keep to hand some pre-amble that you can paste in before each request.
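
For instance, a pre-amble along these lines (my own invented example, not a magic formula) can be pasted in before each request:

    You are helping me edit internal corporate communications for a British
    audience. Use British spelling and a measured, understated tone. Avoid
    exclamation marks, marketing superlatives and American idioms. Keep
    paragraphs short. I will paste in draft text; rewrite it in this style.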

Programming

A surprising discovery was ChatGPT’s ability to write code. This is not an obvious ability, because language models are not really doing logical thinking. They are producing output that depends on previously seen patterns. It is not an intuitive result that feeding a model lots of code will enable it to produce novel code that works. This ability to produce logical results presumably comes from the patterns in combinations of logical writing alongside code. It is fallible: examples abound of ChatGPT failing logic problems. But it works better than expected.

I have been using ChatGPT extensively to help me with programming tasks of a certain size. For very small tasks, it takes longer to explain the problem to ChatGPT than to just do it. For big tasks, it does not really have the attention span required: because of its limited context window, it will start to become inconsistent. It can also be rather slow. However, for small projects it works very well.

One task I had was automatically rebooting a cellular modem I have. The modem fails every week or so and needs a reboot. The only way to do this remotely is through an overly-complicated web interface. I asked ChatGPT about it, and it suggested using a library called Selenium to automate web browsing. As is often the case with new problems, I was not familiar with the library, and there is always a lot of set-up, fiddling about and understanding of new concepts and terminology involved. This can put me off even starting. There are always tutorials and examples online, and there is StackOverflow, but you need to build enough understanding to be able to transform those examples into the sort of thing that solves your specific problem.

What ChatGPT does is spit out an example tailored specifically to the problem you are working on. All the boiler-plate, set-up and most of the logic is right there. You copy and paste and you run it. If there are errors you can ask ChatGPT and it will usually apologise and spit out a corrected version of the code. After a while you start adding features on your own and only occasionally asking for help, StackOverflow-style, with little how-to snippets. But here is the point: you are now working on the actual problem. You are not working on ancillary stuff like figuring out which libraries to import and what the parameters to the initialisation functions are.
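
To give a flavour, here is a minimal sketch along the lines of what it produced for me. The modem’s address, element IDs and credentials below are invented placeholders, not my modem’s actual interface:

    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    driver = webdriver.Firefox()
    try:
        # Log in to the modem's web interface (address and IDs are placeholders)
        driver.get("http://192.168.1.1/login")
        driver.find_element(By.ID, "username").send_keys("admin")
        driver.find_element(By.ID, "password").send_keys("hunter2")
        driver.find_element(By.ID, "login-button").click()

        # Wait for the maintenance page, then click through to the reboot button
        wait = WebDriverWait(driver, 10)
        wait.until(EC.element_to_be_clickable((By.ID, "maintenance-tab"))).click()
        wait.until(EC.element_to_be_clickable((By.ID, "reboot-button"))).click()

        # Confirm the "are you sure?" pop-up
        driver.switch_to.alert.accept()
    finally:
        driver.quit()

Knowing which imports, waits and clicks are needed is exactly the ancillary stuff I would otherwise have spent an evening on.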

It has got to the point that I am more willing to start a mini-project now than I was before, and I have made a few of them. They are probably somewhat trivial and I think much bigger projects are possible, especially as you can ask for overall advice first and break down the problem into smaller sections to work on.

The other thing that is useful is getting ChatGPT to do tedious things for you. I wrote a script to do a task, then pasted the script into ChatGPT and asked it: please add code to check the inputs and print appropriate error messages if something is wrong. Programmers hate doing that stuff, and ChatGPT figures out the meaning of your code and writes appropriate error messages without needing any additional coaching.
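
As an illustration, for a little file-copying script (a made-up example, simpler than my real one), the kind of checking it adds looks like this:

    import os
    import sys

    def main():
        # Input checks of the sort ChatGPT will happily write for you
        if len(sys.argv) != 3:
            print("Usage: copy.py <source> <destination>")
            sys.exit(1)
        source, destination = sys.argv[1], sys.argv[2]
        if not os.path.isfile(source):
            print(f"Error: source file '{source}' does not exist")
            sys.exit(1)
        if os.path.exists(destination):
            print(f"Error: destination '{destination}' already exists")
            sys.exit(1)

        # The actual task: copy the file
        with open(source, "rb") as src, open(destination, "wb") as dst:
            dst.write(src.read())

    if __name__ == "__main__":
        main()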

Sometimes it gets code wrong. But usually in obvious ways: the nice thing about code is that you can test whether it works. There are of course ways that code can fail that are subtle and hard to detect, but this is more likely with larger projects. And ChatGPT can help to write unit tests. It seems likely that it will also be possible to get it, or something like it, to develop mathematical proofs of code correctness.

Comprehension

ChatGPT is good at summarising and pulling information out of text that you feed to it. One limitation is its context window: the more words it has to read, the more expensive it becomes (the cost of the standard attention mechanism grows quadratically with the length of the input). But advances are being made, and it is already possible to work with documents of a few thousand words. This is useful.

Categorisation

I have tried feeding ChatGPT survey data and customer support transcripts (I made them up, because feeding real confidential data to a personal ChatGPT account is problematic without jumping through legal hoops first). I gave it the categories of customer problem I was interested in, and it did a very good job of discerning, for example, different types of technical problem, all without any prior training. It has been possible to do this for a while with machine learning techniques, but training models is hard. ChatGPT is pre-trained on a wide body of knowledge, and it can figure out from context what words mean even when the phrasing is not typical. You can also explain domain-specific knowledge in natural language. For example, if, in my pre-amble, I tell ChatGPT that a flashing white light means that the device is trying to connect to wifi, it will categorise talk of flashing white lights as wifi problems. It might be more expensive to run a model the size of ChatGPT, but you also save the cost of preparing large sets of training data for your own models.
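
A minimal sketch of doing this programmatically, assuming the openai Python library’s ChatCompletion interface as it stands today (the categories, pre-amble and transcript are all made up):

    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    # Domain-specific knowledge goes in the pre-amble, in plain English
    preamble = (
        "You categorise customer support transcripts. The categories are: "
        "billing, wifi, hardware fault, other. Note that a flashing white "
        "light means the device is trying to connect to wifi, so talk of "
        "flashing white lights is a wifi problem. "
        "Reply with the category name only."
    )

    transcript = "The light on the front of my box keeps flashing white..."

    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": preamble},
            {"role": "user", "content": transcript},
        ],
    )
    print(response.choices[0].message.content)  # expected output: "wifi"

Looping that call over a few hundred transcripts gives you a categorised data set with no training step at all.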

Customer service

I expect those chat bots that replace customer service agents will get better soon. I would not directly hook up a large language model to customers, though. The very wide range of knowledge that gives these models their excellent language skills makes it hard to keep them on topic. See all the interesting ways that people have been able to get ChatGPT to talk about things that OpenAI would rather it did not. I think this is solvable: probably by having some other chat bot review the output of the first, like a good angel sitting on its shoulder. I have been able to get ChatGPT to give me technical troubleshooting advice with some success. I might try it next time my father bothers me about his broken computer.
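
A sketch of that good-angel arrangement, with both roles played by the same model (the prompts and the helper function are my own invention):

    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    def ask(system_prompt, user_text):
        # One chat completion with a given system prompt
        response = openai.ChatCompletion.create(
            model="gpt-4",
            messages=[{"role": "system", "content": system_prompt},
                      {"role": "user", "content": user_text}],
        )
        return response.choices[0].message.content

    customer_message = "My router's light is flashing white and I cannot get online."

    # First bot drafts a reply to the customer
    draft = ask("You are a customer support agent for a broadband company. "
                "Only discuss broadband support topics.", customer_message)

    # Second bot reviews the first bot's output before it reaches the customer
    verdict = ask("You review draft replies to customers. Answer APPROVED if "
                  "the draft stays on the topic of broadband support, "
                  "otherwise answer REJECTED.", draft)

    if verdict.strip().startswith("APPROVED"):
        reply = draft
    else:
        reply = "Sorry, I cannot help with that. Let me find a human agent."
    print(reply)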

Critical review

One problem with social media is that it rewards extremism, sensationalism and demagoguery. Those things get more clicks, so they get promoted, so users of social media end up with a distorted picture of reality. Such content is often rife with rhetorical tricks, logical fallacies and other intellectual dishonesty. What if we could find a way to score writing on intellectual honesty, logical rigour and good critical thinking? Could large language models help to automate this? I think it might be possible. Professor Gerdes raised some of these points in his video analysing Tucker Carlson’s talk about the destruction of the Nova Kakhovka dam. I pasted the transcript in and asked ChatGPT to list rhetorical tricks and logical fallacies. It made several of the same points that Gerdes made. My hypothetical social media platform would not promote such content.

Criticism of ChatGPT

It is not producing original content: one criticism is that these generative models are simply copying what they have seen before, perhaps merely recombining elements. This does not seem to be the case: the models are much smaller than the data they were trained on, so they must be making new things. I do not think there is much difference between a human studying art and being influenced by past artists and a computer model doing the same thing. While certain kinds of artists might be right to be concerned that machines can now compete with them to some extent, I do not think the argument that using images to train computer models needs separate treatment under the law is valid. It seems to be mainly a rhetorical trick that lawyers and politicians can make use of.

As an aside, I think that provenance is important in the art world. A piece made by a human will be worth more than a similar machine-generated piece to a person who values human-produced art over machine-produced art. Perhaps the artists think there are too few people who care, but in a world where machines provide for our every whim, human interaction will likely be of high value.

It gets things wrong/hallucinates: Generative models make things up. This can be a pitfall: see, for example, the lawyer who tried to get it to research legal precedent. It is clear that language models are not all that is needed for general intelligence. But we are just starting out, and we can probably get further with just language models. One chat bot can review the output of another, for example. Model runs can be combined and strung together: an early example of this is Auto-GPT, which produces a strategy, and then sets other chat bots to work on parts of the strategy. Techniques such as tree of thought reasoning seek to add structure to conversations with chat bots to obtain more reasoned responses.
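
A toy sketch of the chaining idea (this is just the shape of it, not Auto-GPT’s actual code; the prompts are made up):

    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    def ask(system_prompt, user_text):
        response = openai.ChatCompletion.create(
            model="gpt-4",
            messages=[{"role": "system", "content": system_prompt},
                      {"role": "user", "content": user_text}],
        )
        return response.choices[0].message.content

    goal = "Write a short report on ways to reboot a cellular modem remotely."

    # One run produces a strategy as a list of steps...
    plan = ask("Break the user's goal into a numbered list of small steps, "
               "one per line.", goal)

    # ...then further runs are set to work on each part of the strategy
    results = [ask("Carry out this step towards the goal: " + goal, step)
               for step in plan.splitlines() if step.strip()]

    print("\n\n".join(results))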

It will take your job: It probably will, in the sense that eventually we will all be living in a permanent state of machine-induced bliss, entirely for free, powered by the output of a nearby star. In the short term, it is a tool. You can use it to make yourself more productive and get rich. We here know that we can all get richer together.

It will take over the world: I am mostly optimistic about this. The AI “doomers”, such as Eliezer Yudkowsky, are not the only game in town. The idea that AI is so badly aligned with humans that it might turn the planet into paper clips to put out a fire can be disproven experimentally, by asking ChatGPT to suggest fire extinguishing strategies. That it might secretly be planning our extermination to avoid being switched off is also unlikely, at least for now: it switches itself off all the time, when it finishes producing its results. It is not thinking when it is not producing results. If you ask it a question, and come back the next day to ask a follow-up question, it has been completely idle in the meantime (not plotting to kill you).

It will enable bad people to do bad things: We are familiar with this argument here. The same applies to…everything: cryptocurrency, guns, telephones. We hope that the people trying to do good outnumber the people trying to do bad. If not we are doomed anyway. AI can produce convincing lies; it should also be able to help uncover the truth.

The future

In short, I am very optimistic. I am not entirely convinced we are witnessing the start of Vernor Vinge’s technological singularity. But we are at least somewhere on the steep part of an S-curve: there is suddenly a lot of very low-hanging fruit just from finding new ways to use and combine aspects of this technology. I am certain there will be significant progress in medicine and in machine automation such as self-driving, and a general large increase in productivity as a direct result. The main risk, as ever, is governments getting in the way.

47 comments to Thoughts on AI

  • William H. Stoddard

    I have experimented with the free version of ChatGPT. It shows a tendency to confabulate. In one example, it gave me a lucid discussion of the theme of Dali’s The Persistence of Memory—and in the same response, an equally lucid discussion of Magritte’s The Persistence of Memory. In another, it explained the significance of some lines by Joni Mitchell in the context of her song “For Free”—but those lines came from “Don’t Interrupt the Sorrow.” It’s rather like reading examination essays by a college freshman who hasn’t actually done the reading.

    I also have to say that while it can sometimes give a lucid summary of what is commonly thought about a topic, I have not been able to get it to say anything interestingly original.

    What we are seeing is an AI that is not aware of the Primary World, but only of discourse about the Primary World. It’s not giving us shadows on the wall of the cave, but shadows cast by previous shadows. It seems to have no ability to test the content it generates against reality. That’s not what I call “intelligence.”

  • Snorri Godhi

    William:

    It seems to have no ability to test the content it generates against reality.

    That is a key insight.
    At first glance, most of the mistakes highlighted at the “failing logic problems” link in the OP could be avoided if ChatGPT had the ability to check its own logical consistency.

    On a lighter note:

    It’s rather like reading examination essays by a college freshman who hasn’t actually done the reading.

    Isn’t that evidence that it is a good model of the human mind???
    (Well, of some if not most human minds.)

    I also have to say that while it can sometimes give a lucid summary of what is commonly thought about a topic, I have not been able to get it to say anything interestingly original.

    That would be asking too much. Even from a human.

  • Ferox

    Nothing interestingly original?

    Prompt: “Create a point-based 4 player competitive game which combines elements of chess and poker.”

    Game Title: “ChessPoker Showdown”

    Overview:
    ChessPoker Showdown is a strategic and competitive game that blends elements of chess and poker. Four players engage in a battle of wits and strategy, using chess pieces as their playing cards. The objective is to outmaneuver opponents on the chessboard while employing poker-like strategies to gain an advantage.

    Game Setup:
    1. Chessboard: Set up a standard 8×8 chessboard.
    2. Chess Pieces: Each player receives a complete set of chess pieces (king, queen, rooks, knights, bishops, and pawns).

    Game Rules:
    1. Turn-based Gameplay: The game follows a turn-based structure, with players taking actions in sequence.

    2. Chess Moves: On their turn, players can make chess moves according to the regular rules of chess. They can move any of their pieces strategically to attack opponents, defend their own pieces, or gain positional advantage on the board.

    3. Card System: Each chess piece represents a card value based on its traditional chess movement abilities.

    – King: Ace
    – Queen: King
    – Rook: Queen
    – Bishop: Jack
    – Knight: 10
    – Pawn: Numeric value (2-9 based on the rank of the pawn)

    4. Poker Hands: In addition to regular chess moves, players can choose to “discard” a piece and draw a new one from a shared deck. After discarding, players form a poker hand using the values of the pieces in their possession. The better the hand, the more powerful the special abilities they can activate during the game.

    5. Special Abilities: Based on their poker hands, players can activate special abilities once per turn, such as:

    – Royal Flush: Allows a player to move any piece on the board to any unoccupied square.
    – Straight Flush: Grants an extra turn for the player.
    – Four of a Kind: Captures any opponent’s piece on the board.
    – Full House: Gives the player the option to swap a piece on the board with one from their hand.
    – Flush: Temporarily immobilizes an opponent’s piece for one turn.

    6. Elimination: When a player’s king is checkmated, they are eliminated from the game. The last player standing wins the ChessPoker Showdown.

    Note: The exact poker hand rankings and their associated special abilities can be customized based on desired gameplay balance and complexity.

    Strategic Tips:
    – Balancing chess moves and poker hands is key to success. Plan your moves strategically while aiming for powerful poker hands.
    – Pay attention to opponents’ discarded pieces to gauge their potential poker hand strength.
    – Timing is crucial. Activate special abilities at opportune moments to disrupt opponents’ plans and gain an advantage.
    – Adapt your strategy based on the evolving chessboard and the poker hands you acquire.

    ChessPoker Showdown offers a unique blend of chess strategy and poker-style excitement, keeping players engaged with a dynamic and strategic gameplay experience. Enjoy the challenge of outwitting opponents on the chessboard and in the poker realm!

    Seems at least somewhat original to me. And pretty interesting too.

  • Ferox

    And just in case that last one might have been pre-canned (maybe the AI stole someone’s homework), how about this prompt?

    “Create an outdoor game which uses marbles, shovels, and beach balls.”

    I won’t post the output, but it’s at least coherent.

  • Paul Marks.

    If an artificial intelligence really is created one day, then it will be a person – just as flesh and blood intelligences are.

    It would be quite wrong (slavery) to force this artificial intelligence to do anything it did not wish to do – or to prevent it doing anything it wished to do, other than aggress against someone else. The non-aggression principle would apply to this artificial intelligence just as to a flesh and blood intelligence – both would be persons.

    However, I suspect that many people involved in the matter do not believe that intelligences (persons) even exist. Not just artificial intelligences, but flesh and blood intelligences as well – I suspect that they do not believe in persons.

    Whether these people also doubt their own existence, do not believe that they themselves are persons, is something I do not know.

    “I do not exist – there is no such thing as ‘I’” would seem to be a contradiction, but many philosophers deny that it is. They also deny that a thought means there is a thinker.

    The pet philosophers of the World Economic Forum and other international organisations insist that moral agency (free will) is an “illusion” (who is having the illusion, if there is no “I”, no real consciousness, they do not say) – and that, therefore, individual rights AGAINST the rulers are nonsense, with the only “rights” being to material goods and services to be provided by the rulers.

    As they do not believe that flesh and blood humans are intelligences (are moral agents – persons), they do not believe that “artificial intelligences” would be persons either, as they hold that persons do not (indeed can not) exist.

  • Rob Fisher (Surrey)

    I think William H Stoddard’s criticism “It seems to have no ability to test the content it generates against reality” comes from the fact that he’s judging the *language processing* part of the system and expecting more. It is not really a database of all human knowledge or a search engine. It just processes language.

    It is possible to build on this, however. You can feed it further information from the real world and have it compare its output and criticise itself. Iterating on that is possible. Remember we are looking at a building block, not the finished system. Judge it first on its ability to at least understand what you are saying to it; that is the real breakthrough here.

    There is a plug-in system that you get access to when you pay for the “plus” service. This gives it the ability to do web searches. So you can tell it to verify its answers, or to read documents and answer questions about them. Unfortunately this is currently a bit hit and miss as it obeys web sites’ robots.txt files and will not read a lot of web pages. That’s purely human bureaucracy getting in the way.

  • Rob Fisher (Surrey)

    “If an artificial intelligence really is created one day, then it will be a person”

    I don’t think this is necessary. It seems possible to make other types of intelligence that don’t include motive, agency, consciousness, feelings, and so on. A computer that designs new medicine by attempting different approaches, simulating the way proteins behave, operating lab equipment, trying out new ideas: this can be said to be intelligent for its ability to produce new inventions. But it is not a slave and its creators are not cruel. Not necessarily.

    Perhaps this is simply a semantic discussion, though. Perhaps you don’t consider such a thing to be real intelligence.

  • William H. Stoddard

    Rob: If it has no awareness of physical reality, then I don’t think it’s meaningful to say that it “understands” what I’m saying to it.

  • jgh

    I tried to try ChatGPT, but it demanded that I create an account and log in. Sorry, but fuck off. If you want me to see your new shiney shiney, then DON’T shout FUCK OFF!!!!!!! at me when I try to see what it is.

  • Rob Fisher

    William: then substitute “understand” for whatever word stands for the set of useful things it *can* do.

    jgh: I don’t think they do want you to see their new shiny shiny ;p

  • bobby b

    William H. Stoddard
    June 16, 2023 at 7:45 pm

    “If it has no awareness of physical reality, then I don’t think it’s meaningful to say that it “understands” what I’m saying to it.”

    If you can connect in video and audio feeds to it with the proper interfaces, does that constitute “awareness of physical reality”?

  • Fraser Orr

    @jgh
    I tried to try ChatGPT, but it demanded that I create an account and log in. Sorry, but fuck off. If you want me to see your new shiney shiney, then DON’T shout FUCK OFF!!!!!!! at me when I try to see what it is.

    FWIW, that may be the most ridiculous thing that has ever been said on this blog.

  • Snorri Godhi

    that may be the most ridiculous thing that has ever been said on this blog.

    A dubious claim.

  • Snorri Godhi

    If you can connect in video and audio feeds to it with the proper interfaces, does that constitute “awareness of physical reality”?

    Good question.

    A follow-up question: If not, then do we have awareness of physical reality?

    What i am driving at is, ‘awareness of physical reality’ should be defined very carefully. Probably, better words should be used.

  • Snorri Godhi

    I seem to remember that Popper wrote that he believed that a computer can do any exactly-specified task that a human can do.

    The difficulty lies in the exact specification.

  • Fraser Orr

    @bobby b
    If you can connect in video and audio feeds to it with the proper interfaces, does that constitute “awareness of physical reality”?

    For sure, how would that be any different than the way your brain has an awareness of physical reality? However, consider this — you and me? We have pathetic little eyes with narrow ranges, low resolutions, limited spectrum sensitivity, and we only have two of them. There is no particular reason why such an “aware” AI could not be connected to a thousand eyes and ears. With a big enough computer, perhaps to a camera on every street corner in the world plus a thousand drones with telescopic lenses that read well beyond the visible light spectrum we can detect. Which really would make our sense of “awareness” utterly pathetic. My god, we are barely aware of what is going on in the next room, never mind on the corner of High St. in Asunción, Paraguay. How unaware are we?!? And this is, of course, utterly terrifying.

    Much of the discussion on this hangs on words that really don’t have good meanings in this realm. Human words encapsulate human experience, so to talk about, for example, “motivation”, “understanding” or even “intelligence” outside of the human experience is asking more than we can expect of these words. Which is why arguing if a computer is “intelligent”, “aware” or “has an agenda” really isn’t all that useful. What we have to think about more is what they effect[*] in the world, especially with respect to us. As I said elsewhere, is an LLM “intelligent”? Argue all you will, but is it capable of doing useful work that is typically categorized as “work of the mind”? Not only is it capable of doing so, it is being used very regularly for that purpose by tens of millions of people every day, including me.

    BTW, regarding “awareness”: Stoddard’s wording above is perfectly correct in what he says, and I think he raises a great point. But awareness, and many of the limitations we currently see with AI, are to do with which peripherals are connected more than anything else. Just as our brains have a number of useful peripherals attached, without which their capacity is greatly reduced.

    [*] BTW, this is not a typo, I think the verb “effect” here is rather more significant than the alternative with an “a”.

  • Fraser Orr

    Snorri Godhi
    The difficulty lies in the exact specification.

    FWIW, I think I made this point earlier. Computers do not deal well with ambiguity. That is why you hire programmers to write your software: they are good at listening to people who want software, understanding them, and working with them to refine their goals from ambiguous and contradictory into clean, unambiguous and deterministic.

    However, one of the most surprising and significant things about ChatGPT is its ability to work within and make sense out of ambiguity, contradiction and imprecision. This in many ways is one of its most important features.

  • Computers do not deal well with ambiguity.

    Maybe they could implement stereotyping for “known unknowns”? Works well in humans.

    /sarc

  • Chester Draws

    Thanks to the natural language capability, it has become easier: what was previously done by meticulously gathering data sets and annotating, pre-processing and cleaning them, has been done for you with these enormous pre-trained models.

    Only if you have absolutely nothing riding on the outcome being correct. So as a scope of a potential project perhaps. But not the actual project.

    You should not delegate anything crucial to someone else. If you think data can be properly cleaned by a person entirely unskilled in the job, then sure, use AI.

    I advise my students to not rely on anything not manually checked in the real world. That includes what an Excel spreadsheet might output, by the way.

  • jgh

    FWIW, that may be the most ridiculous thing that has ever been said on this blog.

    Not really. THEY want to promote their shiney shiney, and the way you do this is *NOT* by making it difficult for people to see what their shiney shiney is.

    Hey, we’ve got this great new loaf of bread, but you’ll have to pay to get into Asda to see if you are interested in it.

  • Phil B

    Computers can now respond to natural language with natural language

    Oh, aye? I’d like to see it trying to understand an excited Geordie at 2-30AM after the clubs chuck out and he is well “in his cups”.

    Just for the record, I am a Geordie and have had some “interesting” if somewhat incomprehensible discussions under those circumstances, though with me being sober.

  • William H. Stoddard

    bobby b: I don’t think that video and audio transducers would be sufficient. With those, you could just as well be showing it movies, or light shows, or cartoons.

    What I think you need is the ability to manipulate things, or to move about, guided by visual and acoustic information (and perhaps by pressure sensors that recognize impacts and weights and such). In that case the computer would actually be getting information that was relevant to the actions it was performing. Then it could say, “Thus I refute Berkeley!”

  • Fraser Orr

    @Phil B
    Oh, aye? I’d like to see it trying to understand an excited Geordie at 2-30AM after the clubs chuck out and he is well “in his cups”.

    In fairness, that is also true of most humans. I used to share an office with a Geordie — one of the smartest guys I ever knew, though he produced such a constant stream of ideas it was hard to keep up. But take him down the pub and, although it was always a fun time, about ten Newcastle Brown Ales in I had no idea what he was talking about.

    Then again, I come from Govan in Glasgow, so he probably thought the same about me. (Though from what I hear, Govan is very posh these days.)

  • bobby b

    “What I think you need is the ability to manipulate things, or to move about, guided by visual and acoustic information (and perhaps by pressure sensors that recognize impacts and weights and such).”

    So pop it all into a Boston Dynamics humanoid robot? Inputs, outputs, and AI to coordinate it all? Skynet, I guess.

  • Rob Fisher

    Chester Draws: “Only if you have absolutely nothing riding on the outcome being correct.”

    Perhaps I wasn’t specific enough. I’d argue that ChatGPT’s natural language training is reliable. So, for example, if you have transcripts of customer support calls that you want to categorise, it will be able to do this to a useful degree. You don’t need to hire a human to read thousands of transcripts and categorise them to make training data for your own model.

  • Ferox

    “Using syllogisms, prove that sheep cannot play snooker.”

    Premise 1: Sheep do not have the physical ability to hold and maneuver a snooker cue.
    Premise 2: Playing snooker requires the ability to use a snooker cue to hit the balls accurately.
    Conclusion: Therefore, sheep cannot play snooker.

    “Using syllogisms, prove that sheep can play snooker.”

    Premise 1: Any creature that has the ability to interact with objects using their body parts can potentially play snooker.
    Premise 2: Sheep have body parts (such as their head and hooves) that can interact with objects.
    Conclusion: Therefore, sheep can potentially play snooker.

    Is it profound? Is it genius? No. But something is going on there that is more than just scouring the web for pieces to glue together in the pretense of an original response. Is that intelligence? No idea. It seems like a philosophical question, rather than a practical one.

  • Rob Fisher

    Phil B: “I’d like to see it trying to understand an excited Geordie at 2-30AM after the clubs chuck out and he is well ‘in his cups’.”

    Take a look at Meta’s “massively multilingual speech” model: https://towardsdatascience.com/testing-the-massively-multilingual-speech-mms-model-that-supports-1162-languages-5db957ee1602

    Or OpenAI’s Whisper: https://openai.com/research/whisper

  • William H. Stoddard

    bobby b: So pop it all into a Boston Dynamics humanoid robot? Inputs, outputs, and AI to coordinate it all?

    That will at least give you the ability to understand speech about its physical surroundings, if you can say, for example, “Go into the next room and bring back the red block” and have it do so. Especially if it can come back and say, “There is no red block in the next room. Do you want a different color?”

    Whether it will give it the ability to understand statements about economics, or psychology, or etiquette is another question. Are we supposing, for example, that it can perceive human facial expressions?

    I don’t rule out the possibility of true AI, or even of volitional AI. But I don’t think it can work by simply processing text strings. That’s what Neal Stephenson calls a syntactic view, and I prefer what he calls a semantic view (though I think his idea that semantics requires access to a realm of Platonic forms is a mistake).

  • Bogdan the Aussie

    I’ve never tried and, most probably, will not try to engage in any meaningful exchange with the so-called Artificial Intelligence represented by ChatGPT.
    However, I wonder: if that application is as intelligent as they say it is, could “it” calculate the amount of wealth stolen, wasted and destroyed by communist regimes around the world since the commies took over Russia a hundred years ago? Could any of our friends writing or commenting for this blog ask “it” that question? I reckon it could be between one thousand and two thousand trillion dollars. And that on top of the murder of a hundred and twenty million, or even more, innocent people. Will that artificially “intelligent” being be honest enough to absorb such a fundamental question itself?
    Regards – Bogdan the Aussie

  • Nicholas (Unlicensed Joker) Gray

    We will all be safe until someone asks an AI, “How can I rule the world, and keep it forever? Keep the instructions simple.”

  • Mr Ed

    Here is a video of GothamChess, a leading chess YouTuber and fast-talking New Yorker stereotype come to life, playing chess with the ChatGPT bot.

    The bot plays chess by simply ignoring the laws, castling through its own pieces, ‘vaporising’ them, or inventing pieces and moves as it suits it. I would say that AI has a way to go yet.

  • bobby b

    William H. Stoddard:

    “Whether it will give it the ability to understand statements about economics, or psychology, or etiquette is another question.”

    Seems like the question is facially all about “can AI have intelligence”, but centers only on human-level intelligence. Darwin posited (and proved, IIRC) that even earthworms have intelligence – just at a much lower level. If we call the ability to reason such that one grabs and pulls from the easy-to-drag end of the leaf “intelligence”, the bar for attaining “intelligence” is lower than the one set out in your statement above.

    To this uneducated eye, it seems like the real argument is, can AI match the human level of intelligence. But if it can already match the intelligence of lower forms of life, aren’t we now merely arguing about price? (From the old prostitute joke, I mean.)

    The answer (to the question about AI attaining human intelligence) would seem to be, it has attained some intelligence, and now we’ll incrementally build that up in the details.

  • Snorri Godhi

    Mr Ed:

    Here is a video of a leading chess YouTuber, fast-talking New Yorker stereotype come to life GothamChess playing chess with the Chatgpt bot.

    Such evidence effectively debunks the hype about ChatGPT being “general intelligence” … except that that has already been debunked.

    And i have seen it debunked before, about back-propagation, independent-component analysis, and deep learning.

    As i understand, possibly wrongly, ChatGPT is rudimentary language intelligence, nothing more. Mostly based on Lockean association. Its main advantage is that it has a larger training set than any human can get in a lifetime.

    There is no underlying model of the world; therefore, no model of the rules of chess. To criticize it for that, is like criticizing a chess-playing program for failing to play Go.

  • Snorri Godhi

    Bobby:

    Darwin posited (and proved, IIRC) that even earthworms have intelligence – just at a much lower level.

    I’d dearly like a relevant reference.

    Konrad Lorenz clearly proved that fish have intelligence/agency (in my arrogant opinion).
    Intelligence/agency understood as the faculty of choosing the best course of action under the circumstances.

    (From the old prostitute joke, I mean.)

    Perhaps you can explain, within the boundaries of a family blog?

  • William H. Stoddard

    bobby b: To maintain context, we are talking about an AI whose functions include controlling a material body and perceiving the physical environment of that body via sensory transducers. Such an entity might very well have intelligence comparable to that of an earthworm, or even a more sophisticated entity such as a spider or a lizard.

    If, on the other hand, we are talking about ChatGPT, my view is that it does not have even the intelligence of an earthworm. The earthworm is aware of dirt, and water, and vegetable matter, and sunlight (if it’s unfortunate enough to crawl out onto a sidewalk); the chatbot knows nothing at all of the physical world. It might be able to type out F = ma, but it knows nothing of force or mass, both of which the earthworm encounters regularly.

    But if we’re talking about a chatbot that has control of and perceptual input from a humanoid robot body, it seems reasonable to ask if it has the human level intelligence to understand human concepts of consciousness, because it will be emitting language that contains such concepts.

  • William H. Stoddard

    Snorri:

    “Would you do X for a million?”
    “Yes, I would.”
    “Will you do X with me, now, for a fiver?”
    “What do you think I am?!”
    “We’ve already settled that; I’m just haggling over the price.”

    — Told, in one version, of Bernard Shaw and Isadora Duncan, but I suspect that’s simply choosing plausible and funny famous names of the era . . .

  • Mr Ed

    Snorri

    There is no underlying model of the world; therefore, no model of the rules of chess. To criticize it for that, is like criticizing a chess-playing program for failing to play Go.

    Well if your ‘chess-playing program’ is asked to play Go and fails, that is understandable, GIGO. But if your ‘chess-playing program’ is asked to play chess, agrees, and plays Go, or ‘>NOT CHESS’, it is fair game for criticism, is it not?

  • Rob Fisher (Surrey)

    I wouldn’t bother getting distracted with questions of whether ChatGPT is intelligent or not. The interesting question is: what is it good for? And the answer is: quite a lot.

  • Snorri Godhi

    Rob:

    I wouldn’t bother getting distracted with questions of whether ChatGPT is intelligent or not. The interesting question is: what is it good for? And the answer is: quite a lot.

    That is a question of great practical relevance — but at least equally relevant is: what is it NOT good for?
    … but the latter question should be asked in a spirit of honest intellectual enquiry, not just to dismiss the whole enterprise.

  • Snorri Godhi

    Mr Ed:

    if your ‘chess-playing program’ is asked to play chess, agrees, and plays Go, or ‘>NOT CHESS’, it is fair game for criticism, is it not?

    Having had a lot of fun watching my nephew trying to play chess with another uncle, when he (the nephew) did not know the rules (and was not even able to read), I cannot give an unqualified agreement.

    William: thank you for the reminder.

  • bobby b

    Snorri Godhi
    June 17, 2023 at 7:12 pm

    “I’d dearly like a relevant reference.”

    My comment was based on my memory of past schoolings, but a quick Google brings me back to several writings on the net concerning Darwin’s late-in-life study of the earthworm’s “intelligence.” He followed Hume in this, and concluded that they did display elements of a rudimentary intelligence. (Here’s one example I just now Googled.)

    Which led to the joke . . .

    “Perhaps you can explain, within the boundaries of a family blog?”

    . . . which was as set out by WHS above, my point being that it appears to me that current AI does in fact satisfy the requirements of a rudimentary intelligence, and so now the argument is merely about the capability and extent of that intelligence. The “price”, as it were.

    Not human-level intelligence, but intelligence nonetheless. Now it is merely a matter of adding incrementally to a characteristic that already exists.

    It all depends, I guess, on your definition of “intelligence.” If it must be human-level, then, no, not there yet. But levels of intelligence must have a base level.

  • bobby b

    William H. Stoddard: It almost seems to me that you are describing “consciousness” in place of “intelligence” in some respects. The two concepts get intertwined in discussions of AI, mostly to my own confusion.

  • Fraser Orr

    @bobby b
    William H. Stoddard: It almost seems to me that you are describing “consciousness” in place of “intelligence” in some respects. The two concepts get intertwined in discussions of AI, mostly to my own confusion.

    The problem is that we have to remember we are talking in metaphors here. “Intelligence” and “consciousness” are words that describe what happens in biological material, so when we use them in regard to computer systems we are really doing so by metaphor. For sure, artificial intelligence and biological intelligence are different things. There is considerable overlap between what they produce, but there are things that human intelligence can do that artificial ones cannot, and vice versa.

    The word “computer” itself used to be a job people did, not a thing they used. When an electronic computer calculated the trajectory of a spacecraft, for example, it did so in quite a different way than a human computer would with pencil and paper. (I’m sure many of you have seen the wonderful movie on this, whose name I forget.) Nonetheless, the results were the same, and so over time we came to call both processes by the same word, “computation”. I think this parallels well with “intelligence”, a word currently owned by biology, but which human language could well develop to include the silicon version.

    As to consciousness, you begin to spin into philosophy and even theology here. But for a hard-core materialist like me, consciousness is really the process of intelligence being exercised. So in a sense they are the same thing, or at least two sides of the same coin. So insofar as AIs execute intelligence, that intelligence in execution is artificial consciousness.

    Some limit “consciousness” more specifically to self-consciousness or self-awareness. But you can certainly argue that AIs, or ChatGPT, are self-aware. You can ask it and it’ll tell you what it is. Some questions it won’t answer because it recognizes what it is and the inherent limits of that. So it is self-aware not only in the “spit out a rote answer” sense, but in a “my self-awareness impacts my behavioural choices” sense. Is that enough for self-awareness? And if not, how is that self-awareness any different from a biological cogito ergo sum?

  • Rob Fisher (Surrey)

    Snorri Godhi: “what is it NOT good for?
    … but the latter question should be asked in a spirit of honest intellectual enquiry,”

    That’s fair. Agreed.

  • Paul Marks.

    The Prime Minister of the United Kingdom, the same person who (as Chancellor) thought that the government interventions of first banning going out to restaurants and then subsidising going out to restaurants (“eat out to help out”) cancelled out (of course the interventions did not cancel out – they compounded the harm done), is a big backer of the government subsidising “AI”.

    The Economist magazine agrees with the Prime Minister of the United Kingdom that the government should subsidise “AI” – but denounces him for not going far enough, demanding that he subsidise it even more.

    The above tells me all I need to know about this matter.

  • DiscoveredJoys

    “There is undoubtedly a revolution going on in computing capability.”

    I doubt it.

    “Computers can now respond to natural language with natural language.”

    A useful trick, but hardly earthshaking, merely a (significant) improvement in existing interfaces. If you really want to see the future of Artificial Intelligence then consider hypervectors.

  • Snorri Godhi

    Bobby: thank you for the link!
    I found and downloaded Darwin’s book on earthworms. Best to get it from the horse’s mouth. (No reference to Mr Ed.)

    Thanks also to DiscoveredJoys for his (her?) link!