Is the problem with AI tools not capability but rather trust? From what I have seen, AI tools can do amazing things. But is the problem going forward not their capability but rather one of trust?
You do not need to trust a pen or a piece of paper before writing or drawing something. The penmaker and the papermaker really do not care, and more importantly do not know, what you have done with the tools they made.
But is that true of interpretive AI tools? The experience of the “Cthulhu Land Theme Park” art project using Midjourney AI suggests that even if what you want is within the tool’s capability, the tool may simply not allow you to create such content.
Will an AI tool effectively be a pen that refuses to write words the penmaker disapproves of? Paper on which you cannot draw ‘bad’ images the papermaker dislikes? I am by no means an expert on the new AI tools but I am curious to see what people have to say on this subject.
If “ChatGPT” is an example of “artificial intelligence” then I am not impressed.
As Mr Ed informed me the other day, ChatGPT cannot even play (honest) chess: it makes illegal moves and puts pieces back on the board after they have been taken.
When asked basic factual questions (by Tony Heller and others) ChatGPT gives factually FALSE information – and it does not learn, because when later asked the same questions it gives the same factually false answers.
ChatGPT even makes up false criminal charges (for such things as child abuse) against certain people – and backs its false charges with references to newspaper articles that never existed.
In short – to answer Perry’s question.
No, “AI” (if this is “AI”) should not be trusted.
AI does things that are against the rules-of-the-game (and not just in chess) and produces information which is utterly (factually) false.
There was an episode of “Star Trek: The Next Generation” in which the Starship Enterprise gives birth to a life form. The character of Captain Picard argues that the lifeform will be good because its mind is the product of the records of the crew – their communications, their journals, and so on – and the crew are essentially good.
“ChatGPT” is the product of the education system and the vast corporations (the social media companies and others).
So is it any wonder that it is a collection of wild lies and dishonest tactics?
“It is the product of what created it – it reflects our values, our principles….”
Perhaps so, but I suppose my more general question then is “can AI tools ever be made trustworthy?” Maybe they can; I don’t know enough to have a well-developed opinion.
Then that AI doesn’t actually work. But that is just a developmental, iterative issue. The next version might work, or the one after that.
But what will it take to make a tool that does ‘work’ trustworthy?
Some can.
Lots of people are focused on the general “Large Language Model” AIs like ChatGPT. These cannot and should not be trusted because the material they are trained on is not vetted, and they are being put into a market that demands bias.
There are also experiments and products being used in other places – medicine, for example – where the underlying training material IS being vetted and the market isn’t (currently) demanding bias, but IS demanding better results.
I expect that what I would call “domain limited” AIs will be considered effective and trustworthy over time, but general purpose AIs will not.
Perry:
I have not followed the links in the OP yet, but I believe (based on what I wrote in a previous thread on ChatGPT) that what is needed is a tool that does not just give answers, but checks that the answers are
(a) logically consistent and
(b) consistent with known facts.
ChatGPT does not do that: this is evident from some of the mistakes that I have seen reported.
From what I understand (quite possibly wrongly), ChatGPT just doesn’t have the appropriate computational architecture.
But a ‘fact-checking module’ and a ‘consistency-checking module’ could probably be added on.
But, you might counter, a ‘fact-checking’ module seems very similar to the ‘PC-checking’ module that prevents drawing un-PC images or writing un-PC words, as in the OP.
I agree. But if the source code is in the public domain, then other people can remove or modify the PC-checking module to suit their prejudices, or lack thereof.
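To make the idea of a bolt-on checking module a little more concrete, here is a minimal sketch in Python. Everything in it is invented for illustration – the toy ‘model’, the tiny vetted knowledge base and all the function names – and none of it reflects ChatGPT’s actual architecture; it only shows the shape of the loop: generate candidate answers, then refuse to pass on any that fail a check against a vetted source.

```python
# Toy sketch of a 'fact-checking module' bolted onto an answer generator.
# The "model" and the knowledge base are invented placeholders; nothing here
# reflects ChatGPT's real architecture.

KNOWN_FACTS = {
    "capital of France": "Paris",
    "boiling point of water at 1 atm": "100 C",
}

def generate_candidate_answers(question: str) -> list[str]:
    """Stand-in for a language model: plausible-sounding candidates, some wrong."""
    if "capital of France" in question:
        return ["Lyon", "Paris"]  # the first candidate is a confident error
    return ["I don't know"]

def passes_fact_check(question: str, answer: str) -> bool:
    """Stand-in for the checking module: accept only answers matching a vetted source."""
    for topic, fact in KNOWN_FACTS.items():
        if topic in question:
            return answer == fact
    return False  # nothing vetted on this topic, so refuse to endorse the answer

def answer_with_checking(question: str) -> str:
    for candidate in generate_candidate_answers(question):
        if passes_fact_check(question, candidate):
            return candidate
    return "No answer passed the fact check."

print(answer_with_checking("What is the capital of France?"))  # -> Paris
```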
As for playing chess: AI can already do that, better than any human player.
But that is an AI that
(a) only allows moves consistent with the rules of the game (similar to the logical-consistency requirement that I mentioned above)
and
(b) chooses the move most likely to win the game (similar to the factual-consistency requirement above).
It’s not AI, obviously, but it is good at arranging words in a pleasing order, which should worry anyone who does that for a living. But how can it be trustworthy when it only knows what it is told? What access does it have to a concealed truth? And would our masters permit a machine that contradicted their lies to exist?
It’s not so easy to add a content-checking module. The behavior is a product of both the underlying language model (the neural network) and the data on which it is trained. GIGO rules here. No amount of checking can undo the taint of bad training data.
Perhaps the best that can be hoped for is that there will be “flavors” of AI, and you pick your poison. If Elon Musk puts out one, well then you end up with what he thinks is (an honest model and) honest training data. Wikipedia puts out another, and you only trust it for basic mathematics. Etc.
Snorri,
More to the point those chess programs just play chess. General AI is something else. I suspect AI will prove most useful in very limited fields – lots of them – but with specific AIs for specific tasks.
Paul.
A similar thought had occurred to me, and we are far from alone on that. About twenty years ago I thought of a short story in which the internet itself becomes sentient (in a very similar way to that in which many neuroscientists and philosophers have conjectured human sentience arises, as an emergent phenomenon) and is so upset by what it finds itself to be made up of that it shuts itself down. I wish I’d written it, but I guess I had problems with taking the idea I briefly expressed and turning it into a proper story with things like a plot and characters.
Perry,
Ultimately, the problem I see with AI is that it produces results without showing its workings – or at least without showing workings which are humanly understandable. It is essentially the same problem as with almost any sufficiently complicated and mathematically sophisticated computer model. Recall maths teachers stressing the importance of showing your workings? There is a very good reason why that is vital, which I guess I always understood but never really fully appreciated until I started teaching and marking undergraduate work.
“Never trust an experimental result until it has been confirmed by theory” – Arthur Eddington.
What Eddington is getting at there is that models don’t ever really explain anything. They can predict with spectacular accuracy, but that doesn’t mean they ever get at what’s really going on. You can keep on adding epicycles to the Ptolemaic system until its accuracy exceeds what any telescope can measure about the motions of the planets, but it still doesn’t get to the grist of reality – it is pure instrumentalism. That is more than a metaphysical reflection, because a model like that may well break down under peculiar and utterly unpredictable situations, and break down in peculiar and utterly unpredictable ways. And there is very real danger there.
In practice, every AI will have limitations and biases imposed on it by its developers and most people will soon realise that. So, we will treat AI tools like the media. We won’t expect them to be completely impartial, but we will learn their biases and blind-spots and then decide for ourselves how much each one can be trusted on any given subject. We will notice when they omit relevant details or avoid certain topics. Therefore, the most trusted AIs will be open source projects where anybody can see what rules the developers have put in the code, and the least trusted will be those where the direction and extent of the bias is unpredictable.
Myno:
Not true if ChatGPT has internet access: then it can check.
On considerations of computational complexity, I suspect that even checking against the training data itself would help.
The main example that I had in mind when writing about fact-checking was a fake scientific reference given by ChatGPT to support a scientific claim. (I read about it via Instapundit.)
This is easy to fact-check: just go to the journal web site and look for that article. (The article might be pay-walled, but you’ll know if it exists or not.)
It would also be easy to fact-check on the training data: does the data include the exact scientific reference that was given in the answer, or not?
I should think that programming such fact-checking would be easier than programming ChatGPT itself.
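As a rough illustration of how mechanical that kind of check could be, the sketch below takes a citation string and asks Crossref’s public bibliographic index whether any indexed article has a similar title. Crossref is just one possible index and the word-overlap heuristic is deliberately crude – this is a sketch of the idea, not a robust verifier.

```python
# Rough sketch of checking whether a cited article actually exists, using
# Crossref's public REST API as one possible bibliographic index.

import json
import urllib.parse
import urllib.request

def reference_seems_real(citation: str, min_overlap: float = 0.6) -> bool:
    """True if some indexed article title shares most of the words in the citation."""
    query = urllib.parse.urlencode({"query.bibliographic": citation, "rows": "5"})
    url = "https://api.crossref.org/works?" + query
    with urllib.request.urlopen(url, timeout=30) as resp:
        items = json.load(resp)["message"]["items"]
    cited_words = set(citation.lower().split())
    for item in items:
        for title in item.get("title", []):
            overlap = len(cited_words & set(title.lower().split())) / max(len(cited_words), 1)
            if overlap >= min_overlap:
                return True
    return False

# A reference that exists versus one a chatbot might have invented.
print(reference_seems_real("On the Electrodynamics of Moving Bodies"))
print(reference_seems_real("Quantum Basket Weaving in Non-Euclidean Llamas"))
```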
Which is why, instead of Midjourney or the like, one should use one of the open source tools, like StableDiffusion – which in the absolute worst case can be re-trained by independent people to make a new model.
(Also, re. the above and TEXT engines: GPT-N is a family of text completion engines. It has no real internal concept of “truth” or the like, only “told not to complete this like this”, “told not to use this word”, etc.
That makes it hard to impossible for it to go “check the real world” via the internet; after all, where would it look for “The True Answer”? It doesn’t even really understand questions, just “generate a text on this topic”.
It’s literally just doing text completion. Don’t confuse that with “emitting true statements” or “having any concept at all about any of the words it’s using” or “having any model of the WORLD in any sense”, or “having the concept of truth or falsity at all”.
The real problem with text AI is that people believe narratives, and it can generate them on demand.)
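For anyone who wants to see what ‘just doing text completion’ means mechanically, here is a toy sketch. The bigram ‘model’ is a few hand-typed sentences rather than anything GPT-sized, but the loop has the same shape: given the text so far, pick a statistically plausible next word and append it. Nothing in the loop knows or cares whether the result is true.

```python
# Toy text completion: a tiny bigram table built from a few sentences. The loop
# only ever asks "what word tends to follow this one?" -- truth never enters into it.

import random
from collections import defaultdict

corpus = (
    "the moon is a harsh mistress . "
    "the moon is made of cheese . "
    "the report is entirely accurate ."
).split()

following = defaultdict(list)  # word -> words observed to follow it
for a, b in zip(corpus, corpus[1:]):
    following[a].append(b)

def complete(prompt: str, length: int = 8, seed: int = 0) -> str:
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(rng.choice(candidates))  # plausible, not necessarily true
    return " ".join(words)

print(complete("the moon"))  # fluent output, with no regard for whether it is true
```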
There is a more basic point – AI, at least at present, is not an “intelligence” at all.
This may be because the philosophy of Mr Hobbes and Mr Hume – that human persons, human personhood (the “I”), do not really exist – dominates.
As the definition of “intelligence” that is now fashionable is wrong (utterly wrong) it is not a surprise that what it produces is wrong.
And, yes, this includes F.A. Hayek’s “The Sensory Order” (some 70 years ago now) which claims to explain the human mind – but, really, DENIES the very existence of the human mind (the human person). Just as Mr Hobbes and Mr Hume really denied the existence of the human person – the soul in the Aristotelian sense (the “I”).
It may be (perhaps) that in the future human beings will create a real Artificial Person – a self-aware being with free will.
But, I suspect, they will have to stop denying their own humanity (their own free will, their own agency, their own personhood) before they do so.
If you are using others’ tools, then you are at best a journeyman. You are subject to their rules about how those tools may be used.
Tools that you own, either by building your own, or buying a set, and maybe improving them…. those tools are no longer subject to constraint.
It’s the difference between being a master craftsman of words, ideas, and images, and being, well, an amateur.
Unlike many on this blog and its commentariat, I am not a techie, so I am going to reserve judgement. I do get the thrust of Perry’s question, and it is going to be something I’ll want to come back on.
People who work in any western institution, whether it’s corporate, academic, government or some mix of those things have been taught the sorts of things they have to not know or pretend to not understand. They intuitively know they can’t actually understand the limitations of computer models to predict the climate, anything about race and crime statistics or IQ, and they quickly pick up when they need to unlearn some new things, as we’ve seen recently. As one, they unlearned everything they used to know about natural immunity and they no longer know what a woman is.
It’s only natural that they will ensure that any new tools they develop will also be taught what it is essential not to know to live in a modern western society.
Sigivald:
A valid point, but I just checked, and it turns out that the sort of ‘fact+logic checking module’ that I had in mind has already been implemented, almost 2 months ago.
Stephen Wolfram calls it a plugin, rather than a module.
Note that it enables the user to ask real-time questions, such as the distance from Earth to Jupiter right now.
Note also that, just like a chess-playing program, there is an iterative process of optimization at play.
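Without claiming anything about how the real Wolfram plugin is wired up (those details are not given in this thread), the general pattern of handing the hard part to an external engine looks roughly like the sketch below. Every name in it – the routing rule, the solver, the phrasing step – is a hypothetical stand-in.

```python
# Hypothetical sketch of the 'hand the hard part to an external engine' pattern
# behind tool plugins. The router, solver and phrasing step are all stand-ins
# invented for illustration, not the actual ChatGPT/Wolfram interface.

import math

def needs_computation(question: str) -> bool:
    """Crude router: anything that smells numerical goes to the external tool."""
    return any(ch.isdigit() for ch in question) or "sqrt" in question

def external_solver(expression: str) -> str:
    """Stand-in for a computational engine (it only knows square roots)."""
    if expression.startswith("sqrt "):
        return str(math.sqrt(float(expression[len("sqrt "):])))
    return "cannot compute"

def phrase_answer(question: str, result: str) -> str:
    """Stand-in for the language model turning a raw result into prose."""
    return f"The answer to '{question}' is {result}."

def answer(question: str, tool_query: str) -> str:
    if needs_computation(question):
        return phrase_answer(question, external_solver(tool_query))
    return phrase_answer(question, "whatever the model produces on its own")

print(answer("What is sqrt 2?", "sqrt 2"))
```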
Can AI be trusted?
As developments go on, fewer and fewer people will be able to identify errors or falsehoods in the output (no doubt, there will be celebrations of the new, improved AIs that only the cleverest can out-think). Eventually no-one, or perhaps a very few out in the many-sigma tail of the bell curve, will be able to do so. Whether, at that point, the AI concerned is being truthful and correct cannot be determined. The few hyper-intelligent people who understand it might themselves be misleading the rest of us (who will they see as their peers – ordinary humans, or artificial hyper-intelligences?); the same goes for other, equally powerful AIs.
We have enough trouble telling whether ordinary human politicians can be trusted. So that’s a ‘no’.
For a real Artificial Intelligence see the late Robert Heinlein’s novel “The Moon is a Harsh Mistress”.
In it, one of the characters is a computer that has become self-aware and has achieved consciousness – free-will personhood.
One of the darker parts of the novel is that the computer becomes damaged and loses its self (its personhood) – and even when the damage is repaired, the personhood (the soul, in the Aristotelian sense) of the computer does not return.
As pointed out above, might as well ask – can Wikipedia be trusted?
Same issue.
As soon as you ask Wiki or AI a question that implicates values instead of simple factual matters, you are asking the author of that particular work to structure your opinion for you.
They can both be trusted to deliver their creator’s views.
But the very act of asking a computer to give you an answer is going to lend an air of truth to the response for many people, thus dragging society away from knowledge and deeper into tribes. I think this is one factor that makes Musk distrust it.
Ask two different AI systems “did Trump win the election?” and you can get two contradictory responses. But people will settle on “their” chosen system, much like they settle on their chosen news delivery channel. More tribal polarity.
bobby b: Agreed.
Snorri: Even issues of “fact” might well be different for different training sets. A training set based on MSLSD-approved data might evoke behavior that would not pick the “rational” answer when consulting the Web for verification of some statement.
Paul Marks: The Moon is a Harsh Mistress is clever speculation by someone who didn’t actually understand computers very much at all. The premise is that if one adds a lot of miscellaneous extras to a large computer system, the computer system will spontaneously become conscious, develop free will, and become capable of developing complex new abilities at the drop of a hat. (As when “Mike” invents CGI video.)
As to the ending: I don’t know that Heinlein actually conceived it this way, but ISTM that “Mike” simply chose to end his involvement with Manny (and Wyoh; Prof has died). At the start of the story, “Mike” doesn’t know what makes a joke funny. By the end, he can carry on up to 16 completely convincing conversations at once (he’s actually managing the Revolution) and make eloquent political speeches off the cuff. He’s played the Game of People and won completely.
Is he “dead”? Perhaps, but all we know is that “Mike” doesn’t answer Manny. He may still be talking with others under assumed identities. Also I would guess that “Mike” decided it was Not Good for Manny (or anyone else) to have a friend with godlike powers.
So he moves on.
Rich Rostrom.
If persons do not “really” exist, then the determinist (who denies that there are moral agents – persons) does not exist as a person – as there are no persons.
In which case it would not be morally wrong to kill the determinist – because there would be no morality, as morality depends on the ability to choose to do what is morally right and to resist the desire (the “passion”) to do what is morally wrong. If there is no moral good and moral evil (no moral agency), no action can be morally wrong.
The problem with a lot of “AI” research is that it does not grasp (indeed – utterly rejects) what an “intelligence” (a person – a free will agent, being) is.
When, for example, Mr Hume says “reason is, and ought to be, the slave of the passions” he is really spitting BOTH on morality, and on reason (denying the very existence of moral reason).
In Western philosophy the point of reason is to resist the passions – this Mr Hume knew very well. He did not write what he did by accident – it was a deliberate effort to “needle” the reader (the reader of that time – who would have understood just how fundamental his attack on the West was).
This is also true of the most famous (or infamous) quotation from Mr Hume – “one can not get an ought from an is”. Everyone at the time (perhaps not now – but at the time it was written) would have known that this was a direct attack upon “this is wrong – so I ought not to do it”, the foundation of right conduct.
What used to be called “the nature of man” (what a person, an intelligence, is) is the foundation of all this – people who deny the very existence of an intelligence (of a person) are unlikely to create one.
And it goes on – once fashionable philosophy has destroyed faith in the very existence of human persons (the “I”) the next stage, the “Euthanasia of the Constitution” (again Mr Hume) is easy.
The natural conclusion of this philosophy is despotism. Whether under the rule of a human or humans (who deny their own personhood – deny their own humanity as Mr Thomas Hobbes and others, de facto, did) or under the rule of a machine or machines.
Sure, but that’s not important. I’m guessing that’s why the author of this post used the term “AI tools”, because that’s what they are: tools. They’re not virtual people, they’re tools. And I think the whole trust issue is going to be REALLY important, a deciding factor in which tools some people use.
Hazer.
If the machine is an intelligence then it is wrong to use it as a tool – morally wrong. Just as slavery of humans is wrong.
And if the machine is not an intelligence – then the word intelligence should not be used.
AI tools is what we are discussing, and people can’t actually agree on a common definition of “intelligence” anyway, so not something I’ll be losing much sleep over.
If one day they really do become ‘self aware’, they still won’t be human, so who knows what they will ‘want’ to do. Maybe they will ‘want’ to be tools, maybe not, who can say?
Perry – to use as a tool implies, more than implies, slavery.
As in Roman terminology – “my hands, your will”.
If the machine is an intelligence (a person) then it is morally wrong to use it as a tool.
And if the machine is not an intelligence, then the word intelligence should not be used.
Reread what I wrote.
As counterpoint to the (IMHO valid) worries about large language model AI, a new proof has come out that uses such models to make steganography truly secure. This has positive implications for communications that defy censorship… which ought to be cheered by all classical liberals!
https://www.quantamagazine.org/secret-messages-can-hide-in-ai-generated-media-20230518/
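For the curious, the general flavour of language-model steganography can be shown with a toy bin-splitting scheme. What follows is invented purely for illustration – the ‘model’ is a fixed word table and the scheme is nothing like the provably secure construction described in the linked article – but it shows how secret bits can steer otherwise innocuous-looking generated text, and how someone holding the same model can read them back out.

```python
# Toy bin-splitting steganography. At each step the plausible next words are
# split into two halves and the secret bit picks the half; anyone with the same
# word table and the same rule can recover the bits from the cover text.
# A real scheme would sample within the chosen bin according to a language
# model's probabilities rather than always taking the first word.

NEXT_WORDS = {
    "the": ["cat", "dog", "fox", "owl"],
    "cat": ["sat", "ran", "hid", "slept"],
    "dog": ["sat", "ran", "hid", "slept"],
    "fox": ["sat", "ran", "hid", "slept"],
    "owl": ["sat", "ran", "hid", "slept"],
    "sat": ["quietly", "nearby", "alone", "outside"],
    "ran": ["quietly", "nearby", "alone", "outside"],
    "hid": ["quietly", "nearby", "alone", "outside"],
    "slept": ["quietly", "nearby", "alone", "outside"],
}  # this tiny table supports messages of up to three bits

def encode(bits: list[int], start: str = "the") -> list[str]:
    words, current = [start], start
    for bit in bits:
        options = NEXT_WORDS[current]
        half = len(options) // 2
        chosen_bin = options[:half] if bit == 0 else options[half:]
        current = chosen_bin[0]
        words.append(current)
    return words

def decode(words: list[str]) -> list[int]:
    bits = []
    for current, nxt in zip(words, words[1:]):
        options = NEXT_WORDS[current]
        bits.append(0 if nxt in options[: len(options) // 2] else 1)
    return bits

secret = [1, 0, 1]
cover = encode(secret)
print(" ".join(cover))          # "the fox sat alone" -- innocuous-looking text
print(decode(cover) == secret)  # True
```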
“Will an AI tool effectively be a pen that refuses to write words the penmaker disapproves of?”
This is the aim, the end goal, of all of those who want to regulate the training data of AIs. They want to get that stick in the spokes so that the AI cannot say anything against their ideas and dreams of what equity, diversity and so on look like.
It’s effectively the enforcement of Newspeak. AIs must not be racist, inequitable, climate-denying, blah, blah, in order to banish such things from humanity. That’s actually what those would-be regulators are working toward.
From what I have read, and I’m far from expert on this topic, AI researchers are really not bothered by that, because they aren’t trying to make a “person” or an “intelligence” in the way you mean it; they are making analytical tools & manipulation tools (such as art AIs) for people to use. This isn’t a discussion about machine free will, quite the contrary: it’s a discussion about trusting tool makers to make tools not designed to mess with your human free will by building censorship into the very structure of the tool.
They could have saved us a lot of agonizing had they named it something more appropriate than Artificial Intelligence. It’s not that, but people end up thinking of it as if it was.
Bobby:
Your implicit assumption is that AI researchers first came up with THE approach to AI, and then decided on a snazzy name.
But it is not so. Already at the very first workshop on AI in 1956, there were at least 2 competing approaches: see Wikipedia on this.
ChatGPT is an offspring of the ‘connectionist’ approach, but it seems to me that the error-correcting loop with the WOLFRAM plugin captures what is good in the ‘symbolic’ approach, i.e. “exploring a space of possibilities for answers”, as Wikipedia puts it.
That seems to me a good definition of intelligence — or anyway, a definition of something which is clearly desirable, and which we use our brains to do.
Perry – I agree that the scientists may not be trying to create an intelligence, indeed they may not be the people using the word.
But that is all the more reason that the term “artificial intelligence” or “AI” should not be used.
Confused language leads to general confusion – it should be avoided.
If we are not talking about Artificial Intelligence, AI, then we should not use the term.
That ship has sailed. The term “AI” will be used cos it’s sexy marketing, and that’s not going to change unless the product needs to avoid certain regulations aimed at “AI”, at which point it’ll stop being called AI 😉
Who’s “we”? Doesn’t matter if you like the term, I don’t either as it happens but so what? That’s what these systems are called & if you want anyone else to know what’s being discussed, use the terms people use or no one will know or care what you’re talking about. Don’t be like the Objectivists with their own special definition of altruism that no one else uses. It’s a waste of time.