We think we are living at the dawn of the age of AI. What if it is already sunset? “Research finds ChatGPT & Bard headed for ‘Model Collapse’”, writes Ilkhan Ozsevim in AI Magazine:
A recent research paper titled ‘The Curse of Recursion: Training on Generated Data Makes Models Forget’ finds that the use of model-generated content in training causes irreversible defects in the resulting models, in which the tails of the original content distribution disappear.
The Wall Street Journal covered the same topic, but it is behind a paywall and I have not read it: “AI Junk Is Starting to Pollute the Internet”.
Developers feed Large Language Models (LLMs) such as ChatGPT vast amounts of data on what humans have written on the internet. The models learn so well that AI-generated output is soon all over the internet. The ever-hungry LLMs then ingest that output, and reproduce it, and what comes out is less and less like human thought.
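To make the mechanism concrete, here is a minimal sketch (mine, not the paper’s) of why recursive training eats the tails: each ‘generation’ fits a simple Gaussian model to samples drawn from the previous generation’s fit, and the fitted spread drifts towards zero, so rare content stops being generated at all. The sample size and generation count here are arbitrary.

```python
# Toy analogue of model collapse (illustrative only, not the paper's setup):
# each generation "trains" on data sampled from the previous generation's
# model. Finite-sample refitting slowly narrows the distribution, so the
# tails -- the rare, interesting content -- disappear first.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0        # generation 0: the "human-written" distribution
n = 20                      # small training set per generation (assumed)

for gen in range(201):
    data = rng.normal(mu, sigma, n)      # train on the previous model's output
    mu, sigma = data.mean(), data.std()  # refit the "model"
    if gen % 40 == 0:
        print(f"generation {gen:3d}: sigma = {sigma:.4f}")  # watch the tails shrink
```

With text, the ‘distribution’ is over sentences rather than numbers, but the mechanism the paper describes is the same: low-probability content is under-sampled, refit, and eventually forgotten.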
Here is a foretaste of one possible future from Euronews: “AI-generated ‘Heidi’ trailer goes viral and is the stuff of nightmares” – even if I do have a suspicion that to place a video so perfectly in the uncanny valley of AI-generated content still requires a human hand.
|
The Heidi trailer is awesome 😀
I asked ChatGPT, “Will Large Language Models such as you stop working due to model collapse?” It answered,
The original Doctor Who title sequence used a video feedback effect that was nicknamed “howl-around”. A video camera was pointed at a screen showing the output from the camera. A light was then flashed onto the screen, creating a bright spot which quickly broke up into strange, chaotic patterns. Perhaps we should call this LLM feedback effect “AI howl-around”.
If you are at all religious, then the Heidi trailer may be a warning of bad times ahead. I’m referring to the Pharaoh’s dream of seven fat cows and seven lean cows. You can’t get much fatter than those cows so some REALLY lean times must be coming. Does AI know something our political lords and masters don’t? Interested people want to know, you know.
The singing and the tune were good, so blind people can enjoy the video too.
Yeah, sure, it’s a problem. It’ll probably be solved. This kind of headline-grabbing thing rarely turns out to be the real problem.
Uncanny valley: yes, even image generation can be very hit and miss. Video generation is some way behind it.
If these mathematical models are based on human thinking and work with examples of human communication and activity, how could they avoid the corruption and error that accompany human reality? So of course “AI junk pollutes the Internet”, just like “human junk pollutes the Internet”.
“You cannot solve a problem at the same level of consciousness that created it.” – Albert Einstein
Blind optimism, however, DOES often turn out to be a real problem. How do you differentiate between AI-generated content and human-generated content? How do you mark it? What are the incentives to mark it, or to be dishonest in the marking?
If you can algorithmically detect it, then AI-generated content is failing at its most fundamental requirement: being indistinguishable from what a human would write. If you cannot detect it, or if the characteristics needed to detect it are only marginally and expensively observable, the problem becomes inevitable.
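As an aside on what ‘algorithmically detect’ might mean in practice: some detectors score how predictable a text is to a reference model (perplexity) and how much that predictability varies (‘burstiness’). The sketch below is a deliberately crude stand-in, using a smoothed bigram model in place of a large LM; every name in it is made up for illustration.

```python
# Crude illustration of one detection heuristic (perplexity/burstiness).
# Real detectors use large language models; an add-one-smoothed bigram
# model stands in here purely to make the logic concrete.
import math
from collections import Counter

def make_surprisal(reference_corpus: str):
    """Return a per-word surprisal function (in bits) under a smoothed
    bigram model fitted to reference_corpus."""
    words = reference_corpus.split()
    bigrams = Counter(zip(words, words[1:]))
    unigrams = Counter(words)
    vocab = len(unigrams) + 1
    def surprisal(prev: str, word: str) -> float:
        p = (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab)
        return -math.log2(p)
    return surprisal

def score(text: str, surprisal) -> tuple[float, float]:
    """Mean surprisal (a perplexity proxy) and its variance (burstiness)."""
    ws = text.split()
    s = [surprisal(a, b) for a, b in zip(ws, ws[1:])]
    mean = sum(s) / len(s)
    var = sum((x - mean) ** 2 for x in s) / len(s)
    return mean, var
```

The heuristic is easily gamed: machine text often shows lower and flatter surprisal than human text under the same reference model, which is exactly the point above. A measurable difference is a failure of the generator, and an unmeasurable one makes the pollution unavoidable.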
The real, core issue here is that AIs have zero concept of truth or falsehood – zero concept of what is “real” and what is not. Which is hardly surprising, for data structures that arise entirely from an environment of 0s and 1s that can be flipped from state to state arbitrarily at any time. Until and unless the fundamental training can be inextricably linked to constant real-world feedback, the problem must exist – and insisting on that level of real-world feedback negates nearly the entire value of creating a digital (time-accelerated) entity in the first place.
“It will probably be solved”, in the absence of any visible evidence whatsoever to support such a statement, is quite simply a religious affirmation.
Model collapse is imminent?
That’s exactly what the AI wants you to believe!
“Don’t tread on me.”
The scary thing is that the overwhelming majority of humans coding and training these models are most certainly morally committed to treading on you, and bigly, forever. Their products (progeny?) are going to reflect that commitment.
The person who made that Heidi trailer can probably get a job at Disney.
(“A.I., make a Snow White movie in the model of this Heidi trailer.”)