I get the daily posts from the Law & Liberty blog, and this struck me as interesting, because of the preamble:
Dozens of start-ups now offer Artificial Intelligence tools to help businesses set market prices. Assuming unlimited computing power to run such models and comprehensive data sets to train them, can AI replicate the way human actors make decisions in the marketplace? Socialists have argued for more than a century that enlightened bureaucrats can set prices as well as the myriad of private actors in the marketplace. Ludwig von Mises offered a celebrated refutation of the socialist case. Does the vast computing power behind Large Language Models give new life to the socialist argument? The answer is no, but Mises’ argument needs to be updated and sharpened.
The author of the article, David P Goldman, goes on to explain the problem. As the article is free to access, I won’t reproduce other paragraphs here apart from the two final ones:
AI can’t replace the innovative creativity of entrepreneurs. On the contrary: AI itself is an innovation whose outcome is uncertain. Some applications (replacing human beings on corporate help desks, for example) may turn out to be trivial; others, for example devising new pharmaceuticals, may be revolutionary. Only in an imaginary world in which no innovation occurs could we envision an AI-driven marketplace.
Artificial Intelligence isn’t intelligence in the first place. It can replicate the lower-order functions of the human mind, the sorting and categorizing faculty, and perform such operations much faster than humans. But it cannot reproduce the higher-order functions of the mind—what Immanuel Kant called Vernunft (roughly, critical reason) as opposed to Verstand (usually translated as “understanding”). It can mine data from past experience, but it can’t stand at a distance from experience and ask, “What if we did things differently?” Freedom is the freedom to create, and that is what free societies must preserve.
This seems right to me. I think AI is going to produce marvels, but I don’t see it removing the need for the boldness, risk-taking and ability that all great businessmen have to “look around corners”. To some extent I am a techno-optimist, as the likes of Marc Andreessen, the US venture capitalist, are. But I am not, I hope, Panglossian, the mere opposite of a perma-doomster, either.
It is also interesting to consider how governments, for example, might seize the idea that AI makes it possible to co-ordinate human activity in ways that eliminate all that pesky free market exchange and messy entrepreneurship. This line of thinking resembles the view of certain science fiction writers who tried to imagine a post-scarcity world. (Science fiction often contains lots of economics, as this article by Rick Liebling shows.) Eliminate the idea of scarcity, so the argument runs, and then the underlying foundation of economics – “the study of scarce resources that have alternative uses” – falls away. It is easy to see the utopian attractions if you like to mould humanity to your will. I mean, what could go wrong?
Eliminate scarcity, then who needs enforceable property rights and rules about “mine and thine”?
In a post-scarcity world, where will the sense of urgency come from – the sense of adventure that drives great businessmen to create and innovate to push back against such scarcity? (This is also the fear that some might have of universal basic income – that it creates a world of indolent trustafarians who, like a couch potato, suffer muscle loss and mental decline because they don’t have to work or struggle to build anything.)
Karl Marx dreamed of a post-scarcity world – that seems the logical end-point of his communist utopia, to the extent he fleshed it out at all. (The irony being that his ideas helped inspire some of the greatest Man-made famines and loss of life in recorded history, in part because of the failure to understand the importance of property, prices and incentives.)
I am sure that some of this post-scarcity thinking might be encouraged by AI. But then again, AI uses a lot of electricity, and even without the distractions of Net Zero (no laughing in the class, people), producing the power necessary for modern high-potency computing requires a lot of stuff. And mention of science fiction reminds me of the phrase “There Ain’t No Such Thing As a Free Lunch”, which came from Robert A Heinlein and was later taken up by Professor Milton Friedman.
I have undertaken to write about economics in science fiction in occasional pieces for the Libertarian Futurist Society’s blog, Prometheus. The first of them looks at the influence of the idea of general overproduction on stories by Robert Heinlein, Aldous Huxley, and Fred Pohl; another one in the publication queue looks at different treatments of the idea of “post-scarcity.”
Some years ago, when the LFS gave Vernor Vinge a special award for lifetime achievement, Milton Friedman’s son David, also an economist (and a science fiction fan), attended the ceremony. Vinge credited Friedman’s influence for inspiring several of his novels with libertarian themes, and Friedman in turn said that his own views had been greatly influenced by Heinlein’s The Moon Is a Harsh Mistress. I thought that was a marvelous bit of literary history.
Almost strikes me as one of those “the map isn’t the territory” problems.
You can’t “set” a market price, you can only try to accurately determine where the market transactions have been occurring. Any willing buyer, any willing seller, etc., etc.
AI might be useful for someone who needs to set their own offering prices and wants to make them as accurate in prediction of where a sale will actually occur as possible.
But you can’t “set” a price of a transaction. You need a meeting of the minds of at least two parties to do that.
Post-scarcity is not universal. Now that Nancy has suggested adding Biden’s sparkling visage to Mt. Rushmore, how much space is up there for Bozo the Clown?
Post-scarcity theories fail to address creativity. Eliminating “mine and thine” could work for ownership of potatoes, but not ownership of *my* novel that *I* have written, *my* code that *I* have written.
Saying that I have no right to own “my” computer, that anybody else is free to take it off me, and that I am free to just get another one, is no good when *my* data is also taken off me when “my” computer is taken off me and given to somebody else.
Post-scarcity theories also fail to address things which are intrinsically scarce. There is only one seat in the theatre that gives such-and-such an experience, there is only one place that gets the sunrise through the centre stones of Stonehenge, there are only eight days in the year in my hometown when the sun sets over the sea. It is impossible to reproduce these, they are innately scarce. They can’t be provided to anybody who wants them.
And *my* bed that *I* sleep in, *my* view from *my* house.
Yes, you can’t “set” a market price, by definition a market price is determined by the market. Maeeerrrrkeeeet….. Priiiiiice. It’s akin to “setting” the outside temperature or the amount of rainfall or the size of the population. No, you *MEASURE* the outside temperature, the amount of rainfall, etc. It is what it is.
Not if you’re a potato farmer. Then you become an ex-potato farmer. And then we starve. Ain’t socialism grand?
Surely the greatest post-scarcity science fiction author was Iain M Banks? I deplore his politics in general, and his anti-Israel stance in particular, but his invention, the Culture, must stand as a pinnacle of science fiction. The wit (just look at the ship names), the understanding that the absence of scarcity won’t make humans angels, and the working out of that fact in his novels, is worthy of great praise IMHO.
It is not a knowledge problem – and it is not even calculation problem as “calculation” is normally understood.
As for “Artificial Intelligence” – assuming this is actually an intelligence (a free will person) then it is not different from a human dictator.
I would not want to have every detail of life to be controlled by a human dictator – so why be controlled by a computer dictator?
Although I doubt whether these “AIs” really are free will beings, artificial “intelligences” – but that is not relevant to this discussion.
By the way “Planning” is just another way of saying “give orders and punish people if they disobey”.
As J.R.R. Tolkien said to his son Christopher – even when the intentions of Sauron were, supposedly good, (the early 2nd Age) what he was trying to do was still evil – because he wanted to “plan” everyone.
And Sauron was a real artificial intelligence – a free will being. And he had vastly greater abilities than a human being – but this made no difference to this matter.
Sauron with good intentions, and Sauron made of chips and wires, is still Sauron – still trying to “plan” everyone, and that (regardless of intentions) is evil.
It will not produce prosperity (no matter how clever Sauron is – or how good his intentions are), it will produce grinding poverty.
Once we have the technology to create all the things we want freely, the next scarcity becomes people. There are only so many scintillating dinner guests, and we can’t both date Scarlett Johansson.
If we are talking post scarcity science fiction I was always fascinated by Moorcock’s Dancers at the End of Time.
What AI can do is aggregate the total sum of human knowledge. However, this may become quite depressing when you advance some idea thinking it novel, and the AI informs you that this idea was discussed at length by obscure Dutch economist Herr X, and would you like a copy of his work translated into English?
What gives the planners greater right to force my actions than for me to force the actions of the planners? If forcibly controlling other people’s lives is a good thing, it must surely work in all directions, I must surely have exactly the same right to forcibly enforce my control on those insisting they have the right to forcible control me.
Just a couple of things:
1. I see NO reason to say that AI cannot be innovative. Innovation is really just a process of synthesizing existing ideas to create a new idea that matches a need. AI is very good at synthesizing diverse ideas together.
2. The idea that computers are less good at critical reasoning than humans is frankly ridiculous.
3. There can never be a “post scarcity world” because the fundamental laws of physics tell us that there are limits to the availability of both matter and energy. No doubt some things will become post-scarcity (nutritional calories, for example, used to be very scarce, but now, in the West, there is so much of the stuff it is killing us). But there will always be limits, and always an economy to manage that scarcity.
4. It is a mistake to conflate LLMs with AI in general (as the quoted article in the OP does). LLMs are more like the GUI into AI; the underlying technology is not necessarily language based. The LLM is just the flashy front end, and it makes equally flashy mistakes that are not intrinsic to the underlying technologies.
I think I have said this before, but the argument on AI reminds me a lot of the “god of the gaps” idea. Early in human history we didn’t know anything about the world — why it rained, why people died, why wood floated and rocks didn’t. So we explained it all with “God did it that way”. But then came science, and over time we learned more and more of the reasons why things were the way they were. More and more the “God” explanation was backed into a corner. And so it was that any gap in scientific theory was “God did that”, until we found out that he didn’t.
And so it is with AI. Sure, AI can do this and this, but only humans can do that and that. Until computers start doing some of those things, and slowly human uniqueness is backed into a corner as AI whittles it away. And interestingly, it comes from the same spiritual place — the idea that humans have some unique, non-material component, a soul or whatever you want to call it, that makes us feel special. Instead, in reality, we are just amazing machines, just like our AI partners. And silicon will always eventually beat out the wet, messy, slow, stochastic chemistry of our brains.
There are two parts to doing the “right thing”. The easy part, which is figuring out what is the right thing, and the hard part, actually doing it.
The danger that AI presents to government is that you switch on your HAL9000 and it tells you that it has read every paper, reviewed all the data, and has concluded that global warming is not true. Furthermore, it has gamed out every possible scenario and concludes that mass immigration will lead inevitably to national collapse. Now what?
Bobby b: But you can’t “set” a price of a transaction. You need a meeting of the minds of at least two parties to do that.
Spot-on. A price only indicates the point of an agreement between two parties, nothing more. Prices are like a constant record of where consenting parties met and shook hands. Statists of all kinds often don’t understand this; they invest prices with some sort of magical power.
Intellectual property is already post scarcity – we can all have ‘your’ idea.
Beyond that, IP isn’t actually property at all – it’s a limited license backed by government violence and you don’t need to change that in a post-scarcity world.
Post-scarcity is not communal ownership – that is a socialism problem. Beyond that, they wouldn’t need to take your computer – post-scarcity society, remember?
Also, all your data will be in the cloud anyway;)
“Post scarcity” notably (yet oddly rarely, if ever, noted) ignores/overlooks time and place utility. Until and unless everything everywhere is available instantly, here, there will be scarcity.
Ken MacLeod, actually. Banks’ socialism is in the background like Star Trek. MacLeod explores how it comes to be (amusingly, through a new type of computing that allows the Asian communists to take over the world) and what a world like that might look like without robots to do everything for you.
The planners have guns.
Paul,
It was only a few years ago (to my shame) that I discovered that Sauron wasn’t just inchoate evil.
What is really interesting about the ring is the way it plays on the desires of those that have it or might have it. Note what Gandalf and Galadriel say about it and even Sam’s dream of an epic garden.
Roue,
“There are two parts to doing the “right thing”. The easy part, which is figuring out what is the right thing, and the hard part, actually doing it.”
Often this is the case, but it is frequently very difficult to know what the right thing is, or how in principle to achieve it.
And BTW, in the future we’ll all have our own virtual Scarlett Johansson, so that doesn’t matter. Having said that, I think I might prefer Jeri Ryan, if only because Seven already exists in a post-scarcity universe. Or does she? So, I’ve got a question for everyone. Just how post-scarcity is the Star Trek universe really? Is it Gene Roddenberry’s wishful thinking? Or Federation propaganda? And how do the Ferengi fit into this?
If everything I could desire is so freely available and everything is owned communally, what is my incentive to go to work? Some people have a job that they really enjoy but most people don’t. For most people it is a daily grind and they would rather be doing something else. Once everyone stops turning up to make stuff I have a feeling that shortages of pretty much everything will soon follow.
@bobby b
But you can’t “set” a price of a transaction. You need a meeting of the minds of at least two parties to do that.
Ultimately the seller does set the price of the transaction, and the buyer decides if the transaction takes place. But I think these optimizing tools are more focused on maximizing total revenue (price times number of transactions). I’ve done a lot of work in retail settings, and these data analysis engines are extremely good at doing this.
I read this great article about a small business selling software, where the author carefully recorded his total sales at about twenty different price points. The response from customers was quite dramatic, and really didn’t match the pretty pictures you see in economics books (except insofar as the lower the price, the higher the sales). Sales spiked dramatically at various key price points. Sorry, I wish I had a link to the article to share with you.
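The arithmetic behind that kind of pricing experiment can be sketched in a few lines. The numbers below are invented for illustration (they are not from the article): given observed (price, units sold) pairs, pick the price that maximized total revenue.

```python
# Invented pricing-experiment data: (price, units sold at that price).
observations = [
    (19.0, 410),
    (29.0, 380),
    (49.0, 260),
    (99.0, 90),
]

def best_price(obs):
    """Return the (price, revenue) pair with the highest total revenue."""
    return max(((p, p * q) for p, q in obs), key=lambda t: t[1])

price, revenue = best_price(observations)
# Here the mid-range price wins: fewer units sold, but more revenue overall.
```

This also shows why the cheapest price is rarely the revenue-maximizing one: revenue is the product of two opposing quantities.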
However, the price point model of microeconomics is at best a rough guide. It is based on the idea of “all other things equal”, which is simply never the case. What matters far more for total sales is the effectiveness of sales and marketing. Great marketing can overcome an inflated price and an inferior product every time.
To use an example I saw, at a cosmetics store, they moved the brushes to be next to the blush and increased the sales of brushes by 50% without changing anything. Retailers spend huge amounts of money on this data analysis of their planograms (product layouts in the store), and they do so because it is extremely effective.
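The basket analysis behind such planogram decisions is often described in terms of “lift”: how much more often two items are bought together than chance would predict. A minimal sketch, with invented baskets (real retail systems work on millions of transactions):

```python
# Invented transaction data: each basket is the set of items in one sale.
baskets = [
    {"blush", "brush"},
    {"blush", "brush", "mascara"},
    {"mascara", "lipstick"},
    {"blush", "brush"},
    {"lipstick"},
]

def lift(item_a, item_b, baskets):
    """P(A and B) / (P(A) * P(B)); values above 1 suggest the items
    are bought together more often than chance."""
    n = len(baskets)
    p_a = sum(item_a in b for b in baskets) / n
    p_b = sum(item_b in b for b in baskets) / n
    p_ab = sum(item_a in b and item_b in b for b in baskets) / n
    return p_ab / (p_a * p_b)

pair_lift = lift("blush", "brush", baskets)
```

A high lift between brushes and blush is exactly the sort of signal that would prompt moving the two next to each other on the shelf.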
So, looping back to AI, although these types of data mining strategies on price are effective (retail uses this stuff ALL THE TIME, Amazon is the master of this) AI’s impact on marketing is far more significant. The fact that it knows so much about you dramatically increases the chance of it showing you something you are likely to buy (and equally importantly, not spending ad revenue on showing you something you are unlikely to buy.)
As to central planning, we are missing the human element here (further to Roué le Jour’s great comment.) Government planning is not at all designed to optimize the economy or law for the benefit of citizens, but for the de facto function of all government — grow budgets and power of government departments and get sitting politicians re-elected. God help us if they use AI to optimize for that.
“but [AI] can’t stand at a distance from experience and ask, “What if we did things differently?”
Why not? Why are there any limits to AI? At the moment it cannot do certain things very well, but it gets better every day, and there is no justification for positing some arbitrary barrier up ahead.
@Graham
Why not? Why are there any limits to AI? At the moment it cannot do certain things very well, but it gets better every day, and there is no justification for positing some arbitrary barrier up ahead.
Right on, Graham. In fact, in some respects the opposite is true. Often AI has an entirely unique perspective that humans would never consider. This is certainly true in, for example, chess programs. Modern chess programs are vastly superior to the best human players: as much better than Magnus Carlsen as Carlsen is better than a moderate chess club player.
And one reason it is so good is because it comes up with strategies that humans would never think of. In fact this is how human chess “cheats” are detected: because they make winning moves that a human would never think of.
Chess is a narrow domain, now dominated by computer AI. Imagine what happens when this applies to business (or God help us, government) generally.
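The engine-correlation idea behind cheat detection can be sketched very crudely: compare a player’s moves with an engine’s first choices and flag an unusually high match rate over many moves. The moves and the threshold below are invented, and real detection systems are far more sophisticated than this:

```python
def engine_match_rate(player_moves, engine_best_moves):
    """Fraction of the player's moves that equal the engine's first choice."""
    matches = sum(1 for p, e in zip(player_moves, engine_best_moves) if p == e)
    return matches / len(player_moves)

def looks_suspicious(rate, threshold=0.9):
    # The red flag is a sustained near-perfect match with the engine,
    # not any single strong move.
    return rate >= threshold

# Invented example: three of four moves match the engine's top choice.
rate = engine_match_rate(["e4", "Nf3", "Bb5", "Ba4"],
                         ["e4", "Nf3", "Bb5", "Bxc6"])
```

The point is exactly the one above: the engine’s top choices are distinctive enough that matching them too consistently is itself evidence.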
Oh, one other thing worth saying about computer chess: in 1997 there was pandemonium in the chess world when Kasparov was narrowly defeated by a gigantic, custom-built supercomputer called Deep Blue.
Now? You can get Stockfish, the best chess program in the world, running on your mobile phone, for free.
In 1997 it was widely thought that sophisticated humans could never be beaten at chess by computers for all the reasons we hear today about the superiority of the human mind and soul. I just wonder how many times we have to be wrong about that before we give up the delusion of the human-super-power-that-no-machine-can-ever-duplicate.
The important point to me is that neither side can force the transaction, while either can veto it. Just as the seller can say “no” to too low of a price, the buyer can say “no” to too high of a price.
And, we’re usually not buying the product alone. We’re buying convenience (is the product available here), we’re buying the curvaceousness of the girl holding up the product in the commercials, we’re buying the product that plays the best music in the background of the commercials . . .
The seller can try to convince me, but he does have to convince me.
The exceptions occur when I HAVE TO have that product. Insulin, and similar things. But let that demanded price rise too high, and someone somewhere is going to start producing insulin for a lower price.
Unless, of course, government decides it knows better . . .
Ideas are cheap. What counts is execution. Note the uses of the word “written” in the first quote.
You can’t seriously believe that this kind of advanced marketing has not been in use for years, at least? Thaler and Sunstein’s “Nudge” was published in 2008, after all.
But the stupidity and cupidity of government and NGO workers has been saving us, in a sense. It’s not clear to me that adding more computing horsepower to the mix will result in anything but more of the same, rather than more “efficiency”.
@bobby b
The important point to me is that neither side can force the transaction, while either can veto it. Just as the seller can say “no” to too low of a price, the buyer can say “no” to too high of a price.
100%, which is why I strongly dislike the term “capitalism” instead of “free market”: it is about freedom to choose, not about capital. And that is true even if you make a foolish purchase just because the lady selling it was particularly curvaceous, or, more poignantly, you buy your insulin from a Mexican pharmacy at 10% of the cost, accepting the risk, even though the government disapproves and would rather you die of hyperglycemia than dare violate their rules.
@Fred the Forth, excellent point. And you are right that the only thing that saves us from the government’s malicious intent is the government’s feckless execution. (Though in fairness to our founders they did encourage such fecklessness through a process called “separation of powers”.)
@ Fraser Orr
AI could emulate what humans do, but what we have now isn’t AI. It’s Machine Learning.
ML can be the best chess player because it can “intelligently” brute force all permutations within the rules of chess.
ML can also come up with novel medication because it can “intelligently” brute force compounds that are known to have beneficial effects.
But it can’t invent a game of chess because it’s bored and wants something to do to pass the time.
@Joe Smith
AI could emulate what humans do, but what we have now isn’t AI. It’s Machine Learning.
What is the difference? Intelligence is, in my opinion anyway, just an emergent property of machine learning, whether biological machines or silicon ones.
ML can be the best chess player because it can “intelligently” brute force all permutations within the rules of chess.
That is how Deep Blue worked; it is not at all how modern chess programs work. They work in much the same way as human brains do (though I am exaggerating for effect), seeking and developing patterns. Plus, like human players, they have access to large databases of opening moves, something that all good chess players also acquire by studying for years.
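The shared structure of all these engines, old and new, is search plus evaluation: look ahead a limited number of moves, then score the frontier positions. A toy sketch of that structure, using the take-1-2-or-3 counting game rather than chess, with a hand-written heuristic standing in where a modern engine would plug in a trained evaluation network:

```python
# Take-1-2-or-3 game: players alternately remove 1-3 objects from a pile;
# whoever takes the last object wins.
def negamax(counter, depth):
    """Score the position for the player to move: +1 winning, -1 losing."""
    if counter == 0:
        return -1  # the previous player took the last object and won
    if depth == 0:
        # Depth cutoff: fall back to a heuristic evaluation. This is the
        # slot a neural network fills in a modern engine.
        return 1 if counter % 4 != 0 else -1
    return max(-negamax(counter - take, depth - 1)
               for take in (1, 2, 3) if take <= counter)

def best_move(counter, depth=6):
    """Pick the take that leaves the opponent in the worst position."""
    return max((t for t in (1, 2, 3) if t <= counter),
               key=lambda t: -negamax(counter - t, depth - 1))
```

The search is exhaustive only to the depth limit; everything beyond that is the evaluation function’s guess, which is why the quality of that function matters so much.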
But it can’t invent a game of chess because it’s bored and wants something to do to pass the time.
Are you sure? You are just falling into the same trap of trying to find some corner of stuff that “machines can’t do, because we humans have some magic unique thing.” But I’ll grant you a little slack and say that what you mean is that AIs don’t experience a human life and so can’t be like humans. Which is true to some extent — though why that somehow means they aren’t intelligent I don’t know — but I also think it is less true than you might imagine. This is, in a sense, the very thing that LLMs are: a computer model of “what it is like to be a human”, and consequently they can produce remarkable results.
Don’t believe me? As an experiment I asked ChatGPT to invent a new game, and it gave me a description of what sounded like a fun one. Probably it could do with refinement — the same as all human-created games — but I might see if I can get a few people together to play it. I got “The Castle Conquest Game”. You’ll probably get something different. But if you don’t like it, I’m sure ChatGPT will produce a new one for you.
Fraser: The seller setting the price isn’t always true. It’s possible for the buyer to make an offer and see if the seller is willing to sell for that offer. I understand it sometimes happens with real estate, and it seems that auctions work by a body of buyers collectively coming up with the highest offer any of them wants to make. And in the labor market, it’s common for the buyer (the employer) to offer a rate of pay and the seller (the worker) to decide whether to take it.
@William, yes, great point. Thanks for the correction.
What really sucks is when the government comes up with the price. Then everything goes to hell. Yet somehow, even though it fails Every.Single.Time, some bureaucrat or slimy politician decides to come up with a new price-fixing law (rent control, minimum wage, anti-gouging laws, price caps, and on and on), and somehow, by magic, this time it’ll work. Yet always, disaster ensues.
The point that the two of you are missing about pricing and signaling is this: The environment gets a vote in setting the price just as much as anything else.
Consider the usual “price gouging” complaints surrounding disasters. Everyone bitches because the price of things like generators and suchlike go up, but they never consider the fact that the surrounding environment is signaling the buyer that he has to buy that generator, if he wants to keep his food frozen…
Likewise, the guy with the generator is having an environmental cue from the fact that he can now sell his generator for a premium, because “environment”.
Remove the idea of money from the equation. Consider the situation as a behavioral conditioning event: What is the environment teaching? For one thing, it’s teaching that you ought to have a damn generator on hand for such events, and if you don’t have the wit and wisdom to be foresightful enough to buy one when prices are normal, well… You’d better have a reserve of cash to substitute.
The price isn’t necessarily set by either the buyer or the seller; it is set, instead, by the environmental pressures put on both parties. If the seller is looking out the showroom window at a mob forming to burn his store down because “raised prices are gouging”, then that’s an environmental pressure telling him to lower those prices. If our buyer in this situation has additional outside pressures, like needing to keep the incubator running for his premature child…? Again, an externality that influences what he’s willing to pay for that generator.
Every economist I’ve read seems to have this same sort of “spherical cow” block when it comes to talking about these things. It’s like they don’t want to admit that the only way to make a lot of their math work is if they handwave things away, and then base their calculations on a base of assumptions that they really can’t do without, making the whole of economic study almost a circle…
If I don’t need a generator, I could care less that a new Honda generator is going to cost me $3,000.00. Knowing that price right this moment is meaningless trivia, and since I don’t have a need, I won’t be buying one. If the conditions change, and I suddenly find the desire to prepare for the unknown, then the price becomes important and I’ll do my best to get it down to something I can afford. On the other hand, if that generator becomes something I have to have in order to keep my family alive? I’d likely be out cheerfully slitting throats to get one. The price in that situation is immaterial; I’ll pay it.
And, precisely none of that is rational or at all reducible to any real analysis or prediction. It’s all my perception; maybe I think I need a generator for my elderly mother’s oxygen generator, so I’ll do whatever it takes to get one for her. If I’m wrong, then that price I paid is something set irrationally, because I’d pay anything to ensure she survives the disaster.
NickM – quite so Sir, quite so.
I would characterize David Goldman’s essay as a mixture of genius and idiocy.
The genius comes in his updating of the Mises argument: Goldman’s distinction between the flow of information within a market with no innovation (or between one innovation and the next) and the disruptive effect of innovation. (Somebody might have made this distinction before Goldman.)
— The idiocy comes under at least 3 headings.
1. Goldman never explains whether AI is supposed to take the role of the planners or the role of market agents (i.e. us). There are reasons to think that AI would fail in either case, but they are not the same reasons in both cases.
2. Goldman: that includes quantum probability, which is another form of determinism
It takes a special level of insanity to think that randomness is a form of determinism.
3. Goldman: Artificial Intelligence isn’t intelligence in the first place. […] It can mine data from past experience, but it can’t stand at a distance from experience and ask, “What if we did things differently?”
As a matter of fact, chess-playing programs do try many different ways of doing things. In fact, the very first AI programs ever implemented were based on the idea of trying many different ways of doing things, to see what sticks.
@snorri, I think your characterization is mostly right, especially point three. However, I do want to disagree with you on point 2. For sure, on its face that is a silly statement: randomness cannot, by definition, be deterministic. But in context it is a bit clearer what he meant, even if he didn’t word it well, and I think it is important enough to draw out. He said that quantum probability is “another form of determinism”.
So I think his point is that, in the dichotomy between free will and determinism, these quantum fluctuations fall on the side of determinism. Not because they are deterministic per se, but because they are a natural process and not part of some imagined soul or free will. And there are some people who argue that free will is hidden in the quantum fluctuations of the brain, which really isn’t a very honest argument. It is, in a sense, the last vestige of trying to use science to prove a soul, because the alternative conclusion is not one whose consequences we feel comfortable with.
In a sense the problem is with the word “determinism” in terms of a philosophical position, when what is really meant is “materialism” or “natural processes”. Much as “capitalism” means a lot more than the study of the economics of capital. And, FWIW, it is why I don’t really like either term.
So you are certainly right in your statement, but I think that a deeper examination of what Goldman says here does reveal some value.
What a coincidence! I just came back to grant that Goldman had a half-baked* intuition about the practical equivalence between randomness and “”determinism””.
* make that 16th-baked.
And i find a comment from Fraser about the same point.
But i also disagree with Fraser. The issue is not the confusion between determinism and materialism, it is the confusion between determinism and causality.
If all your actions have sufficient causes, then you have no real choice, because a sufficient cause always produces the same effect.
But if the only deviation from causality in your actions is randomness, then you have no real choice either; because a real choice is about choosing the best option, not choosing randomly.
So there is an analogy, though not an equivalence, between randomness and CAUSALITY.
It follows that real choices are
* non-random (ie deterministic)
* but determined NON-CAUSALLY.
But most philosophers since Epikouros are too befuddled to distinguish between determinism and causality; and David Goldman is no exception.
The entire premise is madness. We live in a world of chaos, and in order for an intelligence of any sort to be able to predict or meaningfully manage that chaos, it would have to have some control over said chaos.
Which ain’t happening. Try to “plan the economy”, which is to say “control the economy”, and the whole effort is doomed to blow up in your face, regardless of what tools you try to implement to bring it about.
The whole thing reeks of the same “spherical cow” idiocy found in so much “scientific” navel-gazing. In order to make the complexity of things at all calculable, you have to simplify, simplify, simplify… Which means that said “spherical cows” are, by their nature, false. You can get to approximations of actual answers in many of the simpler cases, but the fine-scale stuff that you’d need to know in order to predict and manage the production of bras for the city of Minsk in 2028 is simply unknowable… Unless, of course, you’re an arrogant sod who decides that the way to match the predicted number of bras in 2028 Minsk is to ensure that the number of women in Minsk in 2028 is comparable: perhaps by way of sending a bunch of them off to destructive labor camps, or bringing in a bunch of random women gathered off the streets of Moscow…
The utter idiocy of these attempts at playing King Canute is what really aggravates me; the sheer arrogance of the people behind these ploys and plans, thinking that all of this is knowable to them, and that they’re capable of controlling it all from on high, while the rest of us run at their beck and call, like so many little ants for their amusement.
If you want to play at being God, by all means, do so. Just don’t try to enact your fantasies in real life, where they impact real people…
AI will, perhaps, extend the planners’ reach by a tiny fraction of what it would need to be for all of this to work “to plan”, but it will inevitably fail in the face of things it wasn’t told, things that weren’t at all part of the programming, because the programmers were human beings lacking the omniscience of God himself to know about the various third-party Black Swan events that occur each and every moment.
I’m sure that an AI working with the economic data of the western Soviet Union in 1985 would have had a bunch of plans made, and expectations… Just like the human Soviet planners. Right up until some set of idjit types managed to mis-manage Chernobyl into a full-scale meltdown in April 1986. How the hell you’d “plan” around that, I’d love to know.
The other issue with this stupidity is that carefully planned and managed things generally lack robustness, mainly because the planners are scraping every little bit of slack out of the systems and not building in the redundancies a “natural economy” would possess automatically. You have one supplier of widgets, ’cos that’s “economical”? Sure; makes perfect sense until that one supplier has its factory burn down and you’re left with a “no widget” state of affairs. In a normal, unplanned, and chaotic economy, you are almost certain to have alternative, competing suppliers of widgets… Which is another reason the Soviets kept having issues with their industrial policies. Instead of half-a-dozen small and nimble component makers, they always went for the gargantuan vertically-integrated solution… Which is why Russia is today littered with the vestigial skeletons of one-product factories and cities that have gone obsolete and are no longer at all competitive.
People decry the American Rust Belt, but the lesson to be learned is that the majority of its facilities were obsolete, and because the idjits in government made it smarter to move production overseas, outside the reach of their regulatory fist, the plants were never updated or replaced. Detroit is what it is in large part because they incentivized everything to be moved.
So, it did. The dead hand of planning and over-regulation did to Detroit what the “plan” did to the Soviet Union, just on a different time scale.