Samizdata quote of the day – macroeconomic management doesn’t work

Macroeconomic management doesn’t work because the data available to do detailed macroeconomic management is shit. Therefore let’s not try doing detailed macroeconomic management. Get the basics right – the incentives, markets – then leave be.
Of course, this then leaves a paucity of jobs for economists but then as I’m not one of them why would I give that proverbial?
– Tim Worstall
Even if there were good data available, detailed macroeconomic management would still be shit.
The only way to stop them trying is to fire the economists, their managers and comptrollers, and to close any department, agency or QUANGO so engaged.
If the Institute of Economic Affairs or the Adam Smith Institute wishes to chip in economic advice, then fair enough – as long as they declare their biases and backers and accept no public funding.
Rachel from Accounts is bad enough and should be fired for it.
Don’t want to encourage more of that sort of thing.
Shouldn’t the impulse be to fix the selection and collection of data?
Bobby: you’re fighting against physics. When you double the number of entities, the number of pieces of data goes up exponentially. Two entities have 1 interaction. Double it, four entities have 6 interactions. Double it twice, sixteen entities have (16+14+13+12+11+10+9+8+7+6+5+4+3+2+1)/2 interactions. We’ve only just gone past the number of fingers on two hands, and already we’re approaching exponential notation.
@jgh
You have your formula wrong there. It’s n*(n-1)/2. For 16 entities you have 16*15/2 interactions. That’s 120 interactions.
… which means that for each additional entity the number of possible different interactions increases by the number of entities there were before that addition. The numbers increase very fast.
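If anyone wants to watch how fast, here’s a quick Python sketch of that n*(n-1)/2 formula (purely illustrative):

    # Pairwise interactions between n entities: n*(n-1)/2.
    # Note the count roughly quadruples each time n doubles.
    def interactions(n):
        return n * (n - 1) // 2

    for n in [2, 4, 8, 16, 32, 64]:
        print(n, interactions(n))
    # 2 1, 4 6, 8 28, 16 120, 32 496, 64 2016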
Central planning is one of those stories we made up and kill each other over rather than admit that some things we just can’t know.
We can’t even centrally plan our own pantries.
https://staghounds.blogspot.com/2005/08/proof-that-marxism-can-never-work.html
This is of a piece with the principal weakness of all “climate change” initiatives; it’s the demand for macro, i.e. global, action which renders arguments for/against the issue largely moot.
Just tell me again please how over 200 sovereign states are going to work in lockstep to “save the planet” under the direction of what global authority and enforcement regime. That sounds like the kind of magical thinking a 20-year-old Swedish woman with Asperger’s syndrome would come up with.
Story of my life.
But . . . Isn’t that the point of the “macro” part? Rough approximations trying to narrow down to a result?
I always thought of macro-econ as like flying a plane. You’re never going exactly where you think you are. You’re yawing back and forth, and with effort you try to average out a path to your destination.
With enough corrections – with enough inputs, data – you can get there. Not a straight line, but you can get there. I have friends at the Mpls Fed who can probably fly the plane straighter than I can, ‘cuz they’re data wizards. They’ll always tell me that they can never get enough (or get the correct) inputs, but that’s because they’re intent on precision, which is hard. But they get close.
Most of the people the world calls “economists” are not economists – because they either do not know, or do not care about, the basic laws of economics.
In 19th-century France, economists (real ones) told the government that its interventions would do harm, not good, so the governments of the Third Republic created a new subject, “Public Administration”, to produce “educated” people who would support state intervention – which is what it does in France to this day (although, eventually, the teaching of economics was corrupted in France as well – these days one gets French “economists” supporting Wealth Taxes and other absurdities).
In Britain and the United States the process was different – economics was corrupted directly, in the United States that process of corruption was led by Richard Ely – the mentor of both “Teddy” Roosevelt and Woodrow Wilson.
Richard Ely tried to drive A.L. Perry out of academic life – Ely failed in that (he had to wait till Perry died a natural death – unlike Jane Stanford, who founded Stanford University in California and got in the way of the interventionist collectivists there) – but gradually, over time, real economics was driven out of most American universities.
In macroeconomics, the “corrections” are often as grand as “changing laws from giving people subsidies for doing x to imprisoning them for doing x and vice versa”. Even in the rare cases when that’s desirable, it takes a lot more power, effort, and time to handle than pulling a lever in a plane’s cockpit.
In a more general vein, jgh & AFT were optimistic about the needed data only growing exponentially. Interactions can easily grow according to nested exponential functions, or exponential functions nested with factorials, or…
Plus, data processing & decision making take time, and if you can’t do it fast enough, your data aren’t relevant. How do you know if you can do it fast enough? Well, how hard is it to decide if we can solve a problem in polynomial time or not?
On a finer-grained level, it’s not easy to define meaningful data in the first place. We can easily fit a single human’s choices about a hypothetical simple, finite decision on an ordinal scale. If that choice affects other choices (as real choices tend to do), then we’re looking at combinatorial growth again. If we want to combine one person’s choices with another’s, the only way we have for that involves measuring with cardinal numbers, or just going straight to boolean “yes” or “no”. This is hard enough even when the measure is as simple as “I’d give up x mass of gold to do y” or “You’d have to pay me z mass of gold to get me to do y”, as each individual’s value will vary based on how much gold they’ve already got, think they can take care of, etc. And, as any commenter here will eagerly tell you, lots of economies are using fiat currency, which makes decision analysis even more complicated.
It’s part of why I want to popularize the fact that rigorous-but-provably undecidable definitions are surprisingly easy to hit upon; it’s the definitions that are rigorous, decidable, and which accurately capture the phenomenon we want to define that are hard. Unfortunately, proving that those rigorous-but-provably undecidable definitions are undecidable is also hard…
This still isn’t a complete list, but my hands hurt. I’ll wait for more specific objections to reply to, rather than try to anticipate them & completely destroy my hands’ tendons writing preemptive responses.
CayleyGraph2015:
Maybe I’m misunderstanding the argument. Wouldn’t be a new thing.
The Fed Reserve Bank in the US is macro heaven. That’s what they do.
Is the argument here that their work is utterly without redeeming value? That there’s no point to it, that no good is done there?
Or is the argument that they are not quite accurate, that they fail at the tiny margins?
(And, yes, I understand Worstall’s point that the margins are exceedingly important, when a 1% miss can be as bad as a mile’s miss.)
Give it up as a lost cause, or keep working on an exceedingly complex problem?
Hayek had this issue nailed more than half a century ago, as did his mentor, Ludwig von Mises, when he debunked the possibility of calculation under socialism.
The follies and mistakes made by central banks – those who think we can plan a monetary system like planning tractor production in the Soviet Union – do not seem to engender real soul-searching outside a few supposedly marginal folk like us.
Johnathan Pearce – quite correct Sir.
And those who wish to understand economics should study “Human Action” by Ludwig von Mises.
If they find that text a bit difficult – then start with “Economics in One Lesson” by Henry Hazlitt.
AFT: Thanks, I was staring at the numbers trying to wrench the maths from memory, and got it to fit for 4 items, but miscounted in my head attempting 16 items.
Please forgive my late reply; I’m sorry I didn’t get back sooner after saying I would in my earlier comment.
I’m certain that they gather a lot of data that is genuinely meaningful, and that the statistics they publish include many with rigorous definitions and practical margins of error.
I’d also bet that they publish a lot of numbers whose definitions mean that any practical interpretation requires so many calculations that the margins of error grow to blanket the entire range of possible results.
The question isn’t whether “their work is utterly without redeeming value”, but whether it’s worth the time/effort/salaries that go into it.
In addition, the original topic was macroeconomic management, not just macroeconomic calculations. The rapidly-increasing margins of error in macroeconomic calculations are one of the reasons macroeconomic management isn’t feasible. If we’re bringing the Fed Reserve Bank into the picture… I know the US Fed has legal authority over the interest rates banks are allowed to use to lend to each other, though I confess ignorance of most of the details. If nothing else, it creates yet another barrier to entry for people who want to create new financial institutions. As for why that’s bad? I can’t get more detailed than just vague “pressures” or “forces” (that’s kinda the point I’m trying to make, after all), but barriers to entry in a market contribute to that “too big to fail” phenomenon that meant taxpayers had to bail out investment firms a decade ago.
I can’t argue that “no good is done there” yet, but I can argue that they end up doing a lot of bad in addition to the good.
As for the work of your “friends at the Mpls Fed”? I’m completely ignorant about the authority of smaller reserve banks, but it can’t be that important if they can’t even afford vowels. 😉
I should do a better job of explaining why a 1% miss can be bad even when it only means 1% of a problem.
When you’re modelling an economy, you’re not just answering the question, “What will statistic S be if we do this?” You’re answering the question, “What will statistic S be at time t if we do this?” Hence, a constant margin of error isn’t really what you want; you want the error as a function of time.
If the computational model you’re working with creates its forecast for time t based on values calculated at previous times, your error grows exponentially, even when the per-step error itself is a small, well-behaved constant. This has been common knowledge for decades among modellers, so they mostly try to avoid using such models… but there’s only so much you can do when the underlying phenomenon requires it.
In a computational model, those “corrections” are where you take your current data and run the model. If you’re managing something a few seconds into the future, you can usually wait for that data to come. If you’re managing macroeconomics, you can’t be changing policies every few seconds, or even every few days; you have to give the population months (or at least weeks) to learn and apply the changes. Thus, when you make your corrections, you can’t wait until that data comes in the future; you have to know the results now, and the closest thing we have to a way to do that… is to run a computational model on the results of a computational model.
Every time you do, that 1% error compounds, growing just like interest compounds. After ten iterations, that error has grown to over 9%; after thirty, it’s past 26%; and after seventy, it’s past 50%.
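If you want to check that compounding for yourself, here’s a quick Python sketch (my own illustration, assuming each iteration keeps 99% of its accuracy):

    # Error after n iterations when each step preserves 99% accuracy:
    # overall error = 1 - 0.99**n, compounding like interest.
    for n in [10, 30, 70]:
        error = 1 - 0.99 ** n
        print(f"after {n} iterations: {error:.1%} error")
    # after 10 iterations: 9.6% error
    # after 30 iterations: 26.0% error
    # after 70 iterations: 50.5% error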
As for coming up with models that don’t need daily corrections? Maybe, but even with airplane pilots and drones, you’re correcting your models several times per hour at least, and there you’re just modelling inanimate objects like air, steel, and fuel. With macroeconomics, you’re modelling the behavior of humans, many of whom are trying to model the behaviors of the macroeconomic modellers themselves.
Getting back to jgh and AFT,
I think the explanation has gotten snarled in itself. The formula you cite is quadratic growth (a kind of polynomial growth), not exponential growth. Quadratic growth is roughly proportional to x྾x, or x²; if the number of pieces of data goes up exponentially by doubling, it grows roughly proportional to 2ˣ. If the difference doesn’t seem immediately important, try calculating the ratio between them for x equal to 10 or more. I let it go earlier because the number of potential combinations of interactions one needs to examine grows exponentially with the number of interactions, and I think that’s what you were getting at.
Well, to be honest, I let it go earlier because I was more interested in arguing about my disagreement with bobby b than with you. And my reply may have been more about me showing off that I can copy Unicode characters from a character selection app.
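Since I’m showing off anyway, here’s a quick Python sketch of that ratio (my own illustration, nothing jgh or AFT wrote):

    # Quadratic (x**2) versus exponential (2**x) growth.
    # The ratio 2**x / x**2 shows how quickly they diverge.
    for x in [4, 10, 20, 30]:
        print(x, x**2, 2**x, 2**x // x**2)
    # x=4:  16 vs 16          (ratio 1)
    # x=10: 100 vs 1024       (ratio 10)
    # x=20: 400 vs 1048576    (ratio 2621)
    # x=30: 900 vs 1073741824 (ratio 1193046)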
Actually, not so much arguing with me as educating me.
You should never trust anonymous strangers on the Internet to educate you. We’re a deviant bunch…