The Myth of the Model
MAY 20, 2010 by MAX BORDERS
Most people don’t notice it, but “model” may be the most dangerous word in the English language right now. Models justify a lot of the bad policies that have been, or soon will be, foisted on us. For example, what was used to justify the fiscal policy of the big “stimulus”? That’s right: models. And as I wrote this, “experts” were using models to gear us up for another one.
More than a year after the original “stimulus,” not only are economists nowhere near consensus about its effects, but few if any of the models used to justify it have turned out to be right. Obamanomics adviser Christina Romer, for example, has come under heavy criticism because her team’s plan has performed abysmally. The model behind the plan predicted unemployment would peak at 8.3 percent. It exceeded 10 percent before dropping back slightly. In defending her plan she appealed to counterfactuals—that is, how bad things could have been without it. That her team failed to reach its rosy targets, she says, “prevents people from focusing on the positive impact.” But did Romer ever consider the possibility that her model was just wrong?
When it comes to prediction and explanation, macroeconomic models are often just as bad after the fact as before it. There are just as many debates raging about the effects of the “stimulus” as about the explanations of the crisis used to justify it. Consensus consistently eludes us. Almost all the arguments presuppose models. There are Keynesian models, “new” Keynesian models, and unbranded models proffered by leading economic lights like Harvard’s Robert Barro. Comparative analyses of these positions offer little except further evidence that, as Stanford’s John Taylor writes, “[T]here is no consensus.”
But why? These people aren’t stupid. I’d like to suggest in nontechnical terms why the problem might be with the models themselves.
“Mainstream macroeconomics is ‘hydraulic,’” writes recovering macroeconomist Arnold Kling at EconLog. “There is something called ‘aggregate demand’ which you adjust by pumping in fiscal and monetary expansion. I wish to reject this whole concept of macroeconomics.” I think he’s right. In fact, as I’ve argued in these pages before, economies are not pumps to be primed, but economic ecosystems. Economists are thus notoriously bad at predicting, much less planning, economies. That’s why next-generation economics must focus on the fundamentals—namely, the rules that give rise to successful entrepreneurship and sustainable growth. Not aggregates. Not models. Rules.
“Institutions form the incentive structure of a society and the political and economic institutions, in consequence, are the underlying determinant of economic performance,” said Douglass North in his Nobel Prize lecture. And he’s not alone in the wilderness. James Buchanan, another Nobel Prize winner, says economists are simply asking the wrong questions:
How do markets work? Standing alone, this is an inappropriate and unanswerable question. It must be replaced by the question: How do markets work under this or that set of constitutional and institutional constraints? Economists’ scientific expertise can be brought to bear on the predicted effects of alternative sets of constraints. The relevant question is not how this or that outcome may be put in place through possible collective or political action. The question becomes instead, how can this or that set of constraints be predicted to operate so as to allow the generation of an order that meets certain criteria of desirability? The difference between the two methodological stances may appear minor, but much ill-advised effort might be avoided if economists would recognize the limits of their own discipline. [“Economists Have No Clothes.”]
Buchanan not only laments the poor framing of economic questions, but also says that even those attempting postmortems of the 2008-09 financial crisis have been “embarrassed by their inability to offer ‘scientific’ explanations.” Those who’ve dared to come forward have offered little more than warmed-over Keynesian nostrums. Just think of folks like Romer, Paul Krugman, and Joseph Stiglitz—all of whom have recommended various versions of doubling down on the failed policy. At the end of their yellow-brick road? A curtain, behind which lies a model, behind which lies an agenda.
So what do all these macroeconomic models have in common?
- They’re rendered either in impenetrable math or with sophisticated computers, requiring a lot of popular (and political) faith.
- Politicians and policy wizards hide behind this impenetrability, both to evade public scrutiny and to secure their status as elites.
- Models vaguely resemble the real-world phenomena they’re meant to explain but often fail to track with reality when the evidence comes in.
- They’re meant to model complex systems, but such systems resist modeling. Complexity makes things inherently hard to predict and forecast.
- They’re used by people who fancy themselves planners—not just predictors or describers—of complex phenomena.
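The point about complexity resisting prediction can be made concrete with a toy demonstration (a sketch in Python, not an economic model; the function names `logistic` and `trajectory` are my own labels). The logistic map is one of the simplest nonlinear recurrences known, yet in its chaotic regime a measurement error of one part in a billion grows until the forecast is worthless:

```python
# Toy illustration of sensitive dependence on initial conditions.
# The logistic map x -> r * x * (1 - x) is chaotic at r = 4: nearby
# starting points diverge exponentially, so long-range forecasts fail
# even though the rule itself is perfectly known.

def logistic(x, r=4.0):
    """One step of the logistic map."""
    return r * x * (1.0 - x)

def trajectory(x0, steps, r=4.0):
    """Iterate the map, returning the full path [x0, x1, ..., x_steps]."""
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1], r))
    return xs

a = trajectory(0.4, 50)
b = trajectory(0.4 + 1e-9, 50)  # "measurement error": one part in a billion

# Early on, the two runs agree almost exactly...
print(abs(a[5] - b[5]))                              # still tiny
# ...but within dozens of steps they bear no relation to each other.
print(max(abs(a[i] - b[i]) for i in range(40, 51)))  # on the order of the state space itself
```

An economy is vastly more complicated than this one-line recurrence, and its governing rules, unlike the logistic map’s, are not known; if even the toy case defeats long-range forecasting, the modeler’s predicament is that much worse.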
None of this is to argue we should do away with measurement. Temperature, blood pressure, and other indicators are all useful data for telling whether you’re healthy. But using proxy measures to determine an economy’s health is a far cry from using models to reduce dynamic systems to steady-state snapshots.
What does this mean for economics as a discipline? I think it’s time we admit many economists are just soothsayers. They keep their jobs for a host of reasons that have less to do with accuracy and more to do with politics and obscurantism. Indeed, where do you find them but in bureaucracies—those great shelters from reality’s storms? Governments and universities are places where big brains go to be grand and weave speculative webs for the benefit of the few.
And yet “ideas have consequences.” Bureaucracies are power centers. So we have a big job ahead of us. We’ve got to do a seemingly contradictory thing and make the very idea of complex systems simple. How best to say it? Economists aren’t oracles? Soothsaying is not science? Ecosystems can’t be designed?
“The very term ‘model’ is a pretentious borrowing of the architect’s or engineer’s replica, down-to-scale of something physical,” says Barron’s economics editor, Gene Epstein. “These are not models at all, but just equations that link various numbers, maybe occasionally shedding light, but often not.”
Take this as a throwing down of the gauntlet. Macroeconomic wizards owe us more than circular justifications for their cushy jobs.
Likewise, we have to explain that a scientist’s model, while useful in limited circumstances, is little better than a crystal ball for predicting big phenomena like markets and climate. It is an offshoot of what F. A. Hayek called the “pretence of knowledge.” In other words, modeling is a form of scientism, which is “decidedly unscientific in the true sense of the word, since it involves a mechanical and uncritical application of habits of thought to fields different from those in which they have been formed.” A model is thus a cognitive shortcut for both the wonk and the journalist, the latter of whom wants to peg his story to something authoritative that the wonk can provide. At the receiving end of this wonk-writer alliance are the rest of us, with little besides common sense as a shield. And I don’t mean this as populism. It is rather a defense against scientism launched from the turf of Austrian economics.
Complex phenomena can be counterintuitive. Sometimes they require scrutiny by experts to make sense of them. Notwithstanding their expertise, experts are just as often wrong as right. Can we base policy decisions on what amounts to coin flips? Models are a means of making the most fragile of hypotheses seem strong and substantive. But the only thing we can really predict is that they’ll eventually shatter.