posted on 15 August 2016
by Steve Keen, Steve Keen's Debtwatch
For decades, mainstream economists have reacted to criticism of their methodology mainly by dismissing it, rather than engaging with it. And the customary form that dismissal has taken is to argue that critics and purveyors of alternative approaches to economics simply aren't capable of understanding the mathematics the mainstream uses.
The latest installment of this slant on non-mainstream economic theory appeared in Noah Smith's column in Bloomberg View: "Economics Without Math Is Trendy, But It Doesn't Add Up".
Figure 1: Noah's tweet announcing his blog post
While Noah's column made some valid points (and there's been some good off-line discussion between us too), its core message spouted five conflicting fallacies as fact:
I'll consider each of these assertions one by one, because they certainly can't be addressed together.
There is indeed a wing of heterodox economics that is anti-mathematical. Known as "Critical Realism" and centred on the work of Tony Lawson at Cambridge UK, it attributes the failings of economics to the use of mathematics itself. Noah has been less than complimentary about this particular subset of heterodox economics in the past - see Figure 2.
Figure 2: Noah's reaction to critical realism
What Noah might not know is that many heterodox economists are critical of this approach as well. In response to a paper by Lawson that effectively defined "Neoclassical" economics as any economics that made use of mathematics (which would define me as a Neoclassical!), Jamie Morgan edited a book of replies to Lawson entitled What is Neoclassical Economics? (including a chapter by me). While the authors agreed with Lawson's primary point that economics has suffered from favouring apparent mathematical elegance above realism, several of us asserted that mathematical analysis is needed in economics, if only for the reason that Noah gave in his article:
The difference between mainstream and heterodox economists therefore isn't primarily that the former is mathematical while the latter is verbal. It's that heterodox mathematical economists accept Tony Lawson's key point that mathematical models must be grounded in realism; we just reject, to varying degrees, Tony's argument that mathematics inherently makes models unrealistic.
In contrast, the development of mainstream modelling has largely followed Milton Friedman's edict that the realism of a model isn't important - all that matters is that it generates realistic predictions:
Even on this criterion, mainstream macroeconomics is a failure, given the occurrence of a crisis that it believed could not happen. But this criterion alone isn't sufficient: realism does matter.
If Friedman's "only predictive accuracy matters, not realism" criterion had been applied in astronomy, we would still be using Ptolemy's model, which put the Earth at the centre of the Universe with the Sun, Moon, planets and stars orbiting it, because that model yielded quite accurate predictions of where celestial objects would appear in the sky centuries into the future. Its predictions were in fact more accurate than the initial predictions of the Copernican heliocentric model that Galileo championed, even though the heliocentric core concept - that the Sun, not the Earth, was the centre of the solar system - was true, while Ptolemy's Earth-centric paradigm was false.
Friedman's argument was simply bad methodology, and it's led to bad mainstream mathematical models that make screamingly unrealistic assumptions in order to reach desired results.
The pivotal unrealistic assumption of mainstream economics prior to the crisis was that "economic agents" have "rational expectations". It sounds reasonable as a sound bite - who wants to be accused of having "irrational expectations"? - but it actually means assuming (a) that people have an accurate model of the economy in their heads that guides their behaviour today and (b) that this model happens to be the same as the one the Neoclassical author has dreamed up in his (it's rarely her, on either side of economics) paper. And there are many, many other unrealistic assumptions.
Noah's argument that heterodox economics is less mathematical than the mainstream was also truer some decades ago, but today, with so many physicists and mathematicians in the "heterodox" camp, it's a very dated defence of the mainstream.
The standard riposte to critics of mainstream economics used to be that they are critical simply because they lack the mathematical skills to understand Neoclassical models, and - the argument Noah repeats here - that their papers are just verbal hand-waving that can't be given precise mathematical form, and therefore can't be tested:
"Ideas like Minsky's, with no equations"? If it's equations and Minsky you want, try this macroeconomics paper: "Destabilizing a stable crisis: Employment persistence and government intervention in macroeconomics" (Costa Lima, Grasselli, Wang & Wu 2014). And I defy any Neoclassical to tell the authors (including mathematician Matheus Grasselli, whose PhD was entitled "Classical and Quantum Information Geometry") that they lack the mathematical ability to understand Neoclassical models.
The mathematics used in heterodox papers like this one is in fact harder than that used by the mainstream, because it rejects a crucial "simplifying assumption" that mainstreamers routinely use to make their models easier to handle: imposing linearity on unstable nonlinear systems.
Imposing linearity on a nonlinear system is a valid procedure if, and only if, the equilibrium around which the model is linearized is stable. But the canonical model from which DSGE models were derived - Ramsey's 1928 optimal savings model - has an unstable equilibrium shaped like a horse's saddle. Imagine trying to drop a ball onto a saddle so that it doesn't slide off - impossible, no?
Not if you're a "representative agent" with "rational expectations"! Neoclassical modelers assume that the "representative agents" in their models are in effect clever enough to be able to drop a ball onto the economic saddle and have it remain on it, rather than sliding off (they call it imposing a "transversality condition").
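The saddle-point problem is easy to see numerically. A minimal sketch (a generic two-dimensional linear saddle, not any particular DSGE model): the system matrix has one negative and one positive eigenvalue, so only an initial condition lying exactly on the stable eigenvector converges to the equilibrium - any perturbation off it, however tiny, eventually diverges. That knife-edge is precisely what the "transversality condition" assumes agents can balance on.

```python
import numpy as np

# A generic saddle: one stable (negative) and one unstable (positive) eigenvalue.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])  # eigenvalues are -1 and +1

eigvals, eigvecs = np.linalg.eig(A)
stable = eigvecs[:, np.argmin(eigvals)]  # eigenvector of the negative eigenvalue

def simulate(x0, dt=0.01, steps=2000):
    """Euler-integrate dx/dt = A @ x starting from x0."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * (A @ x)
    return x

on_manifold = simulate(stable * 0.1)                   # exactly on the stable eigenvector
off_manifold = simulate(stable * 0.1 + [1e-6, 0.0])    # perturbed by one millionth

print(np.linalg.norm(on_manifold))   # shrinks toward the equilibrium
print(np.linalg.norm(off_manifold))  # the tiny perturbation grows enormously
```

The point of the sketch: the set of initial conditions that stays on the saddle has measure zero, so "assume the economy starts there" is doing all the work in the mainstream model.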
The mathematically more valid approach is to accept that, if your model's equilibria are unstable, then your model will display far-from-equilibrium dynamics, rather than oscillating about and converging on an equilibrium. This requires you to understand and apply techniques from complex systems analysis, which is much more sophisticated than the mathematics Neoclassical modelers use (see the wonderful free ChaosBook http://www.chaosbook.org/ for details). The upside of this effort though is that since the real world is nonlinear, you are much closer to capturing it with fundamentally nonlinear techniques than you are by pretending to model it as if it is linear.
Mainstream economists are belatedly becoming aware of this mistake, as Olivier Blanchard stated recently:
This is not a problem for heterodox economists, since they don't take as an article of faith that the economy is stable. But it does make it much more difficult to evaluate the properties of a model, fit it to data, and so on. For this, we need funding and time - not a casual dismissal of the current state of heterodox mathematical economics.
Can't make forecasts?
Noah is largely correct that heterodox models aren't set up to make numerical forecasts (though some are). Instead, most heterodox models are set up to examine existing trends and assess whether they can be sustained.
Far from being a weakness, this has been a strength of the heterodox approach: it enabled Wynne Godley to warn from as long ago as 1998 that the trends in the USA's financial flows were unsustainable, and that therefore a crisis was inevitable unless these trends were reversed. At the same time that the mainstream was crowing about "The Great Moderation", Wynne was warning that "Goldilocks is doomed".
Wynne's method was essentially simple, yet impossible for the mainstream to replicate, because it considered monetary flows between economic sectors - and the stocks of debt that those flows implied. The mainstream can't do this, not because it's impossible - it's basically accounting - but because they have falsely persuaded themselves that money is neutral, and therefore don't consider the viability of monetary flows in their models.
Dividing the economy into the government, private and international sectors, Godley pointed out that the flows between them must sum to zero: an outflow from any one sector is an inflow to one of the others. Since the public sector under Clinton was running a surplus, and the trade balance was negative, the only way this could be sustained was for the private sector to "run a deficit" - to increase its debt to the banking sector. This implied unsustainable levels of private debt in the future, so that the trends that gave rise to "The Great Moderation" could not continue. As Wynne and Randall Wray put it:
Had Wynne's warnings been heeded, the 2008 crisis might have been averted. But of course they weren't: instead mainstream economists generated numerical forecasts from their DSGE models that extrapolated "The Great Moderation" into the indefinite future. And in 2008, the US (and most of the global economy) crashed into the turning point that Wynne warned was inevitably coming.
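Godley's accounting identity is simple enough to verify in a few lines. A sketch with purely illustrative numbers (not historical US data): since the three sectoral balances must sum to zero, a government surplus combined with a trade deficit forces the private sector into deficit.

```python
# Sectoral balances: private + government + foreign must sum to zero.
# An outflow from any one sector is an inflow to another.
# Figures below are illustrative shares of GDP, not actual US data.

government_balance = +0.02   # government runs a surplus of 2% of GDP
foreign_balance    = +0.03   # trade deficit: the foreign sector accumulates 3% of GDP

# The identity pins down the third balance:
private_balance = -(government_balance + foreign_balance)

print(private_balance)  # -0.05: the private sector must run a 5%-of-GDP deficit,
                        # adding to its debt every year these trends persist
```

Sustained year after year, that private deficit is exactly the rising private-debt trend Godley identified as unsustainable.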
Wynne wasn't the only heterodox economist predicting a future crisis for the US and global economy at a time when mainstream Neoclassical modellers were wrongly predicting continued economic bliss. Others following Hyman Minsky's "Financial Instability Hypothesis" issued similar warnings.
Noah was roughly right, but precisely wrong, when he claimed that "Minsky, though trained in math, chose not to use equations to model the economy - instead, he sketched broad ideas in plain English."
In fact, Minsky began with a mathematical model of financial instability based on Samuelson's multiplier-accelerator model (Minsky, 1957 "Monetary Systems and Accelerator Models" The American Economic Review, 47, pp. 860-883), but abandoned that for a verbal argument later (wonkish hint to Noah: abandoning this was a good idea, because Samuelson's model is economically invalid. Transform it into a vector difference equation and you'll see that its matrix is invertible).
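The transformation mentioned above is mechanical. A sketch with illustrative parameter values (not an endorsement of any particular reading of the model): write Samuelson's second-order equation Y(t) = c(1+v)Y(t-1) - cvY(t-2) in first-order vector form and inspect the companion matrix - its determinant is c·v, and its eigenvalues determine whether the cycles damp out or explode.

```python
import numpy as np

c, v = 0.5, 1.0   # illustrative propensity to consume and accelerator coefficient

# Samuelson's multiplier-accelerator, Y_t = c(1+v) Y_{t-1} - c v Y_{t-2},
# written as the vector difference equation [Y_t, Y_{t-1}] = A @ [Y_{t-1}, Y_{t-2}]:
A = np.array([[c * (1 + v), -c * v],
              [1.0,          0.0 ]])

print(np.linalg.det(A))              # determinant = c*v = 0.5
eigvals = np.linalg.eig(A)[0]
print(np.abs(eigvals))               # both moduli = sqrt(c*v) ~ 0.707 < 1: damped cycles
```

With complex eigenvalues, the modulus equals the square root of c·v, so the model cycles; whether those cycles damp or explode hinges entirely on the product of the two parameters.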
Minsky subsequently attempted to express his model using Kalecki's mathematical approach, but never quite got there. That didn't stop others - including me - from trying to express his ideas mathematically. I succeeded in August 1992 (the paper was published in 1995), and the most remarkable thing about the model - apart from the fact that it did generate a "Minsky crisis" - was that the crisis was preceded by a period of apparent stability. That is what transpired in the real world: the "Great Recession" was preceded by the "Great Moderation". This was not a prediction of Minsky's verbal model itself; it was the product of putting that verbal model into mathematical form, and then seeing how it behaved. The model in effect predicted that, if a Minsky crisis were to occur, it would be preceded by a period of diminishing cycles in inflation (with the wages share of output as a proxy for inflation) and unemployment.
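Dynamics of this family can be sketched in code. What follows is a Goodwin-Keen-style system in the three state variables this literature uses - the wage share, the employment rate, and the private-debt-to-output ratio. The functional forms and parameter values are illustrative choices in the spirit of later treatments of the model (e.g. Grasselli and Costa Lima's 2012 analysis), not a reproduction of the 1995 paper.

```python
# Sketch of a Goodwin-Keen-style model. State variables: wage share (omega),
# employment rate (lam), private-debt-to-output ratio (d). All functional forms
# and parameters below are illustrative, not those of the published papers.
import numpy as np
from scipy.integrate import solve_ivp

alpha = 0.025   # labour productivity growth rate
beta  = 0.02    # labour force growth rate
delta = 0.05    # capital depreciation rate
nu    = 3.0     # capital-to-output ratio
r     = 0.03    # interest rate on private debt

def phillips(lam):
    # Wage growth accelerates sharply as full employment is approached
    return 0.0000641 / (1.0 - lam) ** 2 - 0.0400641

def kappa(pi):
    # Investment share of output responds nonlinearly to the net profit share
    return -0.0065 + np.exp(-5.0 + 20.0 * pi)

def rhs(t, y):
    omega, lam, d = y
    pi = 1.0 - omega - r * d          # profit share net of wages and interest
    g = kappa(pi) / nu - delta        # real output growth rate
    return [omega * (phillips(lam) - alpha),   # real-wage bargaining
            lam * (g - alpha - beta),          # employment tracks growth
            kappa(pi) - pi - d * g]            # debt funds investment beyond profits

sol = solve_ivp(rhs, (0.0, 100.0), [0.8, 0.9, 0.1], dense_output=True)
print(sol.success, sol.y[:, -1])  # whether the system settles or the debt ratio
                                  # explodes depends on parameters and initial state
```

Depending on parameters and initial conditions, systems of this type either converge to a finite-debt equilibrium or generate the debt blow-out, preceded by diminishing cycles, described above.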
The behaviour was so striking that I finished my paper noting it:
This was before the so-called "Great Moderation" was apparent in the data, let alone before Neoclassical economists like Ben Bernanke popularised the term. This was therefore an "out of sample" prediction of my model - and of Minsky's hypothesis. Had the 2008 crisis not been preceded by such a period, my model - and, to the extent that it captured its essence, Minsky's hypothesis as well - would have been disproved. But the phenomenon that my Minsky model predicted as a precursor to crisis actually occurred - along with the crisis itself.
Even without quantitative predictions, these heterodox models - Godley's stock-flow consistent projections and my nonlinear simulations - fared far better than did Neoclassical models with all their econometric bells and whistles.
Over-fitted to the data?
It is indeed true that many heterodox models have numerous parameters, and that a judicious choice of parameter values can enable a model to closely fit the existing data, but be useless for forecasting because it tracks the noise in the data, rather than the causal trends. Of course, this is equally true of mainstream models as well - compare for example the canonical Neoclassical DSGE paper "Shocks and Frictions in US Business Cycles: A Bayesian DSGE Approach" (Smets and Wouters 2007) with the equally canonical Post Keynesian "Stock-Flow Consistent Modelling" (SFCM) paper "Fiscal Policy in a Stock-Flow Consistent (SFC) Model" from the same year (Godley and Lavoie 2007). Both are linear models, and the former has substantially more parameters than the latter.
The fact that this is a serious problem for DSGE models - and not a reason why they are superior to heterodox models - is clearly stated in a new note by Olivier Blanchard:
What really matters, however, as a point of distinction between two approaches that share this same flaw, is not the flaw itself but the different variables that the models regard as essential determinants of the economy's behaviour. There we see chalk and cheese - with the heterodox choices being far more palatable, because they include financial sector variables, whereas the pre-crisis DSGE models did not.
The real problems arise not from over-fitting to what ends up being noise rather than signal, but from fitting a model to real-world data when the model omits crucial determinants of what actually happens, and from developing linear models of a fundamentally nonlinear real world. The former error guarantees that your carefully fitted model will match historical data superbly, but will be wildly wrong about the future, because it omits key factors that determine it. The latter error guarantees that your model can extrapolate existing trends - if it includes the main determinants of the economy - but cannot capture turning points. A linear model is, by definition, linear, and straight lines don't bend.
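The turning-point failure is easy to demonstrate with a toy series. A sketch: fit a straight line to the steep middle phase of a logistic curve - a process with a built-in turning point and a hard ceiling - and extrapolate. The fit over the sample is excellent, but the linear model climbs straight through a ceiling the real process can never cross.

```python
import numpy as np

def logistic(t):
    # An S-shaped process that saturates at 1.0 - it has a built-in turning point
    return 1.0 / (1.0 + np.exp(-(t - 6.0)))

# Fit a straight line to the steep, nearly linear middle phase (t = 4..8)
t_fit = np.linspace(4.0, 8.0, 40)
slope, intercept = np.polyfit(t_fit, logistic(t_fit), 1)

# Extrapolate both well past the saturation point
t_future = 14.0
linear_prediction = slope * t_future + intercept
actual = logistic(t_future)

print(round(actual, 4))          # ~0.9997: the real process has flattened out
print(linear_prediction > 1.0)   # True: the straight line sails past the ceiling
```

Within the fitting window the straight line is a superb approximation; the failure only appears once the process turns, which is exactly when a forecast matters most.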
On the "get the variables right" issue, most modern heterodox models are superior to mainstream DSGE ones, simply because most of them include the financial system and monetary stocks and flows in an intrinsic way.
On the linearity issue, most heterodox SFCM models are linear, and are therefore as flawed as their Neoclassical DSGE counterparts. But it then comes down to how these linear models are used. In the Neoclassical case, they are used to make numerical forecasts, and therefore extrapolate existing trends into the future. In the heterodox case, they are used to ask whether existing trends can be sustained.
The former proclivity led DSGE modellers - such as the team behind the OECD's Economic Outlook Report - to extrapolate the relative tranquillity of 1993-2007 into the indefinite future in June of 2007:
In contrast, Godley and Wray used the SFCM approach (without an actual model) to conclude that the 1993-2007 trends were unsustainable, and that without a change in policy, a crisis was inevitable:
This leads to Noah's next false point, that Neoclassical models do what heterodox ones do anyway.
We're doing it anyway?
There are three main strands in heterodox macro modelling: what is known as "Stock-Flow Consistent Modelling" (SFCM), pioneered by Wynne Godley; nonlinear system dynamics modelling; and heterogeneous multi-agent modelling (there are other approaches too, including structurally estimated models and big-data systems, but these are the main ones). Noah made a strong claim about the stock-flow consistent strand and Neoclassical modelling:
I agree the name is confusing - perhaps it would be better if the name were "Monetary Stock-Flow Consistent Models". With that clarification, there is no way that Neoclassical DSGE models are stock-flow consistent in a monetary sense.
Even after the crisis, most Neoclassical DSGE models don't include money or debt in any intrinsic way (the financial sector turns up as another source of "frictions" that slow down a convergence to equilibrium), and they certainly don't treat the outstanding stock of private debt as a major factor in the economy.
Heterodox SFCM models do include these monetary and debt flows, and therefore the stocks as well. A trend - like that from the mid-1990s to 2000 - that requires private debt to rise faster than GDP indefinitely will be identified as a problem for the economy by a heterodox SFCM model, but not by a Neoclassical DSGE one. A Neoclassical author who believes the fallacious Loanable Funds model of banking is also likely to conclude, wrongly, that the level of private debt is irrelevant (except perhaps during a crisis).
This makes heterodox SFCM models - such as the Kingston Financial Balances Model (KFBM) of the US economy produced by researchers at my own Department - very different to mainstream DSGE models.
No decent results from Agent-Based Models?
Noah concludes with the statement that what is known as "Agent Based Modelling" (ABM), which is very popular in heterodox circles right now, hasn't yet produced robust results:
Broadly speaking, this is true - if you want to use these models for macroeconomic forecasting. But they are useful for illustrating an issue that the mainstream avoids: "emergent properties". A population, even of very similar entities, can generate results that can't be extrapolated from the properties of any one entity taken in isolation. My favourite example is what we commonly call water. There is no such thing as a "liquid molecule", a "steam molecule", or a "snowflake molecule": all these peculiar and, to life, essential features of H2O are emergent properties of the interaction of large numbers of H2O molecules under different environmental conditions. None of them are properties of a single molecule of H2O taken in isolation.
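A hypothetical toy in the spirit of Granovetter's threshold model makes the point: two populations of rule-following agents, identical except for a single agent's threshold, generate radically different aggregate outcomes - something no inspection of one agent in isolation could predict.

```python
def cascade(thresholds):
    """Each agent joins once the number already joined meets its threshold.
    Iterate to a fixed point; return how many agents joined in the end."""
    joined = 0
    while True:
        newly = sum(1 for th in thresholds if th <= joined)
        if newly == joined:
            return joined
        joined = newly

population_a = list(range(100))              # thresholds 0, 1, 2, ..., 99
population_b = [0, 2] + list(range(2, 100))  # identical except one agent: 1 -> 2

print(cascade(population_a))  # 100: everyone joins, one by one
print(cascade(population_b))  # 1: the cascade dies immediately
```

The aggregate outcome - total adoption versus none - is an emergent property of the distribution of thresholds, not of any individual agent's rule: the two populations are indistinguishable at the level of the "representative" member.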
Neoclassical economists unintentionally proved this about isolated consumers as well, in what is known as the Sonnenschein-Mantel-Debreu theorem. But they have sidestepped its results ever since.
The theorem establishes that even if an economy consists entirely of rational utility maximizers who each, taken in isolation, can be shown to have a downward-sloping individual demand curve, the market demand curve for any given market can theoretically take any polynomial shape at all:
Alan Kirman suggested the proper reaction to this discovery almost 30 years ago: the decision by the Neoclassical school, at the time of the second great controversy over economic theory, to abandon class-based analysis was unsound. Since even such a basic concept (to the Neoclassical school) as a downward-sloping demand curve cannot be derived by extrapolating from the properties of an isolated individual, the only reasonable procedure is to work at the level of groups with "collectively coherent behaviour" - what the Classical School called "social classes":
Instead of taking this sensible route, Neoclassical economists - mainly without consciously realising it - took the approach of making the absurd assumption that the entire economy could be treated as a single individual in the fiction of a "representative agent".
Mendacious textbooks played a large role here - which is why I say that they did this without realising that they were doing so. Most of today's influential Neoclassical economists would have learnt their advanced micro from Hal Varian's textbook. Here's how Varian "explained" the Sonnenschein Mantel Debreu results:
The "convenience" of the "representative consumer" led directly to Real Business Cycle models of the macroeconomy, and thence DSGE - which Neoclassicals are now beginning to realise was a monumental blunder.
Multi-agent modelling may not lead to a new policy-oriented theory of macroeconomics. But it acquaints those who do it with the phenomenon of emergent properties - that an aggregate does not function as a scaled-up version of the entities that comprise it. That's a lesson that Neoclassical economists still haven't absorbed.
Previous periods of crisis in economic theory
Since I began this post by calling the current debate the "5th great conflict over the nature of economics", I'd better detail the first three (the 4th being Keynes's battle in the 1930s). These were:
I'll address this very last wonkish issue in a future post.
Econintersect Note: The framework of heterodoxy discussed here has been well established (and widely ignored) for decades, as references provided by Prof. Keen indicate. A particularly well written review of the Great Financial Crisis of 2008 was published by Dirk Bezemer in 2010. That paper was summarized by John Lounsbury in July 2010: What Caused the Blind Spot for Economists Before the Great Financial Crisis?
This Web Page by Steven Hansen ---- Copyright 2010 - 2016 Econintersect LLC - all rights reserved