*by Philip Pilkington*

**Article of the Week from Fixing the Economists**

The danger when mathematicians try to do economic modelling is twofold. The first problem is that they often do not have a clue about what they are doing or the object that they are trying to model. The second problem is that they often begin to mistake the model for reality and make grandiose claims about what they have achieved or will potentially achieve that ring hollow when scrutinised.

Both of these problems are amplified as economists, who are often not very strong mathematicians, assume that, because the mathematician is more mathematically savvy than they are, whatever he is doing must be correct and any doubts the economist harbours must be miscomprehensions. It is in this way that bad but highly mathematised economics becomes like a potential brain slug sitting on the forehead of good economists, leading them down blind alleys and wayward paths.

I’ve recently got into something of a spat with one Matheus Grasselli on the INET YSI Facebook page, a page where young economists looking for alternative approaches congregate. Grasselli is an associate professor of mathematics and he is currently part of the burgeoning industry of Minsky modelling that has sprung up since the crisis. You can see a presentation of Grasselli’s work here.

I heard about the work Grasselli and others were doing at the Fields Institute some time ago and I was instantly skeptical: isn’t there an established tradition in Post-Keynesian economics that goes back over 80 years that uses mathematics only in a very presentational manner? Indeed, doesn’t Post-Keynesian economics generally follow the spirit laid out by Keynes (a mathematician himself by training) in the following quote from his *General Theory*?

The object of our analysis is, not to provide a machine, or method of blind manipulation, which will furnish an infallible answer, but to provide ourselves with an organised and orderly method of thinking out particular problems; and, after we have reached a provisional conclusion by isolating the complicating factors one by one, we then have to go back on ourselves and allow, as well as we can, for the probable interactions of the factors amongst themselves. This is the nature of economic thinking. Any other way of applying our formal principles of thought (without which, however, we shall be lost in the wood) will lead us into error. It is a great fault of symbolic pseudo-mathematical methods of formalising a system of economic analysis that they expressly assume strict independence between the factors involved and lose all their cogency and authority if this hypothesis is disallowed; whereas, in ordinary discourse, where we are not blindly manipulating but know all the time what we are doing and what the words mean, we can keep “at the back of our heads” the necessary reserves and qualifications and the adjustments which we shall have to make later on, in a way in which we cannot keep complicated partial differentials “at the back” of several pages of algebra which assume that they all vanish. Too large a proportion of recent “mathematical” economics are mere concoctions, as imprecise as the initial assumptions they rest on, which allow the author to lose sight of the complexities and interdependencies of the real world in a maze of pretentious and unhelpful symbols.

Keynes, and those that followed him, were aware that the mathematics used by economists should only be part of establishing “*an organised and orderly method of thinking out particular problems*”. Once it was used to build giant formal models, it risked moving away from the real world entirely and becoming a fetish game of “my maths is bigger than your maths”. The result is schoolyard academic squabbles of the most boring and irrelevant kind.

Of the two sins laid out at the beginning, Grasselli has fallen into both. First, he has made the claim that the Post-Keynesian concern with the non-ergodicity of economic systems, and the implications of this for modelling and empirical mathematical research, is without foundation. On the Facebook page he writes the following about what he refers to as the “*ergodicity nonsense*”:

OK, this ergodicity nonsense gets thrown around a lot, so I should comment on it. You only need a process (time series, system, whatever) to be ergodic if you are trying to make estimates of properties of a given probability distribution based on past data. The idea is that enough observations through time (the so called time-averages) give you information about properties of the probability distribution over the sample space (so called ensemble averages). So for example you observe a stock price long enough and get better and better estimates of its moments (mean, variance, kurtosis, etc). Presumably you then use these estimates in whatever formula you came up with (Black-Scholes or whatever) to compute something else about the future (say the price of an option). The same story holds for almost all mainstream econometric models: postulate some relationship, use historical time series to estimate the parameters, plug the parameters into the relationship and spill out a prediction/forecast.

Of course none of this works if the process you are studying is non-ergodic, because the time averages will NOT be reliable estimates of the probability distribution. So the whole thing goes up in flames and people like Paul Davidson goes around repeating “non-ergodic, non-ergodic” ad infinitum. The thing is, none of this is necessary if you take a Bayes’s theorem view of prediction/forecast. You start by assigning prior probabilities to models (even models that have nothing to do with each other, like an IS/LM model and a DSGE model with their respective parameters), make predictions/forecasts based on these prior probabilities, and then update them when new information becomes available. Voila, no need for ergodicity. Bayesian statistics could not care less if the prior probabilities change because they are time-dependent, the world changed, or you were too stupid to assign them to begin with. It is only a narrow frequentist view of prediction that requires ergodicity (and a host of other assumptions like asymptotic normality of errors) to be applicable. Unfortunately, that’s what’s used by most econometricians. But it doesn’t need to be like that. My friend Chris Rogers from Cambridge has a t-shirt that illustrates this point. It says: “Estimate Nothing!”. I think I’ll order a bunch and distribute to my students.
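The distinction drawn in the first paragraph of the quote, time averages versus ensemble averages, is easy to see in a toy simulation. The sketch below is my own illustration, not anything from Grasselli’s models: it compares a stationary noise process, where a single long path’s time average does track the ensemble mean of zero, with a random walk, where it does not.

```python
import random

random.seed(42)

def simulate(steps, ergodic):
    """One sample path: iid Gaussian noise (ergodic) or a random walk (non-ergodic)."""
    x, path = 0.0, []
    for _ in range(steps):
        shock = random.gauss(0.0, 1.0)
        x = shock if ergodic else x + shock
        path.append(x)
    return path

def time_average(path):
    return sum(path) / len(path)

# For the stationary process, one path's time average converges on the
# ensemble mean (zero). For the random walk it remains a random quantity:
# the time averages of different paths scatter ever more widely as the
# paths get longer, so no single history pins down the "true" moments.
print(time_average(simulate(10_000, ergodic=True)))   # close to 0
print(time_average(simulate(10_000, ergodic=False)))  # typically far from 0
```

Running several independent random-walk paths and comparing their time averages makes the failure concrete: the averages disagree wildly with one another, which is exactly why time-series estimation breaks down for a non-ergodic process.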
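The Bayesian procedure described in the second paragraph of the quote can also be sketched in a few lines: assign prior weights to rival models, reweight each by the likelihood of every new observation, and read off the posterior. The two “models” below are hypothetical stand-ins (a pair of Gaussian forecasts), not the IS/LM or DSGE models mentioned in the quote.

```python
import math

def gaussian_pdf(x, mean, sd):
    """Density of a normal distribution, used as each model's likelihood."""
    return math.exp(-((x - mean) ** 2) / (2 * sd * sd)) / (sd * math.sqrt(2 * math.pi))

# Two rival toy models of next period's value, each predicting N(mean, sd).
models = {"A": (0.0, 1.0), "B": (2.0, 1.0)}
posterior = {"A": 0.5, "B": 0.5}  # equal prior weight on each model

def update(posterior, observation):
    """Bayes' rule: reweight each model by how well it predicted the observation."""
    weighted = {m: p * gaussian_pdf(observation, *models[m]) for m, p in posterior.items()}
    total = sum(weighted.values())
    return {m: w / total for m, w in weighted.items()}

# Observations near 2 shift belief toward model B. Note that nothing here
# is estimated from long-run time averages, which is the quote's point:
# the updating step makes no ergodicity assumption.
for obs in [1.8, 2.1, 1.9]:
    posterior = update(posterior, obs)
print(posterior)  # weight on "B" now well above its 0.5 prior
```

Whether this machinery answers the Post-Keynesian objection is, of course, precisely what is in dispute in the rest of this article; the sketch only shows the mechanics Grasselli has in mind.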

It is not clear that Grasselli’s approach here can be used in any meaningful way in empirical work. What we are concerned with as economists is trying to make predictions about the future. These range from the likely effects of policy to the moves in markets worldwide. What Grasselli is interested in here is the robustness of his model. He wants to engage in schoolyard posturing, saying “*my model is better than your model because it made better predictions*”. This is a very tempting path for academics because it allows them to engage in some sort of competition. That the competition is completely irrelevant matters little so long as it provides a distraction in which to show off the various tricks one has learned.

Indeed, in misunderstanding the object of economic analysis — one so eloquently laid out by Keynes in the above quote — Grasselli sets up Post-Keynesian economics to become yet another stale classroom discipline with no bearing on real-world analysis whatsoever. He also risks turning out a new generation of students who cannot do any real empirical work and instead show off their mathematical prowess to others rather than their results. If Grasselli does order the “Estimate Nothing!” t-shirts perhaps he should have written on the back “And Become Completely Irrelevant!”.

The second sin Grasselli has committed has to do with the claims he makes for his models. As we will see this sin is committed for largely the same reasons as the first. Recently — and this is what sparked off the debate on the INET page — Grasselli had an article written up on his work by a fellow mathematician. The article was aimed at investment advisers and ran with the impressive title *A Better Way to Measure Systemic Risk*.

As anyone familiar with the investment community will know, that title promises rather a lot. If you can measure systemic risk you can adjust your portfolio accordingly and gain a distinct advantage over the other guy. That’s a big promise for investment guys; a bit of a Holy Grail, actually. I pointed out to Grasselli, however, that nowhere in the article could I see any method discussed for how to measure systemic risk. This did not surprise me, as I don’t think that such a thing is possible using Minsky’s work; it was something I gave quite a bit of thought to about a year ago when I was choosing my dissertation topic.

Now, I assume that Grasselli did not himself choose the title of the article. But he must have at least given an impression to the person who did — who, remember, is a mathematician himself and not some starry-eyed journalist. So, I called Grasselli out on this and said that I didn’t think he had such a measure. He countered that he did and laid out his approach. I said that he was just comparing models with one another and that this meant nothing. Here is his response (which I think is the clearest explanation of what he is doing):

I’m not comparing models, I’m comparing systems within the same model. Say System 1 has only one locally stable equilibrium, whereas System 2 has two (a good one and a bad one). Which one has more systemic risk? There’s your first measure. Now say for System 2 you have two sets of initial conditions: one well inside the basin of attraction for the good equilibrium (say low debt) and another very close to the boundary of the basin of attraction (say with high debt). Which set of initial conditions poses higher systemic risk? There’s your second measure. Finally, you are monitoring a parameter that is known to be associated with a bifurcation, say the size of the government response when employment is low, and the government needs to decide between two stimulus packages, one above and one below the bifurcation threshold. Which policy leads to higher systemic risk? There’s your third measure.
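What Grasselli describes can be reproduced with any textbook dynamical system. The sketch below uses a generic one-dimensional system of my own choosing, not his Minsky model: it has a “good” stable equilibrium at x = 0, a “bad” one at x = 2, and an unstable equilibrium at x = 1 marking the boundary between their basins of attraction. His “second measure” then amounts to asking how close an initial condition sits to that boundary.

```python
def flow(x):
    """Toy 1-D dynamics with two stable equilibria (x=0 'good', x=2 'bad')
    separated by an unstable equilibrium at x=1 (the basin boundary)."""
    return -x * (x - 1.0) * (x - 2.0)

def settle(x0, dt=0.01, steps=5_000):
    """Euler-integrate from initial condition x0 and report which
    equilibrium the state ends up at."""
    x = x0
    for _ in range(steps):
        x += dt * flow(x)
    return round(x)  # 0 or 2 for initial conditions off the boundary

# Initial conditions near the basin boundary (x = 1) are "riskier" than
# ones deep inside the good basin: a small shock can tip them across.
print(settle(0.3))   # deep in the good basin  -> 0
print(settle(0.97))  # near the boundary       -> still 0, but fragile
print(settle(1.03))  # just across it          -> 2
```

The point at issue in what follows is not whether such classifications can be computed — they clearly can — but whether computing them inside a model deserves to be called “measuring systemic risk” in the real economy.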

What Grasselli is doing here is creating a model in which he can simulate various scenarios to see which produce high-risk and which low-risk environments within said model. But is this really *“measuring systemic risk”*? I don’t think that it is. To say that it is a means to measure systemic risk would be like me saying that I had found a way to measure the size of God; when incredulous people came around to my house to see my technique, all they would find is a computer simulation I had created of what I think to be God, in which I could measure Him/Her/It.

What sounds impressive on the outside is actually rather mediocre and banal (it would also be a bit weird if it were not socially sanctioned, but I digress). It is also a largely irrelevant way to determine policy. Again, as economists what we need is a toolbox with which we can analyse certain problems, not a *“maze of pretentious and unhelpful symbols”* that leads the economist *“to lose sight of the complexities and interdependencies of the real world”*, as Keynes wrote in 1936. Grasselli’s technique can be used to “*wow*” politicians and investors, but it cannot be used to make real choices, which will always be carried out by practical, down-to-earth people with better or worse economic training.

So will the brain slug be passed around? Will Grasselli’s approach be adopted by Post-Keynesians? Will the debates over ergodicity evaporate from the journals as simple misunderstandings? Will those same pages instead overflow with complex mathematical formulations and simulations? I doubt it. Some will buy into Grasselli’s program as a passing gimmick with good funding. Some will be drawn to it thinking that by having “*bigger maths*” than the neoclassicals we will win the debate — like a boys’ school bathroom game of a similar kind. But it will likely peter out when the results are seen to be what they will likely be: the navel-gazing of model-builders basking in self-admiration at the constructions they have built.

Or I may be completely wrong and my skepticism may be misplaced. Perhaps Grasselli’s program will produce wondrous new insights into economic systems that the likes of me have never imagined before. Maybe it will produce predictions about markets and economies which I could have never hoped to have made without the models. If such is the case I will be the first to say that I was wrong and I will give Grasselli all the praise that he will undoubtedly deserve. In the meantime, however, I continue to register not just my skepticism, but my extreme skepticism and plead with those who do engage in such exercises to tone down the claims they are making lest they embarrass the Post-Keynesian community at large.