Economic Modelling and Artificial Intelligence: Is Economic Reasoning Always Based on a “Hidden” Model?

April 15th, 2016
in history, macroeconomics

by Philip Pilkington

Article of the Week from Fixing the Economists

There’s a trope one hears from economists all too often when one discusses the usefulness (or uselessness) of models. The argument usually runs like this: the person questioning the use of models says, for example, that all the useful predictions over the past X number of years have not used formal models; then the person defending the models says that all these predictions were made using models, it was just that the models were not explicitly articulated.

There are a few variations on this trope, but the underlying assumption is always the same: people have, locked inside their heads, models of the world that they apply without even knowing it. The same is true of economists who think, wrongly, that they work from trained intuition. They are being naive because they are, in fact, working from a model; it is just one of which they are, as yet, unconscious.

Epistemologically, this is a very slippery argument. But rather than getting into the nitty-gritty I’m going to say that this argument has already been played out in the computer sciences and economists who claim that we all walk around with models in our heads might do well to pay this some attention.

To get an idea of what this debate was all about we have to rewind to the 1960s. At this time research into cybernetics and artificial intelligence (AI) had reached a level of optimism never seen before or since. Yes, there were some — like Stanley Kubrick and Norbert Wiener — who painted a dark picture of where cybernetics and AI might lead, but there was a general consensus that we were heading firmly in the direction of AI, of HAL 9000 and so on.

Then an obscure philosopher named Hubert Dreyfus, working in a tradition of philosophy completely alien to his native MIT, published a paper entitled Alchemy and Artificial Intelligence, which he wrote while working at the RAND Corporation, a hotbed of AI research and a center of US strategic thinking in the Cold War. The paper started with a sober assessment of the AI movement at the time, written in true RAND style.

Early successes in programming digital computers to exhibit simple forms of intelligent behavior, coupled with the belief that intelligent activities differ only in their degree of complexity, have led to a conviction that the information processing underlying any cognitive performance can be simulated on a digital computer. Attempts to simulate cognitive processes have, however, run into greater difficulties than anticipated. (p.iii)

Dreyfus was expressing a deeply felt skepticism that touched a nerve with the optimists in the AI community. This was probably not helped, of course, by the air of ridicule in the paper, in which Dreyfus compared AI research to alchemy. Dreyfus was soon isolated from his peers, who attacked both his ideas and his person. His criticisms also suffered a setback: after he pointed out that a computer could not beat a ten-year-old at chess, AI proponents had him play against the Mac Hack chess program and he lost, suggesting that Dreyfus' criticisms had, perhaps, gone too far.

Nevertheless, Dreyfus’ criticisms were not the products of an angry crank. Rather they were an epistemological attack on the foundations of AI. AI assumed, as Dreyfus notes in the above quote, that people think essentially in terms of symbols and rules — as do computers. Dreyfus, however, came from the phenomenological tradition in philosophy and insisted that this was not the case. He summarised his position in the introduction to the MIT edition of What Computers Still Can’t Do:

My work from 1965 on can be seen in retrospect as a repeatedly revised attempt to justify my intuition, based on my study of Martin Heidegger, Maurice Merleau-Ponty, and the later Wittgenstein, that the GOFAI [Good Old Fashioned AI] research program would eventually fail. My first take on the inherent difficulties of the symbolic information-processing model of the mind was that our sense of relevance was holistic and required involvement in ongoing activity, whereas symbol representations were atomistic and totally detached from such activity. By the time of the second edition of What Computers Can’t Do in 1979, the problem of representing what I had vaguely been referring to as the holistic context was beginning to be perceived by AI researchers as a serious obstacle. In my new introduction I therefore tried to show that what they called the commonsense-knowledge problem was not really a problem about how to represent knowledge; rather, the everyday commonsense background understanding that allows us to experience what is currently relevant as we deal with things and people is a kind of know-how. The problem precisely was that this know-how, along with all the interests, feelings, motivations, and bodily capacities that go to make a human being, would have had to be conveyed to the computer as knowledge — as a huge and complex belief system — and making our inarticulate, preconceptual background understanding of what it is like to be a human being explicit in a symbolic representation seemed to me a hopeless task. (p.xi-xii — Emphasis Original)

Regular readers of this blog will recognise in this my own, Lars Syll's, Tony Lawson's and ultimately Keynes' critiques of applied economic modelling and the use of econometric techniques. The problems are the same. Whereas certain people in the economics community try to model, in terms of symbols, processes so complex that they cannot be captured in those symbols alone, the AI community attempted an even more daunting but not unrelated task: to use symbolic forms to model human consciousness itself. The AI modellers were, in a very real sense, trying to play God.

They basically failed, of course. Today AI research is much more humble and, although many laymen and some futurist types still hold fast to a sci-fi view of the possibilities of AI, most of Dreyfus' substantive predictions have been vindicated. AI failed spectacularly at mimicking the processes of human consciousness through the manipulation of symbols in computer programs, and the likelihood that this will happen at any point in the future is remarkably slim. The AI community have run into the problems that Dreyfus predicted — problems such as how to simulate non-symbolic reasoning, or what Dreyfus calls "know-how" — and although there is still some optimism, the difficulty of these problems has made the AI community far more cautious in its claims. Much of the investment in boundless visions of the possibilities of AI now appears to be mere emotionally charged fantasy — backed, undoubtedly, by an all-too-human desire to play at being God.

What can economists learn from this? A great deal actually. When dealing with economic data we use processes of reasoning that do not conform to systems of symbols — i.e. to models. This is why basically all interesting and relevant predictions come from intuitive empirical work and why none are generated by applying models. We do not, contrary to what the modellers believe, all carry models around with us in our heads that are just waiting to be discovered and applied. And anyone who thinks so will likely prove to be sub-par at actual applied work.

Human processes of reasoning are enormously complex and it is very difficult — if not impossible — to get an “outside” or “God’s eye” view of them. Thus, attempting to replicate the processes of reasoning inherent in economic thinking in models will only be useful for didactic purposes — and even then it will only be useful if students are made aware that these models cannot be directly applied and do not directly simulate how economics is done.

With that in mind, I leave you with a nice quote from one of the late Wynne Godley's students, Nick Edmonds, who does a great deal of modelling but is nevertheless reflective enough to recognise its limits.

I think it is very important to recognise the limits to what models can do. It is easy to get seduced into thinking that a model is some kind of oracle. This is a mistake. Any model is necessarily a huge simplification. The results depend critically on the assumptions made. However complex and detailed they are, all they really reflect is the theories of the modeller… The model is not revealing any new truth, it is simply reflecting our own ideas, helping us to visualise how a massively complex system fits together. (My Emphasis)

Update: Here is an excellent film about the philosophical tradition Dreyfus comes from, which explains in far greater detail than I can here why it is wrong to conceive of human thought and action in terms of models. It also features Dreyfus and includes an extensive discussion of the AI debate from about the 14-minute mark on.

Update II: The film has been taken down from Youtube. I’ve linked to the trailer instead. The film is worth seeking out though.


This Web Page by Steven Hansen ---- Copyright 2010 - 2016 Econintersect LLC - all rights reserved