From the Atlanta Fed — this post was authored by Larry D. Wall
A computer now sits in the middle of virtually every economic transaction in the developed world.
– Hal Varian, chief economist at Google
Hal Varian recently observed that transactions data are increasingly available and discussed five channels through which that availability is likely to affect economic activity.
One channel he did not explicitly address is that of government regulation of economic activity. Yet government regulation has a large impact on many aspects of economic activity, and regulators are also interested in exploiting the explosion in the amount of available data.
This Notes from the Vault post examines the implications of such data availability for one type of regulation that has long been data intensive, prudential regulation of the financial sector.[1] The post begins with a discussion of the historical importance of financial statement data to financial regulation and supervision.[2] Next is a discussion of how granular data on individual assets and transactions data could help supervisors. The final section considers some inherent limitations for answering important supervisory questions using granular data and machine learning.
Financial statement data and prudential financial regulation
The analysis of banks’ financial statements, such as a bank’s income statement and balance sheet, has long been a part of prudential supervision.[3] Given that banks differ in size, the financial statement information is often used to calculate ratios that are more comparable across banks than the raw dollar amounts. An early example from U.S. bank supervision is a 1792 Massachusetts requirement that limited a bank’s notes and loans to no more than twice its paid-in capital, according to University of Pittsburgh Professor John Thom Holdsworth (1971). The use of financial statement ratios continues to play an important role, with the Uniform Bank Performance Report providing an extensive set of ratios currently used by supervisors and other analysts to evaluate bank performance, such as net income to total assets and net loss to average loans and leases.
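As a purely illustrative sketch of how such ratios are computed from raw financial statement amounts, the following uses hypothetical balance sheet and income figures (the bank names and numbers are invented, not drawn from any actual call report):

```python
# Hypothetical financial statement amounts, in thousands of dollars (illustrative only)
banks = {
    "Bank A": {"net_income": 1_200, "total_assets": 95_000,
               "net_losses": 300, "avg_loans_and_leases": 60_000},
    "Bank B": {"net_income": 4_500, "total_assets": 410_000,
               "net_losses": 2_100, "avg_loans_and_leases": 280_000},
}

for name, b in banks.items():
    roa = b["net_income"] / b["total_assets"]                # net income to total assets
    loss_rate = b["net_losses"] / b["avg_loans_and_leases"]  # net loss to average loans and leases
    print(f"{name}: return on assets = {roa:.2%}, net loss rate = {loss_rate:.2%}")
```

Expressed as percentages, such ratios can be compared directly across banks of very different sizes.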
To be sure, bank supervisors have long looked at more granular data. Historically, this took the form of audits by independent accountants and examinations by bank supervisors to verify the accuracy of the aggregate data reported on banks’ financial statements. The auditors and examiners would analyze samples of the underlying data, looking for discrepancies between reported values and properly measured values. Indeed, such reviews at the individual bank level and at the large loan level (the Shared National Credit Program) remain an important part of contemporary supervisory practices.
This historic focus on balance sheet ratios reflected limitations on the capacity for obtaining, storing, and processing more detailed data. As information and analytic technology has progressed, bank supervisors and others have developed more sophisticated ways of analyzing financial statement data. For example, bank supervisors developed statistical models to help in directing supervisory resources to the places where they were most valuable. These so-called early warning models combined economic theory, financial statement data, and statistical models to predict which banks were at greatest risk of failing or of being downgraded in the next examination.[4]
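Such early warning models are typically framed as classification problems. The sketch below is a minimal illustration of that idea using hypothetical financial ratios and an off-the-shelf logistic regression; it is not any supervisory agency's actual model, and the features, labels, and numbers are invented:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: each row is one bank-quarter of financial statement ratios
# [capital/assets, nonperforming loans/loans, return on assets]
X = np.array([
    [0.11, 0.010,  0.012],
    [0.09, 0.020,  0.010],
    [0.05, 0.070, -0.004],
    [0.04, 0.090, -0.010],
    [0.10, 0.015,  0.011],
    [0.06, 0.060, -0.002],
])
# 1 = the bank was downgraded or failed within the next year, 0 = it was not (illustrative labels)
y = np.array([0, 0, 1, 1, 0, 1])

model = LogisticRegression().fit(X, y)

# Score a new bank-quarter of ratios to help prioritize examination resources
new_bank = np.array([[0.055, 0.065, -0.001]])
print("Estimated downgrade/failure probability:", model.predict_proba(new_bank)[0, 1])
```

In practice such models are estimated on many thousands of bank-quarters and combine the statistical output with examiner judgment.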
Along with the use of these data by bank supervisors, economists have estimated thousands of models of bank organizations’ performance using financial statement data to help supervisors and other policymakers better understand banks. Although access to electronic versions of these data was once limited, they are now readily available at no extra cost for U.S. banks and bank holding companies.
Benefits of more granular data and machine learning
As the limitations on the ability to obtain and store data have been relaxed, financial firms and their supervisors have increasingly looked to more granular data to measure risk. One of the more important early initiatives was the development of value at risk (VaR) methodologies by banks to help measure the risk of capital markets operations.[5] VaR combined statistical modeling and data on individual financial instruments (including stocks, bonds, and derivatives) to estimate the loss at a particular quantile of the loss distribution. For example, a $10 million VaR at the 0.10 percent level means that 99.9 percent of the time the loss is estimated to be no greater than $10 million.
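As a minimal sketch of the idea, a historical-simulation VaR at the 0.10 percent level can be computed as a quantile of a portfolio's profit-and-loss history. The data below are simulated and purely illustrative, not any bank's actual methodology:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily profit-and-loss history for a trading portfolio, in $ millions;
# a real VaR system would build this history from granular data on individual instruments.
daily_pnl = rng.normal(loc=0.0, scale=2.5, size=10_000)

# VaR at the 0.10 percent level: the loss exceeded only 0.1 percent of the time,
# i.e., 99.9 percent of days are estimated to lose no more than this amount.
var_999 = -np.percentile(daily_pnl, 0.1)
print(f"One-day 99.9 percent VaR: ${var_999:.1f} million")
```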
Bank supervisors subsequently applied VaR-type methodology to evaluate bank capital adequacy, greatly increasing the sophistication of those calculations. Indeed, Andrew G. Haldane, executive director at the Bank of England, observed that the number of risk buckets used in capital adequacy calculations increased dramatically, from about seven under Basel I (which is based on financial statement data) to over 200,000 under Basel II (which looks at individual instruments).
Looking to the future, the availability of more granular information, combined with new tools to analyze these data, may give supervisors a variety of opportunities to better evaluate the risk of banks and financial systems. Haldane has talked about the possibility of a “Star Trek”-style setup in which a supervisor could track the global flow of funds on a bank of monitors. Supervisors are working on improving their access to the data needed for such monitoring, but considerable work remains.[6] As the data and monitoring systems become available, such tracking may be especially valuable during periods of substantial stress in the financial system, when it is critical that supervisors understand the flow of liquidity out of vulnerable parts of the financial system.
Additionally, the development of machine learning techniques such as deep learning raises the possibility that supervisors will be able to use granular data to better understand the risks in the financial system. Deep learning techniques use neural networks that are loosely based on the way humans process data.[7] These techniques allow computers to learn how to solve relatively complex problems, such as recognizing faces and playing games such as Go at a master level.[8] Deep learning combined with granular data could help supervisors observe otherwise difficult-to-identify relationships in the financial system. Uncovering these relationships could give supervisors an enhanced understanding of the operations of individual banks and of the links across financial institutions and markets.
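As a small, purely illustrative sketch of the kind of pattern recognition involved (nothing close to the scale of the models used for facial recognition or Go), a modest feed-forward neural network can learn a nonlinear relationship in simulated data that a simple linear rule based on the same inputs would miss. All features and labels below are invented:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

# Hypothetical granular features for 1,000 exposures (two normalized risk attributes)
X = rng.uniform(-1, 1, size=(1000, 2))
# Illustrative nonlinear rule for which exposures turn out to be troubled:
# troubled when the two attributes point in opposite directions (an XOR-like pattern)
y = ((X[:, 0] * X[:, 1]) < 0).astype(int)

# A small feed-forward neural network can recover this nonlinear relationship,
# which a linear classifier on the same two features cannot represent.
net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0).fit(X, y)
print("Training accuracy on the nonlinear pattern:", net.score(X, y))
```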
Difficulties in applying big data and machine learning to supervision
The fundamental limitation in applying big data and machine learning techniques to supervision is that the analysis of data can only tell us something about the environment that generated that data. As David M. Rowe stated in a previous post, “No amount of complex mathematical/statistical analysis can possibly squeeze more information from a data set than it contains initially.” In order to use available data to analyze different environments, supervisors need to rely on some theories that link observed data to what we are likely to observe in a different environment.
This limitation is particularly relevant for prudential regulation because prudential regulation seeks to change the incentives facing financial firms to reduce the probability and severity of distress at the bank and financial system levels. This creates at least two problems in applying big data and machine learning to supervisory problems: (1) distress situations for banks and financial systems are relatively infrequent events, and (2) regulatory changes in incentives are likely to change the environment that generates the data.
The problem with the supervisory focus on distress situations is illustrated by the limitations of VaR as used by financial firms for risk management and by supervisors for capital adequacy. VaR is intended to estimate the losses in a portfolio of assets associated with an unlikely event, such as a one-in-1,000 event. This focus on extreme (or tail) events means that there are relatively few observations of periods with such large losses, which makes it difficult to estimate the likelihood of such losses precisely.
Given the small number of relevant observations, an alternative approach is to impose assumptions about the relationship between the large amount of data we have on relatively small losses and the expected losses associated with extremely bad events. As Rowe observes, the assumptions that have been typically used for mathematical tractability have the unfortunate side effect of reducing the probability we would assign to such extreme loss events.
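Rowe's point can be illustrated with a small numerical comparison: fit both a normal distribution (a common tractability assumption) and a fat-tailed Student-t distribution to the same simulated return history, then compare the probability each assigns to an extreme loss. The data, loss threshold, and distributions below are illustrative assumptions, not a claim about any particular market:

```python
from scipy import stats

# Hypothetical daily returns drawn from a fat-tailed process (Student-t, 3 degrees of freedom)
returns = stats.t.rvs(df=3, scale=0.01, size=5_000, random_state=42)

# Fit a normal distribution and a Student-t distribution to the same sample
mu, sigma = stats.norm.fit(returns)
df_t, loc_t, scale_t = stats.t.fit(returns)

# Probability of a one-day loss worse than 6 percent under each fitted distribution;
# the thin-tailed normal assumption assigns far less probability to this extreme event
threshold = -0.06
print("P(loss worse than 6%) under fitted normal:   ", stats.norm.cdf(threshold, mu, sigma))
print("P(loss worse than 6%) under fitted Student-t:", stats.t.cdf(threshold, df_t, loc_t, scale_t))
```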
The application of machine learning to big data obtained from “normal times” is unlikely to improve these loss estimates substantially. The problem is that the returns on assets in a given portfolio are correlated so that large losses will almost always be associated with some event (or shock) that adversely affects most of the portfolio.[9] Thus, machine learning applied to more granular data cannot be of much help when there are few, if any, large shocks during the period in which the data are obtained.
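The role of a common shock can be seen in a small simulation: when asset returns share a common factor, the portfolio's largest losses occur almost exclusively on days when that factor is hit hard, so a sample drawn from calm periods contains essentially no observations of such losses. Everything below (the factor structure, volatilities, and portfolio weights) is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(3)
n_days, n_assets = 2_500, 100

# Hypothetical returns driven by one common factor plus asset-specific noise
common_factor = rng.normal(0.0, 0.01, size=n_days)
idiosyncratic = rng.normal(0.0, 0.01, size=(n_days, n_assets))
returns = 0.8 * common_factor[:, None] + idiosyncratic

# Equally weighted portfolio return each day
portfolio = returns.mean(axis=1)

# The worst portfolio days line up with the days the common factor was most negative
worst_days = np.argsort(portfolio)[:5]
print("Worst portfolio returns:    ", np.round(portfolio[worst_days], 4))
print("Common factor on those days:", np.round(common_factor[worst_days], 4))
```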
The second problem is that supervisors would like to know the effect of a change in regulatory policy, but such a change is intended to alter the environment and so may also alter the process generating the data. This problem is known to economists as the “Lucas critique,” after Robert Lucas’s (1976) famous article. These changes may take a wide variety of forms, including contributing to a fundamental restructuring of the way a market operates, as Carnegie Mellon University Professor Chester Spatt argues occurred with U.S. stock markets. Moreover, binding regulation, by definition, limits financial firms’ ability to exploit profitable opportunities. In doing so, such regulation creates incentives for firms to actively seek out and exploit ways of undercutting the intention of the regulation, which also has the potential to change the data generating process.[10]
Once again, analyzing more granular data with machine learning techniques will not necessarily predict how financial firms will respond to a change in regulation. Firms are not constrained to follow a small set of rules and may respond along dimensions for which there are no observations.[11] However, the availability of more granular data and machine learning may allow supervisors to better understand the whys and hows of the financial system, putting them in a better position to anticipate how the system will react to changes in regulation. Moreover, granular data and machine learning may allow supervisors to detect attempts to avoid regulation earlier in the process, allowing a more prompt supervisory response as gaps in the regulation are identified.
Conclusion
Financial supervisors have long relied on financial data, typically after the data have been aggregated into financial statements. Their ability to monitor financial system developments has been enhanced in recent years by the development of techniques to obtain, store, and analyze very large data sets. However, a critical part of financial supervision involves extrapolating from the data we have to the conditions we may face in the future. Big data and machine learning may help supervisors with this extrapolation, but they seem unlikely to relieve supervisors of this burden for the foreseeable future.
References
Holdsworth, John Thom, 1971. “Lessons of State Banking before the Civil War.” Proceedings of the Academy of Political Science 30, no. 3, 23–36.
Lucas, Robert E., 1976. “Econometric Policy Evaluation: A Critique.” In Carnegie-Rochester Conference Series on Public Policy 1, 19–46. Amsterdam: North-Holland Publishing Company.
Source
https://www.frbatlanta.org/cenfis/publications/notesfromthevault/11-prudential-regulation-bigdata-and-machine-learning-2016-11-21
About the Author
Larry D. Wall is executive director of the Center for Financial Innovation and Stability at the Atlanta Fed. The author thanks Paula Tkac for helpful comments. The views expressed here are the author’s and not necessarily those of the Federal Reserve Bank of Atlanta or the Federal Reserve System. If you wish to comment on this post, please email [email protected].
_______________________________________
Footnotes
[1] Prudential regulation of the financial sector is regulation designed to limit the risk and cost of failure of individual financial firms and/or the financial system.
[2] Although technically regulation and supervision are different concepts, for the purposes of this post they will be used interchangeably. The distinction is explained in the Federal Reserve System: Purposes and Functions chapter “Supervising and Regulating Financial Institutions and Activities.”
[3] The financial statements provided to bank supervisors are often referred to as “call reports.”
[4] For a survey of this literature, see Federal Reserve Bank of St. Louis economists Thomas B. King and Timothy J. Yeager, and Federal Deposit Insurance Corporation economist Daniel A. Nuxoll.
[5] See David Rowe, president of David M. Rowe Risk Advisory, for a discussion of the contribution of VaR to risk management.
[6] One prerequisite for such analysis would be the ability to identify which firms are involved in a particular transaction. As the Office of Financial Research (OFR) observes, both firms and supervisors struggled with identifying firms’ exposure to Lehman Brothers because of a lack of standardized identifiers for firms. The OFR is working with industry and with financial supervisors around the world to create the Legal Entity Identifier (LEI) data standard, which would allow precise identification of the parties to financial transactions. Yet as the OFR observes, the use of the LEI in the United States and abroad currently relies heavily on voluntary implementation.
[7] Charlie Crawford provides an accessible introduction to deep learning.
[8] See Elizabeth Gibney for a discussion of how Google’s AlphaGo program learned how to play Go and beat the European Go champion.
[9] Bank stress tests face a somewhat similar problem in that the severely adverse macroeconomic scenario used in these tests is one for which we have few, if any, observations. My colleague Mark Jensen has argued that Bayesian techniques could be used to augment data drawn from one environment with data drawn from another environment to improve the precision of estimates in these stress tests. However, this is not a “something for nothing” exercise; in order to use data from another environment, one must make some assumptions about the relationship between the two environments.
[10] See my discussion of capital regulation for how and why banks seek to undercut the intent of regulation while complying with the letter of the rules.
[11] This contrasts with games where, even though the space of possible moves may be huge (Gibney says a 150-move game of Go has 10^170 possible moves), the set of possible responses is subject to fixed rules.