Artificial Intelligence (AI) and Machine Learning (ML) are buzzwords in today’s data-centric business world. Firms in most industries are investing in bringing the capabilities of these techniques to bear on a wide variety of problems, from fraud detection to predicting the success of a marketing strategy. Proponents make bold claims about the benefits of AI and ML to a company’s bottom line. Not all these claims are unfounded, but what’s usually not discussed is the risk of relying on these and other mathematical techniques.
At the heart of AI and ML are models. Conceptually, a model is a simplified version of reality. We take an event of interest—whether an applicant will default on a loan, what next quarter’s revenue will be, whether the new ad campaign will increase sales—and try to boil its necessarily multi-faceted, idiosyncratic, and largely unknown or unknowable causes into a (relatively) small set of calculations. These calculations are the practical implementation of all the assumptions, theories, data, expert opinion, and mathematics that go into the model development process. They are usually instantiated in computer code that can be as complex as commercial-grade software or as simple as a few formulas in a spreadsheet.
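To make that concrete, here is a deliberately tiny sketch of what such a model can look like in code. It scores a loan applicant's default risk with a logistic formula; the inputs and coefficients are invented for illustration, not drawn from any real lending model.

```python
import math

# A hypothetical, deliberately simple "model": a logistic formula that
# turns two applicant attributes into an estimated default probability.
# The coefficients are invented for illustration only.
def default_probability(debt_to_income: float, missed_payments: int) -> float:
    """Estimate the probability of loan default from two inputs."""
    score = -3.0 + 4.0 * debt_to_income + 0.8 * missed_payments
    return 1.0 / (1.0 + math.exp(-score))

p = default_probability(debt_to_income=0.35, missed_payments=2)
print(f"Estimated default probability: {p:.2f}")  # prints 0.50
```

Even a toy like this embodies assumptions (which inputs matter, how they combine) and an implementation (the formula as written), and both are places where error can creep in.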
Both aspects of models—the simplification of reality and the implementation in code—introduce risk to the business that relies on the model’s output to make decisions. This risk is called “model risk.” To be clear, the risk comes when decision-makers use a model to make decisions: an utterly unsound model with severe implementation deficiencies that is never used poses no model risk.
The idea of model risk is likely unfamiliar to those outside banking. In the wake of the Great Recession, the Fed and the OCC expanded existing regulatory guidance on model risk. Banks, which lost enormous sums in part by relying on inaccurate and unsound models, are now required to maintain robust model risk management programs. To my knowledge, model risk management is largely unknown outside of banking, but it should be employed wherever models—including AI and ML—are used.
The costs of overlooking model risk can be devastating. Long Term Capital Management, whose principals included two Nobel laureates, used mathematical models to amass outstanding returns in the mid-1990s but went under in 1998 when market conditions its models had not accounted for materialized. Monetary loss is not the only possible consequence of model risk. The reputation of Northpointe, a company that makes models used to predict criminal recidivism, suffered after an analysis by the investigative newsroom ProPublica claimed that the model was racially biased even though race was not an input to the model.
These examples are extreme but illustrate the potential adverse consequences of the use of models and the importance of model risk management. The good news is that model risk can be managed (though not eliminated). The hallmarks of effective model risk management are: testing whether the model predicts well enough for its intended use; instituting a form of peer review in which other knowledgeable people challenge the assumptions, theories, decisions, and code behind a model; and maintaining a healthy dose of skepticism on management's part before putting a model to use. I highly recommend the Supervisory Guidance on Model Risk Management, published by the Fed and OCC in 2011, for a more thorough discussion of this topic.
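The first of those hallmarks, checking predictive performance before relying on a model, can be sketched in a few lines. This is a minimal illustration, not a prescribed procedure: the tolerance and the sample data below are invented, and real validation would use far richer diagnostics.

```python
# A minimal sketch of one model-risk-management check: backtesting a
# model's predictions against realized outcomes before relying on it.
# The error tolerance and example data are invented for illustration.
def passes_backtest(predictions, outcomes, max_error_rate=0.10):
    """Return True if the model's error rate is within tolerance."""
    errors = sum(1 for p, o in zip(predictions, outcomes) if p != o)
    return errors / len(predictions) <= max_error_rate

predicted = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]  # model's default/no-default calls
actual    = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0]  # what actually happened
print("Fit for intended use:", passes_backtest(predicted, actual))
```

The point is less the arithmetic than the discipline: a model that has never been compared against reality should not be driving decisions.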
AI, ML, and mathematical models can help a business compete in this increasingly data-centric world, but the firms that understand the risks inherent in using these techniques will be better positioned than those who unwittingly trust in all the hype.