Financial institutions are increasingly using new, innovative technology across the product lifecycle to provide more competitive services, grow market share, and increase security. One challenge in the due diligence of selecting new vendors or choosing technologies to develop is fully understanding the technology, data, and analytics underlying these platforms. Specifically, models are often embedded in many lending and service technology platforms to optimize decision making, increase revenue or market share, or control risks.
"In the current competitive environment, future business strategies will depend on both thoughtful data and analytical strategies"
• Loan, deposit or payment processing platforms may include models for assessing creditworthiness, underwriting and pricing, or even product recommendation.
• Additionally, these platforms often contain models that run in the background to monitor for fraud, cyber-risk and money-laundering.
With new advancements in technology platforms, a great deal happens behind the scenes, whether you are using a robo-advisor or a loan, deposit and/or payment technology platform to expand markets, increase revenue from existing clients, enhance security or reduce fraud. However, these advancements need to be implemented so that they work as intended, which is why Model Risk Management, along with IT and data governance, should be a partner in the process of developing or purchasing new technology platforms and should validate the models in these platforms before they are implemented.
Additionally, in many of these platforms machine learning (ML) and artificial intelligence (AI) models are increasingly utilized, given the available computing power, rich transaction-level data sources, and the volume of variables available from transaction systems, credit bureaus, and alternative social-media sources.
• However, their validation requires advanced analytical skills, in-depth testing, and transparency into the drivers of the model (including their business sense, non-discrimination, and the reliability of relationships).
• This is in direct contrast to statements that these are automated machines. Rather, the development, validation, and maintenance of ML and AI models require more effort, with human experts working alongside the machine.
The benefits gained from ML and AI methods should be weighed against the burden of testing required to implement and maintain them.
Given the frequent embedding of models within these platforms, the model risk management function will thoroughly validate any models (e.g., regression, ML or AI models) to ensure that they are reliable and robust, as well as implement ongoing testing in production. Pre-production validation will include an array of testing, and these tests provide valuable information about how the solutions work and their limitations. All models have some level of error, and in a fast-changing environment that model error can lead to an unacceptable level of losses or lost opportunity if not controlled. For example, model development and validation testing should:
• Address known data inaccuracies prior to model development and test for them in validation; otherwise the model will be unreliable if built on flawed data.
• Analyze changes in the population profile of customers or transactions over time and how these might impact the model’s robustness.
• Assess which variables contribute to the outcome being modeled as well as the stability and business sense of those relationships.
• Conduct statistical testing of the model fit as well as testing of the sensitivity to different populations or scenarios to understand under which circumstances the model performs well.
• Benchmark the candidate model against alternatives, including different model methods and model design choices as well as the selection of different predictive variables.
• Evaluate the model performance across different time periods, populations and/or scenarios to ensure that the model can perform in production across a range of circumstances.
• Test the data and model code in production to ensure that the coding is correct and that data flows retain the data integrity.
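To make the population-profile analysis above concrete, the sketch below computes a population stability index (PSI), a common statistic for quantifying shifts in a scored population between a model's development sample and production. The score bands, distributions, and the 0.10 / 0.25 rule-of-thumb thresholds are illustrative industry conventions, not figures from any specific platform.

```python
# Illustrative sketch: Population Stability Index (PSI) to detect shifts
# in a scored population between development and production samples.
# All distributions below are hypothetical.
import math

def psi(expected_pcts, actual_pcts, eps=1e-6):
    """PSI = sum over bins of (actual% - expected%) * ln(actual% / expected%)."""
    total = 0.0
    for e, a in zip(expected_pcts, actual_pcts):
        e = max(e, eps)  # guard against empty bins
        a = max(a, eps)
        total += (a - e) * math.log(a / e)
    return total

# Share of accounts in each score band at development vs. in production
dev_dist = [0.10, 0.20, 0.40, 0.20, 0.10]
prod_dist = [0.15, 0.25, 0.35, 0.15, 0.10]

value = psi(dev_dist, prod_dist)
# Common rule of thumb: < 0.10 stable, 0.10-0.25 monitor, > 0.25 material shift
```

The same calculation can be applied to individual predictive variables, not just the final score, to isolate which inputs are driving a shift.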
In production, the model will require monitoring to ensure that it continues to provide reliable results. This monitoring will provide valuable insights into potential model improvements as well as insights for management into shifts in the customer population or transaction patterns. This testing should include:
• Statistical tests of the model’s fit and of the model’s performance against expectations.
• Monitoring of key assumptions, such as the customer profile, customer behavior patterns and demand, the economic or market environment, or other external influencers, as material changes in these factors can render the model less reliable.
• As a backup, a benchmark model may be run in parallel and improved on an ongoing basis, eventually to replace the champion model.
In addition to ongoing testing to monitor model performance and robustness, production data and model results must be stored for reporting and future analysis. If this is not prioritized, sufficient data may not be available for the advancements in analytics that provide the necessary competitive advantage.
In the current competitive environment, future business strategies will depend on both thoughtful data and analytical strategies. These are often embedded in new technology platforms to streamline and optimize business processes. It is the responsibility of executives to work with risk management and technology groups to ensure that these platforms fully perform as advertised without unintended consequences. For this reason, functions such as Model Risk Management are strategic partners in innovation.