AI in Financial Services: Joint OSFI and GRI Report Highlights Need for Safeguards and Risk Management as a Prelude to Enhanced OSFI Guidance
Financial Services Bulletin
3 minute read
While rapid advancements in artificial intelligence (“AI”) have created opportunities for financial institutions in Canada, they have also underscored the need to mitigate the risks accompanying AI technology. To promote discussion on the responsible use of AI in the Canadian financial services industry, the Office of the Superintendent of Financial Institutions (“OSFI”) and the Global Risk Institute (“GRI”) partnered to create the Financial Industry Forum on Artificial Intelligence (“FIFAI”). FIFAI brought together a group of financial services experts from industry, government and academia to advance the conversation on best practices for AI risk management.
Based on FIFAI's discussions, OSFI and GRI released a joint report on April 17, 2023 on the ethical, legal and financial implications of AI for the Canadian financial services industry (the "OSFI-GRI Report").
The EDGE Principles
The OSFI-GRI Report is organized into four areas identified by FIFAI as having the greatest importance to AI models: Explainability, Data, Governance and Ethics – the “EDGE” principles.
- Explainability enables customers of financial institutions to understand the reasons for an AI model’s decisions.
- Data leveraged by AI allows financial institutions to offer products and services that are targeted and tailored to their customers. It also improves fraud detection, risk analysis and management, operational efficiency, and decision-making.
- Governance ensures that financial institutions have the correct culture, tools and frameworks to support the realization of AI’s potential.
- Ethics encourages financial institutions to consider the larger societal effects of their AI systems.
According to the OSFI-GRI Report, explainability should be considered at the outset of an AI model's selection and design. The appropriate level of explainability will be shaped by several factors, including what needs to be explained, the complexity of the model and who needs the explanation. For example, an explanation that is sufficient for a customer may be insufficient for a regulator or a data scientist. Explainability will also depend on the materiality of the particular use case. For instance, higher levels of explainability will be required for AI models used to make credit decisions than for AI models used in chatbots.
The OSFI-GRI Report further highlights the importance of disclosing adequate and relevant information on AI models to financial institution customers. At the same time, financial institutions should ensure that AI-related disclosure does not undermine their cyber security or competitive advantage.
Although financial institutions have been working with data for a long time, the integration of AI into their operations has presented challenges for managing and utilizing data. AI models can process massive amounts of data, which makes maintaining high data quality difficult. Financial institutions must also ensure that adequate measures are in place to protect sensitive personal and financial information. The OSFI-GRI Report emphasizes that these challenges can be alleviated through sound data governance.
Model risk management came into focus in 2017 with OSFI's introduction of Guideline E-23: Enterprise-Wide Model Risk Management for Deposit-Taking Institutions. The increasing use of AI models, which pose many of the same risks as traditional models, has prompted financial institutions to consider how to factor AI into their governance frameworks. The OSFI-GRI Report identifies the following elements as essential for good governance of AI in financial institutions:
- it should be holistic and encompass the entire organization;
- roles and responsibilities should be clearly articulated;
- it should include a well-defined risk appetite; and
- it should have the flexibility to pivot where required as new systems and risks emerge.
Ethics is a concept that is both subjective and nuanced, and addressing AI ethics is difficult because ethical standards evolve over time. At the same time, the societal expectation that financial institutions maintain high ethical standards is only increasing. Given these challenges, the OSFI-GRI Report stresses the importance of financial institutions maintaining transparency by disclosing how their AI models meet high ethical standards.
The insights and discussion from FIFAI have highlighted the need to set strong regulations surrounding AI while ensuring that financial institutions continue to evolve and remain competitive. FIFAI has also demonstrated the importance of collaboration and a desire for ongoing dialogue about the safe integration and use of AI in the Canadian financial services sector. Regulatory guidance on AI is expected soon from OSFI, which is set to release an enhanced draft of Guideline E-23 for public consultation later in 2023 to, among other things, address the emerging risks of models that use advanced analytics (including AI and machine learning). The enhanced Guideline is expected to apply equally to federally regulated deposit-taking institutions, federally regulated insurance companies and federally regulated pension plans.
by Darcy Ammerman and Srinidhi Akkur (Articling Student)
A Cautionary Note
The foregoing provides only an overview and does not constitute legal advice. Readers are cautioned against making any decisions based on this material alone. Rather, specific legal advice should be obtained.
© McMillan LLP 2023