
AI Risk Management Frameworks Must Remain Relevant

Current artificial intelligence (AI) risk management frameworks must adapt to remain relevant, forward-looking, and responsive to industry needs, says a joint report from the Office of the Superintendent of Financial Institutions (OSFI) and the Global Risk Institute (GRI) on the ethical, legal, and financial implications of AI for financial services institutions. As the use of AI technologies continues to evolve, the report says, the need for guiding principles has become apparent. This led to the development of the EDGE principles: explainability, data, governance, and ethics.

The report describes explainability as enabling customers and relevant stakeholders to understand how an AI model arrives at its conclusions. Data leveraged by AI allows financial institutions to provide targeted, tailored products and services to their customers and stakeholders; it also improves fraud detection, enhances risk analysis and management, boosts operational efficiency, and supports better decision-making. Governance ensures a framework is in place that promotes a culture of responsibility and accountability around the use of AI in an organization. Ethics encourages financial institutions to consider the broader societal impacts of their AI systems.

Sonia Baxendale, president and chief executive officer at GRI, says, “AI applications will develop in scope and scale, so guardrails are needed to ensure the benefits of AI continue to be realized while the risks are prevented or mitigated. A robust risk management approach is critical to securing the public’s confidence in Canada’s financial services sector and its use of AI.”

The report is available at https://www.osfi-bsif.gc.ca/Eng/Docs/ai-ia.pdf
