The International Actuarial Association (IAA) is pleased to announce the publication of three papers developed by the IAA's Artificial Intelligence Task Force (AITF), aimed at raising awareness of the risks that need to be managed when designing, developing, implementing, and using AI models and AI systems.
The Artificial Intelligence Governance Framework paper provides foundational guidance on governance areas impacted by AI. It highlights key principles and best practices for managing risks related to data, modelling, and outcomes once AI systems are deployed. The paper is designed to strengthen actuaries' understanding of their governance responsibilities and to support their oversight of AI within actuarial and broader financial contexts.
The Testing of Artificial Intelligence Models paper offers guidance on the principles and methodologies that underpin reliable and ethical AI model testing. It outlines a principle-based approach covering data creation, partitioning, assessment of fairness and explainability, evaluation of robustness and accuracy, and the implementation of continuous testing and monitoring throughout the model lifecycle. This paper underscores the actuary’s role in testing, supporting the validation of AI models and enhancing confidence in their use.
The Documentation of Artificial Intelligence Models and Systems paper emphasizes the critical importance of clear, comprehensive and proportionate documentation throughout the entire model lifecycle. Recognizing documentation as the backbone of effective governance, the paper outlines essential elements needed to ensure transparency, accountability, regulatory compliance, and continuity of operations. It is intended for model developers, reviewers and validators who rely on consistent, well-structured documentation to support model risk management.
