Singapore’s new AI Governance Framework




Tim Wright, Partner, Fladgate LLP (twright@fladgate.com)



In January 2019, Singapore’s privacy regulator, the Personal Data Protection Commission (PDPC), published a Proposed Model Artificial Intelligence (AI) Governance Framework[1] for public consultation.

The PDPC recognises the many benefits of AI but also notes emerging concerns, such as AI’s impact on personal privacy and algorithmic bias. The Model Framework aims to articulate a set of common and consistent definitions and principles relating to the responsible use of AI – for use by Singapore’s regulators and policy-makers – and is intended to help shape Singapore’s growing AI ecosystem. It was developed with input from a number of interested parties, including technology companies such as IBM, Microsoft, Salesforce and Facebook, financial services firms such as AIG and Standard Chartered, the Info-communications Media Development Authority (IMDA) and the Advisory Council on the Ethical Use of AI and Data.

The PDPC describes the Model Framework as a ‘general, ready-to-use tool to enable organisations that are deploying AI solutions at scale to do so in a responsible manner’, providing guidance on key issues to be considered and measures that can be implemented to address identified risks. It is not intended for organisations deploying commercial off-the-shelf software that happens to incorporate AI in its feature set.

The Model Framework, which is algorithm-, technology-, and sector-agnostic, is underpinned by two guiding principles: the decision-making process should be explainable, transparent and fair; and AI solutions should be human-centric.

Whilst adoption of the Model Framework is voluntary and is no substitute for compliance with applicable laws and regulations, adopting it will help businesses operating in Singapore demonstrate accountability-based practices in data management and protection, e.g. in line with the Personal Data Protection Act 2012 and the OECD Privacy Principles. With this in mind, a helpful healthcare-related use case (UCARE.AI[2]) has been included at Annex C.

The Model Framework covers four main areas:

  1. Internal governance – adapting existing, or setting up new, internal governance structures and measures to incorporate values, risks and responsibilities relating to algorithmic decision-making.
  2. Decision-making models – methodologies that help enterprises determine their risk appetite for using AI, i.e. decide which risks are acceptable and identify an appropriate decision-making model for implementing AI.
  3. Operations management – issues to consider when developing, selecting and maintaining AI models, including data management.
  4. Customer relationship management – communication and related strategies for managing relationships with consumers and customers.

An oft-voiced concern about some AI tools and products is the lack of explainability of AI-based decisions (the so-called ‘black box’ problem). The Model Framework provides that where explainability cannot practicably be achieved, organisations should consider documenting the repeatability of results produced by the AI model. Documented repeatability is not an equivalent alternative to explainability; it refers instead to the ability of a model to make the same decision consistently, given the same scenario, and that consistency of performance offers AI users a certain degree of confidence. Helpful practices identified in the Model Framework include the following (a short illustrative sketch follows the list):

  • Conducting live repeatability assessments for commercial deployments.
  • Performing counterfactual fairness testing – a decision towards an individual is fair if it is the same in the actual world and in a counterfactual world in which the individual belongs to a different demographic group.
  • Assessing how exceptions can be identified and handled when decisions are not repeatable, e.g. where the model incorporates randomness by design.
  • Ensuring exception handling is carried out in line with the organisation’s policies.
  • Ensuring that models trained on time-sensitive data remain relevant.

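To make these practices concrete, here is a minimal Python sketch of how an organisation might implement the first two: a repeatability check and a counterfactual fairness test as defined above. It is illustrative only; the Model Framework does not prescribe any implementation, and the threshold-based credit model, feature names and demographic attribute used here are hypothetical.

```python
# Illustrative sketch only. The Model Framework does not prescribe code;
# the model, feature names and demographic attribute below are hypothetical.

class ThresholdModel:
    """A stand-in credit-scoring model: approve if income meets a threshold."""
    def predict(self, applicant: dict) -> str:
        return "approve" if applicant["income"] >= 50_000 else "decline"

def is_repeatable(model, applicants: list, runs: int = 10) -> bool:
    """Re-score the same applicants several times and flag any drift.

    A deterministic model should return identical decisions on every run;
    divergence suggests randomness by design or hidden state, which should
    be documented and handled under the organisation's exception policies.
    """
    baseline = [model.predict(a) for a in applicants]
    return all([model.predict(a) for a in applicants] == baseline
               for _ in range(runs - 1))

def is_counterfactually_fair(model, applicant: dict,
                             attribute: str, groups: list) -> bool:
    """Apply the Framework's definition of counterfactual fairness.

    The decision is fair if it is the same in the actual world and in a
    counterfactual world where only the demographic attribute differs.
    """
    actual = model.predict(applicant)
    return all(model.predict({**applicant, attribute: group}) == actual
               for group in groups)

if __name__ == "__main__":
    model = ThresholdModel()
    applicant = {"income": 62_000, "gender": "F"}
    print("repeatable:", is_repeatable(model, [applicant]))
    print("counterfactually fair:",
          is_counterfactually_fair(model, applicant, "gender", ["M", "X"]))
```

In a live deployment, the repeatability assessment would run against the production scoring service rather than an in-memory stub, and the counterfactual test would be repeated for each demographic attribute the organisation treats as sensitive.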
As the PDPC points out, the Model Framework is not intended to be complete or exhaustive. Ethical and governance issues will continue to evolve alongside AI technologies themselves. The PDPC intends to update the Model Framework periodically, taking on board feedback received, to ensure that it remains relevant and useful to organisations deploying AI solutions.


[1] https://protect-eu.mimecast.com/s/mBAmCNxE6F0Lj1jS4_M6m

[2] https://www.ucare.ai/

For GDPR-related legal updates, please visit Fladgate Privacy Updates
