
The Risk-based Approach to AI: A Global Trend


Introduction

A few years ago, conversations on AI ethics focused on identifying and reaching a consensus on principles. Now, these discussions have shifted to translating those principles into practice, as the adoption of AI has become a tangible reality rather than something of the future. In this context, proactive players in the private sector have moved to implement governance frameworks for responsible and/or trustworthy AI, and governments around the world, both national and local, have started to introduce not only what is considered soft law (e.g., guidelines) but also hard regulatory measures, such as the proposal the European Commission introduced in April 2021.


There are numerous points of uncertainty around AI regulation. The technology itself is ever evolving, and policymakers are grappling with balancing innovation and the public good. There is a worry that innovation may be hampered by misguided regulatory measures. However, certain trends in AI regulation allow organizations to act proactively; the risk-based approach is one of them.


The risk-based approach to AI consists of categorizing AI systems into different levels of risk and creating complementary guidelines or regulations for each level. This approach helps policymakers and key stakeholders deal with some of the challenges of governing AI systems: there is no consensus today on the definition of AI, the technology continues to evolve rapidly, and AI applications span a wide range of sectors. Rather than relying on definitions of specific technologies or types of applications, the risk-based approach therefore provides a practical path for promoting safe uses of AI. At the same time, identifying different levels of risk (e.g., acceptable vs. unacceptable) implicitly facilitates the integration of widely accepted AI principles and lets local social values and needs inform which outcomes are framed as risks.


In 2019, Canada adopted a risk-based approach through the Treasury Board Secretariat’s Directive on Automated Decision-Making (“the objective … is to ensure that Automated Decision Systems are deployed in a manner that reduces risks to Canadians and federal institutions”). In this, Canada is aligned with other jurisdictions: there are clear signs around the world that the risk-based approach to AI is gaining currency in both soft and hard law approaches to AI governance.


Risk-based Approach in the EU

The European Commission’s proposal for the Artificial Intelligence Act also favours a risk-based approach, grouping AI systems into unacceptable, high-risk, and low- or minimal-risk categories. AI systems that pose unacceptable risks, that is, those that threaten the fundamental rights of EU citizens, are banned outright (Article 5.1). For high-risk systems, companies are required to create a risk management system (Article 9), follow data governance standards (Article 10), and provide relevant information about the AI system to users (Article 13). AI systems that fall under neither of these two categories are considered to pose low or minimal risk, and organizations are expected to implement some form of self-regulation. Finally, if adopted, the proposal would require operators to register information about high-risk AI systems in a database (Article 60) that would serve as a monitoring tool.


Unacceptable Risk

• AI system that causes physical or psychological harm through material distortion of behaviours
• AI system that exploits vulnerabilities of specific groups of people to materially distort their behaviours
• AI system that evaluates or classifies natural persons based on their social behaviour or personal characteristics (e.g., social credit rating)
• AI system that uses real-time remote biometric ID in publicly accessible spaces for law enforcement purposes


High-Risk

• Biometric ID and categorization of persons
• Management and operation of critical infrastructure
• Education and vocational training
• Employment, workers management, and access to self-employment
• Access to essential private services and public services and benefits
• Law enforcement
• Migration, asylum, and border control management
• Administration of justice and democratic processes


Risk-based Approach in the US

In the US, the Algorithmic Accountability Act of 2019, if passed, would have defined “high-risk automated decision systems” (see the list below) and required companies to conduct impact assessments for all such systems. The bill did not pass in 2019, but Senator Ron Wyden is expected to re-introduce it in 2021.


Algorithmic Accountability Act of 2019: List of High-Risk ADS

• Poses significant risk to the privacy or security of consumers’ personal information
• Results in or contributes to inaccurate, unfair, biased, or discriminatory decisions
• Makes decisions, or helps human decision-making, in sensitive aspects of people’s lives (e.g., work performance, economic situation, health, personal preferences, interests, behavior, location, or movements)
• Involves the personal information of a significant number of consumers (e.g., race, political opinions, trade union membership, sexual orientation)
• Systematically monitors a large, publicly accessible physical space


At the state level, the Automated Decision Systems Accountability Act was introduced in the California Legislature in December 2020. The Act defines high-risk ADS similarly to the federal Algorithmic Accountability Act and, if passed, would require the development of a “comprehensive inventory” of all high-risk automated decision systems used by state agencies.


Risk-based Approach in Singapore

Singapore has not yet introduced a regulatory proposal on AI, but the Ministry of Communications and Information published the Model Artificial Intelligence Governance Framework, which addresses trust-building and the governance of AI through the language of risk management. The Framework proposes the following for managing risks within a company’s internal governance structure: using “reasonable” efforts to ensure that the datasets used for AI model training are adequate; establishing monitoring and reporting systems; ensuring proper knowledge transfer whenever there are personnel changes; and reviewing the internal governance structure continuously. Further, the Framework identifies three types of human oversight (i.e., human-in-the-loop, human-out-of-the-loop, and human-over-the-loop) and recommends choosing among them depending on the level of risk.

Conclusion

The risk-based approach allows stakeholders to turn the abstract, seemingly diffuse challenges that AI adoption entails into more concrete, identifiable problems that they can then address proactively. It is reflected in soft law guidelines and proposed hard regulations around the world. While it will be necessary to continue watching this space (it is in the early stages, after all), the leading jurisdictions discussed above have taken up the risk-based approach, and it is likely to become a mainstream form of AI governance.


Within the jurisdictions taking up the risk-based approach, it will be important to observe how unacceptable or high-risk AI systems are defined differently. The EU and US proposals clearly reflect the particular language of protecting the rights of natural persons, which seems to have trickled down to their local jurisdictions (e.g., see the similarities between the federal and California definitions of high-risk ADS), while the Singapore framework merely states that “the design, development and implementation of technologies do not infringe internationally recognized human rights.”


Interpreted critically, this means the framework does not afford sufficient clarity on what constitutes different levels of risk and dilutes efforts to promote uses of AI aligned with individual rights; in a more positive light, it points to the flexibility afforded to different governments to define what counts as an acceptable or unacceptable risk within their jurisdictions, based on their local values and needs.

