Introduction
On April 14, 2021, POLITICO reported on a leaked draft of the European Commission’s regulations on artificial intelligence (“AI”), rattling policy and business communities around the world. A week later, the Commission officially published its draft of the Artificial Intelligence Act, which has been touted as the “world’s first regulation on AI.” Much like with the General Data Protection Regulation (“GDPR”), the European Union has taken a bold step to translate its values into concrete regulations on emerging technologies. The Act is expected to profoundly affect the regulation of AI around the world, including in Canada.
Executive Summary
The European Commission (“the Commission”) has published the draft of the Artificial Intelligence Act, which seeks to establish concrete regulation of AI. The proposal operates around a functional, as opposed to technique-based, definition of AI, and covers not just individuals and companies within the EU but also those outside it whose AI systems could affect users within the Union.
Taking a risk-based approach, the proposed regulations categorize AI systems into three risk tiers: unacceptable, high-risk, and low or minimal risk. The new EU regulations will likely inform and influence Canada’s AI regulations. Canadian companies need to proactively shift from a data- or privacy-centred governance model to an AI-centred one.
What are the major points proposed under the Artificial Intelligence Act?
The Commission published the proposal of the Artificial Intelligence Act on April 21, 2021. The proposal will now go through the EU’s legislative process before it comes into effect – a process that will take several years.
The proposal in its current iteration takes a risk-based approach to AI regulation, categorizing the risks posed by AI systems into three types: unacceptable (Article 5), high-risk (Article 6), and low or minimal risk. The proposal bans the use of AI systems that pose unacceptable risks, such as social scoring systems, and sets out requirements for the management of high-risk AI systems, building on the principle of transparency.
The proposal establishes specific requirements for the deployment of high-risk AI systems, including, for example, requirements on:
a) AI Risk Management (Article 9)
b) Data Governance Standards (Article 10)
c) Transparency (Article 13)
d) Monitoring and Reporting (Articles 60, 61, 62, 63, and 64)
e) Enforcement (Article 71)
What does this mean for your business?
As we have seen with the launch of the GDPR, the EU’s AI regulation will significantly inform and influence policymakers in Canada. Further, the global trend is headed towards introducing AI-specific regulations that translate ethical principles into concrete action by authorities.
In this context, Canadian businesses need to prepare proactively and think about how to incorporate AI governance practices within their organizations. In particular, this means starting to develop AI policies, for example on explainability and transparency; developing risk-based assessment criteria and control measures; and investing in continuous training and education to build a culture of awareness of AI-related risks and best practices.
INQ Law supports clients with a future-looking approach to AI risk management programs that integrate the latest trends and good practices. Contact us at cpiovesan@inq.law or ncorriveau@inq.law for more information.
1. Background: EU’s Leadership in AI
Since 2017, the EU has acted with a sense of urgency to address its concerns about the increased application of AI. The proposed regulations are the culmination of extensive consultation. The European Commission set up the High-Level Expert Group on Artificial Intelligence (AI HLEG) in June 2018, which subsequently published the Ethics Guidelines for Trustworthy AI on April 8, 2019. In February 2020, the EU released a White Paper on Artificial Intelligence. Different EU bodies have continued to release regulations and resolutions on digital rights, liability, and copyright as they relate to AI, positioning the EU not just as a thought leader but also a regulatory leader, aiming to achieve the following:
1. Ensuring that AI respects law and fundamental rights of EU citizens;
2. Creating legal certainty to facilitate innovation and investment;
3. Introducing enhanced governance, with effective enforcement; and
4. Developing a single European market for lawful, safe, and trustworthy AI.
In this context, the Commission’s proposal of the “world’s first regulation on AI” is a natural next step for the EU. Much like the GDPR, which has greatly impacted the norms and regulation of data worldwide, the Artificial Intelligence Act is poised to set the tone of AI governance globally – including in Canada.
2. Scope: Definition
In policy and regulation, defining AI has been a major challenge (for instance, New York City’s Automated Decision Systems Task Force failed to agree on a definition of AI for over a year). AI encompasses various techniques, and the field continues to evolve quickly, which could undermine regulatory measures that lack a comprehensive and accurate definition of AI. In this context, the Commission has introduced a definition of AI based on the functional characteristics of the software to underpin its proposed regulations, which are meant to be technology-neutral and future-proof, able to provide legal certainty.
The proposal provides a horizontal definition of AI – one that enables regulation across different technologies and varying interpretations of AI among EU member states – by focusing on the key functional characteristics of the software. This definition generally aligns with the one presented in Canada’s Bill C-11.
Artificial Intelligence Act: Article 3(1) (EU): “‘Artificial intelligence system’ (AI system) means software that is developed with one or more techniques and approaches listed in Annex I and can, for a given set of human defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environment they interact with.”

Bill C-11 (Canada): “automated decision system means any technology that assists or replaces the judgement of human decision-makers using techniques such as rules-based systems, regression analysis, predictive analytics, machine learning, deep learning and neural nets.”
Annex I of the proposal lists the three families of techniques and approaches that qualify as “AI” under this definition.
Annex I: AI Techniques and Approaches
· Machine learning approaches: supervised, unsupervised and reinforcement learning (including deep learning);
· Logic- and knowledge-based approaches: knowledge representation, inductive programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;
· Statistical approaches: Bayesian estimation, search and optimization methods.
3. Scope: Persons Covered
According to Article 2 of the proposed regulation, its scope extends beyond the EU, applying to the following:
Artificial Intelligence Act: Article 2
a) Providers placing on the market or putting into service AI systems in the Union, irrespective of whether those providers are established within the Union or in a third country;
b) Users of AI systems within the Union;
c) Providers and users of AI systems that are located in a third country, where the output produced by the system is used in the Union.
Here, much like with the GDPR, the EU has created the means to extend its reach extraterritorially, influencing the uses of AI beyond its geographic boundaries. Leveraging its market power and first-mover advantage, the EU seeks to set the tone of AI regulation, reflecting its values and interests, in economies around the world that depend on business with Europeans and European companies.
4. Risk-based Approach
Through this proposal, the Commission has signalled its intent to take a “risk-based approach” to the regulation of AI, categorizing AI systems by their level of risk and treating each category differently based on that classification. The Canadian federal government has also taken a risk-based approach to guiding the use of AI in the public service, articulated through the Directive on Automated Decision-Making and its accompanying Algorithmic Impact Assessment tool.
In this proposal, the Commission identifies three categories of risks posed by AI systems: “unacceptable,” “high-risk,” and “low or minimal risk.” AI systems that pose “unacceptable” risks – those that threaten the fundamental rights of EU citizens – are banned outright. Those that are “high-risk” are subject to specific requirements, discussed below in greater depth. Finally, those posing “low or minimal risk” are left to self-regulation through codes of conduct.
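To make the tiering concrete, here is a minimal sketch of how an organization might encode the three categories and their treatment in its own compliance tooling. This is an illustrative assumption on our part – the Act prescribes the categories, not any particular implementation, and the short treatment labels below are our paraphrases.

```python
from enum import Enum

class RiskTier(Enum):
    """The proposal's three risk categories."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high-risk"
    LOW_OR_MINIMAL = "low or minimal risk"

# Illustrative mapping from tier to regulatory treatment; the labels
# paraphrase the proposal and are not the Act's own wording.
TREATMENT = {
    RiskTier.UNACCEPTABLE: "prohibited outright (Article 5)",
    RiskTier.HIGH: "permitted subject to mandatory requirements (Title III)",
    RiskTier.LOW_OR_MINIMAL: "self-regulation via voluntary codes of conduct",
}

for tier in RiskTier:
    print(f"{tier.value}: {TREATMENT[tier]}")
```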
5. Scope of Risk Classifications
Unacceptable Risk
Article 5(1) of the proposed regulation bans the following uses of AI, designating their risk level as “unacceptable”:
· “AI system that deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm.”
· “AI system that exploits any of the vulnerabilities of a specific group of persons due to their age, physical or mental disability, in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm.”
· AI system deployed “for the evaluation or classification of the trustworthiness of natural persons over a certain period of time based on their social behaviour or known or predicted personal or personality characteristics, with the social score.”
· “the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement.”
These prohibitions are closely tied to the rights protected under the EU Charter of Fundamental Rights. Here, the European Commission stresses that the relationship between technology and society cannot be separated.
High-risk
The high-risk category consists of two main components. First, according to Article 6(1), this category includes systems used as a safety component of products that require a third-party conformity assessment, which involves testing, inspection, and certification. For instance, consider software as a medical device implanted into the body, or a robotics product that relies on AI software. Second, Article 6(2) of the AI Act specifies the following areas of application as high-risk under Annex III:
Annex III: High-Risk AI Systems Referred to in Article 6(2)
· Biometric identification and categorization of persons;
· Management and operation of critical infrastructure;
· Education and vocational training;
· Employment, workers management, and access to self-employment;
· Access to essential private services and public services and benefits;
· Law enforcement;
· Migration, asylum, and border control management;
· Administration of justice and democratic processes.
Low or minimal risk
The third category, low or minimal risk, covers all other applications of AI that do not fall under the first two categories. Providers of AI systems in this category are encouraged to develop codes of conduct for self-regulation and to voluntarily comply with the high-risk system requirements. There are, however, some notice requirements for low or minimal risk AI systems that interact directly with users or that artificially generate or manipulate content (“deep fakes”).
6. Highlight of Requirements: High-risk AI
The section on the regulation of high-risk AI systems (Title III) provides a fascinating point of discussion because it grapples with – and responds to – the challenge of managing risks through regulatory measures rather than imposing complete bans or leaving the industry to self-regulate.
AI Risk Management
Article 9 makes it mandatory for providers of high-risk AI systems to deploy a risk management system, which is defined as “a continuous iterative process run throughout the entire lifecycle of a high-risk AI system, requiring systematic updating.” In this, the Commission is expanding the scope of risk management beyond privacy and data to cover social impact considerations, requiring a more holistic approach.
Data Governance Standards
Article 10 specifies the quality criteria that data used for training, validation, and testing of high-risk AI systems must meet. More specifically, these data sets must be “subject to appropriate data governance and management practices” (10(2)), including bias testing and data quality assessments. Further, these data sets must be “relevant, representative, free of errors and complete” (10(3)) and take into account “the characteristics or elements that are particular to the specific geographical, behavioural or functional setting within which the high-risk AI system is intended to be used” (10(4)).
Through these provisions, which necessitate the consideration of social impact, the Commission again acknowledges that the relationship between technology and society cannot be fractured, and that data governance permeates social, legal, and ethical realms, requiring not just technical but multidisciplinary governance efforts.
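As a rough illustration of how a provider might begin operationalizing these criteria, consider a pre-training checklist like the sketch below. The check names, thresholds, and report structure are our assumptions for illustration only – Article 10 prescribes the criteria, not an implementation.

```python
from dataclasses import dataclass

@dataclass
class DataQualityReport:
    """Summary statistics for a training, validation, or testing data set."""
    missing_ratio: float            # share of missing values
    error_ratio: float              # share of records failing validation rules
    group_shares: dict[str, float]  # representation of relevant groups

def article10_style_issues(report: DataQualityReport,
                           max_missing: float = 0.01,
                           max_errors: float = 0.01,
                           min_group_share: float = 0.05) -> list[str]:
    """Return detected issues; an empty list means all checks passed.

    Thresholds are illustrative assumptions, not values from the Act.
    """
    issues = []
    if report.missing_ratio > max_missing:
        issues.append("data set may not be complete")
    if report.error_ratio > max_errors:
        issues.append("data set may not be free of errors")
    for group, share in report.group_shares.items():
        if share < min_group_share:
            issues.append(f"group '{group}' may be under-represented")
    return issues

report = DataQualityReport(missing_ratio=0.002, error_ratio=0.0,
                           group_shares={"18-25": 0.03, "26-40": 0.52})
print(article10_style_issues(report))  # ["group '18-25' may be under-represented"]
```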
Transparency
Article 13(2) specifies that high-risk AI systems must be “accompanied by instructions for use … that include concise, complete, correct and clear information that is relevant, accessible and comprehensible to the users.” This information, sketched schematically after the list below, includes:
· Intended purpose;
· Level of accuracy, robustness, and cybersecurity;
· Specifications about input data;
· Human oversight;
· Circumstances that may lead to risks to health, safety or fundamental rights; and
· Performance regarding persons or groups on which system is intended to be used.
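A provider could keep this documentation consistent by capturing the required items in a structured record, as in the sketch below. The field names are our paraphrases of the list above; Article 13 prescribes the content of the instructions, not this representation.

```python
from dataclasses import dataclass

@dataclass
class InstructionsForUse:
    """One field per Article 13(2) information item (names are paraphrases)."""
    intended_purpose: str
    accuracy_robustness_cybersecurity: str
    input_data_specifications: str
    human_oversight_measures: str
    risk_circumstances: str
    performance_on_target_groups: str
```

Keeping the record structured makes it straightforward to verify that no required item is missing before a system is placed on the market.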
Also, Article 52(1) states that providers must “ensure that AI systems intended to interact with natural persons are designed and developed in such a way that natural persons are informed that they are interacting with an AI system.” This applies not just to high-risk AI systems but to any instance in which a human being interacts with an AI system.
Monitoring and Reporting
Further building on the principle of transparency, the proposal sets out specific regulations around monitoring and reporting on the performance of high-risk AI systems. These instructions reflect the idea of regulation as risk management: “a continuous iterative process run throughout the entire lifecycle of a high-risk AI system, requiring systematic updating.”
First, Article 60 states that every provider of a high-risk AI system must make certain information publicly available in an EU database. This includes, for example: the provider’s name, a description of the system’s intended purpose, the status of the system, certificate information, countries of operation, and the declaration of conformity.
In collaboration with the Member States, the Commission would maintain this database as its controller and provide technical and administrative support. Publishing the database could be interpreted as a way for the Commission to democratize discussions on responsible AI by providing all relevant information to citizens.
The proposal also details the post-market monitoring system (Article 61), the reporting of serious incidents and malfunctioning (Article 62), and market surveillance and control of AI systems in the EU market (Article 63). Again, these measures instruct providers of high-risk AI systems on how to manage their products through an iterative process, continually evaluating whether a system functions as originally intended based on newly attained performance data.
It should be noted that under Article 64, surveillance authorities also have significant power to investigate high-risk AI systems – more specifically, they are given “full access to training, validation and testing datasets used by the provider” and, depending on the circumstances, “the source code of the AI system.” While there could be tension between these regulatory powers and the interest in protecting trade secrets, Article 64 opens the path for regulatory authorities to demand access to previously unavailable information.
Enforcement
Under Article 71, the proposal sets the penalty for non-compliance with Article 5 (AI systems with unacceptable risk) or Article 10 (data governance) at a maximum of 30 million EUR (C$44 million) or 6% of the company’s total worldwide annual turnover, whichever is higher. Further, non-compliance with any other requirement or obligation of the proposed regulation could attract a fine of up to 20 million EUR (C$29.3 million) or 4% of total worldwide annual turnover.
The penalties in this proposed regulation are hefty. Current fines under Canada’s PIPEDA do not exceed C$100,000, and even the penalties in Bill C-11 are significantly lower than those in the European regulation. Moreover, the fact that the highest penalties are reserved for non-compliance with Articles 5 and 10 suggests that these are the Commission’s top priorities. Here, the pressing importance of robust data governance is highlighted again, and the proposed EU regulations also expand the scope from data- or privacy-centred regulation to a much broader set of issues associated with uses of AI.
Penalties under the Artificial Intelligence Act:
· Non-compliance with Article 5 (unacceptable risk) or Article 10 (data governance): up to 30 million EUR (C$44 million) or 6% of total worldwide annual turnover.
· Non-compliance with any other requirement: up to 20 million EUR (C$29.3 million) or 4% of total worldwide annual turnover.

Penalties under Bill C-11:
· Maximum administrative penalty for all contraventions: the higher of C$10 million and 3% of gross global revenue.
· Maximum penalty on conviction of an offence: the higher of C$25 million and 5% of gross global revenue.
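Because each fine is expressed as the greater of a fixed cap and a percentage of worldwide annual turnover, the applicable ceiling is simple arithmetic. A minimal sketch, assuming the “whichever is higher” rule in Article 71:

```python
def fine_ceiling_eur(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Applicable ceiling: the higher of the fixed cap and pct of turnover."""
    return max(fixed_cap_eur, pct * turnover_eur)

# Example: 2 billion EUR worldwide turnover, Article 5/10 infringement
# (30 million EUR cap or 6% of turnover).
print(fine_ceiling_eur(2_000_000_000, 30_000_000, 0.06))  # 120000000.0
```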
7. What are the implications for Canada?
As with the introduction of the GDPR, what the EU does has significant implications for Canada. Canada and the EU are already aligned in their shared interest in the risk-based approach: Bill C-11 defines and uses the category of “significant harm” in s. 58(7), and the federal government’s Algorithmic Impact Assessment (AIA) takes a risk-based approach, as mentioned earlier. Canada’s upcoming AI regulations will thus be heavily informed by the proposed EU regulation and its implementation in the coming years, and Canadian companies would do well to prepare in advance.
Further, AI regulation is not merely an EU-specific trend. On April 19, 2021, the U.S. Federal Trade Commission published a short blog post on the use of AI, which concluded with a stern warning: if companies do not hold themselves accountable, they should “be ready for the FTC to do it for [them].” As AI applications become more ubiquitous, jurisdictions worldwide – from the UAE to Singapore – are actively considering how to translate ‘responsible AI’ into effective regulations.
In this context, Canadian companies need to prepare proactively by moving beyond a data governance or privacy-centred approach to AI and adopting strategies that place algorithmic impact at the centre. What does that look like?
First, organizations should standardize assessment criteria and mitigation measures as they relate to AI systems, embedding them in existing processes and normalizing the discussion from an enterprise perspective.
Second, organizations should leverage AI risk assessment platforms and services available in the market. For instance, INQ Law provides a service called Canari, which analyzes the use of AI within an organization.
Finally, organizations should invest in continuous training and education. AI risk assessment is a multidisciplinary process that cuts across different realms of knowledge beyond the technical data governance approach. Taking a holistic approach is critical, and this cannot be done without continued efforts in training and education.
INQ Law would like to acknowledge Dongwoo Kim for his excellent contributions to this article.