The EU's new rules could completely rewrite the AI playbook
With the Artificial Intelligence Act, the EU takes a stand as the West's most active regulator of these promising but controversial technologies
In all the headlines this spring about the end of the pandemic, it would have been easy to miss the European Union’s (EU) announcement of a new set of proposed rules that would regulate or even ban a long list of Artificial Intelligence (AI)-based technologies. Yet, it would be a shame to miss the significance of the European Artificial Intelligence Act, for its proposals are remarkable for three reasons. First, they aim to bring AI under regulatory control for the first time. Second, they offer specific critiques of technologies now widely used in places like China and the U.S. Lastly, they are a signal to Big Tech that Europe’s concerns with AI will lead it down a different path than the rest of the world. The Act is the most important and public international effort to regulate AI to date. It covers everything from facial recognition to autonomous driving to the algorithms that drive online advertising, automated hiring, and credit scoring. Indeed, the proposals could help shape global views and regulations around these new and controversial technologies for the next decade.
The law’s approach and intent
Writing for Lawfare, Eve Gaumond (Laval University) provides a comprehensive legal analysis of the Act. Gaumond notes that one of the more contentious aspects of the EU proposals (buried, surprisingly, in an “annex”) is that they define the types of AI technologies to be regulated as follows:
Machine learning approaches, including supervised, unsupervised, and reinforcement learning, using a wide variety of methods, including deep learning;
Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference, and deductive engines, (symbolic) reasoning, and expert systems;
Statistical approaches, Bayesian estimation, search and optimization methods.
Gaumond notes that the definitions of the AI techniques falling within the scope of the proposed regulation have not been without controversy. For example, to decry the overbroadness of the proposed regulation, some observers claimed that the EU was “proposing to regulate the use of Bayesian estimation,” Bayesianism being first and foremost a mathematical theorem.
Beyond these definitions, the most consequential aspect of the regulations is that they are built on a risk-based framework. The four risk classes are represented below using a four-level pyramid. Gaumond notes that “while the nuances of the approach are extensive, understanding the underlying rationales behind them is crucial.” Indeed, Lucilla Sioli, Director for Artificial Intelligence and Digital Industry at the European Commission, believes that the risk framework “is the most important part” of the entire Act. The four categories are explained below.
Figure 1: The AI Act’s hierarchy of risks (Source: Eve Gaumond/Lawfare)
Category 1: Minimal Risk: The base of the pyramid refers to technologies that are of “little or no risk” and covers anything not explicitly discussed in the regulations. According to the EU, this category encompasses “the vast majority of AI systems currently used in the EU.”
Category 2: Limited Risk: This layer includes some technologies that are otherwise high risk and some that aren’t. The defining characteristic of AI systems that fall into this category, notes Gaumond, “is that they raise certain issues in terms of transparency and thus require special disclosure obligations.” Three technologies in this risk category are especially notable: “deep fakes, AI systems that are intended to interact with people, and AI-powered emotion recognition systems/biometric categorization systems.”
Category 3: High Risk: The technologies that fall into this category will be subject to the most extensive regulatory regime and are divided into two groups. The first covers those “embedded in products that are already subject to third-party assessment under sectoral legislation and serve as safety components for said products.” This includes, notes Gaumond, “safety components for machinery, medical devices or toys,” which “will be regulated by sector-specific legislation.”
The second group focuses on non-embedded AI systems. The proposed regulation “considers that these stand-alone systems are deemed high risk when they are used in certain areas,” including:
Biometric identification and categorization of natural persons
Management and operation of critical infrastructure (such as supply of water, gas, heating, and electricity)
Education and vocational training
Employment, workers’ management, and access to self-employment
Access to/enjoyment of essential private services and public services and benefits (like credit and emergency first response services)
Law enforcement
Migration, asylum, and border control management
Administration of justice and democratic processes
High-risk systems will have to earn the familiar “CE” mark found on many products in Europe. To get that mark, notes Gaumond, the makers of AI products “will have to comply with five requirements heavily inspired by key principles” from the EU’s ethics guidelines for trustworthy AI:
Data and data governance: establish and follow rules for good data sourcing and management
Transparency for users: inform users that they are interacting with an AI technology
Human oversight: enable human monitoring and control of AI systems
Accuracy, robustness, and cybersecurity: ensure good data hygiene and security
Traceability and auditability: enable the ability to identify and audit any AI process and/or output
Crucially, compliance is not static, and the proposed regulation “requires high-risk AI system providers to enact a postmarket monitoring process that actively evaluates the system’s compliance throughout its life cycle.”
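To make that ongoing obligation concrete, here is a minimal, purely illustrative sketch in Python of how a provider might track the five requirements over a system’s life cycle. Everything here (the `ComplianceRecord` class, the requirement keys, the 180-day re-verification window) is hypothetical; the Act prescribes outcomes, not code or review intervals.

```python
from dataclasses import dataclass, field
from datetime import date

# The five CE-mark requirements, expressed as keys in a
# hypothetical internal checklist (illustrative only).
REQUIREMENTS = (
    "data_governance",      # documented data sourcing and management rules
    "user_transparency",    # users are told they are interacting with an AI
    "human_oversight",      # humans can monitor and override the system
    "accuracy_robustness",  # accuracy, robustness, and cybersecurity checks
    "traceability",         # processes and outputs can be identified and audited
)

@dataclass
class ComplianceRecord:
    """Tracks when each requirement was last verified for one AI system."""
    system_name: str
    verified_on: dict = field(default_factory=dict)  # requirement -> date

    def verify(self, requirement: str, when: date) -> None:
        if requirement not in REQUIREMENTS:
            raise ValueError(f"Unknown requirement: {requirement}")
        self.verified_on[requirement] = when

    def is_compliant(self, as_of: date, max_age_days: int = 180) -> bool:
        # Compliance is not static: every requirement must have been
        # re-verified recently, reflecting the post-market monitoring duty.
        return all(
            req in self.verified_on
            and (as_of - self.verified_on[req]).days <= max_age_days
            for req in REQUIREMENTS
        )

# Example: a hypothetical CV-screening system (hiring is a high-risk area).
record = ComplianceRecord("cv-screening-model")
for req in REQUIREMENTS:
    record.verify(req, date(2021, 6, 1))
print(record.is_compliant(as_of=date(2021, 9, 1)))  # True: checks are fresh
print(record.is_compliant(as_of=date(2022, 6, 1)))  # False: re-verification overdue
```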
Category 4: Unacceptable Risk: The new rules will create outright bans on certain uses and types of AI. Four types of technologies are encompassed in this category, notes Gaumond’s analysis: “social scoring, dark-pattern AI, manipulation, and real-time biometric identification systems.” She correctly notes that the “prohibition of social scoring seems to be a direct charge against China-style AI systems that are said to monitor almost every aspect of people’s life — from jaywalking habits to buying history — to assess people’s trustworthiness.” Moreover, she highlights the fact that while “Western countries have incorrectly depicted China’s social credit system” as Black Mirror come to life, this fear, real or imagined, is beside the point, as the aim of this ban is symbolic. “By stating that public authorities cannot engage in AI-powered assessment of people’s trustworthiness,” explains Gaumond, “the EU has made it clear that its vision of AI is one that protects fundamental rights.”
Mixed reactions
For proponents of the new regulations, the EU is sending a clear message that permitted AI applications will be subject to rules and principles of law. As Daniel Leufer, a digital rights activist, recently told WIRED: “There’s a very important message globally that certain applications of AI are not permissible in a society founded on democracy, rule of law, fundamental rights.” Leufer believes the proposed rules may be too vague in certain aspects (a common criticism to date) but represent a significant step toward checking potentially harmful uses of the technology. Unfortunately, other commentators are not so generous.
For example, regulators wanted to ban facial recognition outright in an earlier draft but settled for banning only its use in “real-time” situations, which has left some privacy advocates upset. Another critique is that the new rules do not directly address racial bias in AI. As a recent piece in Politico.eu noted:
Even with strict data protection rules, strong fundamental rights frameworks and a directive on racial equality, European minorities are not safe from algorithmic harms.
In January, the Dutch government resigned over a scandal where the government had used an algorithm to predict who is likely to wrongly claim child benefits. Without any evidence of fraud, the tax authority forced 26,000 parents — singling out parents of dual nationalities and ethnic minorities — to pay back tens of thousands of euros to the tax authority without the right to appeal. The Dutch Data Protection authority found the tax authority's methods “discriminatory.”
“It seems like there's a complete disconnect between reality, which is that automating bias, automating prejudice, automating racism that has huge impacts on huge groups within society, and this blind vision that anything that can be automated is a good thing,” said Nani Jansen Reventlow of the Digital Freedom Fund, which supports digital rights through strategic litigation.
Other critics note that the proposed regulations would prohibit “AI-based social scoring for general purposes done by public authorities,” as well as AI systems that target “specific vulnerable groups” in ways that would “materially distort their behavior” to cause “psychological or physical harm.” This language could be interpreted as a ban on AI in applications such as credit scoring, hiring, or some forms of surveillance advertising. Thus, critics express concern that such a move would hurt European AI innovation, an area in which Europe already lags the U.S. and China.
Despite the controversies, the overall reaction has been in favor of the EU’s intent and general approach. “The fact that there are some sort of prohibitions is positive,” says Ella Jakubowska, policy and campaigns officer at European Digital Rights (EDRi), in the WIRED piece, but she adds that certain provisions would still “allow companies and government authorities to keep using AI in dubious ways.”
Two key AI debates come to the forefront
While there may be grounds for criticism in Europe, the Act goes far beyond anything countries in other parts of the world are contemplating today. Moreover, it has brought to the forefront debates over two often misunderstood aspects of AI. The first is facial recognition, a hugely contentious issue among AI researchers. Some experts argue that its inherent flaws make it perhaps the most flawed AI technology in use today. Indeed, though it is used widely in China, and by many law enforcement agencies in the U.S., some U.S. cities have banned police from using the technology in response to increasing public outcry. The EU did not ban it outright, as noted, but it also did not rule out doing so in the future.
Even more interesting is the EU’s proposal for AI self-identification. The idea remains a philosophical proposal in the U.S. (and is not debated at all in China’s regulatory offices). But noted thinkers such as the philosopher Daniel Dennett argue that “AI systems that deliberately conceal their shortcuts and gaps of incompetence should be deemed fraudulent, and their creators should go to jail for committing the crime of creating or using an artificial intelligence that impersonates a human being.” In his view, AI systems should be forced to make public their identity, composition, risks, and shortcomings, much as advertisements for medicines do today. That is essentially what the EU is proposing.
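As a thought experiment, such a disclosure could even be made machine-readable, like the label on a medicine bottle. The sketch below is hypothetical (neither Dennett nor the Act specifies a format, and the `AIDisclosure` class and its fields are invented for illustration), but it shows what a self-identifying declaration might look like in practice.

```python
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class AIDisclosure:
    """A hypothetical, medicine-label-style declaration for an AI system."""
    is_artificial: bool            # the system must admit it is not human
    purpose: str                   # what the system is for
    known_limitations: List[str]   # shortcomings and gaps in competence
    risks: List[str]               # foreseeable harms to users

    def label(self) -> str:
        lines = []
        if self.is_artificial:
            lines.append("NOTICE: You are interacting with an artificial system.")
        lines.append(f"Purpose: {self.purpose}")
        lines.append("Known limitations: " + "; ".join(self.known_limitations))
        lines.append("Risks: " + "; ".join(self.risks))
        return "\n".join(lines)

# Example disclosure for a hypothetical customer-service chatbot.
bot = AIDisclosure(
    is_artificial=True,
    purpose="Answer billing questions for retail customers",
    known_limitations=["cannot handle legal disputes", "English only"],
    risks=["may give outdated account information"],
)
print(bot.label())
```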
What happens next
The Act unveiled this spring still needs to go through the EU Parliament and EU Council to become law, which means it is likely to be amended along the way. Interestingly, however, with the U.K. now out of the EU, major philosophical shifts are less likely, so the overall intent of the regulations should survive into their final form.
As important as its impact in Europe will be, the proposal will also undoubtedly influence many other nations. Indeed, in her analysis, Gaumond notes that “due to the Brussels effect — a phenomenon by which the European Union seeks to apply its own regulations to foreign actors through extraterritorial means — American tech developers will have to comply with European rules in many instances.” Failing to do so could lead to serious fines. For example, “violations of data governance requirements or noncompliance with the unacceptable risk prohibitions can lead to fines of up to 30 million euros or 6 percent of a business’s total worldwide annual turnover.”
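To make the stakes concrete with a hypothetical figure: for a company with 2 billion euros in worldwide annual turnover, the 6 percent ceiling would work out to 120 million euros, four times the 30 million euro floor.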
What emerges clearly from the EU’s actions is that, much as California did with auto emissions within the U.S., Europe is setting itself up as the world’s regulator of technology generally and AI specifically. In doing so, it stands in creed and statute almost completely opposed to the systems widely adopted in the U.S. and China.
For corporate leaders who must guide companies through these shifting positions around the world, the contrasts could not be clearer. China is building a comprehensive AI-based surveillance state. Europe is rejecting that same concept completely and placing whole areas of its citizens’ lives out of the reach of these new platforms. It remains to be seen where other nations, including the U.S. (where so many of these technologies are born), land on what may be the most essential regulatory debate of the first half of the 21st century.