Towards the European AI Regulation

Can Şimşek
Nov 7, 2020

In February 2020, the European Commission released its communication on the European Data Strategy and the White Paper on Artificial Intelligence, fleshing out the future digital strategy of the European Union: the documents foresee the proposal of a Data Act in 2021 and call for a regulation on Artificial Intelligence (AI). With these new instruments, in addition to the General Data Protection Regulation (GDPR) and the Law Enforcement Directive, which were adopted in 2016 and came into force in 2018, the EU will try to cover the two main elements composing AI: data and algorithms.

As for the data strategy, the Commission plans to enable the further deployment of AI in the real economy by creating a single market for data, with an emphasis on European values and human rights. The project includes the creation of common European data spaces for sectors like healthcare, manufacturing, finance and the environment; the introduction of mandatory business-to-government and government-to-business data sharing; and, most importantly, the creation of a European cloud service. This way, the EU hopes to become competitive against the US and China in the digital market and to ensure European data sovereignty. In the words of Axel Voss, a German member of the European Parliament's Special Committee on Artificial Intelligence in a Digital Age, the EU is not "Big Brother China or Big Data US". Unlike them, the EU wishes to establish a legal framework for trustworthy algorithms and data ecosystems and to export a human-rights-based approach.

The White Paper on AI was drafted to complement the European data strategy. Observing signs of fragmentation in the internal market (e.g. Denmark launched a Data Ethics Seal and Malta introduced a voluntary certification system for AI), the Commission noted in the White Paper that an EU-wide approach is needed to achieve the objectives of trust, legal certainty and market uptake. For this purpose, the White Paper offers an initial, broad-strokes plan for an AI regulation and marks the need to update existing sectoral legislation.

The White Paper on Artificial Intelligence

Back in 2019, the Commission published a communication on building trust in human-centric artificial intelligence, welcoming the key requirements identified in the Guidelines of the High-Level Expert Group on AI:

· Human agency and oversight,

· Technical robustness and safety,

· Privacy and data governance,

· Transparency,

· Diversity, non-discrimination and fairness,

· Societal and environmental wellbeing,

· Accountability.

The Commission reiterates these requirements in the White Paper on AI and identifies areas of legislation where an update might be required. In particular, the Commission notes that the EU safety legislation currently in force applies to products and not to services, except in sectors where a lex specialis applies. For instance, software for medical purposes is considered a medical device under the Medical Device Regulation. In the absence of such a specific regulation, algorithms are not covered by the safety legislation. In this regard, the Commission also warns that the existing legislation focuses on the risks present at the time of placing a product on the market, whereas software can give rise to new risks over time through updates (especially with Machine Learning). According to the White Paper, periodic risk assessments and human oversight throughout the lifecycle of AI products might be necessary to address this issue. Another concern is the allocation of liability, since the legislation currently in force might be insufficient vis-à-vis the complexity of the digital world (e.g. third parties in other countries programming and/or maintaining AI for the manufacturer). Lastly, the Commission reports that new risks are arising with the various applications of AI: risks related to cyber threats or loss of connectivity, risks caused by faulty data, or undesirable psychological effects. The White Paper proposes to mitigate the risks stemming from data quality through specific requirements, and also notes that obligations regarding mental safety risks might be needed (e.g. in the context of humanoid robots).

After this analysis, the Commission concludes that there may be a need for new legislation specifically on AI.

According to the White Paper, such a regulation should avoid being "excessively prescriptive" by following a risk-based approach. Although it is noted that the German Data Ethics Commission has called for a five-level risk-based system of regulation, ranging from no regulation for the most innocuous AI systems to a complete ban for the most dangerous ones, the regulation envisioned in the White Paper itself classifies AI applications only as high-risk or low-risk.

To determine whether an AI application is high-risk, two cumulative criteria should be considered (a minimal sketch of this cumulative test follows the list):

  1. The sector in which the AI application is employed: The potential high-risk sectors should be specifically and exhaustively listed in the new regulatory framework (e.g. healthcare, transport, energy and parts of the public sector). This list should be periodically reviewed and amended where necessary.
  2. The intended use: In addition to being used in a listed sector, the AI application should be used in such a manner that significant risks are likely to arise. This assessment could be based on the impact on the affected parties. According to the White Paper, significant risks arise from uses that produce "legal or similarly significant effects" for the rights of an individual or a company; that pose a risk of injury, death or significant material or immaterial damage; or that produce effects that cannot reasonably be avoided by individuals or legal entities.
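
To make the cumulative logic concrete, here is a minimal, hypothetical sketch of how such a two-step test could be expressed in code. The sector list and effect categories are illustrative placeholders drawn from the White Paper's examples, not from any legal text.

```python
# Hypothetical sketch of the White Paper's cumulative high-risk test.
# HIGH_RISK_SECTORS and SIGNIFICANT_EFFECTS are illustrative placeholders.

HIGH_RISK_SECTORS = {"healthcare", "transport", "energy", "public sector"}

SIGNIFICANT_EFFECTS = {
    "legal or similarly significant effects",
    "risk of injury, death or significant damage",
    "effects that cannot reasonably be avoided",
}

def is_high_risk(sector: str, effects: set) -> bool:
    """Both criteria are cumulative: the application must operate in a
    listed sector AND be used in a way likely to create significant risks."""
    in_listed_sector = sector in HIGH_RISK_SECTORS
    significant_use = bool(effects & SIGNIFICANT_EFFECTS)
    return in_listed_sector and significant_use

# Used in a listed sector with a significant effect: high-risk.
print(is_high_risk("healthcare", {"risk of injury, death or significant damage"}))  # True

# Significant effect, but outside the listed sectors: not high-risk under
# the cumulative test, which is why the Commission adds per se exceptions
# (e.g. recruitment, remote biometric identification) discussed below.
print(is_high_risk("retail", {"legal or similarly significant effects"}))  # False
```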

It is worth noting that the wording of this part of the proposition is very similar to Article 22 of the GDPR, which prohibits solely automated decision-making that produces "legal or similarly significant effects". Remarkably, the GDPR only covers natural persons, whereas this criterion would also cover effects on a company. According to the expert guidance endorsed by the European Data Protection Board, to affect someone "similarly significantly", a decision must a) have the potential to significantly affect the circumstances, behaviour or choices of the individuals concerned, b) have a prolonged or permanent impact on the data subject, or c) lead to exclusion or discrimination. It is yet to be seen whether this rather broad definition will be relevant in the context of the new regulation.

Finally, the Commission acknowledges that there might be exceptional instances which could be considered high-risk per se, to which the requirements should apply regardless of the criteria. The examples given are situations impacting workers' rights (such as the use of AI in recruitment processes), situations where consumer rights are harmed, and the use of AI for remote biometric identification.

If this approach is followed, the new AI regulation will impose the requirements listed below on high-risk AI applications, whereas low-risk AI applications will merely be encouraged to pursue voluntary certification and labeling inspired by the same criteria:

  • training data: training data should lead to a safe and non-discriminatory model which respects privacy and data protection rights.
  • data and record-keeping: accurate records should be kept on the data used for the training, testing and validation of the AI system, including a description of the main characteristics of the data and how it was selected. Moreover, in certain justified cases, the data itself should be retained as well. Besides the data, programming and training methods should also be documented in a way that demonstrates safety and non-discrimination.
  • information to be provided: to achieve further transparency, information should be provided about the purpose for which the system is intended, the conditions under which it can be expected to function as intended, and the expected level of accuracy in achieving the specified purpose. However, the White Paper does not specify the level of information that should be provided to deployers of the system and to consumers.
  • robustness and accuracy: AI systems might be required to have reproducible outcomes. At a minimum, they should correctly reflect their level of accuracy and adequately deal with errors or inconsistencies during all life-cycle phases.
  • human oversight: the White Paper gives a non-exhaustive list of examples of human oversight, such as having a human review and validate the outputs of the AI system.

In this respect, Article 22 of the GDPR, the "Kafka provision" as it was called in a report to the Council of Europe, comes into play again. As the wording of the article expresses, in principle "…a decision based solely on automated processing…" is banned if it produces legal or similarly significant effects. However, one way to work around Article 22 might be inserting a "human in the loop". The White Paper on AI contains some intriguing examples of this. According to the Commission, the output of the AI system can become immediately effective in some cases as long as human oversight is ensured afterwards: for instance, the rejection of a credit card application can be fully automated if there is an ex post human review. Another form of oversight that the Commission mentions is real-time monitoring of the system combined with a deactivation button (e.g. in driverless cars).

On the human oversight issue, the expert guidance endorsed by the European Data Protection Board requires "meaningful human input" rather than a "token gesture" for a decision not to count as solely automated. Yet I would like to emphasize that this is trickier than it sounds because of the concept called "machine bias". As a matter of fact, decision-making algorithms tend to take humans "under the loop": the person who nominally "decides" does not necessarily understand the "reasoning" of the non-transparent algorithm and tends to trust it rather than risk a wrong decision by ignoring its input. Thus, there is a need to further specify the human oversight requirements.

  • specific requirements for certain AI applications, such as those used for purposes of remote biometric identification.

Biometric data is defined in the Law Enforcement Directive, Art. 3(13); the GDPR, Art. 4(14); and Regulation (EU) 2018/1725, Art. 3(18) as "personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique authentication or identification of that natural person, such as facial images or dactyloscopic [fingerprint] data." In principle, EU data protection law prohibits such processing except under specific conditions: it is only possible on a limited number of grounds, such as reasons of substantial public interest. Even then, processing must take place on the basis of EU or national law, subject to the requirements of proportionality, respect for the essence of the right to data protection, and appropriate safeguards.

Remote biometric identification is one of the most controversial topics within the European AI regulation debate. At earlier phases of the drafting process, an EU-wide moratorium on remote facial recognition was considered for inclusion in the White Paper. However, this idea did not make it into the final text.

Responding to the White Paper, civil society organizations like Access Now and European Digital Rights stated that “predictive policing” should be banned. In line with this approach, the Executive Vice President of the European Commission for “A Europe Fit for the Digital Age”, Margrethe Vestager, is also very outspoken about the need to prohibit predictive policing which often leads to prohibited discrimination in practice.

Feedback from the Member States on the White Paper on AI

The Commission’s White Paper on AI has drawn differing responses from the Member States since its publication. On the one hand, Germany commented that the classifications and requirements in the White Paper were not detailed enough and demanded a layered classification of AI applications based on the relevant risks, as well as concrete definitions of the requirements, such as when and for how long the storage of data records should be mandatory. This feedback also underlined the need for legal clarity in the context of biometric identification systems, which are considered a potential threat to human rights and civil liberties. Last but not least, Germany stated that a mandatory high IT-security standard for high-risk AI systems is necessary.

On the other hand, Denmark spearheaded a position paper on behalf of 14 states which argues that a soft-law approach should be given weight. Besides Denmark, the paper is signed by Belgium, the Czech Republic, Finland, France, Estonia, Ireland, Latvia, Luxembourg, the Netherlands, Poland, Portugal, Spain and Sweden. According to their position, defining high-risk AI based on sector and application would categorize too many AI applications as risky. The paper advocates an objective methodology adhering to the proportionality principle (based on the potential impact and the probability of the risk), so that "high-risk AI" remains an exception. Instead of over-regulation, these States propose steering AI developers towards soft-law solutions like self-regulation, alongside a European labeling scheme and robust standardization processes. As a side note, although the paper underlines the need to let small and medium-sized enterprises innovate without legal hurdles, big players like Google responded to the White Paper with a similar emphasis on self-regulation.

Brief Commentary

On the surface, there seem to be many differing approaches to the challenging task of regulating Artificial Intelligence. Yet it is possible to see commonalities beneath the various criticisms of the White Paper. Firstly, the Member States agree that the EU's strength lies in creating trust via regulation. Even so, none of the parties wants a regulation which unnecessarily duplicates existing safeguarding mechanisms, since AI applications must comply with existing law regardless of the categorization. In particular, the GDPR already covers the processing of data by automated means under Article 2(1). In point of fact, overlapping notions appear in the proposed AI regulation and the GDPR: besides main principles such as transparency and accuracy, the GDPR contains articles on automated decision-making (Article 22) and on data protection impact assessments for processing likely to result in a high risk to the rights and freedoms of natural persons (Article 35). It would not be erroneous to say that the Commission wanted consistency with the GDPR when proposing the high-risk AI categorization. However, this binary "high-risk or not" approach might create problems by sorting moderately risky AI applications into one of the two categories rather arbitrarily, and thus treating similarly risky AI applications dramatically differently. This concern is shared by various parties commenting on the White Paper as well. An approach based on different levels of risk would also be more useful for regulating the use of biometric identification and banning discriminatory predictive policing.

Another important question is whether these regulations contain requirements that no one knows how to meet. The GDPR and the proposed legislation require accuracy and robustness while also requiring transparency and contestability of automated decisions (which presupposes explainable algorithms), although, according to AI developers, there is a trade-off between accuracy and explainability. As reported by the US Defense Advanced Research Projects Agency (DARPA), "there is an inherent tension between machine learning performance (predictive accuracy) and transparency; often the highest performing methods (e.g. deep learning) are the least explainable, and the most explainable (e.g. decision trees) are less accurate". Thus it is not hard to imagine that striking a balance on this matter will require more interdisciplinary study.
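
This tension is easy to observe in practice. Below is a minimal sketch (assuming scikit-learn and its bundled breast-cancer dataset, both chosen purely for illustration) that trains an explainable decision tree alongside an opaque neural network: the tree's full decision logic can be printed as rules, while no comparably readable artifact exists for the network's learned weights, whatever their respective accuracies turn out to be on a given task.

```python
# Illustration of the accuracy/explainability tension described by DARPA.
# scikit-learn and the bundled dataset are assumptions for this sketch.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Explainable model: a shallow decision tree whose rules are human-readable.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

# Opaque model: a multi-layer perceptron (scaled inputs help it converge).
mlp = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0),
).fit(X_train, y_train)

print("tree accuracy:", accuracy_score(y_test, tree.predict(X_test)))
print("mlp accuracy: ", accuracy_score(y_test, mlp.predict(X_test)))

# The entire decision logic of the tree can be audited line by line;
# there is no equivalent human-readable dump of the MLP's learned weights.
print(export_text(tree))
```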

One way or another, the European Commission has taken the first step towards the European AI Regulation, and this will surely raise the bar across the global playing field.
