Artificial Intelligence Act - MEPs adopt landmark law#

What’s this?

This is a summary of Wednesday 24th April’s Data Ethics Club discussion, where we spoke and wrote about the Artificial Intelligence Act: MEPs adopt landmark law. The article summary was written by Huw Day and edited by Jessica Woodgate. The discussion summary was written by Jessica Woodgate, who tried to synthesise everyone’s contributions to this document and the discussion. “We” = “someone at Data Ethics Club”. Huw Day helped with the final edit.

Article Summary#

The European Parliament has recently approved the Artificial Intelligence Act with 523 votes in favour, 46 against and 49 abstentions. The AI Act “aims to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while boosting innovation and establishing Europe as a leader in the field. The regulation establishes obligations for AI based on its potential risks and level of impact.” The act bans applications that threaten citizen rights including:

  • biometric categorisation systems based on sensitive characteristics
  • untargeted facial scraping from the internet or CCTV to create facial recognition databases
  • emotion recognition in the workplace and schools
  • social scoring
  • predictive policing (when it is based solely on profiling a person or assessing their characteristics)
  • AI that manipulates human behaviour or exploits people’s vulnerabilities

The use of biometric identification systems by law enforcement is prohibited in principle. However, exceptions apply in certain situations where use is limited in time and geographic scope, such as a targeted search for a missing person or the prevention of a terrorist attack. Exceptions will require specific, case-by-case judicial or administrative authorisation. Using such tools retrospectively (e.g. to link people to a criminal offence) is considered a high-risk use case.

Other high-risk use cases include “critical infrastructure, education and vocational training, employment, essential private and public services (e.g. healthcare, banking), certain systems in law enforcement, migration and border management, justice and democratic processes (e.g. influencing elections).”

Whilst these systems “are required to assess and reduce risks, maintain use logs, be transparent and accurate, and ensure human oversight”, the press release does not go into detail about how this will work in practice. The article additionally states that “citizens will have a right to submit complaints about AI systems and receive explanations about decisions based on high-risk AI systems that affect their rights”. The press release also does not detail how complaint submission would work in practice.

The AI Act also places requirements on general-purpose AI. “General-purpose AI (GPAI) systems, and the GPAI models they are based on, must meet certain transparency requirements, including compliance with EU copyright law and publishing detailed summaries of the content used for training. The more powerful GPAI models that could pose systemic risks will face additional requirements, including performing model evaluations, assessing, and mitigating systemic risks, and reporting on incidents. Additionally, artificial or manipulated images, audio, or video content (“deepfakes”) need to be clearly labelled as such.”

The article does not outline the penalties for violating the act at this time.

How do you feel about the ban of applications threatening citizens’ rights and the exceptions for law enforcement?#

Generally, we liked the approach of those working behind the development of the AI Act, which seems fairly collaborative. In a session run by Stanford HAI (Human-centred AI) a year ago, the participating MEP Dragos Tudorache came across as well informed and willing to listen. The announcement video seemed upbeat, the result of a long period of consultation and planning. Given the amount of lobbying involved, this is not surprising. We found it exciting to see these principles written down as policy.

Regarding the ban of applications threatening citizens’ rights, we see several benefits. We liked the risk-based approach, which seems to consider the needs of consumers in a forward-thinking manner. The acknowledgment of some of AI’s potential dangers, and outright ban on especially dangerous applications, suggests the AI Act is a step in the right direction. The ban on emotion recognition especially stood out to us, as (regardless of whether it actually works) emotion recognition is a very marketable area which could be abused to take advantage of people.

Whilst the act is a step in the right direction, we did think that restrictions could potentially be stronger, and we had questions about how the act will be enforced. If the act is enforced by fines, this gives the wealthy more leeway to violate it. If it is enforced by other means, we wondered where the money will come from to finance enforcement; would it be from (additional) taxes? People will need to be trained in how to adhere to the act and how to recognise when things are being done wrong. It will not always be clear who should be held responsible for a technology that violates the act, and we wondered whether the act is responsive or preventative.

Care has been taken to specify law enforcement exemptions. However, definitions of these exemptions can be ambiguous. With terrorism prevention, there are safety motivations for masking from the general public how AI is being used. This makes the risks opaque and requires the public to trust governments to use collected data responsibly. Opacity allows for the potential of tools being used malignantly by those in positions of power. This is troubling, as the definition of a terrorist can change between different government ideologies. Historically and today, there has been surveillance of peace groups and movements such as the civil rights movement, and examples of a stronger police presence at climate protests than at neo-Nazi protests.

Law enforcement exemptions are not the only ambiguity; other areas of vagueness include how the act will be implemented and interpreted by judges. Vagueness can be useful for legal purposes, but it means a lot of case law will probably need to be built up. As there is a long lead-up to the act coming into force, we were concerned that lobbying could end up diluting the original intent of the legislation. Even when it comes into force, it could take years for government agencies to implement standards for suppliers. Presumably, the method of implementation will look similar to GDPR, where national agencies are able to fine transgressors.

There are other parallels we see with GDPR, such as uncertainty in the general public’s reaction. When GDPR first came into force, many businesses were concerned about its impact and how they would ensure they weren’t breaching it. Six or so years on, people are much more confident in identifying personal data and GDPR breaches, even if it isn’t something they do day to day. From this, we have some hope that people will have a much better understanding of what the AI Act means when it becomes law.

The influence of GDPR is not isolated to Europe, and many aspects of it have been adopted globally. Most people outside the EU have some understanding of GDPR, as it affects their work even if they aren’t based in an EU country. This highlights a benefit of the AI Act being implemented in the EU, as there is precedent for global adoption of aspects of EU law. However, attitudes towards AI do differ across the world; in Canada, non-live facial recognition has been used by law enforcement with little pushback. It will be interesting to see how countries outside the EU respond: whether they build on what the EU implements, or go in a completely different direction.

Turning attention to the UK, we wondered how Brexit will influence the reaction to the AI Act, the first major piece of new EU legislation since the UK left the EU. This could be an opportunity for the UK government to prove that it can make completely different decisions to the EU on new legislation. At the moment, the UK is trying to position itself as a pro-innovation country. However, not following EU legislation would limit the ability of UK companies to sell models and tools to the EU. This may be an issue that plays a part in the upcoming UK elections. For AI regulation to feature in campaign manifestos, the UK public will need to be well informed enough to have an opinion.

How do you feel about the list of high-risk use cases? Is the list incomplete? What would you add? Is there anything listed that you think isn’t high-risk?#

We thought it positive to have obligations on high-risk systems, one benefit being that it promotes discussions about specific scenarios. High-risk applications include insurance, medical care and post-hoc use of facial recognition. It is important to note that risks may also come from drawing conclusions from incomplete data sources, or biases which exist in “complete” datasets. High-risk doesn’t mean that AI can’t be used at all, but that care needs to be taken. Explicitly identifying certain applications as high-risk prevents the excuse of unawareness, marking a departure from previous practices where it was easier to claim unforeseen consequences.

High-risk systems will be required to be “transparent and accurate” and provide “explanations”. There is a tug of war between the desire for more capable AI and the desire for more explainable AI: the more explainable you make a system, the less likely you are to get one that is complex but accurate. This means that effort spent making AI more explainable may curb innovation. However, the trade-off between accuracy, innovation and explainability is very industry dependent. In some cases (e.g. healthcare), if the output is unexplainable it can’t be used, as practitioners need to understand how decisions have been reached. It is therefore not always true that the most complex models are the best for the task.
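As a rough illustration of this trade-off (our own sketch, not anything from the act or the article), the snippet below compares a shallow, human-readable decision tree with a larger boosted ensemble on the same classification task; the dataset and model choices are arbitrary assumptions for demonstration only.

```python
# A minimal sketch of the explainability/accuracy trade-off (illustrative only).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Shallow tree: its entire decision logic can be printed and read end to end,
# but its simplicity limits what it can capture.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_train, y_train)
print("Shallow tree accuracy:", tree.score(X_test, y_test))
print(export_text(tree, feature_names=list(X.columns)))

# Boosted ensemble: usually more accurate, but built from many trees, so there
# is no comparably compact, human-readable account of any single decision.
ensemble = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print("Ensemble accuracy:", ensemble.score(X_test, y_test))
```

The point is not the specific numbers, but that the model whose reasoning can be printed on a few lines is typically not the one that scores best, which is exactly the tension the transparency requirements will have to navigate.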

Applications which are presented as technologically advanced can lull people into a false sense of security, encouraging reliance instead of critical engagement. The Robodebt scandal in Australia is an example of very basic mistakes being made in a high-risk application, leading to severe consequences for many individuals. Highlighting such areas as high-risk will encourage more careful approaches to them. Yet, it is also important to ensure that fostering ethical practice does not cross over into fear mongering. To cultivate ethical technological advancement, we need to balance both excitement about the potential and awareness of the risks.

How do you feel about the transparency requirements for General-purpose AI (GPAI) systems?#

Requiring GPAI to be transparent is a commendable aim; however, there are many questions as to what this would look like in practice. For instance, does transparency include code as well as training data? Is it the company that makes the tool that has to be transparent, or the institution (e.g. government) which uses it? What exactly will “transparency” look like – a single page, or a link to the full training dataset? Does it include examining how outputs have been validated (Kamilla has written a paper on this – check it out)?

The article states that deepfakes will additionally need to be clearly labelled, which seems slightly ironic to us, as it’s unlikely that many creators of deepfakes will want to highlight that they’re fake. However, we did think it was good that there would be the potential for prosecution. Implementing deepfake labelling might end up looking something like GDPR cookie disclaimers: for many, these are just buttons we click without reading the small print. Deepfake disclaimers could also end up being habitually ignored, so deepfakes would still impact people’s psychology.

Attendees#

  • Nina Di Cara, Snr Research Associate, University of Bristol, ninadicara, @ninadicara

  • Melanie Stefan, Computational Neurobiologist, Medical School Berlin @melanieistefan.bsky.social

  • Liam James-Fagg, Data & Insights Manager, allpay Ltd

  • Paul Matthews, Lecturer, UWE Bristol @paulusm@scholar.social

  • Amy Joint, Programme Manager, ISRCTN Clinical Trial Registry, Springer Nature @AmyJointSci

  • Euan Bennet, Lecturer, University of Glasgow, @DrEuanBennet

  • Helen Sheehan, PhD Student, University of Bristol

  • Michelle Wan, PhD Student, University of Cambridge

  • Vanessa Hanschke, PhD Student, University of Bristol

  • Kamilla Wells, Citizen Developer, Australian Public Service, Brisbane

  • Ushnish Sengupta, Assistant Professor, Algoma University, Canada