Data Ethics Club meeting 24-04-24, 1pm UK time

Meeting info

Description

You’re welcome to join us for our next Data Ethics Club meeting on 24th April at 1pm UK time. You don’t need to register, just pop in. This time we’re going to read Artificial Intelligence Act: MEPs adopt landmark law, a press release from the European Parliament.

Thank you to Huw Day for suggesting this week’s content and writing the summary below. The article itself is very short, so is worth reading in full if you have time!

Summary

The European Parliament has recently approved the Artificial Intelligence Act with 523 votes in favour, 46 against and 49 abstentions. “It aims to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while boosting innovation and establishing Europe as a leader in the field. The regulation establishes obligations for AI based on its potential risks and level of impact.” The act bans applications that threaten citizens’ rights, including:

  • biometric categorisation systems based on sensitive characteristics

  • untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases

  • emotion recognition in the workplace and schools

  • social scoring

  • predictive policing (when it is based solely on profiling a person or assessing their characteristics)

  • AI that manipulates human behaviour or exploits people’s vulnerabilities

The use of biometric identification systems by law enforcement is prohibited in principle, except in certain situations where their use is limited in time and geographic scope and subject to specific, case-by-case judicial or administrative authorisation (e.g. a targeted search for a missing person or the prevention of a terrorist attack). Using such tools retrospectively is considered a high-risk use case.

Other high-risk use cases include “critical infrastructure, education and vocational training, employment, essential private and public services (e.g. healthcare, banking), certain systems in law enforcement, migration and border management, justice and democratic processes (e.g. influencing elections).”

Whilst these systems “are required to assess and reduce risks, maintain use logs, be transparent and accurate, and ensure human oversight”, the press release does not go into detail about how this will work in practice. The article also states: “Citizens will have a right to submit complaints about AI systems and receive explanations about decisions based on high-risk AI systems that affect their rights”, but again does not detail how complaint submission would work in practice.

“General-purpose AI (GPAI) systems, and the GPAI models they are based on, must meet certain transparency requirements, including compliance with EU copyright law and publishing detailed summaries of the content used for training. The more powerful GPAI models that could pose systemic risks will face additional requirements, including performing model evaluations, assessing and mitigating systemic risks, and reporting on incidents. Additionally, artificial or manipulated images, audio or video content (“deepfakes”) need to be clearly labelled as such.”

The article does not outline the penalties for violating any part of the act.

Discussion points

There will be time to talk about whatever we like, relating to the article, but here are some specific questions to think about while you’re reading.

  • How do you feel about the ban on applications that threaten citizens’ rights, and the exceptions for law enforcement?

  • How do you feel about the list of high-risk use cases? Is the list incomplete? What would you add? Is there anything listed that you think isn’t high-risk?

  • How do you feel about the transparency requirements for general-purpose AI (GPAI) systems?