UK National AI Strategy: Pillar 3 - Governing AI Effectively#

What’s this?

This is a summary of Wednesday 3rd November's Data Ethics Club discussion, where we spoke about the document UK National AI Strategy: Pillar 3 - Governing AI Effectively.

The summary was written by Huw Day, who tried to synthesise everyone’s contributions to this document and the discussion. “We” = “someone at Data Ethics Club”. Nina Di Cara and Natalie Thurlby helped with a final edit.

Introduction#

This discussion centred around the UK National AI Strategy: Pillar 3 - Governing AI Effectively. As the piece notes, the main goals outlined in this pillar are:

“Ensuring that national governance of AI technologies encourages innovation, investment, protects the public and safeguards our fundamental values, while working with global partners to promote the responsible development of AI internationally.”

We discussed the merits and shortfalls of the various goals, as well as the potential conflicts between them, in particular considering how to balance encouraging innovation whilst maintaining a strong ethical framework (which, some would argue, we currently do not have in the AI sector).

Which proposals were you most pleased to see from the report? Are there any that you do not think will be welcomed?#

Our discussion brought us to current ethical standards for AI and how they might have to change; they may no longer be valid. A lot of effort needs to be put into innovating our ethics as we innovate our technology. Perhaps the biggest issue of ethics in science today is that scientific innovation is inevitably done before the ethical guidelines are introduced (and often before those guidelines are even considered). We have to constantly innovate and question our ethical frameworks to ensure we have best practices which adhere to our values. That said, we thought the report did a good job of raising issues of fairness, accountability, and bias in AI systems.

We thought it was good that the authors acknowledged that they cannot regulate AI in the same way as past regulatory frameworks; it is not viable to simply impose blanket legislation as before. We face a nuanced problem which requires nuanced solutions, so a sector-specific approach seems very reasonable.

Discussing what a suitable regulatory body would look like led us to consider the dichotomy between accountability and efficiency. In terms of ethics, it would be ideal to have committees with a wide variety of viewpoints, although we discussed how having too many viewpoints can sometimes lead to stalemate and so may impede efficiency. There is also the issue that too much bureaucracy inevitably slows government down massively, and this can get in the way of innovation.

Perhaps a workaround is open source committees managed by trusted communities who can respond faster, but the UK government (and indeed, most governments) would likely be reluctant to give up their governing responsibilities to such an extent.

Another solution we discussed was the ombudsman approach to regulation - a figure is given wide authority in regulatory decision-making, essentially asked to exercise judgement within a broad set of parameters. Typically they are drawn from industry, or at least have some deep understanding of the industry, so that their judgements can be well-founded. The concept originated in Sweden in 1809, with ombudsman coming from the Swedish for “legal representative”.

The approach has been adopted in some UK markets ("ombudsman" is often used synonymously with "Parliamentary Commissioner for Administration"), but not always successfully. When an industry is moving this quickly, it is difficult (if not impossible) for a regulatory system built on long lists of written rules to cope. An ombudsman approach allows for greater speed of reaction and flexibility.

(As a quick note, ombudsman is used as a gender neutral term, as "-man" is a gender neutral suffix in Swedish. Regardless, the term has fallen out of usage in recent years.)

How will the UK approach this problem across different sectors? Would there be a national ethics committee for every industry? It sounds to us like a lot of hoops to jump through before we reach a solution.

We saw an issue with a seeming disparity between public and private sector regulation. Whilst the report outlined public sector restrictions, it did not seem like there would be any regulation of the private sector, which is troubling but frankly not surprising.

Lastly, we wondered what the authors meant by "innovation". Some of us feared a barrage of "new technologies" which would simply be a collection of linear regression soups garnished with a few "for" loops and "if" statements. This feeds into an overall question of how we might distinguish between regulation of statistics and regulation of AI, when often the line between the two feels fairly meaningless.
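To illustrate how blurry that line can be, here is a purely hypothetical sketch (invented data and thresholds, nothing taken from the report): the "AI system" below is just an ordinary least squares fit wrapped in a couple of if-statements, yet it could plausibly be marketed as an AI-powered decision tool and would raise exactly the same regulatory questions.

```python
# Hypothetical sketch: an "AI-powered" decision tool that is really just
# linear regression plus if-statements. All data and thresholds are made up
# for illustration.
import numpy as np

# Made-up training data: two features per case and a historical score.
X = np.array([[1.0, 20.0], [2.0, 35.0], [3.0, 50.0], [4.0, 80.0]])
y = np.array([10.0, 18.0, 27.0, 41.0])

# Ordinary least squares via a design matrix with an intercept column -
# the "linear regression soup".
X_design = np.hstack([np.ones((X.shape[0], 1)), X])
coef, *_ = np.linalg.lstsq(X_design, y, rcond=None)

def decide(features):
    """Return a 'decision': a regression score garnished with if-statements."""
    score = coef[0] + coef[1:] @ np.asarray(features, dtype=float)
    if score > 30:
        return "approve"
    elif score > 15:
        return "refer to a human reviewer"
    return "reject"

print(decide([2.5, 40.0]))  # prints whichever branch the fitted score falls into
```

Whether this kind of system counts as "AI" for regulatory purposes, or just as applied statistics, is exactly the ambiguity we were worried about.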

How do you see regulation as potentially impacting your work? If you are not UK based, are there similarities/differences with any of your national regulations?#

As we mentioned before with bureaucracy, ethical processes can be really messy when lots of people get involved. What makes things trustworthy, and to whom? Who should be involved as decision makers? Is it better that the public are reassured by who makes the decision (populist or religious figures, for example), or that there is a stronger technical and ethical basis for the decision making? How do we decide what makes a strong "ethical basis"?

One of our groups had a lively conversation around an anecdote about the inclusion of a priest on an ethical decision-making board. We felt it was a brilliant example of how cultural norms and understandings of right and wrong get assimilated into our decision making processes for new technologies. Perhaps it would reassure a lot of people to know that a priest had signed off on a new technology, but do they have the expertise to do that? Of course, this is unlikely to be a realistic example, but it was an interesting thought experiment about the importance of who is seen to be making decisions.

So, should we seek to reassure the public with the decision making? We discussed the importance of diversity in decision making: different research backgrounds, races, genders, sexualities etc. The more varied people's experiences, the more likely we are to find an issue. We decided that being more likely to find ethical issues is a good thing, at the very least for ethical guidelines, if not necessarily for innovation. Of course, if we wanted everyone to be happy we would never get anything done (as noted last time in our discussion on Decolonising Academia). Even the best decision makers might not be trusted by the public. But are the general public the best people to decide what being trustworthy means? We learned first-hand during the COVID-19 pandemic that public trust in decision makers at times appeared almost more crucial than the quality of the decisions themselves.

Having different use-cases for different sectors allows flexibility in decision making. Rigidity in guidelines is important for drawing lines we should refuse to cross, but guidelines also need to adapt as our technological capabilities and societal standards shift.

It was good that this report acknowledged fairness, bias, accountability and the Commission on Race and Ethnic Disparities. However, it wasn't clear that the authors were sure about the practicalities of what they wanted, and there was some frustration amongst us that often all such reports call for is more reports, rather than making solid recommendations. That said, it is also important to be sure that every issue is given the time and space it needs to be considered fully.

We discussed some concerns that we had about the proposal to remove the need for government departments to tell people how algorithms make decisions about them, in the name of making innovation easier. For example, if access to Universal Credit were decided by an algorithm, there would be deep concerns about not having the right to know why you do or do not get that access. If you cannot understand how something works, you cannot question it effectively. This is a common theme explored in data ethics books such as Weapons of Math Destruction by Cathy O'Neil and Automating Inequality by Virginia Eubanks.

The next steps the report outlines seem somewhat vague: write more reports; remove, change and add some things, but without clear direction. Perhaps in a rapidly developing field it would feel premature to outline in detail things that are likely to change, but this vagueness should perhaps be accompanied by an admission of uncertainty. Nobody knows what the future of AI will look like, so any legislation looking to direct it will necessarily need to be adaptive.

Whilst it was good that the strategy involved upskilling people, there was a clear emphasis on getting rid of the EU laws we had been following and on reforming the GDPR.

Below are some interesting links for the European Union guidelines for AI and some specific information about the Italian AI Strategy:

The first document tries to explain AI and current state-of-the-art AI systems. The EU proposals document defines a set of rules to regulate AI systems and their possible misuses:

- "Title II establishes a list of prohibited AI"
- "Title III contains specific rules for AI systems that create a high risk to the health and safety or fundamental rights of natural persons."
- "Title IV concerns certain AI systems to take account of the specific risks of manipulation they pose."

In 2018 Italy adopted a national strategy on AI, following EU guidelines and the example of other European countries. The Italian government will release a new plan in 2022.

“Operating within the framework of the coordinated plan on European AI published in December 2018, Mr Draghi’s government strategy aims to increase public funds for AI research”.

The report states that they want to build the “most trusted and pro-innovation system for AI governance in the world” - can both these things be true?#

Ideally you want to find as many bugs as you can. You cannot find all of them, as you have limited time. So at what point do you start just calling them features - that is to say, how sure of something do we need to be before we can say that we trust it? What sort of precedent do these decisions set? Could you balance reduced bureaucracy with innovative drive? Probably not consistently. But consider the pharmaceutical industry's approach to developing the various COVID-19 vaccines: private companies were able to maintain the strict standards of their respective ethical bodies with a streamlined process, in their case by manufacturing batches for the next stage of a trial before the previous stage had concluded, risking wasting those batches should the trial fail at that stage.

We recalled studies where different people were given the same software task, which found that diverse teams reduce errors because people with different backgrounds make different errors (Biased Programs? Or Biased Data? A Field Experiment in Operationalizing AI Ethics, Cowgill et al., 2020).

Whilst we tend to be fairly cynical about the benefits of capitalism, the free market does mean that the public can essentially 'vote with their feet' (or money) if they decide they no longer trust what is being done with their data. A company will not be able to innovate in the long term if people do not trust its product - they will simply pull the plug. A lot of AI being commercial ties into what the public thinks about it: if people don't like a product, they won't use it. Public opinion almost forms an accidental ethics board in a more consumerist, capitalist society.

That being said, consumerism should not be the only line of defence against unethical practice, especially since the public can only react to what they have been informed about. Undoubtedly much of our data is used in ways we do not understand, and therefore cannot protest against. Because of that, we all felt that we needed formal ethical boards to block unethical developments as well as to guide innovators. Someone framed this as the idea that scientists should innovate within tramlines rather than in an open space.

A recent example is Facebook/Meta. Facebook appear to have reached a point of innovating too hard and not building in enough data protections, which has led to a loss of public trust. However, many of these revelations have only become public knowledge because of whistleblowers. We reflected that in order for people to trust that you can innovate responsibly, you need to show that you are sometimes going to say no to new innovations and that there is a line in the sand. Similarly, Google has also lost public trust after firing key management figures in their Ethical AI team who were trying to share research about the downsides of AI innovation. Journals are another area that seems to lack the moral standards we would hope for, being more focussed on the "quality" of the science and on innovation. They also need to do better at turning down ML and AI papers that lack adequate ethical oversight.

There aren't typically headlines about papers being rejected on ethical grounds; often there is no ethics review to pass through at all. We are not saying a journal should tweet every time it rejects a paper, but perhaps highlighting "this kind of paper is not entirely ethical" would be worthwhile. At the moment such objections usually take the form of Twitter threads expressing outrage at certain developments rather than formal objections. That said, conversations around publishing and responsibility for poor ethics can be challenging, especially when the people most hurt by these 'call outs' tend to be the first authors, who are often early career scientists under the supervision of more senior academics.

There is an interesting analogy with pharmaceuticals and drug testing. There are stringent requirements in the pharmaceutical industries enforced by government agencies such as the MHRA in the UK and the FDA in the US. Can we get something like this for AI?

What would the phases of AI trials look like? It would probably depend on what sort of system you were testing, just as there are different clinical trials for drugs, procedures and devices. One complication is that, unlike a drug, the system itself might keep changing throughout that process. Would it involve some sort of Randomised Controlled Trial?

Whilst this is a nice idea in theory, private companies run the game with AI and it is unlikely a government institution will be able to dislodge half of Apple or Google, especially against the uphill battle of lobbying. This is the trade-off of a consumerism-focused society: consumers have a small sway over the direction these companies take by favouring certain products, but the more we support and consume AI developments, the more power we give these tech giants. We also worried that Big Tech has too much political influence at this stage for any country to successfully implement regulations that would severely damage their income; the only way this could really happen would be to go back in time and implement regulations before the concerning AI practices had started to be used by huge companies like Meta and Google! Alas, unlikely.


Attendees#

Name, Role, Affiliation, Where to find you, Emoji to describe your day