A Question of Trust#

What’s this?#

This is a summary of Wednesday 15th December’s Data Ethics Club discussion, where we spoke and wrote about the lecture A Question of Trust by Onora O’Neill.

The summary was written by Huw Day, who tried to synthesise everyone’s contributions to this document and the discussion. “We” = “someone at Data Ethics Club”. Nina Di Cara and Natalie Thurlby helped with a final edit.

A Question of Trust was the final of Professor O’Neill’s Reith Lectures in 2002, aired on BBC Radio 4. The lecture asked (and attempted to answer) questions about what trust means, and how we develop and sustain it. Professor O’Neill had a particular focus on the role of the press and media, and how much we can trust them to give us information, but at Data Ethics Club we focussed on similar questions about public trust in data science as a profession.

Do you think that the public trust data science as a profession? What makes a data scientist trustworthy?#

It is important to note immediately that this lecture was aired in 2002, a time when social media was much less prominent and public trust in science was typically much higher. Whilst we may not have done enough to keep this trust, misinformation has also had a big impact. At times throughout history, science has grown out of practices that were not very trustworthy, and public trust in it may have already peaked. Consider, for instance, the Tuskegee Syphilis Study and the subsequent [Belmont Report](https://en.wikipedia.org/wiki/Belmont_Report).

Some people have the viewpoint that anyone who does not trust scientists/doctors/etc. is a conspiracy theorist or simply not worth talking to. This can be extremely polarising. It is also problematic to assume that everyone from a particular group or profession has the same views.

We were unable to come to a consensus as to whether this phenomenon is getting better or worse. Intellectual superiority in universities reinforces this mindset, creating a bubble of knowledge where you become ignorant of other people’s (or indeed your own) ignorance. It is important that we can admit that we are wrong, but the culture in academia does little to encourage this. There is very public evidence of people being humiliated by colleagues for making relatively minor mistakes. Issues of irreproducibility in scientific results can also undermine trust.

The current Covid-19 pandemic has brought scientific advice to the forefront of how we live our lives. Perhaps as a result, the public trusts statisticians less, much as they distrust the weather forecaster whose predictions frequently turn out to be inaccurate. There is also a difference between what people describe as a data scientist versus a data analyst.

Profession is a big word, and maybe it is not yet fair to claim that there is a “data science” profession. “Any claim I make, I need to back up” is key for scientific disciplines, but not always possible. Assessability is not always achievable, as some data is hidden.

We need to improve public statistical literacy and understanding of how the quantitative circles back to the qualitative. In the future we need to make sure we don’t gatekeep based on education. Value judgements are important: feelings versus facts, and what counts as empirical. Science is not neutral; scientific questions and discoveries are not neutral. Confirmation bias is present in any human endeavour.

What are your thoughts on the use of statistics in the media, and how this contributes to trust/mistrust in scientific evidence by the public?#

People are not always trained to interpret graphs or statistics. Even when we are trained it can be difficult. Language can be very opaque - researchers don’t always put in the effort to communicate complex things to the public, which perhaps links with academics assuming the public are stupid?

Social media can be used for good. Scientists are using it to ask the public questions and build trust by being available to discuss feedback. On the other hand social media is an echo chamber of people engaging with the content that they are interested in, and agree with, and so perhaps these messages aren’t always reaching as far as we would like.

We need to remember that science is not designed to be ingested paper by paper. It is supposed to be a field where messages are shared based on collective evidence. Preprints getting into the news before peer review set people’s expectations too high, and do not represent a field as a cumulative picture, of which a single paper is just a very small rock on a hill.

Building trust with the public is especially important because it’s easier when we already trust someone’s opinion to believe what they say subsequently.

Typically we have observed that the media are not good at dealing with different perspectives. Nowadays they seem to favour a single certainty rather than uncertainty.

Media outlets are constantly struggling with reporting the nuanced viewpoints of scientists in a way that is accessible to a non-scientific audience. They often do this by making large claims with large amounts of certainty in order to optimise the number of reads/clicks. This makes it harder to deliver cautious, caveated results.

Statistics is telling a story with numbers, using quantitative methods to describe qualitative processes. There is always a debate about whether the public should be better educated in maths, or whether the onus is on the storyteller. If academics are getting it wrong, how feasible is it for the general public?

Often when interpreting scientific results, you have to work through a more involved statistical analysis, and it is not clear everyone has the patience to do that. Exponential growth, for example, is unintuitive and hard to get your head around.

How can we improve the ‘assessability’ of data and statistics that are reported in the media?#

The key thing that came across from the lecture was the distinction between accessibility and assessability. The barrier to trust in the data science space is assessability: the data used is often opaque, if not invisible. There is quite a high burden of trust to get over.

Historically, scientists have had trouble communicating uncertainty in science. Science is a process: doing our best given the knowledge available at the time.

As we have seen with discussions on pandemic lockdowns, there is a divergence of viewpoints between different academic fields. The clash of ideas from economists, behavioural scientists, public health officials and others has left many of us wondering who to listen to.

This most recent pandemic is being treated as an optimisation problem with a single “true” answer. People often fail to acknowledge that multiple experts can be correct about multiple different viewpoints. This idea is expanded upon in some of Andy Stirling’s work.

What change would you like to see on the basis of this piece? Who has the power to make that change?#

Not all of us enjoyed this piece, and many of us found ourselves asking: what was the point of this lecture? Some felt that the speaker just repeated a bunch of ideas, which was not very illuminating. The lecture felt quite rambly, although it at least used clear language; points were anecdotal and made in a rather scattershot way.

It was interesting to see the distinction drawn between the freedom of individuals versus corporations, with a bit of an old-school appeal to ideas of individual liberty. This often happens with public outreach, where experts are disappointed with the content compared to a more specialist talk.

We were pointed to these lectures after one of our members recommended the paper Transparency is Surveillance by C. Thi Nguyen, which was somewhat inspired by O’Neill’s lecture. As the paper is 62 pages long, we decided to go for the more accessible, less time-consuming piece. Perhaps, then, our disappointment in the lack of detail of the lectures is a case of wanting to have our cake and eat it.

Published in 2002, this talk avoids talking about social media, yet remains strangely relevant today. We have talked a lot in the past about how unregulated social media is, often overlooking the discussion about regulating newspapers.

Many default (perhaps naively) to assuming that newspapers are a thing of the past. Whilst it is true that younger generations now typically consume their media online, this fails to consider a smaller but still significant proportion of the population who still get most, if not all, of their news from newspapers.

There are similar incentive structures in both media publications and academic journals, encouraging publishing quickly rather than with care. At the end of the day, if your idea was published on the front page, the retractions should be too!

In a blog post on accessibility and assessability, Sir David Spiegelhalter praises the talk and makes the point far better than I could:

“O’Neill makes the fundamental point that when organisations say they want to be trusted, they are missing the whole point. Trust is something that is offered to us, we have to earn it, and we earn it by demonstrating trustworthiness.”

Perhaps this means we need a shift in societal ideas about which outlets to trust, and that we, the public, need to hold those giving us information to a higher standard, approaching them first with scepticism instead of assuming trustworthiness before it is demonstrated to us.


Attendees#

Name, Role, Affiliation, Where to find you

  • Natalie Thurlby, Data Scientist, University of Bristol, NatalieThurlby, @StatalieT

  • Nina Di Cara, PhD Student, University of Bristol, ninadicara, @ninadicara

  • Huw Day, PhDoer, University of Bristol, @disco_huw

  • Paul Lee, investor, @pclee27, senseoffairness.blog

  • Sergio Araujo-Estrada, Research Associate, Aerospace Engineering, University of Bristol

  • Zoë Turner, Data Scientist, Nottinghamshire Healthcare NHS Foundation Trust, Lextuga007

  • Darcy Murphy, Computer Science MSc Student, University of Manchester, darcyamurphy, @darcyamurphy

10 people attended in total