A research team led by Wandi Bruine de Bruin received funding for a project entitled “A behavioural science approach for evaluating communications about climate-related risks and uncertainties” from the Models To Decisions Network sponsored by ESRC, EPSRC, NERC, and AHRC (see http://blogs.exeter.ac.uk/models2decisions/).
University of Leeds: Wandi Bruine de Bruin (Leeds University Business School, University of Leeds), Astrid Kause, Suraje Dessai, Piers Forster, Andrea Taylor (Sustainability Research Institute, School of Earth and Environment, and Priestley International Centre for Climate, University of Leeds)
Met Office UK: Adrian Hines, Neil Kaye, Jason Lowe, and Fai Fung (Applied Climate Science, Met Office).
In the UK, the greatest threats from climate change include heavy rainfall and flooding. According to the UK Climate Change Risk Assessment 2017, flood damage costs UK businesses and communities as much as £1 billion per year. Stakeholders from industry and government therefore face important decisions about preparing for future climate change, even though they may not have a background in climate science. However, communications and visualisations about climate projections that are designed to inform decisions about climate change adaptation may be too complex for non-expert audiences (Lorenz et al., 2015; Taylor et al., 2015).
We conducted interviews in which end users viewed commonly used climate data visualisations. End users were individuals working at water companies, local councils, environmental charities, and media outlets, or as infrastructure consultants. Most were professionally responsible for making decisions about climate change and adaptation, and did not necessarily have a background in climate science. We identified variations in interpretations and potential misunderstandings of the presented visualisations. We then sought recommendations from the graph design and risk communication literature that could be implemented to address these potential misunderstandings.
Here (and also as a PDF), we present these recommendations, which should be useful for improving communications about climate projections to general non-expert audiences. The PDF also contains a reference list.
Read the journal article: Visualizations of Projected Rainfall Change in the United Kingdom: An Interview Study about User Perceptions, Sustainability 2020, 12(7), 2955; https://doi.org/10.3390/su12072955
Our interviewees were confused by terms such as “business as usual” when reading about emissions scenarios, and by acronyms for climate models. Some also struggled with statistical terms such as “probabilistic estimates”, “PDF”, and “boxplots”.
Example quote: “I don’t understand what the RCP8.5 means. I can only assume that it means it’s one of the projections that they’re using, perhaps this business as usual scenario, but it’s confusing and over complicated maybe.”
When presented with communications about probabilistic modelling data, interviewees wanted to know more about the meaning of terms such as “probabilistic estimate” or “percentile”.
Example quote: “And then the – all the language around, in the next sentence, the projections are probabilistic, the 50th percentile is a central estimate, that makes sense, then it talks about values are very unlikely to be greater than 90th percentile and less than 10th percentile. I’m not sure sort of – unless you’re saying that the 90th percentile and the 10th percentile are definitions of very unlikely, then I’m not sure where – why that language is included.”
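The percentile language interviewees stumbled over has a precise meaning: the 50th percentile is the central estimate, and values below the 10th or above the 90th percentile are labelled “very unlikely” because only 10% of the probabilistic projections fall on either side. The sketch below illustrates this with a hypothetical ensemble of projected rainfall changes (the distribution and its parameters are illustrative assumptions, not UKCP data):

```python
import numpy as np

# Hypothetical ensemble of projected percent changes in winter rainfall;
# the normal distribution and its parameters are purely illustrative.
rng = np.random.default_rng(42)
projected_change = rng.normal(loc=10.0, scale=8.0, size=10_000)  # percent

# The 50th percentile is the central estimate. Values are "very unlikely"
# to fall below the 10th percentile or above the 90th percentile:
# by construction, only 10% of projections lie on either side.
p10, p50, p90 = np.percentile(projected_change, [10, 50, 90])
print(f"10th percentile:          {p10:+.1f}%")
print(f"central estimate (50th):  {p50:+.1f}%")
print(f"90th percentile:          {p90:+.1f}%")
```

Roughly 80% of the sampled projections fall between the 10th and 90th percentiles, which is what the “very unlikely” phrasing in the caption is trying to convey.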
In climate projections, future change is often communicated through comparisons to past time periods. Our interviewees thought that it was not always clear why past time periods were chosen as a comparison period, or which future time period was projected. They found it especially confusing when past and future time periods differed in length (e.g. 30 years versus 20 years).
Example quote: “So the first thing is that it’s against a long-term average, which is from 1981 to 2010. So I’m thinking, what’s the difference between this and actually pre-industrial?”
Interviewees found projections of relative percent change in rainfall confusing, and did not always understand what ‘positive’ or ‘negative’ change meant. Because they lacked information about baseline values, it was unclear how much one percent actually represented in millimetres.
Example quote: “… what does it actually mean in terms of how much dryer or how much wetter it is? It feel like it’s – for a specialist audience this will mean something because they already know the background and they already know how much rainfall there is and they would know how much 20% would really matter to various things … what does this actually mean in amount of rainfall rather than the difference?”
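The conversion the interviewee is asking for is simple arithmetic once a baseline is supplied. A minimal sketch, assuming an illustrative baseline of 600 mm (not a figure from the projections themselves):

```python
# Hypothetical worked example: turning a relative percent change in
# rainfall into millimetres. The baseline value is an assumption chosen
# for illustration, not taken from any published projection.
baseline_mm = 600.0      # assumed 1981-2010 average rainfall (mm)
percent_change = 20.0    # projected relative change (%)

change_mm = baseline_mm * percent_change / 100.0
projected_mm = baseline_mm + change_mm
print(f"A {percent_change:.0f}% increase on {baseline_mm:.0f} mm "
      f"is {change_mm:.0f} mm, giving {projected_mm:.0f} mm in total.")
```

Reporting both the percent change and the implied millimetres, as above, would answer the interviewee's question directly.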
Interviewees requested a better match between visualisations and the associated text. Terms that appear in a visualisation should be explicitly referred to and explained in the text, and captions, headers, and main text should use consistent wording. Text and visualisations are also easier to understand when they are presented close together.
Example quote: “it says that the map shows annual percentage difference from the long term, 1981 to 2010 average and the average between – so if you’re saying that’s also looking at post-2061. Those years aren’t written on the maps at all.”
Our interviewees were often unable to identify the features of the visualisations that were most relevant for understanding the main message. Visualisations about climate projections should therefore be designed so that the most relevant features draw the most attention, for example by increasing their size and using salient colours.
Example quote: “I find that the plot details box thing on top is quite prominent. I think that would be better placed elsewhere, maybe to help the reader concentrate on the actual data instead of the computer output, really.”
Interviewees noted that visualisations about climate projections tended to contain too much information, for example about different seasons, multiple emissions scenarios, or multiple probabilistic thresholds. Understanding will be improved by presenting one visualisation for each key message, and removing distracting ‘visual clutter’ that does not pertain to the main message. If there is more than one key message, perhaps more than one visualisation is needed.
Example quote: “I think there is an awful lot of information on these that is presented in a way that I’m not used to seeing. So I’m trying to understand actually what the graph is framed about and I think the track of observed rainfall helps you to put that into context …”
Before disseminating communication materials, we recommend conducting think-aloud interviews with intended audience members to assess whether they find the visualisations understandable and useful. Such interviews may also identify confusion and misunderstandings, as well as design strategies for avoiding them. If time and funding permit, additional survey-based experiments with larger samples can be used to systematically test whether new visualisation designs are better than old ones.