Columns

XXIX.3 May - June 2022
Page: 19

Understanding uncertainty on a map is harder than you think


Authors:
Lace Padilla


We learn about maps from an early age by playing with them in school, drawing ones of our neighborhoods, and locating distant lands with different customs and traditions. Maps can help us explore the world and find our way back home.

However, all maps are merely small-scale depictions of areas with varying degrees of accuracy. According to some semioticians, maps are icons that represent physical features of the world, sometimes resembling a bird's-eye view. It is impossible for these two-dimensional icons to faithfully represent Earth (a three-dimensional sphere). Maps also have symbols layered on them that represent nonphysical information like borders or wind-speed glyphs. Symbols do not look like what they represent, and we must learn their meaning. Though the task seems simple, it is somewhat miraculous that most people can mentally connect these 2D icons, layered with abstract meaning, to their 3D world.

With a lifetime of map use, we may feel very comfortable reading and using them for a wide range of tasks. However, one common symbol layered on maps is harder to understand than you might think, with real consequences for health and safety.

For this column, I spoke with Max Schneider from the United States Geological Survey about how challenging it can be to understand maps that show uncertain forecast data and new visualization approaches that could help.

For decades, researchers have used maps to show weather forecasts, natural disasters, economic forecasts, and many other predictions. More recently, science communicators have widely adopted visual ways of showing forecasts to the public, including the ubiquitous Covid forecast maps.


The widespread use of forecast maps to communicate risk to the public raises the challenge of ensuring that the average person can accurately understand their danger from the map. This challenge is substantial because a person must glean two pieces of information from the map: 1) the risk and 2) the uncertainty in the forecast.

Showing the forecast and its uncertainty is necessary for honest and transparent risk communication. Schneider explains:

When modeling something that varies over a map, the amount of uncertainty about what we're modeling often varies over the map as well. There could be some pretty bad outcomes if we don't depict that. People using a forecast map could think that there is no uncertainty related to the values on the map. They might think that precision is much higher than it is, leading to false conclusions and bad decisions. Worse yet, viewers may develop their own idea of where the forecast has higher or lower uncertainty.

Cognitive science research has found that knowing the uncertainty in a forecast is vital for effective decision making [1]. When communicators do not show the uncertainty, viewers will assume uncertainty in a forecast [2] and modify their judgments based on their guesses about the uncertainty.

Most forecasters want to convey the uncertainty on a map in addition to the forecast. But they may decide not to show this information because all currently available visualization methods have caveats, and some lead to serious confusion about risk.

Take, for example, the hurricane forecast and statement in Figure 1. Why is the assertion wrong?

Figure 1. Hurricane Ida forecast produced by the National Hurricane Center on August 28, 2021, showing the storm's predicted path toward the Mississippi Delta region and the uncertainty around that path. This figure includes the statement, "This forecast predicts that the storm will get larger over time."

Careful readers and those with hurricane experience will notice the label on the hurricane forecast that reads "Note: The cone contains the probable path of the storm center but does not show the size of the storm." The National Hurricane Center added this label because many people misinterpreted the cone's increasing size as the storm's physical size growing. The goal of the hurricane forecast is to convey the most likely path of the storm (the mean forecasted path shown with a black center line) and the uncertainty around that forecasted path (a 66 percent confidence interval around the mean predicted path). The cone's diameter increases because it is harder to predict where the storm's eye will be in 72 hours compared with 24 hours. The cone indicates that the uncertainty in the forecast increases over time, not that the size of the storm will grow.
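The relationship between lead time and cone width can be made concrete with a small sketch. The error values below are illustrative only (they are not NHC data), but the construction mirrors the description above: the cone's radius at each lead time is the half-width of a roughly 66 percent interval on the track error, so the cone widens because the error grows, regardless of the storm's size.

```python
import math

# Hypothetical track-error standard deviations (km) by forecast lead time.
# These numbers are illustrative, not actual National Hurricane Center data.
lead_hours = [12, 24, 48, 72, 96, 120]
sigma_km = [30, 50, 90, 130, 170, 220]

# Half-width of a two-sided 66% interval under a normal error model:
# P(|Z| <= z) = 0.66  =>  z ~ 0.954
z_66 = 0.954

for hours, sigma in zip(lead_hours, sigma_km):
    radius = z_66 * sigma
    print(f"{hours:>3} h lead time -> cone radius ~ {radius:.0f} km")
```

The radius grows monotonically with lead time even though nothing about the storm's physical extent appears anywhere in the calculation, which is exactly the distinction the cone's label tries to communicate.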

Given the issues with the cone, why doesn't the National Hurricane Center use a different visualization technique? Finding a more effective visualization to show geospatial uncertainty is more complicated than one might think.

There are two common approaches to visualizing uncertainty on a forecast map, and each has limitations. The first is to layer symbols that represent uncertainty (e.g., transparency, color, hash marks, or blur) on top of the forecast map. The problem with this approach stems from how we first learn to use maps as children.

In most schools, students read maps from a young age, learning rules and conventions to help them translate our complex 3D world into two dimensions. A typical exercise for students learning about maps involves measuring the size of continents or distances between locations. They then use a scale on the map and some simple math to approximate these distances in the real world.

The concept that physical distance on a map correlates with physical distance in the world may seem innocuous, but it has created a massive problem for uncertainty communicators. For example, with the cone of uncertainty in Figure 1, the area of the cone does not correspond to the distance on the map. When first viewing the cone, most people use conventions they learned about maps and make the automatic assumption that distance on the map represents distance in the world, leading to the reasonable but incorrect belief that the cone shows the storm's size.

To interpret the cone of uncertainty correctly, viewers must:

  • Recognize that it is different from standard maps
  • Suppress a lifetime of strategies they've developed for reading more-typical maps
  • Remap the diameter of the cone to the uncertainty in the storm's forecasted path

The cone of uncertainty is problematic because the base-layer map uses a standard set of cartographic conventions (i.e., map area = physical distance), and the second layer of symbolic uncertainty information uses different conventions (i.e., area = probability density). All layers should adhere to the same conventions; otherwise, people will become rightfully confused.

A second common approach is to show the uncertainty in the forecast in a separate map. Schneider and collaborators illustrate this approach in the earthquake aftershock forecast map in Figure 2.

Figure 2. Example of an earthquake aftershock forecast map (left) that shows the median forecasted number of aftershocks using different colors; the map on the right shows the uncertainty in that forecast using a gray scale.

This side-by-side map approach does not have the same problems as the layered technique in Figure 1. Notably, viewers need to use only one convention per map, likely producing fewer biases. But this approach has its caveats. The side-by-side method requires the viewer to mentally combine the forecasted aftershocks with categorical levels of uncertainty (high, medium, and low), which could be problematic for some viewers. Researchers find that understanding the statistical basics of uncertainty/probability is challenging for both experts and the general public. For example, one study found that 16 to 20 percent of college-educated people could not correctly answer the question, "Which represents the larger risk: 1 percent, 5 percent, or 10 percent?" [3].

In an empirical study, Schneider and collaborators compared the side-by-side approach to a new technique. Describing their method, Schneider states:

In statistics, when we want to show the uncertainty around a prediction or an estimate, we often give a confidence interval (or other type of interval) showing the range of values that would be consistent with the model. There is uncertainty for every location on a forecast map, and we can calculate an interval for each location. On one map, we show the lower bound of the number of aftershocks that's consistent with our model. Then we show the upper bounds of aftershocks on another map. These lower (optimistic) and upper bound (pessimistic) maps are another way of communicating the uncertainty around our forecast, or the range of values that are consistent with the forecast distribution (Figure 3).
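The per-location intervals Schneider describes can be sketched in a few lines. The snippet below assumes a Poisson forecast model, a 4x4 grid, and a 95 percent interval; all three are illustrative choices, since the column does not specify the model, grid, or interval width used in the actual study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical forecast ensemble: 500 simulated aftershock counts for each
# cell of a 4x4 map grid (illustrative data, not USGS output).
n_sims, grid = 500, (4, 4)
rates = rng.uniform(0.5, 8.0, size=grid)           # per-cell expected counts
ensemble = rng.poisson(rates, size=(n_sims, *grid))

# Per-cell summary maps: the median forecast plus the lower and upper
# bounds of a 95% interval (the "optimistic" and "pessimistic" maps).
median_map = np.median(ensemble, axis=0)
optimistic = np.quantile(ensemble, 0.025, axis=0)
pessimistic = np.quantile(ensemble, 0.975, axis=0)

# Every cell's optimistic value is at most its pessimistic value, so the
# pair of maps brackets the range of outcomes consistent with the model.
assert (optimistic <= median_map).all() and (median_map <= pessimistic).all()
```

Rendering `optimistic` and `pessimistic` as two color maps side by side yields the style of display shown in Figure 3: viewers read each map with ordinary cartographic conventions and infer uncertainty from the difference between them.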

Figure 3. Example of aftershock forecasts showing optimistic (left) and pessimistic (right) estimates, representing the lower and upper bounds of the number of aftershocks consistent with the forecast model.

The innovative element of the optimistic/pessimistic forecast approach is that viewers need to use only one map convention (e.g., comparing the difference in colors between the two maps to assess the uncertainty at a given location). The maps do not require viewers to understand probability. Instead, viewers can compare a range of outcomes that could reasonably occur.

Schneider and collaborators compared the optimistic/pessimistic map to the side-by-side and layered map approaches. When viewers used the optimistic/pessimistic maps, they were better able to understand the implications of uncertainty and to identify that aftershocks were more likely in areas of high uncertainty, all else being equal. This result suggests that the optimistic/pessimistic maps more effectively conveyed when extremes are possible, which is critical information for hazard forecasts.

However, as with all the other uncertainty visualizations, the optimistic/pessimistic map has caveats. It produced slightly worse performance when viewers were asked to report the level of uncertainty. Because the uncertainty was not shown directly, viewers understandably had a harder time reporting it. Experts may need to read the exact uncertainty level in an area, and the optimistic/pessimistic map would be less effective for such users.

Schneider provides the following advice for people who want to know if they need to prepare for more earthquakes:

First, find your location on the maps. The most likely outcome for the next week will be between the two maps, which is important to know. But you should also prepare yourself for the worst-case scenario shown in the pessimistic map…. Know that the worst-case scenario is still consistent with the forecast model. It could happen, and making yourself prepared for that outcome is key to being resilient to natural disasters.

Due to climate change, the world will experience a rise in the frequency and severity of hazards. Risk communicators are making substantial advancements in communicating hazard forecasts and uncertainty, but no one-size-fits-all approach exists. When viewing any forecast, attempt to determine the most likely event(s), but also prepare yourself for the extremes that the forecast shows. By becoming more resilient to extreme circumstances, we will learn to adapt to uncertainty as a society.

Maps are a powerful tool to support uncertainty adaptation but can also confuse when poorly designed. Data visualization researchers are actively developing and testing new methods to convey the abstract concept of uncertainty in more-accessible ways. As uncertainty has no physical form, map viewers must also be willing to learn new symbols. Luckily, humans have a miraculous ability to integrate such icons and symbols, as long as designers refrain from creating maps with conflicting conventions.

References

1. Castro, S.C., Quinan, P.S., Hosseinpour, H., and Padilla, L. Examining effort in 1D uncertainty communication using individual differences in working memory and NASA-TLX. IEEE Transactions on Visualization and Computer Graphics 28, 1 (2021), 411–421.

2. Joslyn, S. and LeClerc, J. Decisions with uncertainty: The glass half full. Current Directions in Psychological Science 22, 4 (2013), 308–315.

3. Lipkus, I.M., Samsa, G., and Rimer, B.K. General performance on a numeracy scale among highly educated samples. Medical Decision Making 21, 1 (2001), 37–44. https://doi.org/10.1177/0272989X0102100105

Author

Lace Padilla studies decision making with uncertainty visualizations. Her work focuses on improving forecast visualizations to help people make high-risk decisions such as hurricane evacuation or managing Covid-19 risks. She is also a strong advocate for diversity, serving on the IEEE VIS diversity committee and the SPARK Society governing board. lace.padilla@ucmerced.edu

Sidebar: FURTHER READING

Aftershock forecast maps: Schneider, M., McDowell, M., Guttorp, P., Steel, E.A., and Fleischhut, N. Effective uncertainty visualization for aftershock forecast maps. Natural Hazards and Earth System Sciences Discussions (2021), 1–39.

Alternative hurricane forecast visualizations: Liu, L., Padilla, L., Creem-Regehr, S.H., and House, D.H. Visualizing uncertain tropical cyclone predictions using representative samples from ensembles of forecast tracks. IEEE Transactions on Visualization and Computer Graphics 25, 1 (2018), 882–891.


Copyright held by author

The Digital Library is published by the Association for Computing Machinery. Copyright © 2022 ACM, Inc.
