Thoughts on "Visual Analytics and Uncertainty: It's Not About the Data"

MacEachren, A.M. (2015). Visual Analytics and Uncertainty: It's Not About the Data. In Bertini, E., & Roberts, J.C. (Eds.), Proceedings of the 2015 EuroVis Workshop on Visual Analytics.

Objective

In his own words, MacEachren is aiming to "provide a base for developing ... a conceptual framework to understand and facilitate visually-enabled reasoning/decision-making under uncertainty and to use that framework to develop visual analytics methods to achieve this objective".

Summary

MacEachren's primary contention in this paper is that too little research has examined how visual interfaces actually facilitate reasoning under uncertainty, as opposed to simply visualizing the uncertain components of data for consumption by sense-makers. He suggests that one step toward more meaningful, comparable studies is to create a framework describing the nature of uncertainty, the problems it poses, and the methods for assessing tool effectiveness when reasoning under uncertainty.

MacEachren draws the foundation of his framework from three other papers: Kahneman and Tversky (1982), who segmented uncertainty into distributional, singular, reasoned, and introspective variants, the first two attributed to external sources of uncertainty and the latter two to internal ones; Courtney (2003), who defined four levels of uncertainty related to the possible outcomes of a decision (a clear enough future, a choice among alternative futures, a range of futures, and true ambiguity); and Zack (2007), who divided the broader space of uncertainty into four sub-categories (uncertainty, complexity, ambiguity, and equivocality) based on the amount of information or knowledge available. These classifications allow researchers, designers, and developers to understand where uncertainty enters the reasoning process and how to adapt their tools to assist the user.
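To make these distinctions concrete, below is a minimal sketch, in Python, of how a visual analytics tool might annotate a derived value with the source and level of its uncertainty so that downstream views could choose an appropriate encoding. This is my own illustration, not anything proposed in the paper, and all class and field names are hypothetical.

    from dataclasses import dataclass
    from enum import Enum

    # Kahneman and Tversky (1982): four variants of uncertainty.
    class Variant(Enum):
        DISTRIBUTIONAL = "distributional"
        SINGULAR = "singular"
        REASONED = "reasoned"
        INTROSPECTIVE = "introspective"

    # The first two variants attribute uncertainty to the external world;
    # the latter two attribute it to the analyst's own state of knowledge.
    EXTERNAL_VARIANTS = {Variant.DISTRIBUTIONAL, Variant.SINGULAR}

    # Courtney (2003): four levels describing how predictable the
    # outcomes of a decision are.
    class Level(Enum):
        CLEAR_ENOUGH_FUTURE = 1
        ALTERNATIVE_FUTURES = 2
        RANGE_OF_FUTURES = 3
        TRUE_AMBIGUITY = 4

    # Zack (2007): sub-categories of the broader uncertainty space,
    # distinguished by the information or knowledge available.
    class Shortfall(Enum):
        UNCERTAINTY = "uncertainty"
        COMPLEXITY = "complexity"
        AMBIGUITY = "ambiguity"
        EQUIVOCALITY = "equivocality"

    @dataclass
    class Estimate:
        """A derived value annotated with the provenance of its uncertainty."""
        value: float
        variant: Variant
        level: Level
        shortfall: Shortfall

        def is_external(self) -> bool:
            # True if the uncertainty is attributed to the world rather
            # than to the analyst's own reasoning.
            return self.variant in EXTERNAL_VARIANTS

    # Example: a forecast whose uncertainty is distributional (external),
    # spans a bounded range of futures, and stems from missing information.
    forecast = Estimate(42.0, Variant.DISTRIBUTIONAL,
                        Level.RANGE_OF_FUTURES, Shortfall.UNCERTAINTY)
    print(forecast.is_external())  # True

Even this toy encoding hints at why MacEachren wants a shared framework: without agreed-upon categories like these, two tools could not exchange or compare their uncertainty annotations.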

At the end of the paper, MacEachren outlines the following challenges facing future research in this area:

understand components of uncertainty and their relationships to: use domains, information needs, and expertise;

understand how knowledge of uncertainty (or lack of it) influences reasoning, decision making, and outcomes;

understand how (or whether) uncertainty visualization aids/hinders exploratory analysis, reasoning, and decisions;

leverage understanding to develop useful/usable methods/tools to: signify multiple kinds of uncertainty; interact with uncertainty depictions; support reasoning/decisions under uncertainty; capture/encode analysts' uncertainty;

assess usability and utility of the methods/tools — design studies for reproducibility and comparability.

Thoughts and Reactions

I think this paper is a step along the right path. It provides the components of the framework MacEachren is trying to build, but more work is needed to formalize them into an actionable set of guidelines that can inform tool evaluation in fields like human factors and cognitive science.

I agree with MacEachren's assertion that the reasoning tasks in most studies are not representative of real-world scenarios. Unfortunately, this is a difficult problem to solve. In particular, I see two large challenges in creating and evaluating these tasks.

From the creation standpoint, it is possible (perhaps even easy) to design a sufficiently complex reasoning task. However, defining a task in a way that makes it comparable with others is very difficult. The most efficient solution to this problem, from my perspective, is to curate a standard list of reasoning tasks that can be used in evaluations across tools.

From the evaluation standpoint, academic researchers, especially those in a university setting, are often limited in the subject pools they can draw from. These sorts of decision-making tasks often require a certain level of experience or expertise to be relevant to practitioners in a domain, yet subject pools at universities tend to consist of novice undergraduate students, making it difficult to infer meaningful relationships between tools and reasoning processes. A possible partial solution would be a move toward more targeted participant selection techniques, using services like Subjects Wanted. Additionally, there is some question, at least in my mind, of which methods (e.g., interaction logging, eye tracking) and metrics (e.g., speed, accuracy, precision) should be used to compare performance across tools and domains.

One thing I believe is missing from MacEachren's initial list of challenges is an explicit acknowledgement of collaboration. Reasoning is increasingly a team activity, with mixed teams of generalists and specialists working to understand the outcomes associated with decisions and their courses of action. I know that in my own work we have had to think about and account for changes in metric calculations in team-based analysis. A specific callout here may help keep researchers focused on understanding uncertainty in dynamic team environments.

Related Reading

Courtney, H. (2003). Decision-driven Scenarios for Assessing Four Levels of Uncertainty. Strategy & Leadership, 31(1), 14–22.

Kahneman, D., & Tversky, A. (1982). Variants of Uncertainty. Cognition, 11(2), 143–157.

Zack, M.H. (2007). The Role of Decision Support Systems in an Indeterminate World. Decision Support Systems, 43(4), 1664–1674.