
Environment and Planning B: Planning and Design 2013, volume 40, pages 213 – 233

doi:10.1068/b38159

Evaluation of data visualisation options for land-use policy and decision making in response to climate change

Ian D Bishop
Department of Infrastructure Engineering, The University of Melbourne, Melbourne 3010, Australia; e-mail: [email protected]

Christopher J Pettit
Faculty of Architecture, Building and Planning, The University of Melbourne, Melbourne 3010, Australia; e-mail: [email protected]

Falak Sheth, Subhash Sharma
Victorian Department of Primary Industries, 32 Lincoln Square North, Carlton, Victoria, Australia; e-mail: [email protected]; [email protected]

Received 6 September 2011; in revised form 10 January 2012

Abstract. Decision makers are facing unprecedented challenges in addressing the likely impacts of climate change on land use. Changes to climate can affect the long-term viability of certain industries in a particular geographical location. Government policies in relation to provision of infrastructure, management of water, and incentives for revegetation need to be planned. Those responsible for key decisions are unlikely to be expert in all aspects of climate change or its implications, and thus require scientific data communicated to them in an easily understood manner with the scope to explore the implications. It is often felt that a range of visualisation techniques, both abstract and realistic, can assist in this communication. However, their effectiveness is seldom evaluated. In this paper we review the literature on processes for the evaluation of visualisation tools and representations. From this review an evaluation framework is developed and applied through an experiment in visualisation of climate change, land suitability, and related data using a variety of tools and representational options to advance our knowledge of which visualisation technique works, when, and why. Our region of interest was the south-western part of Victoria, Australia. Both regional and local data and their implications were presented to end users through a series of visualisation products. The survey group included policy makers, decision makers, extension staff, and researchers. They explored the products and answered both specific and exploratory questions. At the end of the evaluation session their knowledge and attitudes were compared with those at the commencement and they were also asked to assess the visualisation options subjectively. The findings relate to both the visualisation options themselves and the process of evaluation. The survey group was particularly keen to have access to multiple interactive tools and the ability to see scenarios side-by-side within a deeper informational context. A number of procedural recommendations for further evaluation were developed, including the need for consistency in approach among researchers in order to develop more generalisable findings.

Keywords: visualisation, evaluation, land use, climate change

1 Introduction
Data visualisation, as first defined by McCormick et al (1987), now takes many forms. Defined forms are various and sometimes vague—including scientific visualisation, geographic visualisation, landscape visualisation, and information graphics—however, all forms are targeted at improving either our understanding of data or the communication of information.


In this paper we are specifically interested in the communication of information to people who must make decisions in a complex environment. More specifically, the context of interest is rural land-use management in the situation of anticipated climate change. Stakeholders and decision makers are facing unprecedented challenges in addressing land productivity and sustainability issues associated with a changing climate. While seasonal and annual weather changes confront the individual farmer with uncertainties about which crops to plant or stocking rate to adopt, changes to climate can affect the long-term viability of certain industries in a particular geographical location (Khan 2008). These likely changes need to be anticipated so that policies and decisions in relation to provision of infrastructure, management of water, incentives for revegetation, and other responsibilities of government can be planned effectively and eventually implemented as needed rather than as a delayed response.

The premise of this research was that those responsible for key decisions are unlikely to be expert in all aspects of climate change or its implications and hence need information presented in ways that are clear, digestible, and yet nontrivial. This context means that our focus is geographic visualisation with data and information that vary across a geographic region of interest to decision makers. This information can be complex or simple in nature and can be communicated at a variety of scales (farm, landscape, regional–national, or continental–global) showing past and present conditions as well as future scenarios (such as under a changing climate) (Pettit et al, 2012). Examples include:

● At farm–landscape scale: the Victorian Department of Primary Industries’ geographical visualisation portal(1) provides information on climate-change futures and how these might impact farms and regions in south-west Victoria.
● At regional–national scale: the Australian Bureau of Meteorology(2) provides information on climate trends and variations based on data, observation, and forecasts via its website.
● At continental–global scale: Google Earth’s (GE) climate-change site(3) and NASA’s Climate Time Machine(4) provide visualisations of the Earth under climate change.

There are two key elements to geographic visualisation: the graphical representation of the data, and the user interface and associated data access tools. In some instances the representation is the key to the success of the visualisation; in other situations the interface and the interaction with data that this provides is central to success. Visualisation research involves both representational and interactional innovation for complex systems, involving the development of new strategies for visual presentation and data exploration. MacEachren and Kraak (1997) identified these different aspects of visualisation and the different uses according to the focus of, and audience for, the visualisation.

At one end of the spectrum of visualisation applications, MacEachren and Kraak (1997) identified ‘present’: that is, presentation of known information to a public audience with limited scope for interaction. Visualisations at this end of the spectrum fall into two main categories with the following objectives:

● to communicate information effectively, often in abstract form, for cognitive processing;
● to create a simulation of an environmental situation, usually in a visually realistic way, to elicit affective responses.

At the other end of their visualisation spectrum is the ‘explore’ mode, which includes applications that allow high levels of interaction in private use for the discovery of previously unknown data relations. The quality of graphic representation may be important but is not the main feature for evaluation in such systems. Visualisations at this end of the spectrum fall into two main categories with the following objectives:

● to allow personal or collaborative exploration of information to aid scientific insight;
● to support decision making in complex data-rich situations.

While the technology moves rapidly and researchers find innovative ways to extend their visualisation repertoire, far less effort has been spent on judgment of the effectiveness and utility of the tools (Pettit et al, 2011). A significant challenge is the development of evaluation criteria and procedures through which to judge the success of a visualisation product.

(1) http://www.dse.vic.gov.au/dpi/vro/vrosite.nsf/pages/geovis
(2) http://www.bom.gov.au/climate/change/
(3) http://www.google.com/landing/cop15/
(4) http://climate.nasa.gov/ClimateTimeMachine/climateTimeMachine.cfm

In section 2 of this paper we review visualisation principles and evaluation with particular reference to the four categories of visualisation application outlined above. From this review an overall evaluation framework appropriate to the specific circumstances and application for natural-resource management and agriculture is developed (section 3). In turn, a specific approach and methodology for immediate application within the Victorian Climate Change Adaptation Program (VCCAP) was developed (section 4). The outcomes of an application of this methodology are reported in section 5 and this is followed by conclusions and recommendations (section 6).

2 Visualisation and evaluation: principles and approaches
In this section we review some of the key criteria which should be considered in relation to visualisation products. The kinds of procedures typically used in empirical research and validity criteria which should be applied to any systems testing are also investigated. Literature evaluating the four categories of visualisation application (as outlined in section 1) is examined.

2.1 Visualisation principles
Visualisation products created by one party and made available to others should meet certain basic criteria for their suitability for use. Sheppard (1989, 2001) gave guidance through his definition of the key principles of understanding, credibility, and lack of bias. Not only does the visual material need to perform its role in communication, it also needs to be accepted by the end user as accurate information coming from reliable sources (supported by appropriate documentation) and as not being selective, unclear, or sensationalised in ways that distort the message. Knight (2001), on the other hand, identified basic criteria of functionality, effectiveness, efficiency, usability, and usefulness. The gap between the two indicates the need for visualisation producers to create tools and representations that assist, rather than persuade, a range of consumers. These also need to be made available at acceptable costs.

2.2 Evaluation principles
In general, a good evaluation procedure will meet a range of validity criteria, provide sufficient precision to be useful, and be generalisable beyond the immediate subject group. McGrath (1995) identified these criteria as realism, precision, and generalisability but pointed out that no evaluation approach is capable of providing the best outcome for all three factors. For example, a field study of users may maximise the realism of the findings because it takes place in the most realistic setting for the user. However, because the conditions cannot be controlled, the precision and generalisability of the findings are limited. A laboratory experiment can provide precision through strong experimental control but lacks realism. As will be proposed later, some combination of approaches can support the multiple aims.

Many authors have distinguished content, criteria-related, and construct validity. Brown (1996), however, argues that all can be considered as aspects of a unified concept of construct validity. Whatever the nomenclature, it is important that the evaluation process includes a suitable range of items, assesses changes in attitude or behaviour, and is measuring what it purports to measure. Brown (1996) identifies the areas in which validity can be challenged: the environment of the test administration, administration procedures, examinees, scoring procedures, and test construction (or quality of test items). In all, evaluation of visualisation is a complex undertaking, which is reflected in both the limited number of publications on the subject and, according to some (see section 2.3), the low quality of findings. The challenge is to ensure validity when dealing with very large numbers of potential variables, and to test subjects with diverse backgrounds and interests. While the tools typically require extended use (weeks or months) for increased familiarity, and hence usefulness, testing over more than two hours is problematic logistically.

2.3 Visualisation evaluation
In this subsection we review the literature on the evaluation of visualisation, primarily based on a number of review articles. The review is divided into the four categories of visualisation application:

● effective communication of information;
● elicitation of corresponding cognitive and affective responses;
● personal exploration of information to aid scientific insight;
● decision support.

It was not possible to work only with evaluation of geographic visualisation because of the paucity of relevant work. However, many of the concepts apply across visualisation styles as well as across visualisation applications.

2.3.1 Information communication
Ellis and Dix (2006) reviewed sixty-five papers reporting a range of new information-visualisation applications or techniques. Only twelve of these included any evaluation and, of these, they quickly dismissed ten as either flawed or trivial. The kinds of problems discussed in the paper included:

● studies with foregone conclusions, where the stimuli presented to the users were so different in their qualities that no other result could have been anticipated;
● the wrong sort of experiment, where the key variables were not identified adequately or there was inadequate control over confounding variables;
● fishing for results, in a manner similar to studies with foregone conclusions, but wishing to establish the value of a particular solution.

Of the two studies they commended, one used an iterative process in which the results of one study informed the design of the next and the participants returned for this series of experiments. The other employed users with a sound existing knowledge of the domain and the datasets being used. In both cases the users were in a position to comment knowledgeably on the visualisation approaches.

Ellis and Dix (2006) argue that visualisations are not something that have value in their own right (unlike art, for example) but only in their ability to yield context-dependent results. They apply the term ‘generative artefacts’ and argue that for such artefacts the outcome is user dependent and very case specific, and any testing is of low generalisability. Indeed, they conclude that “for the validation of generative artefacts empirical evaluation is methodologically unsound” (page 5). However, they provide a measure of hope and argue that an empirical approach can be used in conjunction with a justification, or reason-based, approach. A pure reason-based approach is not possible because of our incomplete knowledge of human perception and cognition. However, when a theoretical justification is combined with empirical investigations, which explore the gaps in the justification, a strong validation is possible. An exploratory approach to evaluation that asks questions such as ‘what kind of things is this representation/tool good for?’ is recommended.


Schroth et al (2009) adopt robust methodological justification and use both quantitative and qualitative (mixed) methods in their work. They evaluate the use of multidimensional navigation (defined by the authors as “the combination of spatial, temporal and thematic navigation”) in facilitation of “understanding of complex climate change impacts and adaptation and mitigation options.” Their approach was based on the collection of multiple sources of evidence following protocols described by Yin (2003). There were three main stages to an assessment of community workshops held in regional British Columbia:

● use of preworkshop and postworkshop questionnaires with analysis of the participants’ changes in awareness and understanding;
● subjective ranking of the visualisation media used in the workshop;
● in-depth user interviews documenting the experience of using multidimensional navigation within a virtual globe (GE) model.

Pettit et al (2011) also used mixed methods to identify some of the strengths and weaknesses in using virtual globes to communicate land-use-change scenarios under a changing climate. The study evaluated responses from environmental managers and planners as well as the future-user community and showed that different user groups evaluate visualisation products against different criteria.

2.3.2 Cognitive and affective responses
Several researchers (Appleton and Lovett, 2003; Bergen et al, 1995; Bishop and Rohrmann, 2003; Lange, 2001) have been keen to establish the validity of realistic landscape or urban visualisation as a tool for illustrating environmental change or eliciting landscape preferences. This work has tended to concentrate on determination of similarities of affective response, such as scenic beauty, rather than on cognitive responses, such as wayfinding. The primary mechanism for these evaluations has been to compare responses to computer-generated images with responses to either the real environment (Bishop and Rohrmann, 2003), photographs (Bergen et al, 1995; Lange, 2001), or other computer-generated imagery (Appleton and Lovett, 2003). The point of comparison may be either a judgment of the effectiveness of the representation (Appleton and Lovett, 2003)—although an individual’s ability to judge the quality of a landscape surrogate was cast into doubt by Daniel and Meitner (2001)—or the affective response revealed through preference ratings (Daniel and Boster, 1976) or semantic differentials (Russell and Lanius, 1984).

When the visualisation options include realistic or semirealistic representations, it is important to be especially clear about the purpose of these and to evaluate accordingly. A complex set of purposes including both affective and cognitive components, possibly also involving persuasion as well as information, may be involved—as in the recent work of Sheppard et al (2008).

2.3.3 Scientific discovery
This area of visualisation application typically involves large volumes of data arising from complex relationships and processes. The goal is to help researchers direct their attention and so gain analytical insights. This has also been referred to as visual analytics. Scholtz (2006) discusses evaluation of visual analytic environments and argues for a set of evaluation metrics and methodologies. The identified areas for evaluation are:

● situation awareness, including perception, comprehension, and projection;
● collaboration, such as supporting people with different domain expertise;
● interaction, such as including a large number of options with the key being suitability for the task;
● creativity, such as asking if new alternatives and solutions emerge in the use environment;
● utility, such as the inclusion of better products, reduced time, lower cognitive workload, and confidence in the outcomes.

Scholtz (2006) suggests some approaches to measurement in these areas, including contests between systems, but does not recommend any type of evaluation study.

Plaisant (2004) reviewed approximately fifty studies of information visualisation. These were primarily tools used by individual scientists for data exploration and discovery. She concluded that there are four main types of evaluation study:

● controlled experiments comparing design elements such as specific widgets (eg comparing alpha slider designs) or mappings of information to graphical display;
● usability evaluation of a tool providing feedback on the problems encountered by users;
● controlled experiments comparing two or more tools, usually a novel technique against the state of the art;
● case studies of tools in realistic settings.

The advantage of case studies using domain experts, as recommended by Ellis and Dix (2006), is that they report on users in their natural environment doing real tasks, demonstrating feasibility and in-context usefulness. The disadvantage, as discussed by McGrath (1995), is that they are time consuming to conduct and results may not be replicable and generalisable.

2.3.4 Decision making
There is very little literature in this area. A prima facie case and rudimentary research agenda are discussed by Kellen (2005) who writes:

“Other threads in decision sciences and organizational development areas, strategic decision making and more recent forays into affect, emotion and imagery, have here or there hinted at the relationship between visualisation and decision making. If, as the neurosciences indicate, human visualisation processing is a foundation for higher-order cognitive processes, it could follow that representing problems visually would engage different cognitive mechanisms and thereby lead to different decision processes and decision outcomes” (no page number).

Kellen (2005) sees decision making and information visualisation as two large and hitherto independent research areas. He suggests, however, that there are common concerns and grounds for integrative research. Several of the issues he identifies are in common with those raised by other authors above, including:

● Should research be done in a laboratory or in more natural work settings?
● Should the variables considered include cultural, emotional, and other personal or contextual factors?
● Can information visualisation reduce classic decision-making biases?
● What problem-solving tasks are best suited to the problem representations afforded by combinations of symbolic and spatial cues?

3 A framework for evaluation
As identified in the literature, there are inherent difficulties in the empirical evaluation of systems and their tools. Usually there is a range of functions available, but the nature of the test environment is such that only a few of these can be assessed within reasonable experimental constraints. Since different systems and different tools have different capabilities and performances, the initial choice of tasks can introduce bias into the assessment. Plaisant (2004, page 111) argues “data and task selection remains an ad hoc process which would be aided by the development of task taxonomies and benchmark repositories of datasets and tasks.” She further summarises the challenges ahead:


“Some characteristics of information visualisation render its evaluation particularly challenging. Users often need to look at the same data from different perspectives and over a long time. They also may be able to formulate and answer questions they did not anticipate having before looking at the visualisation” (page 11).

Our review suggests additional complexities in the evaluation process. For example, it may be appropriate to use expert rather than novice users when advanced tools or systems involve the transfer of data between heterogeneous tools. Longitudinal case studies, possibly involving a group of people rather than just an individual, appear most promising in this context. The difficulty of anticipating discovery suggests that people might be asked to simply explore and report on what they find, rather than being set specific tasks. Letting users undertake the evaluation with their own data will provide further engagement with the tools. Over an extended period, effective visualisation tools may trigger changes in awareness or changes in work practices that lead to profound outcomes, and direct links back to the visualisation may be difficult to establish.

Seeking a balance between the many methodological difficulties suggests an approach which combines the rigour of iterative controlled experiments with a more ad hoc, end-user-based approach, which tests products as they are developed as part of a larger multidisciplinary research effort. The first approach can answer fundamental questions about geographic visualisation, which will underpin development into new areas as new demands emerge. The second approach provides a strong basis for defining software development priorities with existing specific user needs in mind.

On the basis of these considerations an evaluation framework is proposed. A series of evaluations under this framework should:

● ensure that as time goes by less effort is spent on visualisation products which do not meet user needs;
● provide feedback to investors and managers on the value of the generated products;
● maximise opportunities for comparisons across study components and with other evaluation research;
● create findings of a more general nature than the specifics of single-product evaluation.

Thus, an evaluation process should include more than one technique in order to provide a level of confirmation of the interpretation of results available from a single technique. Each process should include, for example, a combination of:

● quantitative subject responses to visualisation products as either ratings (Likert scale) or rankings (preferred options);
● qualitative subjective responses to visualisation products provided by a selection of during-use running commentary and/or post-use debriefing (note that use needs to be by an individual or in small groups for this to be meaningful);
● objective assessment of the user’s ability to answer questions or perform tasks as a result of exposure to the visualisation products;
● before and after tests of beliefs, attitudes, and knowledge;
● the tracking of actions during product use to reveal visibility and popularity of product elements.
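As an illustration only, the before-and-after testing of knowledge can be reduced to a simple paired comparison of questionnaire scores. The scoring function, participant data, and score range below are invented for this sketch and are not part of the framework itself:

```python
# Hypothetical scoring of a before/after knowledge test.
# Each participant answers the same items before and after the
# evaluation session; we summarise the paired results as the mean
# change in score and the fraction of participants who improved.

def summarise_change(pre_scores, post_scores):
    """Return (mean_change, fraction_improved) for paired scores."""
    assert len(pre_scores) == len(post_scores)
    diffs = [post - pre for pre, post in zip(pre_scores, post_scores)]
    mean_change = sum(diffs) / len(diffs)
    improved = sum(1 for d in diffs if d > 0) / len(diffs)
    return mean_change, improved

# Invented example: knowledge scores out of 10 for six participants
pre = [4, 5, 6, 3, 7, 5]
post = [6, 6, 8, 5, 7, 6]
change, frac = summarise_change(pre, post)
print(f"mean change: {change:+.2f}, improved: {frac:.0%}")
```

A real analysis would pair this descriptive summary with an appropriate significance test and with the qualitative responses listed above.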

4 Evaluation experiment
In this section we describe the tools and representations used in our evaluation process and also the details of the survey designed on the principles of the framework described above.

4.1 The visualisation tools and representations
The focus of the evaluation in our case was a visualisation exploration interface (VEI) that integrates a range of visualisation tools and products that have been developed and assembled to inform decision making in relation to climate change. Each product (interactive maps, reports, animations, and websites) was designed to facilitate the understanding of scientific climate-change information. The products and the integrating interface required minimum levels of prior training while seeking to maximise effective knowledge transfer. The focus for the majority of the visualisation was the south-western region of Victoria, Australia, an area of approximately 40 000 km2.

The VEI (figure 1) is an integration of the GE Application Programming Interface (API) and various Microsoft Windows libraries. The GE API provided the functionality to support KML or KMZ GE files that contained spatial information in various forms, including animated maps and point-location graphs. The Windows libraries helped to open video files, portable document format (pdf) files, and web pages. This standalone application was built on a Windows platform (x86) using Visual Basic.Net, where all the libraries and API were programmed and packaged into a single installation package which provided easy deployment. An Internet connection was required for some of the VEI options.
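KML layers of the kind loaded through the GE API can be generated with standard XML tooling. As a rough sketch only (the coordinates, climate values, and file name are invented, and the paper does not describe the actual VEI data pipeline), a set of climate point placemarks might be written as follows:

```python
# Minimal sketch: write a KML file of point placemarks whose
# descriptions carry climate values. All names, coordinates, and
# values are hypothetical illustrations.
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"

def points_to_kml(points, path):
    """points: iterable of (name, lon, lat, description) tuples."""
    ET.register_namespace("", KML_NS)
    kml = ET.Element(f"{{{KML_NS}}}kml")
    doc = ET.SubElement(kml, f"{{{KML_NS}}}Document")
    for name, lon, lat, desc in points:
        pm = ET.SubElement(doc, f"{{{KML_NS}}}Placemark")
        ET.SubElement(pm, f"{{{KML_NS}}}name").text = name
        ET.SubElement(pm, f"{{{KML_NS}}}description").text = desc
        pt = ET.SubElement(pm, f"{{{KML_NS}}}Point")
        ET.SubElement(pt, f"{{{KML_NS}}}coordinates").text = f"{lon},{lat},0"
    ET.ElementTree(kml).write(path, xml_declaration=True, encoding="UTF-8")

points_to_kml(
    [("Grid point 1", 142.5, -38.2, "2050 mean temp: +1.8 °C (illustrative)")],
    "grid.kml",
)
```

A script along these lines, run over a climate-projection grid, would produce the kind of point layer the GE API can display directly.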

The visualisation features presented in the VEI were:

● a GE-based animation (figure 2) of projected annual average-temperature changes across Victoria from the present to 2050 according to CSIRO (The Commonwealth Scientific and Industrial Research Organisation) Climate Model Mk3.5(5) based on the IPCC (Intergovernmental Panel on Climate Change) high fossil fuel emission scenario A1FI;(6)
● a 5 km × 5 km array of points, spatially explicit in GE, with links to annual average temperature and rainfall projections for those individual locations (figure 3);
● a 3D model in GE of a local cooperative farm ‘Demo Dairy’ which links to (1) pasture-yield projections, (2) an animated flyover of current conditions (made in 3DS Max), and (3) 360° panoramic renderings of existing and projected 2070 conditions from one viewpoint (figure 4);

(5) http://www.cmar.csiro.au
(6) http://www.ipcc.ch

Figure 1. [In colour online.] Screenshot showing the visualisation exploration interface (VEI) with the Google Earth Application Programming Interface embedded. Users can navigate through the VEI by clicking the ‘Explore’ buttons linked to the key themes.


Figure 2. [In colour online.] Screenshot of the visualisation exploration interface showing a temporal animation of annual average temperature over Victoria, Australia.

Figure 3. [In colour online.] Screenshot of the visualisation exploration interface showing a 5 km × 5 km array of Google Earth points displaying temperature and rainfall information for each selected location.


● a comparative mapping, called 'where is my farm?', of the current climate at Demo Dairy with (1) other locations which will have that climate in 2070, and (2) areas that now have the climate which Demo Dairy will have in 2070;
● an animation of sea-level rise in the town of Portland (figure 5);
● a link to a third-party website (http://flood.firetree.net/) showing global sea-level-rise effects;

Figure 4. [In colour online.] Screenshot of the visualisation exploration interface displaying the virtual Demo Dairy site linking 3D buildings and trees with information points on pasture yield, climate, and a flyover overview of the farm.

Figure 5. [In colour online.] Screenshot of the sea-level-rise visualisation for Portland, Victoria.


● 2D maps of projected land suitability for four agricultural land uses (ryegrass pasture, phalaris pasture, bluegum plantation, and pine plantation) in the years 2010 and 2070 [figure 6(a)];
● projected land suitability for the same four agricultural land uses displayed in the GE environment with interpolated animations of the transition [figure 6(b)];

Figure 6. [In colour online.] (a) Screenshot of the visualisation exploration interface displaying a map of land-use suitability for bluegum for 2050 for the southwest region of Victoria; (b) a similar map seen in the Google Earth environment.




● panoramas at a specific location illustrating the changes in land suitability through changing land-cover types (figure 7).

The visualisation products included in the survey were all based on single models [see Aurambout et al (2012) for more details] and did not attempt to communicate the uncertainty associated with the underlying scenarios, the input data, or the range of available models. This is an important topic (Spiegelhalter et al, 2011) but was beyond the scope of this work.
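As an aside on mechanics: spatial products such as the 5 km × 5 km point array are delivered to GE as KML, and a grid of placemarks with projection values in their information balloons can be generated along the following lines. This is a Python sketch with invented coordinates and values; the VEI itself was written in Visual Basic.NET, and the function and field names here are our own.

```python
# Sketch (not the VEI source): generating a KML file of climate-projection
# points such as the 5 km x 5 km array the VEI loads through the GE API.
# The grid extent and projection values below are placeholder assumptions.
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"

def climate_grid_kml(lon0, lat0, cols, rows, step_deg, projections):
    """Build a KML document with one Placemark per grid point.

    projections maps (col, row) -> dict of values shown in the balloon,
    eg {"temp_2050": 15.2, "rain_2050_mm": 540}.
    """
    ET.register_namespace("", KML_NS)  # serialise with a default namespace
    kml = ET.Element(f"{{{KML_NS}}}kml")
    doc = ET.SubElement(kml, f"{{{KML_NS}}}Document")
    for c in range(cols):
        for r in range(rows):
            lon, lat = lon0 + c * step_deg, lat0 + r * step_deg
            pm = ET.SubElement(doc, f"{{{KML_NS}}}Placemark")
            desc = ET.SubElement(pm, f"{{{KML_NS}}}description")
            vals = projections.get((c, r), {})
            desc.text = "<br/>".join(f"{k}: {v}" for k, v in vals.items())
            point = ET.SubElement(pm, f"{{{KML_NS}}}Point")
            coords = ET.SubElement(point, f"{{{KML_NS}}}coordinates")
            coords.text = f"{lon:.4f},{lat:.4f},0"  # lon,lat,altitude
    return ET.tostring(kml, encoding="unicode")

# Example: a 2 x 2 corner of such a grid (coordinates illustrative only;
# 0.045 degrees of latitude is roughly 5 km).
doc = climate_grid_kml(142.8, -38.3, 2, 2, 0.045,
                       {(0, 0): {"temp_2050": 15.2, "rain_2050_mm": 540}})
```

A file written this way (or zipped as KMZ) can be opened directly in GE; the GE API loads it in the same manner.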

4.2 The survey design
At the beginning, participants were asked a set of questions about their existing knowledge of, and attitudes to, climate change and its possible effects on the southwest region of Victoria. They were then given a brief demonstration of the user interface and an introduction to the available tools. The participants had a chance to explore the system and were then asked to find answers to specific questions. They were also asked to undertake some deeper exploration of climate, land use, and policy futures for the study region; this involved the use of any chosen tool(s) from the VEI interface. A copy of the survey is available online.(7)

After completing their exploration and tasks on the computers, the participants were asked to provide their subjective views of the tools, data, and representations provided to them. Finally they were asked for a second time their views on certain aspects of the future (their knowledge and beliefs) to determine if there had been changes in their views as a result

(7) http://people.eng.unimelb.edu.au/idbishop/Questionnaire_visualisation_evaluation.pdf

Figure 7. [In colour online.] Excerpts from panoramic views of the pasture environment at Demo Dairy in (a) 2010 and (b) 2050.




of their exposure to the visualisation environments. The relationships between these parts of the survey are identified in figure 8.

The knowledge and attitudes components (A and E in figure 8) used questions similar to those used in other international studies (Schroth et al, 2009) and were designed to show whether working with the visualisation tools had any effect on participant beliefs or attitudes. Post-event responses (E) could be collected immediately or some days later, when the experience may have been assimilated to a greater degree; however, the latter raises logistical problems. The search for specific answers (B) determined whether people can find and interpret information effectively by using the visualisation tools, while the free exploration (C) encouraged them to delve more deeply into aspects of the visualisation products provided. Ideally this use phase would run over days or weeks in the normal work environment; however, there were insufficient resources to support the necessary level of monitoring that would require. Consequently, a subjective report on the components is necessary (D). The final part of the survey (F) provides background information on the user's typical use of climate information, a place for general comments, and some demographic background. These parts of the survey parallel aspects of the work of Schroth et al (2009) and also the metrics recommended by Scholtz (2006) (see subsection 2.3.3). In particular, part B measures aspects of situation awareness, particularly perception and comprehension, part C depends on system interactivity, and part D reflects perceived utility. Our study was based on individual use and so could not address aspects of collaboration or creativity.

This approach also provides for triangulation—a technique used to determine whether the measures are consistent with each other. For example, parts B and C give objective information on the usability of the presented tools and the information represented. Part D is influenced by these and reports subjective preferences for the tools. Comparison of results in parts A and E shows changes in knowledge and attitudes as a result of the process. Thus, parts B/C, D, and A/E give three different methods for evaluating the success of the visualisation tools and representations. Part F provides background information on the participants and supports analysis within a single evaluation session and comparison between sessions. Generalisation

Figure 8. The six parts of the survey and their relationships provide comparative analysis. A = existing knowledge and attitudes; B = search for specific answers; C = free exploration; D = subjective report on tools/data; E = post-event knowledge and attitudes; F = background information.


is enhanced if the same broad approach is followed whenever there are new representations or tools being developed under specific projects and with specific end users in mind.

The actions of the users were tracked by running Camtasia Studio(8) (which records all screen activity) during the sessions.

5 The experiment
Twenty-six people undertook the survey. Most were tertiary educated (although this was not surveyed specifically) and worked in land-related areas (N = 25). Many worked directly with farmers (N = 14); in some cases this was with direct links to climate-change policy (N = 6). Others were more purely research scientists (N = 7). Several were studying for research degrees in related areas (N = 8). The group, therefore, approximated a group of domain experts, as advocated by Ellis and Dix (2006), rather than a cross-section of the broader population. Thus, the results reported have a balance of realism (because actual potential users were involved and they had opportunities for free exploration) and precision (because parts of the study were carefully controlled in requiring a search for specific information), but are only generalisable to a similar grouping of land management, research, and policy interests.

In each instance the timing of the survey was planned as:
● Welcome, statement of objectives, and administration of part A of the survey (10 min).
● Demonstration of the overall visualisation interface and the tools available to the user (20 min).
● Introduction to parts B and C of the survey. We stressed that the reason for part B was to ensure that people became familiar with the tools and also to get an indication of how intuitive they were to use. Support could not be provided because this would make the results very difficult to analyse.
● Time for the users to explore the available tools and answer the questions in part B (40 min) and part C (20 min).
● Administration of the remaining parts of the survey (15 min).

The survey was conducted in four separate sessions because of the difficulties of recruitment and the provision of a computer for each participant. Two key factors restricted our ability to garner a larger response set. Firstly, the survey was planned to run for two hours (in effect it typically took more than three), which is a significant time commitment. Secondly, the target participants may have had a degree of workshop fatigue, as several had been part of a number of prior workshops on scenario development and institutional adaptation.

5.1 Results
5.1.1 Knowledge and attitudes of participants (parts A and E)
Generally the group were already concerned about climate change before commencing the survey. On a scale of 1 to 5 (1 = little concern, 5 = great concern) the mean score for concern about global effects was 4.5, for local agriculture 4.2, for the local natural environment 4.7, for their families 4.0, and for future generations 4.6.

Because of the high level of existing concern about the likely impact of climate change there was not great scope for movement, and many people gave very similar responses at the end of the workshop. However, there were some changes, as shown in figure 9.

All changes were by just one point on the five-point scale. The increases are as might have been anticipated, but the decrease in concern about the local natural environment is marked and somewhat unexpected. However, we believe this result could be influenced by two factors. Firstly, the land-suitability analysis undertaken for various land uses focused on production landscapes and associated commodities. Land suitability and impact on

(8) http://www.techsmith.com/Camtasia


national parks, state forests, and biodiversity values were not included in this research. Secondly, the selected study region is likely to be less severely impacted by climate change than drier northern regions of Victoria.

Seven people had attended previous briefings or workshops run by the South-West Regional Climate Change Forum, eight had attended events run by VCCAP, thirteen had been to other climate-change events which they could name, and nine had been to events they could not name. Overall, twenty of the twenty-six people had attended at least one briefing or workshop on climate change.

While this existing awareness was positive in the sense that our participants had existing expertise in the domain, it restricted the potential to test changes in knowledge as a result of exposure to the visualisations. After exploring the visualisations (part E), the mean levels of knowledge about both the global situation (3.8 on a five-point scale) and the local situation (3.8) had risen from their prior levels (part A) of 3.7 and 3.4, respectively. In all, ten people (38%) reported an increase in local knowledge.

5.1.2 Ability to find information on climate change (part B)
There were thirteen questions requiring short or multiple-choice answers. Unambiguous answers could be found by exploration of the interactive or animated visualisation options. Overall, the answers to these questions were 77% correct. A lower success rate would have allowed more discrimination in the results. While early drafts of the survey had included more difficult questions, we chose to provide more clues because of concern that participants might become disheartened if the tasks were too difficult. Finding the right balance of question difficulty was challenging. The ideal may be to have questions of gradually increasing difficulty; however, this is hard to judge before application.

The questions related to specific stimuli and the individual rates of correct answers were as follows.

● GE-based animated maps and subsequent search for place with lowest temperature: 75%.
● Use of grid of specific climate-change projections through GE: 77%.
● Identification of climate-change projections at a specific location, Demo Dairy: 100%.
● Pasture-yield projection at Demo Dairy from hyperlink: 85%.
● Interpretation of animated flyover and search for silage pit: 68%.

Figure 9. Changes in concern about climate change from the start (part A) to the end (part E) of the experiment.

[Bar chart: for each of global effects, local agriculture, local natural environment, family, and future generations, the percentage of respondents whose concern was unchanged, decreased, or increased; vertical axis 'Percentage changes in concern', 0–100.]


● Recognition of the image matching the pasture outcome in the future panorama: 88%.
● Identification of location with similar current climate to Demo Dairy's in 2070: 83%.
● Degree of inundation of houses in Portland from animation: 81%.
● Location of greatest inundation along southwest coast from interactive mapping: 88%.
● Regional land-suitability changes using maps only: 45%.
● Identification of land-suitability changes, near a user-chosen town, using a choice of either 2D noninteractive screen-based maps or interactive GE-based exploration: 61%.
● Specified town land-suitability changes using a choice of the above or realistic panoramas: 54%.

The land-suitability changes proved the most challenging to participants, perhaps in part because the answers required a greater degree of interpretation. Part of the rationale for these three questions was to determine which presentation medium gave the best results and which was the most preferred. Among those who identified their preferred visualisation product the results were as follows.

● Areas around a chosen town: maps preferred 12%, GE preferred 65%, used equally 24%.
● Specific location with panoramas available: maps preferred 6%, GE preferred 33%, panoramas preferred 50%, and all equally 11%.

However, those who preferred panoramas were less successful with their answers than those preferring the other media. Perhaps the visualisation product that people prefer to use is not necessarily the one which will give them the best result.

5.1.3 Free exploration (part C)
In part C of the experiment people could explore the whole region or a self-selected subregion. Eight people chose to look at the whole region; the rest selected a more specific location. Their approach to the free exploration can only be discerned from the Camtasia recordings. This recording method proved rather temperamental in securing complete screen captures, and from the twenty-six participants we managed to retrieve part C data in only seven cases. The remainder failed, either because the recordings did not start properly or because files became too large for the systems to handle. Within these seven cases, most of the available visualisations were used to some extent. The exceptions were the animation of Portland inundation and the flyover of the existing Demo Dairy. These exceptions are notable because they are the two animated sequences that provide no scope for interaction. That is, people chose options which gave scope for personal exploration. In all seven cases at least two of the interactive tools were used by participants during this phase of the evaluation. This is clear evidence of comparative analysis being undertaken using multiple interactive visualisation products.

5.1.4 Subjective assessments of visualisation options (part D)
After answering free-form questions about the likely impact of climate change on the southwest Victoria study area, and likely changes in land use and land management and the policy implications, participants were asked their views on the different visualisation products. All were found to be helpful. On a similar 1 to 5 scale the mean responses were:
● animated mapping 4.3;
● array of point data 4.0;
● pasture-yield projection hyperlink 3.8;
● 'where is my farm' 3.7;
● photorealistic flyover and panoramas 3.3;
● inundation simulations 4.0.

Separation of the participant group into different stakeholder types was not simple because many people had overlapping roles in the community (eg, policy and stakeholder engagement, project coordinator and family farm, or research and extension). However, we could broadly distinguish those with a strong science background (researcher, scientist, research student) from those working in different areas (coordinator, consultant, lobbyist). For five of the six tools, all but the animated mapping, the nonscientists rated the visualisation techniques more highly than the scientists did, but with the small sample size the differences were not statistically significant.

Schroth et al (2009) found a bimodal-response distribution to GE-based visualisations, suggesting that "the globe metaphor can also alienate users" (page 30). However, our small sample showed no evidence of such a bimodal response; indeed, the initial GE-based animation of temperature change had the lowest standard deviation of any of the visualisations when assessed for helpfulness in part D of the survey.

Part D also gave participants the opportunity to comment on the visualisations or to suggest additional features. Some of the comments on the representations of particular interest included:

● scenarios should be available side-by-side (rather than sequentially) to make comparisons easier;
● maps (including GE-based mapping) need more reference points in the form of towns and roads;
● more access to underlying data layers might aid interpretation (eg, soils);
● more contextual information should accompany the pictures.

Thus, while people found the visualisations helpful, they would have been even more helpful if seen and used more in context and less in isolation. However, this raises the question of how much contextual information is enough. It also raises the question of how people who want to use visualisation products to explore likely futures can best access the underlying datasets.

5.1.5 General comments (part F)
A number of participants took the opportunity to make some general remarks on the interface and the visualisations. A clear theme among the comments was the desire for more information to accompany the visualisations. The suggestions included more numbers on maps 'like contours', the ability to visually compare outcomes, information on 'how' the changes occur and the subsequent effects on water and carbon cycles, some summary statistics, access to underlying databases, and more variables included in map products. Particularly interesting, given that our approach had been based on individual experiences with the visualisations, was the suggestion that tools were also required to enable users to share data and communicate. A number of comments also praised the system and suggested applications in other domains.

5.2 Triangulation
The survey design, as illustrated in figure 8, included the opportunity for triangulation in order to test the consistency of the findings by addressing questions such as: would multiple measures reinforce the overall evaluation of the effectiveness of the visualisations, and would the utility of different visualisation tools become more apparent?

We expected, for each individual, some correlation between changes in attitude, information retrieval, and subjective assessment of the tools. This was not evident in the bivariate Pearson correlations calculated across individuals. However, the correlation of 0.36 between overall usefulness assessment and overall ability to find information was near to significant with p = 0.08. Correlation analysis was also used to explore any relationship between the level of knowledge or concern about climate change and the responses to the provided visualisations. There was no significant correlation of prior concern or prior knowledge with overall ability to answer the questions in part B, degree of exploration of the visualisations in part C, or overall subjective assessment of their helpfulness in part D. This was actually a positive finding since it means that those with lesser concerns and those with greater concerns found the visualisations helpful and able to supply information to the same degree.

Looking more closely at individual questions, significant correlations were found:
● among the information-finding questions (part B), all the significant correlations were positive, suggesting that people who found some of the visualisations easy to use also found others easy to use;
● assessed helpfulness of the sea-level-rise simulations correlated positively with ability to answer the question about inundation levels in Portland (0.540, p = 0.006);
● ability to answer the question on broader coastal inundation correlated positively with assessed helpfulness of the GE animations (0.611, p = 0.001), the 5 km grid of temperature and rainfall predictions (0.521, p = 0.006), and 'where is my farm' (0.638, p = 0.001);
● assessed helpfulness of the GE animations correlated positively with overall success in extracting information to address the part B questions (0.509, p = 0.008), suggesting that if people made a positive start in their survey experience, they tended to do well on information finding;
● assessed helpfulness of the arrays of projected temperature and rainfall conditions correlated positively with prior concern for local agriculture (0.428, p = 0.033), concern for the local natural environment (0.442, p = 0.027), and existing knowledge of local climate change (0.438, p = 0.028);
● assessed helpfulness of the pasture-yield estimations correlated positively with prior concern for local agriculture (0.456, p = 0.025);
● assessed helpfulness of the simulations of sea-level rise correlated positively with overall levels of prior concern about climate change (0.427, p = 0.042).

In part E (after the VEI explorations) all the relationships between prior concern and helpfulness (except with local agriculture) had ceased to be statistically significant. This suggests that either those previously concerned about the impacts of climate change had been reassured (which seems to be the case for the local natural environment) or those who were previously less concerned about the impacts of climate change became more so as a result of seeing the visualisations (as seems to be the case for the other areas of concern).
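The bivariate Pearson correlations used throughout this triangulation are straightforward to reproduce for any pair of survey measures. The sketch below uses invented ratings, not our survey data; the reported p-values would additionally require a t-test on r with n − 2 degrees of freedom (eg, via scipy.stats.pearsonr).

```python
# Illustrative only: computing a bivariate Pearson correlation of the kind
# used in the triangulation analysis. The sample data below are invented.
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# eg hypothetical helpfulness ratings (1-5) vs part B scores (fraction correct)
helpfulness = [3, 4, 5, 2, 4, 5, 3]
part_b_score = [0.6, 0.7, 0.9, 0.5, 0.8, 0.8, 0.6]
r = pearson_r(helpfulness, part_b_score)
```

For significance testing with small samples such as ours, a library routine (scipy.stats.pearsonr returns both r and the two-sided p-value) is preferable to a hand computation.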

6 Conclusion
In this research a number of visualisation tools and representations were developed to assist with communication of a number of facets of climate change: downscaled climate-change projections of localised temperature and rainfall; likely effects of these on land suitability, pasture growth, and, consequently, the visual landscape; and also sea-level changes. An important part of the research was to evaluate the degree to which the visualisations achieved their purpose of effective communication—especially to those involved in land policy and decision making.

Despite the constrained focus of this evaluation, we have identified some pertinent outcomes. The evaluation design was effective in producing a rich set of basic statistics that achieved more than commonly employed surveys of subjective assessment. An area that could be improved significantly was the tracking of use. Camtasia Studio was effective to some degree but experienced difficulty recording the long sessions. More problematic was the need to review the recordings manually. A better solution involves background software to record visits to specific elements of the interface, the time spent working with them, and the range of options located (Smith et al, 2012). A well-resourced study could also consider the option of using eye-movement tracking, which has been used by Fuchs et al (2009) to evaluate cartographic design.
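The kind of background logging suggested above need not be elaborate. A minimal sketch follows; the class, tool names, and schema are our own invention, not an existing package, and a real deployment would also persist the log to disk.

```python
# Minimal sketch of background usage logging: record which interface element
# a user visits and for how long, instead of reviewing screen recordings
# manually. All names here are illustrative assumptions.
import time
from collections import defaultdict

class UsageLogger:
    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._current = None                # (tool_name, start_time)
        self.visits = defaultdict(int)      # tool -> number of visits
        self.seconds = defaultdict(float)   # tool -> total time spent

    def open_tool(self, name):
        """Called when the user switches to an interface element."""
        self.close_tool()                   # close out any tool still open
        self._current = (name, self._clock())
        self.visits[name] += 1

    def close_tool(self):
        """Accumulate time for the currently open element, if any."""
        if self._current:
            name, start = self._current
            self.seconds[name] += self._clock() - start
            self._current = None

# Usage with a fake clock so the example is deterministic:
t = [0.0]
log = UsageLogger(clock=lambda: t[0])
log.open_tool("GE temperature animation")
t[0] = 95.0
log.open_tool("where is my farm")          # implicitly closes the animation
t[0] = 140.0
log.close_tool()
```

Such a log answers directly the questions we could only sample from the Camtasia recordings: which tools were visited, how often, and for how long.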

In relation to the survey results themselves, there were some clear outcomes as reported in subsections 5.1 and 5.2. There was an increase in knowledge of the local climate-change


situation and also an equalisation of self-assessed knowledge. Concern also increased modestly, from a high base, on all aspects except the local natural environment. There was an apparent preference for interactive tools (although the survey should have been more explicit in separating certain visualisations which were grouped together in part D). The people who preferred the interactive tools were also happy to use multiple tools to find their required information. Those who used one visualisation option successfully also used others successfully. Those without science training generally seemed to find the visualisations helpful (though this was not statistically established). This is encouraging as they are the primary target audience. Several people agreed that the ability to see different datasets, different conditions, or different scenarios side-by-side is particularly desirable. More worrying was the indication that people sometimes prefer to use a particular option even though it may give an inferior result. This suggests that we must seek to adapt preferred techniques so as to make them also effective. We cannot expect to provide a limited range of options and have everyone simply fit in with these, at least in the short term. In the longer term certain conventions may develop, as has occurred to some degree in two-dimensional cartography, which will allow users even more intuitive access to three-dimensional or four-dimensional visual representation.

In addition, because there are so few reported experiments from which to gain guidance on some of the key choices necessary in establishing an evaluation experiment, we offer some further observations on the process itself.

Sample size. Recruitment for an extended experiment such as this (up to three hours) is challenging, and so large sample sizes cannot be expected. Conversely, when interactive tools are provided everyone will have a slightly different experience, and consequently analysis is problematic with a small sample. One solution would be to build a large set of responses gradually by repeating experiments on an opportunistic basis, while trying to maintain balance in the sample to reflect the target user group. The difficulty here is that the technology changes so rapidly that it does not make sense to repeat an experiment with identical tools twelve months later. The focus of applications may also change, meaning that both the available audience and the appropriate domain representations also change.

Group sessions versus a web-based approach. The Internet has become very popular as a means of survey distribution and response collection. Our VEI could have been set up as a website and participants recruited using professional networks. We chose group sessions in a workshop because with visual material a common view (such as screen and colour resolution, and browser) is important to the user experience. In a long survey, keeping a web-based user’s attention for the full period is also problematic.

Target audience. The results emerging from this evaluation are difficult to generalise to the wider public and particularly to land managers and the farming community. Ideally, the exercise would be repeated with groups of farmers, groups of school children, and others with whom effective communication is of particular importance.

Therefore, we offer two procedural options which would help longer-term awareness of the benefits of visualisation tools and representation. Firstly, this experience reinforces the earlier observations that 'in-use' tests are likely to be the most beneficial in terms of progressive systems development. Extended observation of a real decision environment will provide data on particular aspects of the current survey—information retrieval and interactive exploration—but also on those aspects which we were unable to consider, such as collaboration and creativity. The uncontrolled nature of this approach, on the other hand, means that the findings would have only limited generalisability. Secondly, more generalisable results, from which some theory of interactive visualisation might emerge, are dependent on controlled experiments (with all their difficulties) as we have attempted here. The only scope for building sufficient response data appears to lie in the pooling of results from different


research groups. That requires some prior agreement on terms and procedures such as: use of the same terminology (eg, disagree, strongly agree); common scaling (eg, decide whether a neutral position is wanted or not, keep Likert scales the same length); instructions (common phrases that can be used in introducing an evaluation process to minimise case-by-case bias); tracking (track similar aspects of product use where possible); and demographics (such as age groups, education levels, computer experience, job description).
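One element of the common scaling suggested above can be made concrete: when pooled studies have used Likert scales of different lengths, responses can be mapped onto a shared scale by linear rescaling. This particular mapping is our suggestion for illustration, not a method from the study, and it assumes the scales are treated as interval data.

```python
# Sketch of one pooling convention: linearly rescaling Likert responses
# collected on different scale lengths onto a common 1..5 scale.
# This mapping is an assumption for illustration, not a method from the study.
def rescale_likert(value, src_points, dst_points=5):
    """Map a response on a 1..src_points scale onto a 1..dst_points scale."""
    if not 1 <= value <= src_points:
        raise ValueError("response outside scale")
    # Linear interpolation between the scale endpoints.
    return 1 + (value - 1) * (dst_points - 1) / (src_points - 1)

# eg a 7-point response of 7 maps to 5.0; the 7-point midpoint 4 maps to 3.0.
```

Whether a neutral midpoint survives such a mapping is exactly the kind of decision (odd versus even scale lengths) that would need to be agreed before pooling.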

While finding the best visualisation tools and representations is challenging, and finding why they are best is even more daunting, we need not be disheartened by the challenges. It is clear from this experiment, and a wide range of other evaluation experiments, that although the relative merits of different approaches may be hard to gauge and impossible to predict, there is ongoing enthusiasm for the assessment attempts being made. In general, our users were successful in finding new information, impressed by the possibilities, and complimentary in their commentary. A user comment that "This has the seeds of an excellent tool" sums up the situation. We still have work to do.

References
Appleton K, Lovett A, 2003, "GIS-based visualisation of rural landscapes: defining 'sufficient' realism for environmental decision-making" Landscape and Urban Planning 65 117–131
Aurambout J-P, Sheth F, Bishop I D, Pettit C, 2012, "Simplifying climate change communication: an application of data visualization at the regional and local scale", in Geospatial Vision 2: New Dimensions in Geovisualization and Cartography Eds A Moore, I Drecki (Springer, Berlin) chapter 8
Bergen R D, Ulricht C A, Fridley J L, Ganter M A, 1995, "The validity of computer-generated graphic images of forest landscape" Journal of Environmental Psychology 15 135–146
Bishop I D, Rohrmann B, 2003, "Subjective responses to simulated and real environments: a comparison" Landscape and Urban Planning 65 261–277
Brown J D, 1996 Testing in Language Programs (Prentice-Hall, Upper Saddle River, NJ)
Daniel T C, Boster R S, 1976 Measuring Landscape Esthetics: The Scenic Beauty Estimation Method USDA Forest Service Research Paper RM-167, Rocky Mountain Forest and Range Experiment Station, Fort Collins, CO, http://www.fs.fed.us/rm/pubs_rm/rm_rp167.pdf
Daniel T C, Meitner M M, 2001, "Representational validity of landscape visualisations: the effects of graphic realism on perceived scenic beauty of forest vistas" Journal of Environmental Psychology 21 61–72
Ellis G, Dix A, 2006, "An explorative analysis of user evaluation studies in information visualisation", in Proceedings of the 2006 Conference on Beyond Time and Errors: Novel Evaluation Methods for Information Visualisation, Venice, Italy, 23–26 May (ACM Press, New York) pp 1–7
Fuchs S, Spachinger K, Dorner W, Rochman J, Serrhini K, 2009, "Evaluating cartographic design in flood risk mapping" Environmental Hazards 8 52–70
Kellen V, 2005, "Decision making and information visualisation: research directions", http://www.kellen.net/Visualisation_Decision_Making.htm
Khan S, 2008, "Managing climate risks in Australia: options for water policy and irrigation management" Australian Journal of Experimental Agriculture 48 265–273
Knight C, 2001, "Visualisation effectiveness", in Proceedings of International Conference on Imaging Science, Systems and Technology, 25–28 June, Las Vegas, Nevada, paper 1036CT
Lange E, 2001, "The limits of realism: perceptions of virtual landscapes" Landscape and Urban Planning 54 163–182
McCormick B H, DeFanti T A, Brown M D, 1987, "Visualization in scientific computing—a synopsis" IEEE Computer Graphics and Applications 7 61–70
MacEachren A M, Kraak M-J, 1997, "Exploratory cartographic visualisation: advancing the agenda" Computers and Geosciences 23 335–344
McGrath J E, 1995 Methodology Matters: Doing Research in Behavioral and Social Science (Morgan Kaufmann, San Francisco, CA) pp 152–169
Pettit C J, Raymond C, Bryan B, Lewis H, 2011, "Identifying strengths and weaknesses of landscape visualisation for effective communication of future alternatives" Landscape and Urban Planning 100 231–241
Pettit C J, Bishop I, Sposito V, Aurambout J-P, Sheth F, 2012, "Developing a multi-scale visualisation framework for use in climate change response" Landscape Ecology 27 487–508
Plaisant C, 2004, "The challenge of information visualisation evaluation", in Proceedings of the Working Conference on Advanced Visual Interfaces, 25–28 May, Gallipoli, Italy (ACM, New York) pp 109–116
Russell J A, Lanius U F, 1984, "Adaptation level and the affective appraisal of environments" Journal of Environmental Psychology 4 119–135
Scholtz J, 2006, "Beyond usability: evaluation aspects of visual analytic environments", in IEEE Symposium on Visual Analytics Science and Technology (VAST) pp 145–150
Schroth O, Pond E, Muir-Owen S, Campbell C, Sheppard S R J, 2009 Tools for the Understanding of Spatio-temporal Climate Scenarios in Local Planning: Kimberley (BC) Case Study report PBEZP1-122976, Swiss National Science Foundation, Bern, http://www.calp.forestry.ubc.ca/wp-content/uploads/2010/02/schroth_2009_Final_SNSF_Report.pdf

Sheppard S R J, 1989 Visual Simulation: A User’s Guide for Architects, Engineers, and Planners (Van Nostrand Reinhold, New York)

Sheppard S R J, 2001, “Guidance for crystal ball gazers: developing a code of ethics for landscape visualisation” Landscape and Urban Planning 54 183–200

Sheppard S R J, Shaw A, Flanders D, Burch S, 2008, “Can visualisation save the world? Lessons for landscape architects from visualizing local climate change”, in Proceedings of 9th International Conference on Digital Design on IT in Landscape Architecture May 29–31 (Anhalt University of Applied Sciences, Dessau) pp 2–21

Smith L, Bishop I D, Williams K J H, Ford R M, 2012, “Scenario Chooser: an interactive approach to eliciting landscape preference” Landscape and Urban Planning 106 230–243

Spiegelhalter D, Pearson M, Short I, 2011, “Visualizing uncertainty about the future” Science 333 1393–1400

Yin R K, 2003 Case Study Research. Design and Methods (Sage, Thousands Oaks, CA)