Meeting at the Crossroads: Interactivity, Technology, and Evaluation Utilization

Kate Svensson and J. Bradley Cousins
University of Ottawa

Abstract: This article is a review and integration of evaluation utilization literature with a new focus on the use of technology to increase evaluation utility. Scholarship on evaluation utilization embodies one of the major and ongoing quandaries in the evaluation profession: What constitutes usefulness and relevance to stakeholders? We think that a constructivist lens is helpful in making sense of the trajectory this literature has taken, where what is “useful” and what culminates in “use” have become much more flexible notions that are in a constant state of negotiation between evaluators and evaluation stakeholders. We posit that it may be important for evaluators who are closely engaged with stakeholders to pay greater attention to this interactivity to build a common vision of what is “useful” at that moment in time. While this is no small task, we suggest that evaluators may have something to gain by exploring the wealth of digital technologies and social media tools that are available. The use of these tools in local-level, participatory-oriented contexts may be valuable for encouraging interactivity, potentially fostering learning, creativity, and ownership. This article aims to stress that integrating technology into everyday evaluation practice, where possible, may ultimately enhance evaluation usefulness and relevance.

Keywords: digital technology and social media, evaluation use, interactivity, technology, utilization

Résumé : Cet article est une critique et une intégration de la littérature portant sur l’utilisation de l’évaluation. Il met un accent novateur sur l’utilisation de la technologie pour accroître l’utilité de l’évaluation. Les recherches sur l’utilisation de l’évaluation représentent l’un des importants dilemmes en cours dans le domaine de l’évaluation : qu’est-ce que les intervenants considèrent comme étant utile ou pertinent? Nous pensons qu’une approche constructiviste est utile pour donner un sens à la trajectoire de ce corpus, dans lequel ce qui est « utile » et ce qui évolue en « usage » sont devenus des notions plus flexibles, lesquelles sont continuellement renégociées par les évaluateurs et les parties intéressées. Nous postulons qu’il peut être important pour les évaluateurs travaillant étroitement avec les intervenants de porter une plus grande attention à cette interactivité afin de développer une vision commune de ce qui est « utile ». Nous postulons que, bien qu’il ne s’agisse pas d’une mince tâche, les évaluateurs pourraient tirer profit d’une exploration des innombrables technologies et des outils sociaux disponibles. L’utilisation de ces outils dans des contextes locaux axés sur la participation pourrait être utile pour encourager l’interactivité, ce qui pourrait encourager l’apprentissage, la créativité et l’appropriation. Cet article cherche à mettre l’accent sur le fait que l’intégration de la technologie dans la pratique quotidienne d’évaluation, lorsque possible, pourrait, finalement, améliorer l’utilité et la pertinence de l’évaluation.

Mots clés : technologies numériques et médias sociaux, utilisation de l’évaluation, interactivité, technologie, utilisation

Corresponding author: Kate Svensson, [email protected]

© 2015 Canadian Journal of Program Evaluation / La Revue canadienne d'évaluation de programme, 30.2 (Fall / automne), 143–159. doi: 10.3138/cjpe.223

Evaluation can be a costly and time-intensive exercise but, despite these investments, knowledge emerging from evaluation does not always influence decision-making or practice (Neuman, Shahor, Shina, Sarid, & Saar, 2013). Over the years, the somewhat troubling implication that the fruits of evaluation labour end up on dusty shelves has incited a strong response from the evaluation community. Dating back to the mid-1970s, or what Henry and Mark (2003) and others refer to as the “golden age” of research on evaluation use, much energy has been invested in uncovering just what ultimately makes evaluation useful to its clients and stakeholders. Characteristics such as credibility, relevance, and effective communication (Cousins & Leithwood, 1986; Leviton & Hughes, 1981) have all been identified as key factors explaining utilization, some even becoming embodied in standards for professional practice such as the Program Evaluation Standards of the Joint Committee on Standards for Educational Evaluation (Yarbrough, Shulha, Hopson, & Caruthers, 2011). Another important influence—the “personal factor”—was identified by Patton and associates long ago (Patton et al., 1977) and centred quite directly on the relationship between evaluators and stakeholders. Although the search for factors that contribute to use has waned over the years, prominent contemporary approaches such as developmental evaluation (Patton, 2011) stress that flexibility and the relationship building that arises from interactivity between evaluators and program community stakeholders may be essential to use. Viewed from this perspective, what evaluators see as “usefulness” exists in a specific time and place and therefore should not carry the expectation of being generalizable across contexts (Weiss, 1998), or even sustainable once the evaluation project is completed.

This emphasis on flexibility has strong implications for how future program evaluators will approach utilization. Our work and research are becoming increasingly situated in a fast-paced world where digital technologies and social media continue to gain prevalence. Given the growing emphasis on social networking and communication at a peer-to-peer level, the implications for evaluation practice may be profound; however, the use of these media as tools to achieve usefulness so far appears to be almost absent from the evaluation literature. The availability of these new channels presents an opportunity for new forms of creative stakeholder engagement, yet it also presents a formidable challenge. Unless technology allows for sufficient interactivity between evaluators and stakeholders, we miss the opportunity to transform information into valuable knowledge (Galen & Grodzicki, 2011), and the technology becomes an added burden. Nevertheless, to ensure that the evaluation field is not left behind, the potential of these tools must be thoroughly explored.

This article thus focuses on three distinct objectives. First, we discuss key theories surrounding use—mainly the literature on evaluation utilization (instrumental, conceptual, and symbolic use) and process use—to understand how evaluators have to date viewed “usefulness” or “utility.” Second, we review how constructivist thought has shifted the nature of this discussion away from identifying specific attributes such as relevance, timeliness, or evaluator credibility to instead focusing on flexibility and the importance of contextual factors. Here, we offer our own reflections and, drawing on recent knowledge mobilization literature, propose that interactivity in communication between evaluators and key stakeholders may be at the root of flexibility. Lastly, we invite future researchers to consider how we can increase interactivity in evaluation, and we discuss the extent to which technology and social media may contribute to the relevance of our field, ensuring greater engagement of those for whom our evaluations are conducted.

DEFINING USEFUL EVALUATIONS

Before embarking on a discussion of what use may mean to evaluators now, it is worthwhile to briefly discuss how it has been conceptualized to date. In this section, we will present a synthesis of the literature on evaluation utilization, first defining important concepts and then summarizing which factors or predictors of achieving use in practice have been highlighted in the literature.

Over the last several decades, scholars interested in the utilization of evaluation and, later, in process use theories have produced rich theoretical discussions. With roots stemming from an interest in the utilization of social research in federal programs, the earliest article on evaluation utilization was published by Weiss in 1967 (Weiss, 1998). Since then, the pursuit of utility has been a recurring theme at the heart of this vast body of literature. However, just what constitutes “useful evaluation” still appears to be contested. Rather than adopting a single definition, evaluation theorists agreed that use can manifest itself in a variety of ways and began to categorize its various meanings in the hope that this would eventually create a more comprehensive understanding of the concept (Shulha & Cousins, 1997). By 1986, these categories included instrumental, conceptual, and symbolic use of evaluation findings or results, with “process use” added as a category in the late 1990s (Patton, 1997). Below, we provide a very brief overview of each of these categories.

Instrumental use occurs when the program or policy goes through substantial changes as a result of the evaluation, and these effects are clear and visible (Neuman et al., 2013). These changes predominantly originate from findings reports, recommendations and lessons learned, and other documents produced at the end of the evaluation (Forss, Rebien, & Carlsson, 2002). Instrumental use implies a somewhat linear process and is presumably achieved when findings are noncontroversial, the proposed changes are small-scale, and the organizational environment is stable (Ashley, 2009; Best & Holmes, 2010; Weiss, 1998). The second type of use, conceptual use, relates to wider changes in the understandings and attitudes of program community members in the absence of demonstrable change at the program level. What may occur instead is a discussion of possible future directions for the program, or the production of “lessons learned” (Patton, 2001). By engaging with evaluation findings, stakeholders not only further their understanding of what their program is meant to accomplish but also gain insight into where its strengths and weaknesses lie (Neuman et al., 2013; Weiss, 1998). Symbolic use occurs when there is support to undertake an evaluation without any real intent to make use of its findings (Patton, 2008).

In contrast to these categories of use of findings, process use is a more recent development in evaluation utilization theory (Shulha & Cousins, 1997), stressing the importance of stakeholder involvement and learning during the various stages of evaluation (Patton, 1998). Although the term was initially coined by Patton (1997), the idea of learning as a benefit of evaluation to stakeholders became a recurring theme in evaluation throughout the late 1990s (Preskill & Caracelli, 1997), and vestiges of process use have been evident in research on evaluation use since the 1980s (Amo & Cousins, 2007). Unlike the previous categories, process use may emerge in a variety of ways, ranging from skill development (Patton, 1998) to the creation of shared understanding (Forss et al., 2002; Taut, 2008). It may take place at an individual, group, or organizational level (Amo & Cousins, 2007; DeLuca et al., 2009; Preskill & Caracelli, 1997), and may concern participants’ operational, theoretical, or symbolic understanding (DeLuca et al., 2009; Weaver & Cousins, 2004). From this perspective, final report findings are not as important as the lessons an organization or group of stakeholders grasps through the process of engaging with evaluation. The benefits of process use are also presumed to be long-lasting, outweighing those of a recommendation report, which can quickly lose its timeliness for policymakers (Patton, 1998). What sets this category apart from the others is its emphasis on “learning how to learn” (Patton, 1998, p. 226). This means that even if there is no immediate use at the level of policymaking, the interaction and the learning gained by participating in evaluation can still benefit the managers or program staff who took part in those activities. Furthermore, rather than giving participants an evaluator-written final report containing information and recommendations constructed as “valid knowledge,” process use may instill in a program a culture that promotes analytically derived decision-making (Preskill et al., 2003). This suggests that participants get a chance to experience evaluation closely, arrive at their own conclusions, and construct their own knowledge (Preskill et al., 2003), as we discuss in greater detail below. In practice, these categories of use of findings (instrumental, conceptual, symbolic) and process use are not mutually exclusive, as there can be considerable overlap among them (Shulha & Cousins, 1997). For instance, it is possible that users whose program is faced with financial constraints are not immediately able to make instrumental changes to the program’s functioning; however, their changes in attitude toward program components plant the seed for future decision-making (Neuman et al., 2013; Weiss, 1998). Similarly, the learning that takes place throughout the evaluation (process use), whether through data collection and analysis or other evaluation-related tasks, can indirectly shed light on the program’s shortcomings (conceptual use) and eventually lead to concrete modification (instrumental use).

Although scholars are now able to articulate what evaluation “use” may resemble in practice, understandings of how to reach this state and of which factors are essential are not yet fully formed. In the following section we examine the various issues surrounding the factors driving use and offer some of our own reflections.

FROM FIXED TO FLUID: FACTORS INFLUENCING USE

The previous section outlined four types of use that can be achieved by evaluation, but what has puzzled evaluators for the past several decades, and continues to puzzle them, is determining which factors may lead to tangible realization of these uses. A systematic review of 65 evaluation studies by Cousins and Leithwood (1986) determined that the factors driving successful use were mainly implementation- or context-related. Implementation-related factors deal largely with the day-to-day tasks of evaluation and included such criteria as “evaluation quality, credibility, relevance, communication, the findings themselves, and the timeliness of evaluations for users” (p. 359). Some of the factors identified in this study, such as credibility, relevance, effective communication, and timeliness, have even become incorporated into program evaluation standards that help guide evaluation practice (Yarbrough et al., 2011). Context-related factors, on the other hand, take into consideration the circumstances surrounding a particular exercise and comprise “information needs of users, decision characteristics, political climate, competing information, personal characteristics of users, and user commitment and receptiveness to evaluation information” (Cousins & Leithwood, 1986, p. 359). At the time, the data emerging from the chosen studies pointed to evaluation methods characteristics—such as quality, sophistication, and intensity—as the most persuasive factors of use (Shulha & Cousins, 1997). Given the continued interest in the subject, other scholars have built on this existing knowledge and contributed to an evolving body of literature. In 1988, the famous Weiss-Patton debate focused on the degree to which usefulness is the evaluator’s responsibility, with Weiss (1988a, 1988b) arguing that utility is outside the evaluator’s realm of control and Patton (1988) stating that it is the evaluator’s obligation. Smith and Chircop (1989) noted that these disagreements arose because the authors were referencing two distinct types of evaluation: “Weiss is talking about national, broad-scale evaluations, while Patton is considering local, smaller-scale studies, or . . . Weiss is focusing on policy evaluation and Patton on program evaluation” (p. 6). Although both authors disregarded these claims (Smith & Chircop, 1989), the exchange sheds light on an important distinction: use may manifest in different ways based on scope. By 1997, myriad new conditions, such as “anticipated degree of program change” and “perceived value of evaluation as management tool,” had been added as potential predictors of use (Shulha & Cousins, 1997). Whether the evaluation was conducted by an internal or an external evaluator was also cited as playing a role (Love, 1993, 1998). Internal evaluators are said to be more aware of their organization’s informational needs and thus may have a better notion of what is useful (Love, 1993). Love (1998) wrote:

[E]ffective internal evaluation units demonstrated top management support for evaluation, positive leadership by the head of the internal evaluation unit, and an organizational culture that supported continual learning and critical program review. Another important factor was adaptation by the internal evaluation unit to the culture and decision-making style of the organization. Finally, effective internal evaluation units were proactive and had a highly visible public image in the organization, which they achieved by soliciting topics for evaluation, promoting the results of evaluations, and using evaluations for program improvement. (p. 149)

Love (1998), Mathison (2011), and Volkov and Baron (2011) also noted that interpersonal relationships within the organizational structure determined the effectiveness of these units; hence, their ability to produce useful results hinged on those relationships.

Other authors (Forss et al., 2002; Preskill et al., 2003; Weiss, 1998) identified “active user involvement” and “effective communication” as the factors most likely to lead to use, while Grasso (2003) noted the importance of “meeting clients’ information needs” by tailoring the final report to its many audiences. As a result of these contributions, the current version of the Program Evaluation Standards of the Joint Committee on Standards for Educational Evaluation now incorporates additional guiding criteria for practice, such as attention to stakeholders, purposes negotiated based on the needs of stakeholders, explicit values emerging from personal and cultural differences, meaningful processes, and concern for the consequences and influence of the evaluation (Yarbrough et al., 2011).

From this overview, we can say that finding a single, adaptable predictor of use that works across contexts is not viable. This, in many ways, is unsurprising, given the number of existing evaluation approaches and the vast variety of clients and interpersonal networks that exist within them (Best & Holmes, 2010; Contandriopoulos & Brousselle, 2012; Weiss, 1988a, 1988b). Authors began to note that having a “check list” of characteristics was inferior to customizing evaluation designs according to contextual characteristics (Contandriopoulos & Brousselle, 2012; Leviton, 2003; Pawson & Tilley, 1997; Preskill & Caracelli, 1997; Patton, 2008; Shulha & Cousins, 1997), the saliency of the issue, and the range of audiences (Grasso, 2003). As Patton (2011) noted, “there are no absolute rules an evaluation can follow to know what to do with specific users in a specific situation” (p. 15). This approach thus carries a simple but powerful message: there is no single way to conduct an evaluation and, likewise, no single recipe for success.

Having provided this overview, we now discuss the philosophical transition from seeing “use” as a fixed state to be achieved via a single path to seeing it as a spectrum dependent on the co-creation of evaluation knowledge.

EVALUATION USE THROUGH A CONSTRUCTIVIST LENS: FLEXIBILITY AND INTERACTIVITY

The review of evaluation use literature presented in the last section implies that many of the theoretical underpinnings of program evaluation went through an epistemological shift away from single notions of “use” and “utility” to a broader recognition that use can be manifested in a variety of ways. Similarly, the view that predictors of use are not homogeneous signifies an important shift toward the broader recognition that evaluation knowledge in itself can be quite varied and context-derived (Shulha & Cousins, 1997; Weiss, 1998). In the context of this field, we think that constructivist evaluation theory such as that developed by Guba and Lincoln (1995) can be a useful lens for understanding the journey program evaluation has taken over the last several decades. At the heart of the constructivist view of evaluation is the argument that any knowledge will develop as a source of “competing social-psychological constructions that are often multiple, uncertifiable, constantly problematic and changing” (Stufflebeam, 2008, p. 1394). In his review of social constructivism, Kim (2001) notes that constructivist thinkers such as Au (1998), Ernest (1999), Gredler (1997), and Prawat and Floden (1994) view knowledge as something in a state of becoming—a man-made product—rather than something rooted in objective fact. Knowledge itself is a transient, cyclical process in which what we create influences the created, and the created in turn serves as the foundation for future negotiation and meaning-building. In essence, this means that knowledge is not finite but a temporary and “gradual accommodation” of ideas that manages to achieve “relative fit” (von Glasersfeld, 2005, p. 6).

When viewed from a constructivist standpoint, users of evaluation do not just absorb new information without first seeing how this information concurs or conflicts with existing knowledge and with the wider political and interpersonal relationships within the organization (Leviton, 2003; Weiss, 1988a, 1988b). Evaluation thus fuses together different perspectives to create a new, unique form of knowledge that has meaning in the distinct conditions of that particular context (Guba & Lincoln, 1995; Leviton, 2003; Weiss, 1998).

We reviewed the implementation-based and context-based factors identified by Cousins and Leithwood (1986) and Shulha and Cousins (1997) from a constructivist standpoint. When we don constructivist glasses, it becomes apparent that very few of these elements are truly within the control of the evaluator. Instead, we presume that every factor has parallel spectra—one based on the perceptions of the evaluator, and one influenced by the perspectives of the program community stakeholders. For instance, notions such as quality, credibility, relevance, or timeliness are highly subjective and are arguably influenced by the specific context and particular point in time. Credibility may be improved by making the evaluation rigorous through triangulation techniques, but, much like relevance, it is based on “perceptual dimensions that are very much influenced by users’ pre-existing opinions and preferences” (Contandriopoulos & Brousselle, 2012, p. 65; see also Chelimsky, 2007). Even before becoming involved in an evaluation, clients may already have an idea of the extent of their information needs and of how relevant the exercise will be for the organization (Contandriopoulos & Brousselle, 2012). Users’ knowledge structures are complex in that they are influenced by many considerations at both interpersonal and collective levels (Leviton, 2003). This is also the case with overall opinions of evaluation and its value as a management tool. Lastly, given their intimate knowledge of program activities, stakeholders may foresee which changes will or will not be feasible. They may oppose recommendations if the messages they contain run contrary to their beliefs, and they will therefore be resistant to using the evaluation (Contandriopoulos & Brousselle, 2012).

This is not to say that these factors cannot be influenced—on the contrary, process use promotes this point quite extensively. Clients’ perceptions of the evaluation’s utility, of the program or intervention, or even of themselves as actors within the process are not fixed points along a spectrum. Instead, if we draw on constructivist evaluation theory, we can conceptualize these perceptions as continually shaped by time and by multiple and competing sources of influence (Stufflebeam, 2008). In other words, how the evaluator and stakeholders see use will change in response to changing conditions. For example, if a shift of organizational priorities led by a key stakeholder happens concurrently with the evaluation, recommendations may be changed before they are implemented, or even discarded altogether. Similarly, participants may experience considerable transformation with respect to how they conceptually view their program, but on an individual level, when determining future direction, each participant may also rely on prior knowledge and past work experiences (Leviton, 2003). As an outsider, the external evaluator may have quite a different point of view and may not necessarily see the value in these ideas (Leviton, 2003). Scholars such as Contandriopoulos and Brousselle (2012) and Cooper and Levin (2010) stress that knowledge is co-created, re-created, and enmeshed. Ottoson (2009) summarized these complex dimensions in the following definition:

Knowledge in some form (ideas, innovation, skills, or policy) moves in some direction (laterally, hierarchically, spreads, or exchanges) among various stakeholders (knowledge producers, end users, or intermediaries) and contexts (national, community, or organizational) to achieve some outcomes (intended benefits, unanticipated outcomes, or hijacked effects). (Ottoson, 2009, p. 8, emphasis in the original)

Given the amount of interconnection implied by this definition, we propose that no matter how much time the evaluator dedicates to perfecting the implementation design, without a certain synergy the effectiveness of these strategies will reflect only one end of the spectrum—that of the evaluator. If stakeholders are not sufficiently involved in the evaluation to actively engage with the information with which they are presented, and do not have an opportunity to question the emerging data, it may be difficult to create this middle ground, and neither use of findings (instrumental, conceptual, or symbolic) nor process use may be effectively achieved. We think that the notion of evaluators inhabiting complex adaptive systems (Patton, 2011) is an important one, and it is not contradictory to constructivist theories of evaluation in the sense that, to respond to the fickle nature of these systems, evaluators must be flexible (Patton, 2011) and strive to build a common view of what evaluation use may mean for that particular context. To build this common view, there is clearly a need for ongoing interactivity—a process of engagement and knowledge co-construction between evaluators and the potential users of evaluation with whom they work. Without interactivity, we are once again left to, as Weiss (1998, p. 32) put it, “keep fingers crossed that audiences pay attention.” This means that to reach a unified plateau, two (or more) parallel spectra should shift depending on the level of mutual communication and interactivity, as well as the level of learning throughout the evaluation process, until, ideally, there is an overlap based on mutually built meanings. Stakeholders and evaluators are assumed to be under the influence of a variety of factors, but they are also highly influenced by one another. These interactions in turn affect their perceptions of how evaluation may be useful. In other words, if this knowledge is always in a state of “becoming” (Nutley, Walter, & Davies, 2003), it suggests that evaluators’ perceptions of what is useful may (and arguably should) change, lest fixed knowledge become an unmoving cog in an otherwise turning machine. Ultimately, the flexibility that comes from shifting one’s perception and being open to the reconstruction of knowledge is very much in line with the flexibility that Patton (1988, 2011) and others have identified as a key factor in utilization.

We realize that what we have outlined above may be at odds with the reality that some evaluators experience, especially if their work takes place in a policy-oriented world where evaluation is guided by deadlines, predetermined objectives, or even design guidelines (Chelimsky, 2007; Ginsburg & Rhett, 2003). Grasso (2003) pointed out that there is usually a wide range of audiences, from the funding entity that requested the evaluation to the managers and program staff, and each of these stakeholders has their own information needs and unique ideas as to what is useful. Meeting the needs of all these groups is nigh impossible. In those cases, we recognize that knowledge co-creation and interactivity may be seen as a “lofty goal” at odds with “what actually happens” (Sridharan, 2003, p. 483). Therefore, we think that the interactivity approach makes the most sense in participatory-oriented and developmental evaluation contexts, or in contexts where the evaluator is working closely with managers or program staff who have exhibited a willingness to engage. These stakeholders may be the individuals most likely to employ the evaluation’s findings and recommendations (Grasso, 2003) or to benefit from the process.

Evaluation is a field increasingly situated in a world that is fast-paced and ever-changing. However, an optimist would assert that this new age is more open to flexibility and creativity in our profession. As we continue to build networks and communities of practice and to engage diverse stakeholders, digital technologies and social media can be tools of great value, as the following section aims to show.

THE USE OF TECHNOLOGY AS A MEDIUM FOR PROMOTING INTERACTIVITY AND FOSTERING UTILIZATION

Given the deeply complex relationships that occur in evaluation, what may be the value of integrating technology into the evaluation toolkit? With respect to instrumental use, systems that make data widely available in a clear and understandable format, whenever or wherever it is needed, may help program staff to utilize evaluation findings or build upon them (BenMoussa, 2010). Similarly, using tools such as forums, blogs, and social media in conjunction with concept mapping software (such as Coggle, XMind, and Popplet) can advance an organization’s shared understanding of what the program is meant to achieve, thus drawing out users’ mental models (Leviton, 2003) and fostering conceptual use. Lastly, technology can assist with process use by giving evaluators tools to better engage their stakeholders in core evaluation activities such as concept mapping, theory of change modelling, data gathering, and analysis (when those activities are a priority).
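
The concept mapping tools named above are designed for collaborative, graphical use, but the underlying idea of representing a program’s logic as a small graph that stakeholders can inspect and revise can also be sketched with general-purpose libraries. The following minimal Python sketch is our own illustration, not a feature of Coggle, XMind, or Popplet; the node labels are hypothetical and would, in practice, be elicited from stakeholders.

# A minimal, hypothetical sketch of rendering a shared "mental model" of a
# program as a concept map. Node labels are invented for illustration; in
# practice they would come from stakeholder workshops or interviews.
# Requires the networkx and matplotlib packages.

import matplotlib.pyplot as plt
import networkx as nx

# Directed graph: each edge reads "X contributes to Y".
model = nx.DiGraph()
model.add_edges_from([
    ("Outreach activities", "Participant enrolment"),
    ("Participant enrolment", "Skills training"),
    ("Partner engagement", "Skills training"),
    ("Skills training", "Employment outcomes"),
])

# Lay out and draw the map so it can be circulated to stakeholders for comment.
pos = nx.spring_layout(model, seed=42)
nx.draw_networkx(model, pos, node_color="lightsteelblue", node_size=2500,
                 font_size=8, arrows=True)
plt.axis("off")
plt.tight_layout()
plt.savefig("program_mental_model.png", dpi=150)

A static image like this is only a starting point; the interactive tools mentioned above let stakeholders edit the map themselves, which is closer to the kind of knowledge co-construction we describe.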

With respect to stakeholder involvement, communication technologies may help to facilitate inclusion, as well as allow for persistent and sustained contact between members of the evaluation team. They have the potential to overcome barriers of distance or, at the very least, to lessen the disconnection felt by users who are unable to engage with the rest of the team (BenMoussa, 2010). GoogleVoice and Skype are platforms for Internet communication and are compatible with data recording and transcription add-ons that are available at minimal cost (Clementz, 2012). Online bulletin board focus group platforms are another way to include technology in the evaluation process, as many of these tools (such as itracks) support both quantitative and qualitative work while also embedding social media directly into the application. Other tools such as Padlet allow for both synchronous and asynchronous document and multimedia sharing, giving a group of collaborators an intuitive virtual space. The Padlet platform allows a great amount of customization, from making the space fully public and shareable in a variety of formats to serving as a private workspace where stakeholders can provide feedback in a confidential manner. This space can be used for sharing ideas and brainstorming, and as a database of related project documents (Kistler, 2014). It can also be an excellent option for providing quick or even real-time feedback and preliminary data findings.

The primary benefit of including such tools in one’s practice is that they sustain ongoing communication among users; without it, positive relationship building and knowledge creation risk running out of steam and becoming sporadic rather than sustained (Preskill et al., 2003; Telfair & Leviton, 1999). There are also additional benefits, such as reducing the costs so often associated with participatory and collaborative evaluation approaches by allowing the evaluation team and participants to communicate without meeting in person, while also saving valuable time. Data visualization tools such as infographic makers (Padlet, Infogr.am, and Visual.ly, among many others) or digital storytelling tools (Steller and Storehouse) can also help us to offer our findings in formats different from the traditional final report or executive summary. This is beneficial for interacting with a range of audiences who may have different levels of comfort when it comes to evaluation (Grasso, 2003). Visual tools like these entice stakeholders to engage with the data in new and creative ways. As an added benefit, infographics and digital stories can also be shared, which is an attractive prospect for evaluators who hope the findings will reach a wider audience.

As we discussed earlier, interactivity should in theory be less about disseminating “valid” findings and more about arriving at a common understanding. We posit that, by using these tools, the evaluator’s role becomes that of a facilitator who is willing to challenge his or her own mental models of what constitutes “useful” for that particular group of stakeholders.

It is now time for an obligatory word of caution. Perhaps most critically, evaluators ought to consider the ethical ramifications of using technology and social media and ensure that using these tools does not impinge on guidelines for ethical conduct. This may include taking the time to assess any possible risks to stakeholders and taking the necessary precautions to safeguard data and participant identities.

Yet another consideration is the degree to which the integration of these tools is congruent with the evaluation context. Despite advanced systems such as SharePoint and other collaborative knowledge software, evaluator and participant perceptions may be deeply influenced by processes that are not so easily mapped by the tools we have at our disposal today. There is a challenge in conveying verbal information without the natural supplementation of the physical gestures and nonverbal cues (Andres, 2011) that we are so used to. To circumvent this challenge, research on what may be key to successful virtual teamwork (DeRosa, 2011) suggests that commencing the teamwork with a face-to-face meeting can be important, as is ensuring that virtual teamwork happens on a regular basis. We are also seeing more and more tools, such as VoiceThread, that allow for virtual dialogue via digital audio and video. We believe that these tools will become more widespread as this kind of technology develops.

Lastly, if knowledge mobilization research is any indication, building websites and forums does not necessarily mean that users will make use of these technologies (Cooper & Levin, 2010; Nutley et al., 2003). In other words, technology may be a tool that eases interactivity, but it should not become an added burden; thus, we posit that it ought to be used as little or as much as the evaluation context demands. The use of these technologies must be a negotiated process that involves the stakeholders; however, having access to a multitude of different platforms and allowing clients to engage with evaluation in fun and creative ways can undoubtedly improve the utilization of results and strengthen ownership. Thus, evaluators should not be afraid to experiment with technology and discover the balance that makes sense for them.

Various resources are now available to evaluators interested in integrating technology into their practice. One of these, the American Evaluation Association 365 Blog (http://aea365.org/blog/), regularly lists useful resources and makes suggestions for how evaluators can use new tools and social media to improve their work. Another, a tool titled “Harnessing Collaborative Technology,” lists several useful technologies by desired function (such as “learning,” “building community and trust,” and “assessing progress and results,” among others), rates their ease of use, and suggests which technology may be optimal based on the size of the collaboration. Although this tool was designed by GrantCraft (2015) for the philanthropy sector, we think that many of these technologies could also be integrated into evaluation practice, and we invite practitioners to make use of this resource.

As an example, we have begun to use infographics as a mode of sharing data findings with stakeholders in a multiyear government-funded evaluation project. In our case, qualitative data were coded and analyzed, and the preliminary findings were aggregated in visual form using Piktochart templates and the colour scheme of the organization we were working with. We found that this approach was positively received, and the stakeholders responded to the data with both curiosity and interest. Rather than asking them to read a long midway report, we provided them with an alternate way of interacting with the findings, one that we believe yielded a greater degree of engagement on their part. At the present time, there is also willingness to incorporate the findings in this format into the organization’s impact report. Other potential uses for technology within the context of this project may include the use of XMind to map users’ mental models of which aspects of the project were helpful. Lastly, we hope to use a digital storytelling tool like Steller to capture the lessons learned along the way; this can be co-constructed with our partners and shared with the project funder.
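
The workflow described above relied on Piktochart templates. The following minimal Python sketch is our own hypothetical illustration of the same basic step of aggregating coded qualitative themes into a simple visual that could be dropped into an infographic or shared as quick preliminary feedback; the theme labels and counts are invented, not data from our project.

# Hypothetical illustration of summarizing coded qualitative themes as a simple
# chart for stakeholder feedback. Theme labels and counts are invented; in
# practice they would come from the coded interview or focus-group data.
# Requires the matplotlib package.

from collections import Counter

import matplotlib.pyplot as plt

# Counts of how often each theme was coded across transcripts (fabricated example).
theme_counts = Counter({
    "Improved collaboration": 18,
    "Clearer program goals": 12,
    "Need for more training": 9,
    "Communication gaps": 6,
})

themes = list(theme_counts.keys())
counts = [theme_counts[t] for t in themes]

fig, ax = plt.subplots(figsize=(6, 3))
ax.barh(themes, counts, color="#2a9d8f")  # colour could match the client's scheme
ax.set_xlabel("Number of coded excerpts")
ax.set_title("Preliminary themes from stakeholder interviews")
ax.invert_yaxis()  # most frequent theme at the top
fig.tight_layout()
fig.savefig("preliminary_themes.png", dpi=150)

A chart produced this way is static; tools like Piktochart or Infogr.am layer branding and narrative on top, but the underlying aggregation step is essentially the same.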

In closing, research on the use of technology in evaluation practice (Jamieson & Azzam, 2012) demonstrates that there is already interest in technology, at least with respect to tools for e-mail, survey development, and digital data collection. However, we believe that integrating a wider range of technologies such as those mentioned above could be the next step forward, and a valuable stride in the ongoing professionalization of evaluation. Only by actively using these tools can we begin to learn what works and what does not, under what conditions, and in what ways technology facilitates the interactivity that we believe is of such inherent value to evaluation use.

CONCLUSION

The fear that evaluation findings will end up on a dusty shelf continues to permeate the field, and arguments for more “use”—including conceptual use, instrumental use, and symbolic use of evaluation findings, as well as process use—continue to feature prominently in the evaluation literature (Leeuw, 2009). Numerous studies through the years have aimed to determine which factors can lead to useful evaluations, though it appears that scholars have yet to reach consensus. Concepts of utilization, impact, uptake, and so on continue to be explored and redefined, challenging our understanding of what it means to engage in evaluation as both producers (and coproducers) and users of evaluation knowledge. Owing to the constructivist theories of Guba and Lincoln (1995), there has also been an undeniable transition away from thinking of utilization as a combination of a few key ingredients toward a view that contextual and human factors, and allowing for flexibility, may be the provisional answers we are looking for (Contandriopoulos & Brousselle, 2012; Leviton, 2003; Patton, 1988; Preskill & Caracelli, 1997; Shulha & Cousins, 1997).

Building on this epistemological standpoint and linking it to Patton’s recent work on developmental approaches to evaluation, we argued that evaluator and stakeholder perceptions of utility are neither objective nor permanent. It also seems evident that linear communication models seldom lead to effective knowledge use (Contandriopoulos & Brousselle, 2012; Cooper & Levin, 2010; Weiss, 1998). Instead, knowledge building evolves in a flexible and situational manner. From these viewpoints, we argue that achieving use is more likely when the evaluation process is interactive. Without interaction, how evaluators see usefulness represents only one spectrum of ideas—that of the evaluator. Emphasizing interactivity does not devalue the role of context, a characteristic that, as we have tried to demonstrate, is often emphasized in the program evaluation literature. Contextual characteristics and process use can turn the tide of an evaluation, but making evaluation interactive challenges us to question and redefine what we see as useful. Naturally, the evaluator may find that interactions with users immersed in context lead to changes in his or her own perceptions, thereby also ensuring flexibility.

Lastly, we suggest that there is value in exploring technology as a possible way to foster interactivity and build bridges of consensus. Technology must not be framed as a miracle solution or an attempt to turn evaluation into “E-valuation,” and while it goes without saying that the fluidity of interaction cannot easily be replaced by technology, technology can certainly augment it. It is meant to add value rather than replace key interpersonal dynamics, as the nature of these dynamics, for the time being, cannot be reproduced by an algorithm. Instead, ongoing adaptation and negotiation are demanded.

ACKNOWLEDGEMENTS

A previous version of this article was presented at the biennial meeting of the European Evaluation Society in Dublin, Ireland, October 2014. The authors are grateful for the generous financial support provided by the University of Ottawa and the Canadian Evaluation Society Educational Fund (CESEF) for dissemination activities associated with this research. The views and opinions expressed in this article do not necessarily reflect those of the funding organizations.

REFERENCES

Amo, C., & Cousins, J. B. (2007). Going through the process: An examination of the operationalization of process use in empirical research on evaluation. In J. B. Cousins (Ed.), Process use in theory, research and practice. New Directions for Evaluation, 116 (pp. 5–26). San Francisco, CA: Jossey-Bass. http://dx.doi.org/10.1002/ev.240

Andres, H. P. (2011). Shared mental model development during technology-mediated collaboration. International Journal of e-Collaboration, 7(3), 14–30.

Ashley, S. R. (2009). Innovation diffusion: Implications for evaluation. New Directions for Evaluation, 2009(124), 35–45.

BenMoussa, C. (2010). Exploiting mobile technologies to build a knowledge mobilization capability: A work system-based method. Proceedings of the 43rd Hawaii International Conference on System Sciences. Retrieved from https://www.computer.org/csdl/proceedings/hicss/2010/3869/00/02-06-04.pdf http://dx.doi.org/10.1109/HICSS.2010.197

Best, A., & Holmes, B. (2010). Systems thinking, knowledge and action: Towards better models and methods. Evidence & Policy, 6(2), 145–159. http://dx.doi.org/10.1332/174426410X502284

Chelimsky, E. (2007). Factors influencing the choice of methods in federal evaluation practice. New Directions for Evaluation, 2007(113), 13–33. http://dx.doi.org/10.1002/ev.213

Clementz, A. R. (2012). A. Rae Clementz on evaluation tech resources [Web log post]. Retrieved from http://aea365.org/blog/a-rae-clementz-on-evaluation-tech-resources/

Contandriopoulos, D., & Brousselle, A. (2012). Evaluation models and evaluation use. Evaluation, 18(1), 61–77. http://dx.doi.org/10.1177/1356389011430371

Cooper, A., & Levin, B. (2010). Some Canadian contributions to understanding knowledge mobilisation. Evidence & Policy, 6(3), 351–369. http://dx.doi.org/10.1332/174426410X524839

Cousins, J. B., & Leithwood, K. A. (1986). Current empirical research on evaluation utilization. Review of Educational Research, 56(3), 331–364. http://dx.doi.org/10.3102/00346543056003331

DeLuca, C., Poth, C., & Searle, M. (2009). Evaluation for learning: A cross-case analysis of evaluator strategies. Studies in Educational Evaluation, 35(4), 121–129. http://dx.doi.org/10.1016/j.stueduc.2009.12.005

DeRosa, D. (2011). Collaborating from a distance: Success factors of top-performing virtual teams. International Journal of e-Collaboration, 7(3), 43–54. http://dx.doi.org/10.4018/jec.2011070104

Forss, K., Rebien, C. C., & Carlsson, J. (2002). Process use of evaluations: Types of use that precede lessons learned and feedback. Evaluation, 8(1), 29–45. http://dx.doi.org/10.1177/1358902002008001515

Galen, M., & Grodzicki, D. (2011). Utilizing emerging technology in program evaluation. In S. Mathison (Ed.), Really new directions in evaluation: Young evaluators’ perspectives. New Directions for Evaluation, 131, 123–128. http://dx.doi.org/10.1002/ev.389

Ginsburg, A., & Rhett, N. (2003). Building a better body of evidence: New opportunities to strengthen evaluation utilization. American Journal of Evaluation, 24(4), 489–498. http://dx.doi.org/10.1177/109821400302400406

GrantCraft. (2015). Harnessing collaborative technologies. Retrieved from http://collaboration.grantcraft.org/

Grasso, P. G. (2003). What makes an evaluation useful? American Journal of Evaluation, 24(4), 507–514. http://dx.doi.org/10.1177/109821400302400408

Guba, E., & Lincoln, Y. (1995). Fourth generation evaluation (2nd ed.). Newbury Park, CA: Sage.

Henry, G. T., & Mark, M. M. (2003). Beyond use: Understanding evaluation’s influence on attitudes and actions. American Journal of Evaluation, 24(3), 293–314. http://dx.doi.org/10.1177/109821400302400302

Jamieson, V., & Azzam, T. (2012). The use of technology in evaluation practice. Journal of Multidisciplinary Evaluation, 8(18), 1–15.

Kim, B. (2001). Social constructivism. In M. Orey (Ed.), Emerging perspectives on learning, teaching, and technology (pp. 55–61). Retrieved from http://epltt.coe.uga.edu/index.php?title=Social_Constructivism

Kistler, S. (2014). Susan Kistler on Padlet: A free virtual bulletin board and brainstorming tool [Web log post]. Retrieved from http://aea365.org/blog/susan-kistler-on-padlet-a-free-virtual-bulletin-board-and-brainstorming-tool/

Leeuw, F. (2009). Evaluation: A booming business but is it adding value? Evaluation Journal of Australasia, 9(1), 3–9.

Levin, B. (2011). Mobilising research knowledge in education. London Review of Education, 9(1), 15–26. http://dx.doi.org/10.1080/14748460.2011.550431

Leviton, L. C. (2003). Evaluation use: Advances, challenges, and applications. American Journal of Evaluation, 24(4), 525–535.

Leviton, L. C., & Hughes, E. F. X. (1981). Research on the utilization of evaluation: A review and synthesis. Evaluation Review, 5(4), 525–548. http://dx.doi.org/10.1177/0193841X8100500405

Love, A. J. (1993). Internal evaluation: An essential tool for human service organizations. Canadian Journal of Program Evaluation, 8(2), 1–15.

Love, A. J. (1998). Internal evaluation: Integrating evaluation and social work practice. Scandinavian Journal of Social Welfare, 7(2), 145–151. http://dx.doi.org/10.1111/j.1468-2397.1998.tb00216.x

Mathison, S. (2011). Internal evaluation, historically speaking. New Directions for Evaluation, 2011(132), 13–23. http://dx.doi.org/10.1002/ev.393

Neuman, A., Shahor, N., Shina, I., Sarid, A., & Saar, Z. (2013). Evaluation utilization research—Developing a theory and putting it to use. Evaluation and Program Planning, 36(1), 64–70. http://dx.doi.org/10.1016/j.evalprogplan.2012.06.001

Nutley, S., Walter, I., & Davies, H. T. O. (2003). From knowing to doing: A framework for understanding evidence-into-practice agenda. Evaluation, 9(2), 125–148. http://dx.doi.org/10.1177/1356389003009002002

Ottoson, J. M. (2009). Knowledge-for-action theories in evaluation: Knowledge utilization, diffusion, implementation, transfer, and translation. In J. M. Ottoson & P. Hawe (Eds.), Knowledge utilization, diffusion, implementation, transfer, and translation: Implications for evaluation. New Directions for Evaluation, 124 (pp. 7–20). San Francisco, CA: Jossey-Bass.

Patton, M. Q. (1988). The evaluator’s responsibility for utilization. Evaluation Practice, 9(2), 5–24. http://dx.doi.org/10.1016/S0886-1633(88)80059-X

Patton, M. Q. (1997). Utilization-focused evaluation: The new century text (3rd ed.). Thousand Oaks, CA: Sage.

Patton, M. Q. (1998). Discovering process use. Evaluation, 4(2), 225–233. http://dx.doi.org/10.1177/13563899822208437

Patton, M. Q. (2001). Evaluation, knowledge management, best practices, and high quality lessons learned. American Journal of Evaluation, 22(3), 329–336. http://dx.doi.org/10.1177/109821400102200307

Patton, M. Q. (2008). Utilization-focused evaluation (4th ed.). Thousand Oaks, CA: Sage.

Patton, M. Q. (2011). Developmental evaluation: Applying complexity concepts to enhance innovation and use. New York, NY: Guilford Press.

Patton, M. Q., Grimes, P. S., Guthrie, K. M., Brennan, N. J., French, B. D., & Blyth, D. A. (1977). In search of impact: An analysis of the utilization of federal health evaluation research. In C. H. Weiss (Ed.), Using social research in public policy making (pp. 141–163). Lexington, MA: D. C. Heath.

Pawson, R., & Tilley, N. (1997). Realistic evaluation. London, UK: Sage.

Preskill, H., & Caracelli, V. (1997). Current and developing conceptions of use: Evaluation use TIG survey results. Evaluation Practice, 18(3), 209–225. http://dx.doi.org/10.1016/S0886-1633(97)90028-3

Preskill, H., Zuckerman, B., & Matthews, B. (2003). An exploratory study of process use: Findings and implications for future research. American Journal of Evaluation, 24(4), 423–442. http://dx.doi.org/10.1177/109821400302400402

Shulha, L. M., & Cousins, J. B. (1997). Evaluation use: Theory, research and practice since 1986. Evaluation Practice, 18(3), 195–208. http://dx.doi.org/10.1016/S0886-1633(97)90027-1

Smith, N. L., & Chircop, S. (1989). The Weiss-Patton debate: Illumination of the fundamental concerns. American Journal of Evaluation, 10(1), 5–13. http://dx.doi.org/10.1177/109821408901000102

Sridharan, S. (2003). Introduction to special section on “What is a useful evaluation?” American Journal of Evaluation, 24(4), 483–487.

Stufflebeam, D. L. (2008). Egon Guba’s conceptual journey to constructivist evaluation: A tribute. Qualitative Inquiry, 14(8), 1386–1400. http://dx.doi.org/10.1177/1077800408325308

Taut, S. (2008). What have we learned about stakeholder involvement in program evaluation? Studies in Educational Evaluation, 34(4), 224–230. http://dx.doi.org/10.1016/j.stueduc.2008.10.007

Telfair, J., & Leviton, L. C. (1999). The community as client: Improving the prospects for useful evaluation findings. New Directions for Evaluation, 1999(83), 5–15. http://dx.doi.org/10.1002/ev.1142

Volkov, B. B., & Baron, M. E. (2011). Issues in internal evaluation: Implications for practice, training, and research. In B. B. Volkov & M. E. Baron (Eds.), Internal evaluation in the 21st century. New Directions for Evaluation, 132 (pp. 101–111). San Francisco, CA: Jossey-Bass.

von Glasersfeld, E. (2005). Introduction: Aspects of constructivism. In C. Fosnot (Ed.), Constructivism: Theory, perspectives, and practice (pp. 3–7). New York, NY: Teachers College Press.

Weaver, L., & Cousins, J. B. (2004). Unpacking the participatory process. Journal of Multidisciplinary Evaluation, 1, 19–40.

Weiss, C. H. (1988a). Evaluation for decisions: Is anybody there? Does anybody care? Evaluation Practice, 9(1), 5–19. http://dx.doi.org/10.1016/S0886-1633(88)80017-5

Weiss, C. H. (1988b). If program decisions hinged only on information: A response to Patton. Evaluation Practice, 9(3), 15–28. http://dx.doi.org/10.1016/S0886-1633(88)80042-4

Weiss, C. H. (1998). Have we learned anything new about the use of evaluation? American Journal of Evaluation, 19(1), 21–33. http://dx.doi.org/10.1177/109821409801900103

Yarbrough, D. B., Shulha, L. M., Hopson, R. K., & Caruthers, F. A. (2011). The program evaluation standards: A guide for evaluators and evaluation users (3rd ed.). Thousand Oaks, CA: Sage.

AUTHOR INFORMATION

Kate Svensson is a doctoral candidate at the University of Ottawa, where she is pursuing her Ph.D. in Education (Leadership, Evaluation, Curriculum and Policy Studies). Her research interests include the potential uses of digital technology and social media in the field of program evaluation and the mobilization of evaluation knowledge.

J. Bradley Cousins is Professor of Evaluation at the Faculty of Education, University of Ottawa, and former Director of the Centre for Research on Educational and Community Services. Cousins’ main interests in program evaluation include participatory and collaborative approaches, use, and capacity building. He received his Ph.D. in educational measurement and evaluation from the University of Toronto in 1988. Throughout his career he has received several awards for his work in evaluation, including the “Contribution to Evaluation in Canada” award (CES, 1999), the Karl Boudreau award for leadership in evaluation (CES-NCC, 2007), and the Paul F. Lazarsfeld award for theory in evaluation (AEA, 2008). He has published numerous articles and books on evaluation and was Editor of the Canadian Journal of Program Evaluation from 2002 to 2010.