


Systematic Literature Reviews in Engineering Education and Other Developing Interdisciplinary Fields

Maura Borrego (a), Margaret J. Foster (b), and Jeffrey E. Froyd (b)

(a) Virginia Tech, (b) Texas A&M University

Abstract

Background In fields such as medicine, psychology, and education, systematic reviews of the literature critically appraise and summarize research to inform policy and practice. We argue that now is an appropriate time in the development of the field of engineering education to both support systematic reviews and benefit from them. More reviews of prior work conducted more systematically would help advance the field by lowering the barrier for both researchers and practitioners to access the literature, enabling more objective critique of past efforts, identifying gaps, and proposing new directions for research.

Purpose The purpose of this article is to introduce the methodology of systematic reviews to the field of engineering education and to adapt existing resources on systematic reviews to engineering education and other developing interdisciplinary fields.

Scope/Method This article is primarily a narrative review of the literature on conducting systematic reviews. Methods are adapted to engineering education and similar developing interdisciplinary fields. To offer concrete, pertinent examples, we also conducted a systematic review of systematic review articles published on engineering education topics since 1990. Fourteen exemplars are presented in this article and used to illustrate systematic review procedures.

Conclusions Systematic reviews can benefit the field of engineering education by synthesizing prior work, by better informing practice, and by identifying important new directions for research. Engineering education researchers should consider including systematic reviews in their repertoire of methodologies.

Keywords methods; review; systematic review

Introduction

Scholarly contributions must be situated in the context of prior work; however, the number of articles published each year is increasing rapidly, with no signs of abating. Individual researchers cannot read all of the literature required to stay current. In response, diverse fields are developing systematic review approaches to synthesizing primary studies as a research methodology in and of itself. For example, in fields such as medicine, psychology, and education, systematic reviews of the literature are conducted to critically appraise, summarize, and attempt to “reconcile the evidence in order to inform policy and practice” (Petticrew & Roberts, 2006, p. 15). Systematic reviews make important contributions to the evidence base

Journal of Engineering Education © 2014 ASEE. http://wileyonlinelibrary.com/journal/jee
January 2014, Vol. 103, No. 1, pp. 45–76. DOI 10.1002/jee.20038


of a discipline by providing synthesized reviews on important issues or topics (Gough, Oliver, & Thomas, 2012). Reviews can demonstrate gaps in recent work or highlight areas where a concept is accepted as true but little evidence exists to support it (Petticrew & Roberts, 2006). Systematic review purposes and procedures may overlap with narrative reviews, yet they constitute a distinctive approach to synthesizing the literature. Specifically, systematic reviews follow transparent, methodical, and reproducible procedures that might be grouped broadly into two arenas: (1) selecting a collection of appropriate studies that will address the review question from the vast and rapidly increasing knowledge base and (2) extracting trends, patterns, relationships, and the overall picture from the collected studies. While procedures to select a collection of appropriate studies are coalescing, procedures to make meaning from a set of primary studies are still emerging. One set of quantitative meaning-making procedures, i.e., meta-analysis, has emerged to study efficacy of interventions, but newer qualitative and quantitative synthesis procedures for meaning making are still evolving.

Now is an appropriate time in the development of the field of engineering education to both support systematic reviews and benefit from them. Borrego (2007b) has applied Fensham’s (2004) criteria for evaluating development of science education research disciplines to engineering education and found dramatic improvements are still needed to develop its foundation. Specifically, the improvements needed in engineering education include:

Progression: evidence that researchers are informed by previous studies and build upon or deepen understanding

Model publications: publications that other researchers hold up as models of conduct and presentation of research studies in the field

Seminal publications: publications recognized as important or definitive because they marked new directions or provided new insights

Systematic reviews have strong potential to become model and seminal publications on which engineering education research can support progress(ion) by uncovering patterns, connections, relationships, and trends across multiple studies. For example, Johnson, Johnson, and Smith’s (1991, 1998) systematic reviews of active and collaborative learning are highly cited, seminal publications providing the foundation for a variety of active learning interventions within and beyond engineering education.

Further, engineering education is an interdisciplinary field, drawing on multiple – often more mature – fields (e.g., cognitive sciences, physical sciences, education, life sciences, organizational development and change, traditional engineering disciplines) for its content, theoretical frameworks, research methodologies, and key findings (Borrego, Streveler, Miller, & Smith, 2008; Jesiek, Newswander, & Borrego, 2009; Johri & Olds, 2011; Lucena, Downey, Jesiek, & Elber, 2008). Identifying, codifying, and synthesizing across these multiple fields will provide crucial foundations for advancement and reduce the likelihood of “reinventing the wheel.” Although there has been work on engineering education for over 100 years (Lohmann & Froyd, 2010), research in engineering education is in its infancy. In this emerging state (Haghighi, 2005; Johri & Olds, 2011), providing wide-ranging, transparent, repeatable reviews would help provide key underpinnings and help focus on productive directions likely to accelerate development of the field. Finally, a systematic review of systematic reviews in engineering education (described briefly later in this article) demonstrates growing interest in this type of synthesis among engineering education researchers. Providing clearer criteria and



standards for conducting and evaluating systematic reviews would improve potential contributions. In sum, more reviews of existing work across multiple fields, conducted more systematically, will help advance the field of engineering education by lowering barriers for both researchers and practitioners to access relevant findings, by enabling more objective critique of past efforts, by identifying gaps, and by proposing fruitful directions for research.

Purpose

This article describes systematic reviews as a methodology to promote development of the field of engineering education and adapts existing resources on systematic reviews to engineering education and other developing interdisciplinary fields. Several guides to conducting systematic reviews already exist (e.g., Gough et al., 2012; Petticrew & Roberts, 2006). However, these guides were written for other disciplines with greater degrees of scholarly maturity and greater breadth and depth of research – and often for different purposes (e.g., efficacy of interventions). Our principal contribution lies in adapting these procedures specifically to engineering education through collaboration between experienced engineering education researchers and a librarian whose area of scholarly expertise is conducting systematic reviews. While we do not explicitly address other developing fields that may be at a similar stage, we anticipate this article can be a potential resource to similar fields that rely on more established disciplines for theory and research methods. Methodologically, this article contributes a primarily narrative review of the literature on conducting systematic reviews, specifically for engineering education.

In the following sections, we describe the roles and procedures of systematic reviews in other disciplines and compare the primary purposes and types of reviews. Then we describe the steps in conducting and reporting the results of a systematic review, including establishing validity and reliability. We followed systematic review procedures to identify high-quality systematic reviews of topics related to engineering education. These 14 systematic reviews serve as concrete examples for applying the various steps of systematic review methods in engineering education. (It is beyond the scope of this article to include complete findings and synthesis from our systematic review; rather, we hope the examples will offer specific models for engineering education researchers.)

Systematic Reviews in Other Disciplines

Over the past decade, the number of systematic reviews published in medicine, psychology, and education has dramatically increased. A search of Scopus, a comprehensive abstract and citation database of peer-reviewed literature, demonstrated that the most prolific of the disciplines is medicine, with 8,243 reviews published in 2012 alone. Systematic reviews in medicine have become highly valued types of evidence for clinical practices and public policies. The Cochrane Collaboration, an international, nonprofit organization, is leading efforts to inform clinical practices and public policy in medicine by publishing systematic reviews and meta-analyses. Other disciplines also produce reviews; the trends in engineering, psychology, education, and engineering articles with education topics are shown in Figure 1. Although not reaching the scale of health-related fields, reviews in education and psychology are being published at rapidly increasing rates.

In education, three organizations have made strides in creating reviews and developing methods. The Norwegian-based Campbell Collaboration is an international, nonprofit organization founded in 1999 as a related organization of the Cochrane Collaboration. The Campbell Collaboration conducts reviews in education, criminal justice, and social welfare



(Campbell Collaboration, n.d.). They have adapted systematic review methods to accommodate broad disciplinary perspectives, especially those considering social justice and disparities. Another international organization, the Evidence for Policy and Practice Information and Co-ordinating Centre (EPPI-Centre), was founded in 2000 by the Department for Education and Skills, part of the Social Science Research Unit at the Institute of Education, University of London (Andrews, 2005). The EPPI-Centre provides methods and tools for conducting systematic reviews and collects research on education with a global scope in the Database of Education Research. In 2002, the United States Education Sciences Reform Act established the What Works Clearinghouse (Institute of Education Sciences, 2012), which concentrates on effectiveness of education interventions by appraising and synthesizing quantitative studies.

Systematic reviews in medicine, psychology, and education tend to focus on efficacy of interventions. For example, they may seek to understand whether a drug treatment program prevents or delays relapse, or whether a specific curriculum increases student standardized math test scores. However, reviews in these disciplines also explore etiology (causal effects), determine cost effectiveness, and generate theories.

Perhaps the most important consideration in selecting models for systematic reviews in engineering education is the availability of quantitative experimental studies from which to draw for reviews. The more established fields of psychology, education, and medicine have more quantitative experimental studies than are generally available for a given engineering education topic; hence these fields have longer histories of systematic reviews and meta-analyses. As a result, many of the resources describing how to conduct systematic reviews

Figure 1 Number of systematic reviews by subject and year, 1995–2012. Data source: Scopus.



tend to privilege positivist and quantitative approaches (e.g., U.S. Department of Education, 2007). In contrast, developing fields, which rely more heavily on qualitative research, are more accommodating of a variety of study designs in applying the logic of systematic reviews (Gough et al., 2012). For example, public health researchers in the United Kingdom have adapted systematic review approaches from clinical medicine to include quasi-experimental designs and qualitative studies while still emphasizing randomized control trials when available (National Institute for Health and Clinical Excellence [NICE], 2009). Public health approaches to systematic reviews may be particularly relevant to engineering education.

The systematic review literature in public health also identified a number of limitations to systematic reviews that overemphasize tightly controlled studies. Slavin (1984) criticized meta-analysis methods that rely strictly on randomized control trials for focusing too much on research design and not enough on the quality of the study; he noted that well-conducted quasi-experiments may yield more useful information than poorly executed randomized control trials. Publication bias favoring significant results tends to overinflate the effects of an intervention (Petticrew & Roberts, 2002). Petticrew and Roberts (2002) noted that emphasis on efficacy of interventions does not take into account the complexity of interactions around public health issues. Public health often deals with cost effectiveness and the gap between efficacy and adoption of effective behaviors, and similar issues are receiving increased attention in engineering education in terms of adoption of research-based instructional strategies (e.g., Borrego, Froyd, & Hall, 2010; Montfort, Brown, & Pegg, 2012). Thus, for many reasons a public health approach to systematic reviews might be most informative to engineering education.

A second important difference between engineering education and psychology, medicine, and education is the interdisciplinary nature of engineering education. As a newer field, engineering education has a greater tendency to draw its literature and theory base from related fields, including the more established cognitive psychology and industrial and organizational psychology. Particularly if the scope is not limited to studies of engineers or students majoring in engineering, the majority of sources in a systematic review undertaken by an engineering education researcher may come from sources other than engineering education journals and conferences. The availability of experimental studies and the norms of the field(s) from which sources are drawn are important considerations in the inclusion criteria and analysis approaches selected for a systematic review.

Later in this article we will compare and contrast systematic reviews with other types of reviews. First, we provide stronger motivation for conducting reviews and describe the steps in conducting a systematic review to help readers understand the nature of systematic reviews. (Readers seeking an overview may skim the headings and tables or skip ahead.)

Purposes and Goals of Reviews

Researchers may have several different goals in mind when undertaking a review, including “to enable the researcher both to map and to assess the existing intellectual territory, and to specify a research question to develop the existing body of knowledge further” (Tranfield, Denyer, & Smart, 2003, p. 207); “to evaluate the competing theoretical approaches to conceptual representations” (Kiefer & Pulvermüller, 2012, p. 805); and “to identify and describe the extent to which theory or theoretical frameworks informed the development and evaluation of decision support technologies” (Durand, Stiel, Boivin, & Elwyn, 2008, p. 125).

Guides on conducting systematic reviews offer multiple purposes. For example, Gough et al. (2012) succinctly stated reasons that reviews of prior work are needed:



1. Any individual research study may be fallible, either by chance, or because of how it was designed and conducted or reported.

2. Any individual study may have limited relevance because of its scope and context.

3. A review provides a more comprehensive and stronger picture based on many studies and settings rather than a single study.

4. The task of keeping abreast of all previous and new research is usually too large for an individual.

5. Findings from a review provide a context for interpreting the results of a new primary study.

6. Undertaking new primary studies without being informed about previous research may result in unnecessary, inappropriate, irrelevant, or unethical research. (p. 3)

Baumeister and Leary (1997) included the following purposes for reviews: evaluating intervention efficacy, tracing historical development, describing state of knowledge or practice on a topic, developing or evaluating theory, and identifying opportunities for future research and innovation. Reviewers may seek to address multiple purposes in the same review. Since justification for intervention efficacy is well understood in the engineering education community, we briefly describe other purposes and offer examples in engineering education of which we are aware.

Trace Historical Development

Historical reviews trace development of a field or research area chronologically, often with an “ongoing commentary regarding the impact and shortcomings of various contributions” (Baumeister & Leary, 1997, p. 312). Historical development reviews are not particularly common. Since engineering education research has a relatively short history, initial historical development reviews usually encompass the entire timeline of engineering education research. Wankat, Felder, Smith, and Oreovicz (2002) and Jesiek et al. (2009) included historical development of engineering education in the literature reviews in their publications (neither of which were systematic reviews).

Describe State of Knowledge or Practice on a Topic

Many of the earliest systematic reviews were intended as guides for practitioners to synthesize what was currently known about practice in a specific field. Many reviews in engineering education to date have been intended for this purpose (e.g., Froyd & Ohland, 2005; Litzinger, Lattuca, Hadgraft, & Newstetter, 2011; Prince, 2004; Prince & Felder, 2006; Streveler, Litzinger, Miller, & Steif, 2008).

Evaluate or Develop Theory

Engineering education research has been criticized for lack of connections between related studies, and many scholars have called for deeper engagement with theory to promote cumulative growth of understanding (Beddoes & Borrego, 2011; Koro-Ljungberg & Douglas, 2008). Engineering education has few, if any, of its own theories; researchers typically borrow theories (as well as methods) from many other fields including but not limited to psychology, education, sociology, and women’s studies. Systematic reviews can support both evaluating existing theories and developing new ones, as described further in the following subsections.



Evaluate theory

Engineering educators are often skeptical that research results or theories developed by other fields are relevant to engineering education settings (Borrego, 2007a; National Research Council, 2011; Wankat et al., 2002). At least two potential approaches may improve relevance. One is to replicate studies with engineering education participants. The other is through scholarly synthesis and translation procedures such as those developed for systematic reviews. Systematic reviewers in engineering education might, for example, combine primary studies conducted with engineering and nonengineering students to consider whether a particular motivation theory applies to engineering undergraduates and what additional evidence is needed for the theory’s relevance to engineering students. We did not identify any engineering education reviews that took the approach of evaluating theory.

Develop theory

The ability to develop and evaluate theoretical frameworks depends, in part, on the extent to which authors apply theoretical frameworks that have been constructed and how their applications inform refinement of the frameworks. Construction of a framework requires assembly of published literature into patterns; in turn, framework construction requires knowledge of appropriate literature across multiple disciplines to include published partial patterns. After initial framework development, the refinement and improvement of the framework are informed by applications. Strengths of the framework can be established, and opportunities for improvement can be identified. Applications may show where inclusion of more factors, more elaborate descriptions of interactions among the factors, or more sophisticated analysis of factor interaction would improve the framework. Framework refinement and improvement are less likely if the framework is never or infrequently applied. Frequency of application of a framework depends on how many researchers know about the framework and the existence of models for applying the framework; if frequency of application can be increased, it is likely that theory will develop faster. As a result, theoretical development in an emerging field can be catalyzed if authors are aware of reviews on which they can rely for the best available answers to questions about theoretical frameworks and their applications. Examples of engineering education-related reviews that sought to develop theory include Crismond and Adams (2012), Henderson, Finkelstein, and Beach (2010), and Mehalik and Schunn (2007).

Identify Opportunities for Future Research or Innovations

All reviews, regardless of their primary goal, should also identify opportunities for future work. For example, in highlighting gaps in past work, some reviews emphasized areas of opportunity (Beddoes & Borrego, 2011; Koro-Ljungberg & Douglas, 2008). Few engineering education reviews that we identified, though, offered specific implications for future work.

Steps in Conducting a Systematic Review

Now that we have established some of the reasons a researcher may undertake a review, we describe systematic review procedures in detail.

Deciding to Do a Systematic Review

Since the data source for a systematic review is primary studies conducted by others, the quantity, quality, and accessibility of primary studies will determine whether a systematic review can be conducted. The quality of a systematic review relies on consistency and appropriateness among goals, research questions, selection criteria, and synthesis approaches. Developing goals, criteria, and approaches is likely to involve an iterative process to align procedures with the type and number of primary studies available.



Protocols often change throughout the course of a systematic review, particularly during literature searching and appraisal of the primary sources. Petticrew and Roberts (2006) recommended conducting a scoping review to estimate the number and accessibility of sources and decide whether to conduct a systematic review at all. The results of a scoping review should be included in a proposal if one is required (e.g., for funding or dissertation work) before starting a systematic review. Petticrew and Roberts (2006) provided specific situations that could warrant or particularly benefit from systematic reviews:

When there is uncertainty, for example about the effectiveness of a policy or a service, and where there has been some previous research on the issue.

In the early stages of development of a policy, when evidence of likely effects of an intervention is required.

When it is known that there is a wide range of research on a subject but where key questions remain unanswered – such as questions about treatment, prevention, diagnosis, or etiology, or questions about people’s experiences.

When a general overall picture of the evidence in a topic area is needed to direct future research efforts.

When an accurate picture of past research, and past methodological research, is required to promote the development of new methodologies. (p. 21)

Identifying Scope and Research Questions

As with other research methods, well-defined questions that accurately reflect the intent of the researchers suggest criteria to guide design decisions later in the process (Petticrew & Roberts, 2006). For example, later in the process, researchers will have to articulate criteria for excluding or including studies in the collection to be analyzed. If the question is phrased as “What is known about X?”, it will be difficult to determine if a particular study should be included or excluded, since virtually every study could potentially shed some light on the question. The question might be refined in several different ways:

Is X effective (under particular conditions)? or What factors influence effectiveness of X?

What learning outcomes are associated with X?

What factors contribute to increasing or decreasing likelihood of X?

How has X been operationalized, measured, or assessed?

What methods have been used to teach X, increase learning of X, or increase application of X?

What theoretical frameworks and perspectives have been applied to X?

What research and evaluation methods have been used to study X?

For this example, X might be a topic or construct of interest, such as global competency, entrepreneurship, thermodynamics, motivation, diversity, teamwork, or instructor beliefs about teaching.

The EPPI-Centre (2010) advised systematic reviewers to spend time conceptualizing the research question so that it will guide all other stages of the review process:



As with any piece of research, defining the research question for a systematic review is the most important stage of the process, as it provides the framework for all the other stages. The question being asked by a review will determine the method of review and the studies that are considered by the review. The question is likely to include implicit assumptions about the topic and thus any underlying conceptual framework (or logic model) that will be used to interpret and understand the research evidence in the review should be made explicit. (p. 4)

More specifically, Petticrew and Roberts (2006) advised that “what works?” review questions are best answered by synthesis of quantitative experimental studies, while “what matters?” review questions are best addressed through synthesis of qualitative studies (p. 57).

To ensure that relevant parameters are defined in a research question and in later stages of analysis, the National Institute for Health and Clinical Excellence offered the PICO framework (Table 1). Salleh, Mendes, and Grundy (2011) used the PICO framework in describing the interventions for their systematic review of pair programming in computer science education.

The National Institute for Health and Clinical Excellence (2009) offered detailed advice on writing different types of questions for effectiveness, correlation, cost effectiveness, and qualitative systematic reviews. Effectiveness (of an intervention) and qualitative reviews are the two most relevant to engineering education. For effectiveness reviews, researchers should consider outcomes, implementation issues, context, and other factors that could influence the outcomes. For qualitative reviews, the Institute advised avoiding overly broad questions, e.g.,

Table 1 PICO Framework for Structuring Research Questions and Analysis for Systematic Reviews

PICO framework component: Engineering education examples and considerations

Population: Which populations are we interested in? How can they best be described? Do any subgroups need to be considered?
- Engineering students, faculty members, or practicing engineers
- Year in school or years of work experience
- Specific engineering discipline(s), course(s)
- Gender, race/ethnicity, disability status
- Institution type, country

Intervention: Which intervention, activity, or approach should be used?
- Careful definition of what is and is not considered with respect to the intervention: duration, incentives (e.g., required course or elective), curriculum, facilitator training requirements

Comparison: What is/are the main alternative(s) to the intervention being considered?
- Careful definition of any control or comparison group designs to include or exclude, e.g., is a control group required, considering control groups are uncommon in engineering education studies?
- What is the alternative to the intervention, e.g., lecture-only course or no intervention?

Source. National Institute for Health and Clinical Excellence (2009, p. 45).


What is the experience of engineering students in design courses? Table 2 includes examples adapted for engineering education.

Defining Inclusion Criteria

Once the purpose, research questions, and scope have been clearly defined, the next step is to develop inclusion and exclusion criteria for selecting primary studies. Published systematic review procedures describe at least three types of inclusion criteria. The first type is criteria for selecting databases that will be searched for articles. More detail on selecting databases is provided in the following section. The second type is a set of combinations of search words or phrases and logical connectors (AND, OR, . . .) to winnow all the articles in a database down to a small enough set that more detailed inclusion criteria can be examined. In our opinion, in their math education systematic review, Li and Ma (2010) were exemplary in clearly defining terms and showing how search terms and research questions related to inclusion criteria. The third type of criteria guides selection of articles that will be analyzed. In all cases, the purpose and research questions for the systematic review should guide inclusion criteria.
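The winnowing effect of the second type of criterion can be illustrated with a minimal Python sketch. The records and search terms below are invented for illustration; they model a search string of the form (term1 OR term2) AND (term3 OR term4).

```python
# Sketch: applying a Boolean search string with OR-groups joined by AND
# to candidate records. All records and terms here are hypothetical.

def matches(record, any_of_a, any_of_b):
    """True if the record contains at least one term from each OR-group."""
    text = record.lower()
    return (any(term in text for term in any_of_a)
            and any(term in text for term in any_of_b))

records = [
    "Pair programming in a first-year CS education course",
    "Peer programming outcomes in industry teams",
    "Lecture-based education in thermodynamics",
]
group_a = ["pair programming", "peer programming"]  # topic OR-group
group_b = ["education", "course"]                   # setting OR-group

included = [r for r in records if matches(r, group_a, group_b)]
```

Only the first record satisfies both OR-groups; the second lacks an educational-setting term, and the third lacks a topic term. In practice this filtering is done by the database's search engine, but the logic of the winnowing step is the same.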

In summary, authors should determine the inclusion criteria and the process for identifying the articles that will address the purpose of the systematic review and its research questions. Extensive documentation of the protocol and criteria will allow other researchers to replicate the systematic search process.

We identified two helpful examples of inclusion criteria in systematic reviews related to engineering education. Schmid, Bernard, et al. (2009) selected articles that

Address the impact of computer technology (including CBI [computer-based instruction], CMC [computer-mediated communication], CAI [computer-assisted instruction], simulations, e-learning) on students’ achievements (academic performance).

Be conducted in formal post-secondary educational settings (i.e., a course) leading to a certificate, diploma, or degree.

Represent CI [classroom instruction], blended, or computer lab-based experiments, but not distance education environments. (p. 99)

Table 2 Example Systematic Review Questions for Effectiveness and Qualitative Reviews

Effectiveness examples:
- Which active learning pedagogies have been shown to increase conceptual understanding in engineering science courses?
- What types of peer mentoring program designs effectively increase retention of engineering undergraduate students from underrepresented groups?
- Do engineering design activities in afterschool programs increase K-12 students’ interest in engineering?

Qualitative examples:
- How is engineering design conceptualized in K-12 formal and informal education?
- What barriers and perceptions prevent more engineering faculty members from using student-centered pedagogies?
- What reasons are given to explain underrepresentation of women in engineering in different countries or regions?
- How do students decide to persist or switch out of engineering during their first year of college?

Note. Adapted from National Institute for Health and Clinical Excellence (2009).


In contrast, Li and Ma (2010) applied the following criteria:

The study uses CT [computer technology] for instructional (or learning) purposes.

Participants of the study are students in regular classrooms in grades K-12.

The study employs an experimental or quasi-experimental design (as defined later).

The study is published during 1990 to 2006 (without restriction to geographical area or language).

The study uses mathematics achievement as outcome.

The study reports quantitative data in sufficient detail for the calculation of an intervention effect size. (p. 222)

Implementation of inclusion or exclusion criteria is often formalized as part of a protocol, which assists researchers in executing the procedures and in documenting systematic review methods. Gough (2004) emphasized the importance of well-documented and systematic procedures. Both the focus of the review and the inclusion criteria should be explicit. Criteria should reflect the focus and assumptions of the research question (EPPI-Centre, 2010). Inclusion criteria should be free of bias; in other words, they should not intentionally or unintentionally exclude undesirable or inconclusive results (Petticrew & Roberts, 2006).

D. A. Cook and West (2012) explained:

Regardless of the actual criteria selected, it is important to clearly define these criteria both conceptually (often by using a formal definition from a dictionary, theory or previous review) and operationally (by using detailed explanations and elaborations that help reviewers recognise the key concepts as reported by authors in published articles). Although some operational definitions will be defined from the outset, many of these may actually emerge during the process of the review as reviewers come across articles of uncertain inclusion status. Such cases should be discussed by the group with the goal not only of deciding on the inclusion or exclusion of that article, but also of defining a rule that will determine the triage of similar articles in the future. Such decisions, along with brief examples of what should and should not be included, can be catalogued in an explanatory document. Although the conceptual definitions should remain unchanged, the explanatory document and the operational definitions it contains often continue to evolve throughout the review process. (p. 947)

Developing and refining systematic review protocols is an iterative process as review team members begin to apply criteria and discuss and reconcile disagreements at each phase. Cook and West (2012) recommended testing the inclusion criteria through a pilot phase and resolving all discrepancies before moving on to later phases.

Certainly the point of systematic searching is to ensure that all relevant sources are identified. The sampling logic (Gall, Gall, & Borg, 2007) is to identify all relevant sources, rather than a subset that is either representative or particularly informative (although a subset may be selected for analysis due to time or other resource constraints). Professional judgment enters into all phases of research, and systematic reviews are no exception. If researchers are aware of valuable sources that do not fit the inclusion criteria, then the criteria should be adjusted or additional search procedures implemented.


Finding and Cataloging Sources

Several databases can be used to locate primary resources, as shown in Table 3. The principal type of resource is subject (or disciplinary) databases such as ERIC, Education Full Text, Compendex, and Inspec. When selecting databases, it is important to include fields other than engineering education that may be publishing relevant research on the topic of interest, for example, psychology, communication, and computer science. Other, more general databases such as JSTOR and Scopus should also be considered. Searching a wide variety of databases ensures that relevant studies are located regardless of the discipline in which they were published and indexed. However, these databases do not cover the often overlooked gray literature, which includes any research not published by commercial publishers, such as many conference proceedings, books, book chapters, dissertations, theses, government documents, white papers, and proposals (Thompson, 2001). In engineering education, as in other fields, most conference papers are not indexed in databases such as those listed in Table 3; a notable exception is the ASEE/IEEE Frontiers in Education conference. Another useful source of gray literature for engineering education systematic reviews may be evaluation reports (if they can be obtained; evaluation reports are not often posted online) that present original empirical data and conclusions. Reports written to draw attention to an important issue, such as those published by national academies and other advisory organizations, are another source of gray literature commonly cited in engineering education. While committee reports might be an important component of a narrative literature review, they are less relevant to systematic reviews such as those seeking to understand a specific intervention.

When conducting a search, it is essential for systematic reviewers to meet with a librarian or information scientist specializing in their discipline to get expert advice on searching (McGowan & Sampson, 2005). Many librarians are now aware of systematic reviews and can provide assistance beyond the search process. This expertise is most commonly found in medical libraries because of the increasing prevalence of systematic reviews in medicine. Librarians can make recommendations for effective search procedures, sources to search, and

Table 3 Databases for Systematic Review Searches

Subject-specific databases:
- Education: ERIC, Education Full Text (EBSCO)
- Engineering: Compendex, Inspec
- Psychology: PsycINFO (ProQuest)
- Communication: Communication Abstracts (EBSCO), Communication & Mass Media Complete (EBSCO)

General databases: Academic Search Complete, JSTOR, Scopus

Journal databases: Science Direct, Wiley, Directory of Open Access Journals

Gray literature databases:
- Gray literature overall: OpenDOAR (Directory of Open Access Repositories), Open Grey, National Technical Information Service (NTIS), Google
- Conference papers: Conference Proceedings Citation Index (ISI), ProceedingsFirst (OCLC)
- Dissertations/theses: ProQuest Dissertations & Theses, Open Thesis
- Government documents: GPO Access, LexisNexis
- Books: WorldCat, Google Books


tracking citations. Many academic librarians are tenure-track faculty who may be interested in coauthoring a review, particularly if they contribute substantially to finding and cataloging sources (or at other stages). As with any interdisciplinary collaboration, open communication about benefits and contributions is advised (Borrego & Newswander, 2008).

Beyond database searching, three additional search approaches should be considered: contacting experts, citation searching, and hand searching (Papaioannou, Sutton, Carroll, Booth, & Wong, 2010). Contacting experts involves selecting specific experts, professional organizations, and relevant listservs to advertise and request articles matching inclusion criteria. Citation and hand searching are described in the following paragraphs.

Citation searching, or snowball sampling, involves reviewing the lists of works cited by sources already identified (cited references) as well as the later articles that cite the sources already identified (citing references). The National Institute for Health and Clinical Excellence (2009) recommended citation searching in cases when other search steps have not yielded enough useful studies. Many electronic databases, such as Scopus and Web of Knowledge, allow for easy identification and searching of both cited references and citing references, provided these sources are also indexed by the database. For example, Web of Knowledge includes a feature that can help researchers focus on the most highly cited primary sources. Similarly, previous reviews in the topic or a related area, if they exist, can be used to identify additional sources, criteria, search terms, and databases (NICE, 2009).
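Conceptually, snowball sampling is a traversal of the citation graph in both directions from a set of seed papers. The following Python sketch makes that explicit; the tiny citation graph is invented, and in practice the cited-by and cites lists would come from a database such as Scopus.

```python
# Sketch: snowball (citation) searching as breadth-first traversal of a
# citation graph, following both cited and citing references.
from collections import deque

def snowball(seed_papers, cited_by_map, cites_map, max_rounds=2):
    """Collect papers reachable within max_rounds of citation links."""
    found = set(seed_papers)
    frontier = deque(seed_papers)
    for _ in range(max_rounds):
        next_frontier = deque()
        while frontier:
            paper = frontier.popleft()
            # Follow cited references (backward) and citing references (forward).
            for neighbor in cites_map.get(paper, []) + cited_by_map.get(paper, []):
                if neighbor not in found:
                    found.add(neighbor)
                    next_frontier.append(neighbor)
        frontier = next_frontier
    return found

cites = {"A": ["B", "C"], "B": ["D"]}  # A cites B and C; B cites D
cited_by = {"A": ["E"]}                # E cites A
papers = snowball(["A"], cited_by, cites)
```

Starting from seed "A", the first round adds B, C (cited) and E (citing); the second round adds D via B. Each newly found paper would then be screened against the inclusion criteria like any other search result.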

Hand searching is reading the table of contents of journals or other potential sources (NICE, 2009), a task that is required by Cochrane Collaboration guidelines. Hand searching has been the most common method of prior systematic reviews in engineering education research. Wankat (1999, 2004), Whitin and Sheppard (2004), and Koro-Ljungberg and Douglas (2008) hand searched issues of this Journal, while Beddoes, Jesiek, and Borrego (2010) and Osorio and Osorio (2002) hand searched the contents of multiple engineering education journals. Many of these reviews have attempted to map the field of engineering education research in terms of general research topics or methodological approaches. In addition to pointing out that a single journal does not sufficiently represent an entire field, we posit that engineering education research (and interest in related fields) has both expanded to include literature from many related fields and coalesced around key issues to the extent that more systematic reviews of specific topics are possible and needed. Many of the primary engineering education journals are now indexed by the databases listed above and in Table 3. Ulrich’s periodicals directory (Serial Solutions, n.d.) provides information on which specific databases index a particular journal.

After identifying articles using the various search procedures described above and removing duplicates, the number of sources included and excluded at each phase should be tabulated. One option for presenting this information is shown in Figure 2, which was adapted from the PRISMA flowchart (Liberati et al., 2009). Examples of tables presenting this information can be found in Benitti (2012), Salleh et al. (2011), Meese and McMahon (2012), and Monasor, Vizcaíno, Piattini, and Caballero (2010).
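The bookkeeping behind a PRISMA-style flowchart is simple arithmetic over the screening phases. The following Python sketch tallies record counts after merging database hits, removing duplicates, and screening; all counts and database names are invented for illustration.

```python
# Sketch: tallying records at each PRISMA-style screening phase.
# The per-database hit counts and exclusion counts are hypothetical.
from collections import OrderedDict

def prisma_counts(per_database_hits, n_duplicates,
                  n_title_abstract_excluded, n_fulltext_excluded):
    """Return the running totals reported at each phase of the flowchart."""
    identified = sum(per_database_hits.values())
    after_dedup = identified - n_duplicates
    after_screening = after_dedup - n_title_abstract_excluded
    included = after_screening - n_fulltext_excluded
    return OrderedDict([
        ("records identified", identified),
        ("after duplicates removed", after_dedup),
        ("after title/abstract screening", after_screening),
        ("included in synthesis", included),
    ])

counts = prisma_counts({"ERIC": 140, "Compendex": 95, "Scopus": 210},
                       n_duplicates=80,
                       n_title_abstract_excluded=290,
                       n_fulltext_excluded=51)
```

Reporting each of these totals, along with the reasons for full-text exclusions, is what allows readers to audit the winnowing from initial hits to the final set of primary studies.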

Critique and Appraisal

After the primary sources have been selected, identified, and sorted, the next step is to systematically assess the quality of each primary study.

Cook and West (2012) described an intermediate step of abstracting important details from each source. The PICO framework (Table 1) can be used to create a template specific to the review topic that summarizes information on the outcomes, study design, and conclusions. These summaries should include details of how success or impact was defined and the conclusions drawn by the authors (as relevant to the focus of the review). It is also important to include the concrete details that led to the assessment of quality, such as sources of bias or missing details. As in earlier phases, abstracting should have a clear protocol, and specific rules should be created as discrepancies and borderline cases are discussed and resolved by review team members. Lou (2004) and Tamim, Bernard, Borokhovski, Abrami, and Schmid (2011) presented lists of the features they coded in their engineering education-related systematic reviews.

Details gathered during abstracting allow systematic reviewers to assess the quality of each study and its conclusions. Assessing the quality of primary studies means applying criteria for quality research consistent with the methodology selected by the authors. A number of sources described these criteria in terms directly relevant to engineering education research (Borrego, Douglas, & Amelink, 2009; Koro-Ljungberg & Douglas, 2008; Shavelson & Towne, 2002; Streveler, Borrego, & Smith, 2007). In the literature specific to systematic reviews, quality is framed as internal consistency of the study, or its fit, transparency, and appropriateness. In other words, did the researchers clearly describe their procedures and decisions, and were these procedures appropriate to the questions and theoretical framework identified (Petticrew & Roberts, 2006)? Additionally, some guides included consideration of fit to the systematic review question as part of the overall quality assessment of the individual studies (EPPI-Centre, 2010; Gough, 2004). Table 4 gives more detail about aspects of quality specific to intervention studies. Salleh et al. (2011) and Tondeur et al. (2012) provided additional detail on the specific quality appraisal procedures used in their systematic reviews.

Figure 2 Flowchart for articles, adapted from PRISMA (Liberati et al., 2009).

A major consideration in the quality of primary studies is limitations and sources of bias in the studies, particularly those evaluating efficacy of interventions. The existence of publication bias, meaning that positive results are more likely to be published than negative or inconclusive results, is widely acknowledged. Sources of bias within individual studies should also be considered. Participants who self-select to be part of a program, intervention, or survey are likely to be more receptive than any comparison groups. Materials provided to a treatment group may contaminate the comparison group if students share resources, e.g., Web sites. High attrition from programs and studies weakens the validity of the results. Self-reporting may overestimate attitudes and behaviors, particularly when the instruments do not conceal socially desirable responses.

A second major consideration for intervention studies is fidelity of implementation and context. When considering whether a particular intervention works across studies, it is important to understand whether and how different groups are implementing similar interventions. Various aspects of implementation may impact outcomes, and it is important to define the intervention in question. Similarly, contextual characteristics related to the type of institution, location, and socioeconomics of the participants and setting should be considered (What Works Clearinghouse, 2011).

In the systematic review process and report, particularly when drawing conclusions about effectiveness of interventions, authors should formally or informally weight the studies on the basis of application of these quality criteria. When a substantial number of controlled studies are available, reviewers may formalize quality scores in terms of the number of explicit criteria met and display this in summary tables using a notation such as +, ++, or +++. Additionally, Petticrew and Roberts (2006) recommended describing the quality considerations for each source in the text, particularly if research designs vary.
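One way to formalize such a quality summary is to convert the fraction of explicit criteria each study meets into a symbol for the summary table. The Python sketch below illustrates the idea; the thresholds, study names, and criteria counts are all invented, and a real review would justify its own cut points in the protocol.

```python
# Sketch: mapping the number of quality criteria met to a +/++/+++ style
# summary symbol. Thresholds and study data below are hypothetical.

def quality_symbol(criteria_met, total):
    """Return a coarse quality symbol from the fraction of criteria met."""
    fraction = criteria_met / total
    if fraction >= 0.8:
        return "+++"
    if fraction >= 0.5:
        return "++"
    return "+"

criteria_met = {"Study A": 9, "Study B": 6, "Study C": 3}  # out of 10
summary = {study: quality_symbol(met, 10)
           for study, met in criteria_met.items()}
```

The resulting symbols would populate one column of the summary table, while the text describes the specific criteria behind each rating, as Petticrew and Roberts (2006) recommended.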

Table 4 Questions to Consider When Constructing Research Questions and Protocols for Effectiveness Reviews

How valid and appropriate are the outcome measures used to assess effectiveness (e.g., self-report vs. observations)?

How does the content of an intervention or program influence effectiveness?

How does the way an intervention is carried out influence effectiveness?

Does effectiveness depend on the job title or position of the intervener or other factors such as their age, gender, or ethnicity?

Does the site or setting influence effectiveness?

Does the intensity, length, or frequency of the intervention influence its effectiveness or duration of effect?

How does effectiveness vary according to the age, gender, class, and ethnicity of the target audience? Is there any differential impact between different population groups?

Are there any factors that prevent – or support – effective implementation?

Note. Adapted from National Institute for Health and Clinical Excellence (2009, pp. 33–34).


Synthesis

Most of the value of a systematic review derives from a thorough synthesis of the primary sources that have been collected. All of the steps described previously lead up to and support the review synthesis. Cook and West (2012) described the aim of synthesis in systematic reviews:

Synthesis involves pooling and exploring the results to provide a ‘bottom-line’ statement regarding what the evidence supports and what gaps remain in our current understanding. This requires reviewers to organise and interpret the evidence, anticipating and answering readers’ questions about this topic, while simultaneously providing transparency that allows readers to verify the interpretations and arrive at their own conclusions. (p. 950)

Petticrew and Roberts (2006) outlined the basic steps in synthesis:

1. Organize the studies, e.g., by outcome, population, level of analysis, study design

2. Critique within studies

3. Critique across studies

These steps are discussed in the following sections, along with related considerations.

Mapping The first step, organizing the studies and reporting certain characteristics, can be accomplished by mapping. Mapping, as the name implies, maps the work that has been done in a field or topic area. It uses keywords to describe populations, theoretical frameworks, methods, countries, journals, disciplinary affiliations, and other details as relevant. Deciding which characteristics to report should be guided by the review question. Mapping may be viewed as summarizing the results of the abstracting process described above. Additionally, mapping “produces a useful product in its own right to describe what research has been undertaken in a field (as defined by the inclusion criteria) and so can inform policies for future research” (Gough, 2004, p. 56). A number of studies have focused on the goal of mapping the body of engineering education research (Jesiek et al., 2011; Wankat, 1999, 2004; Whitin & Sheppard, 2004). Monasor et al. (2010) presented extensive mapping results in the context of a systematic review. Mapping can also help inform decisions about where to focus the remaining analysis (NICE, 2009). While many previously published reviews have stopped at mapping, the systematic review literature does not consider mapping to be a full synthesis of the primary studies without subsequent critique steps.

Critique within studies The second step focuses on presenting the assessment of quality for each study in turn. Even if detailed tables are included, at least some of the report text should be used to describe each study. The level of detail can range from the amount of text that fits in a table to lengthy summaries. The level of description should be balanced against space considerations (Petticrew & Roberts, 2006). The EPPI-Centre (2010) advised:

It is valuable to be explicit about how studies are singled out for description in a review and to be systematic when presenting detail of different studies so that each study is given standard treatment at write-up. It is even more valuable if the rationale for presenting certain studies and their results includes a measure of the quality and relevance of the studies producing those results. . . . The aim is to make the links between the detail of the studies found and the reviewers’ conclusions clear. (pp. 14–15)


This extended textual description of primary studies is important because it highlights differences in study quality and discusses how sources of bias may have influenced the study results (Petticrew & Roberts, 2006).

Tables Publications of systematic reviews typically include one or more tables to summarize populations, methods, and results of the sources reviewed. Petticrew and Roberts (2006) explained:

Clear, detailed tables increase the transparency of the review. They show the reader which data have been extracted from which studies, and they clarify the contribution made by each study to the overall synthesis. Tables can also reveal something of the review’s wider context; that is, they can be used not just to present details of each study’s methods and findings, but to organize and present information on the context, setting, and population within which each study was conducted. In short, clear, well-organized tables show how each study’s data contribute to the reviewer’s final conclusions, and they allow the reader to assess whether they would have come to similar conclusions, based on the same set of data. (p. 165)

To present important characteristics of the studies as well as the critical appraisal of their quality, multiple tables may be needed. Sources should be listed in tables according to some sort of logical organization; alphabetical listing by first author is often used to help readers navigate between accompanying text and tables. One example is Table 6 in this article. Other strong examples to which we point readers are Becker and Park (2011), Benitti (2012), Falchikov and Goldfinch (2000), Koro-Ljungberg and Douglas (2008), and Salleh et al. (2011). Since the review is likely to include a substantial number of studies, they should be categorized into groups based on methods or other salient concepts. Some examples of meta-analysis summary tables, which highlight potential factors, are presented in Falchikov and Goldfinch (2000) and Li and Ma (2010). The results of meta-analysis are also often presented in plots, as in Schmid et al. (2009) and Springer, Stanne, and Donovan (1999). Some examples of qualitative meta-ethnographies that summarize themes in tables are Meese and McMahon (2012) and Tondeur et al. (2012). As in earlier stages of the systematic review, it is important to define and communicate the criteria for grouping primary studies to avoid the appearance of bias, e.g., to support a particular belief (Petticrew & Roberts, 2006).

Critique across studies The third step is the heart of synthesis and the major contribution of systematic reviews. D. A. Cook and West (2012) asserted that “The most informative aspect of many reviews is not the average result across studies, but, rather, the exploration of why results differ from study to study. An explanation of between study inconsistency should be part of all systematic reviews” (p. 951).

A number of specific synthesis methods are available, and the choice of method is informed by the nature of the primary studies and the goal of the review. For example, Gough et al. (2012) listed meta-analysis and other quantitative methods, thematic summaries, framework synthesis, thematic synthesis, meta-ethnography, and mixed methods synthesis as separate methods. Although most systematic review guides give equal coverage to quantitative and qualitative synthesis methods, in this article we focus on methods that do not require a large number of primary studies for statistical analysis (to test hypotheses or efficacy). We selected this focus because we believe that, in most research areas related to engineering students or engineering content, there are simply not enough studies of similar design to support meta-analysis. This is not to say that meta-analysis cannot inform engineering education research; rather, if such a study were conducted at this point, it would draw heavily from psychology or education sources and closely follow the procedures previously developed and extensively documented for those fields (Institute of Education Sciences, 2012; B. Mullen, Driskell, & Salas, 1998; Petticrew & Roberts, 2006).

In terms of qualitative methods for synthesizing primary studies, Noblit and Hare (1988) are widely credited with first describing qualitative synthesis as meta-ethnography. More recently developed approaches such as thematic synthesis borrow from meta-ethnography (Thomas & Harden, 2008). Barnett-Page and Thomas (2009) critically reviewed nine different methods for qualitative synthesis of primary studies. Thomas and Harden (2008) presented an example of thematic analysis while placing the approach in the context of similar qualitative synthesis techniques. Readers are directed to these sources for more detailed information on the techniques, while we illustrate some of the principles using content analysis approaches.

Qualitative content analysis Particularly for syntheses of qualitative studies, another source of guidance is the literature on content analysis, which is often presented as a type of qualitative research in educational research texts. Content analysis is a method for synthesizing meaning from written documents, transcripts, or other media (including audio/visual). The basic procedure is to create mutually exclusive coding categories, categorize the data into these categories, and report frequencies. If the sample size is large enough, descriptive statistics and simple relationships are also reported (Downe-Wamboldt, 1992; Gall et al., 2007). Using a previously developed coding scheme (e.g., from a similar prior study) saves content analysts time in developing a comprehensive set of categories (Gall et al., 2007).
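The frequency-reporting step of this basic procedure can be sketched in a few lines of Python. The articles and coding categories below are invented for illustration; each source is assigned to exactly one category, reflecting the mutually exclusive coding scheme described above.

```python
# Sketch: reporting category frequencies in a simple content analysis.
# The sources and coding categories here are hypothetical.
from collections import Counter

codings = {
    "article 1": "active learning",
    "article 2": "assessment",
    "article 3": "active learning",
    "article 4": "retention",
}
frequencies = Counter(codings.values())
```

With a larger sample, the same tallies would feed the descriptive statistics and simple relationships that content analysts report.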

Content analysis is flexible and open-ended; the researcher makes some decisions based on the research question and the nature of the data. As with the systematic review methods described in previous sections, content analysts should have a clear purpose and rationale in mind to guide all decisions to adapt the methodology (Leedy & Ormrod, 2005). Leedy and Ormrod (2005) suggested using at least two analysts and reporting interrater reliability (Gall et al., 2007). Graneheim and Lundman (2004) provided an overview of the steps in content analysis with examples based on interview and observation data. Hsieh and Shannon (2005) discussed documents specifically in their description of summative content analysis, which aims to quantify and explore how a specific term is used in a set of texts.
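One common interrater reliability statistic for two analysts coding the same sources is Cohen's kappa, which corrects raw agreement for agreement expected by chance. The Python sketch below computes it from scratch; the two analysts' category assignments are invented for illustration.

```python
# Sketch: Cohen's kappa for two analysts coding the same items.
# kappa = (observed agreement - chance agreement) / (1 - chance agreement)
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two equal-length lists of category labels."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    counts1, counts2 = Counter(rater1), Counter(rater2)
    # Chance agreement: product of each rater's marginal proportions.
    expected = sum(counts1[k] * counts2[k] for k in counts1) / (n * n)
    return (observed - expected) / (1 - expected)

rater1 = ["theme A", "theme A", "theme B", "theme B"]
rater2 = ["theme A", "theme A", "theme B", "theme A"]
kappa = cohens_kappa(rater1, rater2)
```

Here the analysts agree on three of four items (observed agreement 0.75), chance agreement is 0.5, and kappa is 0.5; disagreements like the fourth item are exactly the cases the review team would discuss and reconcile.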

Graneheim and Lundman (2004) explained that early content analysis publications, beginning in the 1950s, described strictly quantitative approaches striving for objectivity; more recently, content analysis has expanded to accommodate more interpretive, qualitative perspectives. Petticrew and Roberts (2006) cautioned that “Perhaps the least useful way of dealing with qualitative data in systematic reviews is to turn it into quantitative data” (p. 191). Content analysts differentiate between the visible, obvious manifest content and the underlying meaning, or latent content (Downe-Wamboldt, 1992; Graneheim & Lundman, 2004; Kondracki, Wellman, & Amundson, 2002). When embarking on a new project, the researcher makes a conscious decision about stance and focus with respect to these issues (Graneheim & Lundman, 2004). Educational research textbooks tend to emphasize “objective” (Leedy & Ormrod, 2005, p. 152) study of manifest content in which “text is assumed to be invariant across readers and across time” (Gall et al., 2007, p. 292). Nursing and nutrition education, in contrast, consider latent meaning and broader context (Downe-Wamboldt, 1992; Graneheim & Lundman, 2004; Kondracki et al., 2002). Finally, the most cutting-edge discussion about creating quantitative data from qualitative texts is in mixed methods research (e.g., see the Journal of Mixed Methods Research). Again, researchers make decisions based on goals and the quality and quantity of primary sources, and these choices should be documented with rationale in the review report.

Finally, relevant theory should be considered in content analysis (Gall et al., 2007), although it may be used in a different way than in primary studies, which may be guided by one theoretical perspective. Content analysts may not want to limit their systematic review to a particular theoretical framework because the aim of the review may be either to compare different theoretical perspectives on the same problem or develop a framework to describe seemingly disparate approaches. Nonetheless, researchers should articulate their stance on the role of theory in a particular systematic review.

Limitations, Validity, and Reliability Concerns

As with any research study, systematic reviews have limitations that arise from both the quality and quantity of the primary studies and the quality of the systematic review procedures (D. J. Cook, Mulrow, & Haynes, 1997).

Most threats to validity and reliability arise from bias. Any situation that prevents a reviewer from discovering primary studies creates bias. Systematic reviewers should consider whether bias influenced their selection of primary studies due to any of the following characteristics of primary studies: strong positive (or negative) results, multiple related publications on the same or related studies, whether they were published in well-indexed journals or conference proceedings, and language of publication (Higgins & Green, 2009). For example, studies identifying statistically significant differences might be more likely to be published than studies finding no statistically significant differences. Similarly, multiple primary sources describing similar studies could skew the results toward one set of original data. Restricting the search to sources published in one language would exclude certain countries and regions. These biases can be reduced by searching thoroughly and including gray literature.

Another type of bias is selective outcome reporting, when primary study authors decide to report only certain outcomes. It is usually difficult for systematic reviewers to guess which results were not reported, but they can note selective outcome reporting as a potential limitation, particularly when primary authors describe their analyses as part of a larger study. Last, reviewers could unintentionally employ selection bias when applying their inclusion and exclusion criteria to primary studies. To avoid selection bias, during the quality appraisal phase, primary studies could be masked for author names, affiliations, and the journal name. Although research has not shown masking to make a significant difference in results of reviews, it does eliminate one source of bias and lessen the overall perception of bias (Berlin, 1997; Jadad et al., 1996).

Publication bias in quantitative studies can be detected using a funnel plot, or a “simple scatter plot of the intervention studies against some measure of each study’s size or precision” (Higgins & Green, 2009, p. 310). Funnel plots were first used in social science research by plotting effect estimates against total sample size (Light & Pillemer, 1984). Each primary study would be plotted as a single point. An asymmetrical funnel plot of a particular research area would indicate a publication bias from not reporting both positive and negative outcomes.
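As a rough numeric illustration of the idea behind funnel-plot asymmetry (the data below are invented; a real analysis would plot every study and apply the formal asymmetry tests described by Higgins & Green, 2009), one can compare the mean effect reported by the smallest studies against that of the largest:

```python
# Hypothetical (effect size d, total sample size n) pairs for one research area.
studies = [
    (0.95, 20), (0.80, 25), (0.70, 40), (0.55, 60),
    (0.50, 120), (0.45, 200), (0.42, 400), (0.40, 800),
]

def mean(xs):
    return sum(xs) / len(xs)

studies.sort(key=lambda s: s[1])        # order studies by sample size
half = len(studies) // 2
small = [d for d, n in studies[:half]]  # effects from the smallest studies
large = [d for d, n in studies[half:]]  # effects from the largest studies

# In an unbiased literature, small studies scatter symmetrically around the
# pooled effect; a large positive gap hints that small null results are missing.
gap = mean(small) - mean(large)
print(f"small-study mean d = {mean(small):.2f}, large-study mean d = {mean(large):.2f}")
print(f"gap = {gap:.2f}")
```

In this invented data set the small studies report markedly larger effects than the large ones, exactly the pattern an asymmetrical funnel plot would reveal visually.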

The quality of a systematic review is determined primarily by consistency and transparency in selecting and reporting procedures for every step of the review. Readers of the review should have enough information to judge the validity of the search procedures, the identified studies, and the conclusions drawn from them. The reliability of the review can be established primarily through a collaborative process in which multiple team members apply and discuss criteria for inclusion or exclusion and quality assessment.


These limitations of systematic reviews cannot be eliminated, but they can be reduced by employing some of the strategies described in this section, defining and following clear procedures, and reporting decisions and procedures in detail (P. D. Mullen & Ramirez, 2006).

Advice for Writing the Review

In many cases, the final step of conducting a systematic review is disseminating results through a report, conference paper, dissertation or thesis, or article. Specifically, Leedy and Ormrod (2005) offered a simple checklist of what to include in a report or article describing a content analysis:

A description of the body of material studied

Precise definitions of the characteristics of focus

The coding or rating procedure

Tabulations of each characteristic

A description of patterns that the data reflect (p. 143)

As reinforced in several sections above, procedures, criteria, and evidence should be presented in enough detail to allow readers to assess the quality of the review and to judge for themselves whether conclusions were warranted. Key decisions related to procedures should include the rationale for the review. Important details of the primary studies should be presented and cited in discussion and conclusion sections.

Several of the systematic reviews we identified described their procedures so clearly and thoroughly that the reviews could be replicated by others. Benitti (2012) presented search protocols in both text and tables. Salleh et al. (2011) provided extensive detail on quality appraisal in particular. Tamim et al. (2011) provided the variables they coded (extracted) in their meta-analysis of prior meta-analyses in an appendix.

Finally, a systematic review should conclude with substantive directions for future research and implications for practice (NICE, 2009). The primary conclusions should not be obvious or already known at the outset of the review, e.g., “More research is needed in this important area.” Interpretation should also include clear directions for future research (Gall et al., 2007). Future research directions should summarize, synthesize, and critique gaps in the present body of research and prioritize which directions, topics, or approaches would be most productive. Systematic reviews we identified with relatively strong future work sections include Falchikov and Goldfinch (2000), Lou (2004), and Springer et al. (1999). Implications for practice should be related to the conclusions that can be (cautiously) drawn based on the evidence collected regarding the efficacy of the intervention and its influencing factors. Systematic reviews with relatively strong implications include Falchikov and Goldfinch (2000), Li and Ma (2010), Springer et al. (1999), and Tondeur et al. (2012). Conclusions should be grounded in the findings. Baumeister and Leary (1997) frame their advice in terms of common mistakes and how to avoid them; this advice is presented in Table 5.

Systematic Identification of Reviews

Throughout this article, we have cited reviews as examples of how specific systematic review procedures have been applied to engineering education topics; we identified the examples by applying systematic review procedures. We now describe how inclusion criteria, search procedures, critique, and appraisal steps were applied to identify these example reviews.


Our purpose was to identify exemplar systematic reviews on topics of interest to engineering education that engineering education researchers might use to guide their systematic review procedures, as well as to demonstrate systematic review procedures. Our inclusion criteria were:

Journal article or conference paper published since 1990

On an “engineering education topic” (engineering or STEM content or participants as topic area)

Included a methods section with criteria for selecting primary studies

One author with expertise in systematic reviews searched the following databases: ERIC (ProQuest), Academic Search (EBSCO), Education Full Text (EBSCO), Educational Administration Abstracts (EBSCO), Applied Science & Technology Full Text (EBSCO), Science & Technology databases (ProQuest), and Papers First and Proceedings First (FirstSearch). The search used keyword combinations (with truncations) such as [engineer* and educat*] and [“systematic review” or “meta-analysis”] and identified 476 unique articles. We also requested, via engineering education research listservs, nominations of high-quality reviews meeting the criteria listed above. Responses provided 13 additional articles, bringing the total to 489 articles, which were entered into a RefWorks online database. These articles were randomly assigned to the three authors so that each article was screened by two authors. The two screeners disagreed on inclusion or exclusion of approximately 20 articles, and these borderline articles were discussed further until consensus was reached. A total of 37 articles met the inclusion criteria.
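The double-screening assignment described above can be sketched in a few lines of code. This is a hypothetical illustration only (the article identifiers, screener initials, and random seed are invented; it is not the authors' actual tooling):

```python
import itertools
import random

def assign_double_screening(article_ids, screeners, seed=0):
    """Randomly assign each article to a pair of screeners so that every
    article is screened independently by exactly two people."""
    rng = random.Random(seed)  # fixed seed makes the assignment reproducible
    pairs = list(itertools.combinations(screeners, 2))  # all screener pairs
    return {art: rng.choice(pairs) for art in article_ids}

articles = [f"A{i:03d}" for i in range(489)]  # 489 articles, as in the text
assignment = assign_double_screening(articles, ["S1", "S2", "S3"])

# Every article gets exactly two distinct screeners...
assert all(len(set(pair)) == 2 for pair in assignment.values())
# ...and the workload splits roughly evenly across the three screeners.
workload = {s: sum(s in pair for pair in assignment.values())
            for s in ["S1", "S2", "S3"]}
print(workload)
```

With three screeners and two per article, each person screens roughly two-thirds of the pool, which is the trade-off the authors accepted in exchange for having every inclusion decision made twice.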

Table 5 Mistakes and How to Avoid Them in Writing Systematic Reviews

Inadequate introduction: Present conceptual and theoretical ideas early so readers can decide whether to read the lengthy review and consider all of the evidence.

Inadequate coverage of evidence: Include in primary study descriptions, methods, and specific results as well as conclusions of the studies.

Lack of integration: Relate individual primary studies together through an overarching conceptualization, perspective, or point of view (i.e., the take-home message).

Lack of critical appraisal: Point out and assess flaws and weaknesses of primary studies or groups of primary studies in order to weigh the evidence.

Failure to adjust conclusions: Temper conclusions based on critique of primary studies so that they are consistent with the overall strength of the evidence.

Blurring assertion and proof: Clearly distinguish between citations to sources which assert an idea and those that provide empirical evidence for it.

Selective review of evidence: Seek out and consider counterexamples.

Focusing on the researchers rather than the research: Deemphasize researcher names by putting citations in parentheses rather than in text.

Stopping at the present: Include implications for future research.

Note. Adapted from Baumeister and Leary (1997).


We adapted the quality appraisal form from the Campbell Collaboration, and again, two authors conducted a quality appraisal of each paper. The final item on the quality appraisal form was an overall score for the systematic review. Eight articles were rated as “excellent” (on a three-point scale of poor, good, excellent) by both reviewers, and an additional six rated as excellent by one reviewer were identified through discussion to consensus. These 14 articles are presented in Table 6, which includes information on the topics and methods of each.

In sum, this brief description of our systematic review of systematic reviews demonstrates how some of the steps in conducting systematic reviews can be implemented in engineering education. We have identified 14 high-quality reviews on engineering education topics that can serve as models for engineering education researchers conducting their own systematic reviews. This set of exemplars included nearly equal numbers of quantitative (meta-analyses of quantitative primary studies) and qualitative (qualitative syntheses or meta-ethnographies of both quantitative and qualitative primary studies) synthesis approaches. We note that most of the exemplars by authors who routinely publish in engineering education venues fell into the qualitative category. We believe qualitative syntheses reflect the nature of the field and limited availability of primary studies that can support quantitative meta-analysis (i.e., studies conducted with control or comparison groups). Given this focus of engineering education on more qualitative approaches, and the availability of sources describing statistical meta-analysis methods, we have emphasized qualitative synthesis methods in our Synthesis section above.

Conclusion

Systematic review refers to a collection of evolving research methodologies for synthesizing existing studies to answer a set of research questions formulated by an interdisciplinary team. One purpose of this article was to introduce systematic review methodologies to the field of engineering education and to adapt existing resources on systematic reviews to engineering education and other developing interdisciplinary fields. On the basis of our review of systematic reviews, we suggest the approaches offer considerable promise for advancing the interdisciplinary field of engineering education with respect to several of the criteria identified by Fensham (2004): conceptual and theoretical development, research methodologies, progression, model publications, seminal publications, and implications for practice. Systematic reviews can benefit the field of engineering education by synthesizing prior work, better informing practice, and identifying important new directions for research.

Systematic reviews can be contrasted with narrative reviews, which are currently the most common type of review in engineering education, examples of which include invited articles and chapters in special issues of this Journal (Vol. 94, No. 1; Vol. 97, No. 3; and Vol. 100, No. 1), the Cambridge Handbook of Engineering Education Research (Johri & Olds, in press), and Heywood’s Engineering Education compendium (2005). This article is also primarily a narrative review. Narrative reviews, like other review approaches, attempt to synthesize or at least summarize prior work in a particular area and may stimulate synthesis of innovative conjectures and procedures (Eva, 2008). However, narrative reviews differ from systematic reviews in that the identification and selection criteria for sources are usually implicit; narrative reviews typically do not include methods sections. This lack of explicit methods leaves narrative reviews open to risks of bias (D. A. Cook & West, 2012) and incomplete coverage. Similarly, the analysis in narrative reviews tends to be ad hoc and likely to support the author’s intent, which may be to argue a certain stance. Systematic reviews can be distinguished from


Table 6 Exemplar Systematic Reviews on Engineering Education Topics
(For each article: type of review; search approaches; number of primary studies; methods to be highlighted.)

Becker and Park (2011). Effects of integrative approaches among science, technology, engineering, and mathematics (STEM) subjects on students’ learning: A preliminary meta-analysis.
Type of review: Quantitative review of quantitative studies. Search approaches: 2 databases, internet, dissertations; limited to 1989–2009. Primary studies: 28. Methods to be highlighted: Informative tables of primary studies and meta-analysis results.

Beddoes, Jesiek, and Borrego (2010). Identifying opportunities for collaborations in international engineering education research on problem- and project-based learning.
Type of review: Qualitative review of qualitative studies. Search approaches: Conferences, journals. Primary studies: 105. Methods to be highlighted: Expands upon hand searching traditionally used in reviews of engineering education research by consulting a variety of conferences and journals.

Benitti (2012). Exploring the educational potential of robotics in schools: A systematic review.
Type of review: Qualitative review of quantitative studies. Search approaches: 6 databases; limited to English only. Primary studies: 10. Methods to be highlighted: Thorough and replicable search procedures; provides rationale for decisions and numbers of included and excluded studies; informative lists and tables for several phases; detailed descriptions of primary studies’ experimental designs.

Falchikov and Goldfinch (2000). Student peer assessment in higher education: A meta-analysis comparing peer and teacher marks.
Type of review: Quantitative review of quantitative studies. Search approaches: 6 databases, citations, requested articles; limited to 1959–1999. Primary studies: 48. Methods to be highlighted: Discussion of quality appraisal; informative tables of primary studies and meta-analysis results; relatively strong future work discussion.

Koro-Ljungberg and Douglas (2008). State of qualitative research in engineering education: Meta-analysis of JEE articles, 2005–2006.
Type of review: Qualitative review of qualitative studies. Search approaches: Journal of Engineering Education only. Primary studies: 15. Methods to be highlighted: Labeled by authors as a meta-analysis, but more of a meta-synthesis or meta-ethnography; informative tables to summarize primary studies.

Li and Ma (2010). A meta-analysis of the effects of computer technology on school students’ mathematics learning.
Type of review: Quantitative review of quantitative studies. Search approaches: 6 databases, citations, conferences, dissertations; limited to 1990–2006. Primary studies: 46. Methods to be highlighted: Clear definition of terms, research questions, and inclusion criteria; informative tables of moderating variables identified in primary studies.

Lou (2004). Understanding process and affective factors in small group versus individual learning with technology.
Type of review: Quantitative review of quantitative studies. Search approaches: 5 databases, citations, dissertations. Primary studies: 71. Methods to be highlighted: Detailed table of features coded for mapping purposes; relatively strong future work discussion.

Meese and McMahon (2012). Knowledge sharing for sustainable development in civil engineering: A systematic review.
Type of review: Qualitative review of qualitative studies. Search approaches: 5 databases. Primary studies: 87. Methods to be highlighted: Detailed description of how criteria were applied, including borderline cases; discussion of quality appraisal; informative tables to summarize search process and themes.

Monasor, Vizcaíno, Piattini, and Caballero (2010). Preparing students and engineers for global software development: A systematic review.
Type of review: Qualitative review of qualitative studies. Search approaches: 6 databases. Primary studies: 38. Methods to be highlighted: Primarily a mapping study with limited synthesis; informative tables to summarize search process and themes.

Salleh, Mendes, and Grundy (2011). Empirical studies of pair programming for CS/SE teaching in higher education: A systematic literature review.
Type of review: Quantitative review of quantitative studies (also some qualitative discussion of a broader group). Search approaches: 10 databases, citations, conferences. Primary studies: 73. Methods to be highlighted: Good adaptation of meta-analysis procedures to a topic with few randomized control trials; thorough and replicable search procedures; detailed quality appraisal including checklist and scoring information; used PICO framework to define intervention; flowchart of numbers of included and excluded studies; informative tables to summarize primary studies.

Schmid et al. (2009). Technology’s effect on achievement in higher education: A Stage I meta-analysis of classroom applications.
Type of review: Quantitative review of quantitative studies. Search approaches: 14 databases, citations, internet, dissertations, journals. Primary studies: 231. Methods to be highlighted: Detailed procedures of how multiple researchers worked together; clear definition of terms, research questions, and inclusion criteria; discussion of quality appraisal; informative plots of meta-analysis results.

Springer, Stanne, and Donovan (1999). Effects of small-group learning on undergraduates in science, mathematics, engineering, and technology: A meta-analysis.
Type of review: Quantitative review of quantitative studies. Search approaches: 4 databases, conferences, citations, dissertations. Primary studies: 39. Methods to be highlighted: Informative plots of meta-analysis results that highlight moderating variables identified in primary studies.

Tamim, Bernard, Borokhovski, Abrami, and Schmid (2011). What forty years of research says about the impact of technology on learning: A second-order meta-analysis and validation study.
Type of review: Quantitative review of quantitative studies. Search approaches: 11 databases, dissertations, internet. Primary studies: 25. Methods to be highlighted: Second-order meta-analysis, i.e., meta-analysis of other meta-analyses; thorough and replicable search procedures; extensive appendix of coding variables (for extraction) from meta-analyses in particular; informative table to summarize primary studies; relatively strong future work discussion.

Tondeur et al. (2012). Preparing pre-service teachers to integrate technology in education: A synthesis of qualitative evidence.
Type of review: Qualitative meta-ethnography. Search approaches: 1 database. Primary studies: 19. Methods to be highlighted: Good description of meta-ethnography approach; detailed quality appraisal description; informative tables to summarize results; well-organized results (around themes).

Full citations are included in the Reference list.


narrative reviews by their formalized synthesis procedures. Specific identification and synthesis approaches, matched to the research questions and types of primary studies selected, distinguish different types of systematic reviews.

A second purpose of this article was to provide an overview of steps in conducting a systematic review. As mentioned, systematic review refers to an evolving collection of methodologies (Tricco, Tetzlaff, & Moher, 2011) within which are ones for identifying the set of studies that will be analyzed as well as separate methodologies for synthesizing the set of studies. Within identifying methodologies, consideration must be given to the types of primary studies, e.g., quantitative, qualitative, mixed methods, and specific subtypes, that will be included in the collection. Often these decisions depend heavily on the research questions. Synthesizing methodologies draw from a wide variety of quantitative (e.g., statistical meta-analysis, network meta-analysis [Tricco et al., 2011]), qualitative (e.g., meta-ethnography, content analysis), and mixed methods methodologies. In the past, the synthesizing methodology has been used to refer to the type of systematic review (e.g., meta-analysis), but as systematic review methodologies continue to evolve, it is becoming more widely recognized that the type of systematic review might be more accurately described as the choice of the synthesis methodology. Systematic procedures such as those described in this article ensure the quality and potential contributions of the review.

Third, we applied systematic review procedures to identify high-quality systematic reviews on engineering education topics. These 14 engineering education systematic reviews offer models for the systematic review process, including decisions at various steps.

Engineering education researchers should consider including systematic reviews in their repertoire of methodologies. We hope this article offers an overview, guidelines, and resources that will promote adoption of the methodologies.

Acknowledgments

This research was supported by the U.S. National Science Foundation while Maura Borrego was working for the Foundation. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

ReferencesAndrews, R. (2005). The place of systematic reviews in education research. British Journal of

Educational Studies, 53(4), 399–416.Barnett-Page, E., & Thomas, J. (2009). Methods for the synthesis of qualitative research: A criti-

cal review. BMC Medical Research Methodology, 9(59), 1–10. doi: 10.1186/1471-2288-9-59Baumeister, R. F., & Leary, M. R. (1997). Writing narrative literature reviews. Review of

General Psychology, 1(3), 311–320.Becker, K., & Park, K. (2011). Effects of integrative approaches among science, technology,

engineering, and mathematics (STEM) subjects on students’ learning: A preliminarymeta-analysis. Journal of STEM Education: Innovations & Research, 12(5), 23–37.

Beddoes, K., & Borrego, M. (2011). Feminist theory in three engineering education jour-nals: 1995–2008. Journal of Engineering Education, 100(2), 281–303.

Beddoes, K., Jesiek, B. K., & Borrego, M. (2010). Identifying opportunities for collabora-tions in international engineering education research on problem- and project-basedlearning. Interdisciplinary Journal of Problem-based Learning, 4(2), 7–34.

Systematic Literature Reviews in Engineering Education 71

Page 28: Systematic Literature Reviews in Engineering Education and Other Developing Interdisciplinary Fields

Benitti, F. B. V. (2012). Exploring the educational potential of robotics in schools: A sys-tematic review. Computers & Education, 58(3), 978–988.

Berlin, J. A. (1997). Does blinding of readers affect the results of meta-analyses? Lancet,350(9072), 185–186.

Borrego, M. (2007a). Conceptual difficulties experienced by engineering faculty becomingengineering education researchers. Journal of Engineering Education, 96(2), 91–102.

Borrego, M. (2007b). Development of engineering education as a rigorous discipline: Astudy of the publication patterns of four coalitions. Journal of Engineering Education,96(1), 5–18.

Borrego, M., Douglas, E. P., & Amelink, C. T. (2009). Quantitative, qualitative, and mixedresearch methods in engineering education. Journal of Engineering Education, 98(1), 53–66.

Borrego, M., Froyd, J. E., & Hall, T. S. (2010). Diffusion of engineering education innova-tions: A survey of awareness and adoption rates in U.S. engineering departments. Journalof Engineering Education, 99(3), 185–207.

Borrego, M., & Newswander, L. K. (2008). Characteristics of successful cross-disciplinaryengineering education collaborations. Journal of Engineering Education, 97(2), 123–134.

Borrego, M., Streveler, R. A., Miller, R. L., & Smith, K. A. (2008). A new paradigm for anew field: Communicating representations of engineering education research. Journal ofEngineering Education, 97(2), 147–162.

Campbell Collaboration. (n.d.). About Us [Web page]. Retrieved from http://www.campbellcollaboration.org/about_us/index.php

Cook, D. A., & West, C. P. (2012). Conducting systematic reviews in medical education: Astepwise approach. Medical Education, 46, 943–952.

Cook, D. J., Mulrow, C. D., & Haynes, R. B. (1997). Systematic reviews: Synthesis of bestevidence for clinical decisions. Annals of Internal Medicine, 126(5), 376–380.

Crismond, D. P., & Adams, R. S. (2012). The informed design teaching and learningmatrix. Journal of Engineering Education, 101(4), 738–797.

Downe-Wamboldt, B. (1992). Content analysis: Methods, applications and issues. HealthCare for Women International, 13, 313–321.

Durand, M.-A., Stiel, M., Boivin, J., & Elwyn, G. (2008). Where is the theory? Evaluatingthe theoretical frameworks described in decision support technologies. Patient Educationand Counseling, 71(1), 125–135. doi: 10.1016/j.pec.2007.12.004

Eva, K. M. (2008). On the limits of systematicity. Medical Education, 42(9), 852–853. doi:10.1111/j.1365-2923.2008.03140.x

Evidence for Policy and Practice Information and Co-ordinating Centre (EPPI-Centre).(2010). EPPI-Centre methods for conducting systematic reviews. London: University ofLondon.

Falchikov, N., & Goldfinch, J. (2000). Student peer assessment in higher education: Ameta-analysis comparing peer and teacher marks. Review of Educational Research, 70(3),287–322.

Fensham, P. J. (2004). Defining an identity: The evolution of science education as a field ofresearch. New York, NY: Springer.

Froyd, J. E., & Ohland, M. W. (2005). Integrated engineering curricula. Journal of Engi-neering Education, 94(1), 147–164.

Gall, M. D., Gall, J. P., & Borg, W. R. (2007). Educational research: An introduction (8thed.). Boston, MA: Pearson.

72 Borrego, Foster, & Froyd

Page 29: Systematic Literature Reviews in Engineering Education and Other Developing Interdisciplinary Fields

Gough, D. (2004). Systematic research synthesis. In G. Thomas & R. Pring (Eds.), Evi-dence-based practice in education (pp. 44–62). Berkshire, UK: Open University Press.

Gough, D., Oliver, S., & Thomas, J. (2012). An introduction to systematic reviews. LosAngeles, CA: Sage.

Graneheim, U. H., & Lundman, B. (2004). Qualitative content analysis in nursing research:Concepts, procedures and measures to achieve trustworthiness. Nurse Education Today,24, 105–112.

Haghighi, K. (2005). Quiet no longer: Birth of a new discipline. Journal of Engineering Edu-cation, 94(4), 351–353.

Henderson, C., Finkelstein, N. D., & Beach, A. (2010). Beyond dissemination in collegescience teaching: An introduction to four core change strategies. Journal of College ScienceTeaching, 39(5), 18–25.

Heywood, J. (2005). Engineering education: Research and development in curriculum andinstruction. Hoboken, NJ: Wiley-IEEE Press.

Higgins, J. P. T., & Green, S. (2009). Cochrane handbook for systematic reviews of interventions (Cochrane Collaboration Ed.). Hoboken, NJ: Wiley-Blackwell.

Hsieh, H.-F., & Shannon, S. E. (2005). Three approaches to qualitative content analysis. Qualitative Health Research, 15(9), 1277–1288.

Institute of Education Sciences. (2012). What Works Clearinghouse. Retrieved from http://ies.ed.gov/ncee/wwc/aboutus.aspx

Jadad, A. R., Moore, R. A., Carroll, D., Jenkinson, C., Reynolds, D. J., Gavaghan, D. J., & McQuay, H. J. (1996). Assessing the quality of reports of randomized clinical trials: Is blinding necessary? Controlled Clinical Trials, 17(1), 1–12.

Jesiek, B. K., Borrego, M., Beddoes, K., Hurtado, M., Rajendran, P., & Sangam, D. (2011). Mapping global trends in engineering education research, 2005–2008. International Journal of Engineering Education, 27(1), 77–90.

Jesiek, B. K., Newswander, L. K., & Borrego, M. (2009). Engineering education research: Discipline, community, or field? Journal of Engineering Education, 98(1), 39–52.

Johnson, D. W., Johnson, R. T., & Smith, K. A. (1991). Active learning: Cooperation in the college classroom (3rd ed.). Edina, MN: Interaction Book Company.

Johnson, D. W., Johnson, R. T., & Smith, K. A. (1998). Cooperative learning returns to college: What evidence is there that it works? Change, 30(4), 26–35.

Johri, A., & Olds, B. (2011). Situated engineering learning: Bridging engineering education research and the learning sciences. Journal of Engineering Education, 100(1), 151–185.

Johri, A., & Olds, B. (Eds.). (in press). Cambridge handbook of engineering education research.

Kiefer, M., & Pulvermüller, F. (2012). Conceptual representations in mind and brain: Theoretical developments, current evidence and future directions. Cortex, 48(7), 805–825. doi: 10.1016/j.cortex.2011.04.006

Kondracki, N. L., Wellman, N. S., & Amundson, D. R. (2002). Content analysis: Review of methods and their applications in nutrition education. Journal of Nutrition Education and Behaviour, 34(4), 224–230.

Koro-Ljungberg, M., & Douglas, E. P. (2008). State of qualitative research in engineering education: Meta-analysis of JEE articles, 2005–2006. Journal of Engineering Education, 97(2), 163–176.

Leedy, P. D., & Ormrod, J. E. (2005). Practical research: Planning and design. Upper Saddle River, NJ: Merrill.

Li, Q., & Ma, X. (2010). A meta-analysis of the effects of computer technology on school students’ mathematics learning. Educational Psychology Review, 22(3), 215–243.

Liberati, A., Altman, D. G., Tetzlaff, J., Mulrow, C., Gotzsche, P. C., Ioannidis, J. P., . . . Moher, D. (2009). The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: Explanation and elaboration. BMJ (Clinical Research Ed.), 339, b2700. doi: 10.1136/bmj.b2700

Light, R. J., & Pillemer, D. B. (1984). Summing up: The science of reviewing research. Cambridge, MA: Harvard University Press.

Litzinger, T., Lattuca, L. R., Hadgraft, R., & Newstetter, W. (2011). Engineering education and the development of expertise. Journal of Engineering Education, 100(1), 123–150.

Lohmann, J. R., & Froyd, J. E. (2010). Chronological and ontological development of engineering education as a field of scientific inquiry. Paper presented at the Second Meeting of the Committee on the Status, Contributions, and Future Directions of Discipline-Based Education Research, Washington, DC.

Lou, Y. (2004). Understanding process and affective factors in small group versus individual learning with technology. Journal of Educational Computing Research, 31(4), 337–369.

Lucena, J., Downey, G., Jesiek, B. K., & Elber, S. (2008). Competencies beyond countries: The re-organization of engineering education in the United States, Europe, and Latin America. Journal of Engineering Education, 97(4), 433–447.

McGowan, J., & Sampson, M. (2005). Systematic reviews need systematic searchers. Journal of the Medical Library Association, 93(1), 74–80.

Meese, N., & McMahon, C. (2012). Knowledge sharing for sustainable development in civil engineering: A systematic review. AI & Society, 27(4), 437–449.

Mehalik, M., & Schunn, C. (2007). What constitutes good design? A review of empirical studies of design processes. International Journal of Engineering Education, 22(3), 519.

Monasor, M. J., Vizcaíno, A., Piattini, M., & Caballero, I. (2010). Preparing students and engineers for global software development: A systematic review. Paper presented at the 5th IEEE International Conference on Global Software Engineering, Princeton, NJ.

Montfort, D., Brown, S., & Pegg, J. (2012). The adoption of a capstone assessment instrument. Journal of Engineering Education, 101(4), 657–678.

Mullen, B., Driskell, J. E., & Salas, E. (1998). Meta-analysis and the study of group dynamics. Group Dynamics, 2(4), 213–229.

Mullen, P. D., & Ramirez, G. (2006). The promise and pitfalls of systematic reviews. Annual Review of Public Health, 27, 81–102. doi: 10.1146/annurev.publhealth.27.021405.102239

National Institute for Health and Clinical Excellence (NICE). (2009). Methods for the development of NICE public health guidance (2nd ed.). London: Author.

National Research Council. (2011). Promising practices in undergraduate science, technology, engineering, and mathematics education: Summary of two workshops. Washington, DC: National Academies Press.

Noblit, G. W., & Hare, R. D. (1988). Meta-ethnography: Synthesizing qualitative studies. London: Sage.

Osorio, N., & Osorio, M. (2002). Engineering education in Europe and the USA: An analysis of two journals. Science and Technology Libraries, 23(1), 49–70.

Papaioannou, D., Sutton, A., Carroll, C., Booth, A., & Wong, R. (2010). Literature searching for social science systematic reviews: Consideration of a range of search techniques. Health Information & Libraries Journal, 27(2), 114–122. doi: 10.1111/j.1471-1842.2009.00863.x

Petticrew, M., & Roberts, H. (2002). Evidence, hierarchies, and typologies: Horses for courses. Journal of Epidemiology and Community Health, 57, 527–529.

Petticrew, M., & Roberts, H. (2006). Systematic reviews in the social sciences: A practical guide.Malden, MA: Blackwell.

Prince, M. J. (2004). Does active learning work? A review of the research. Journal of Engineering Education, 93(3), 223–231. doi: 10.1002/j.2168-9830.2004.tb00809.x

Prince, M. J., & Felder, R. M. (2006). Inductive teaching and learning methods: Definitions, comparisons, and research bases. Journal of Engineering Education, 95(2), 123–128. doi: 10.1002/j.2168-9830.2006.tb00884.x

Salleh, N., Mendes, E., & Grundy, J. (2011). Empirical studies of pair programming for CS/SE teaching in higher education: A systematic literature review. IEEE Transactions on Software Engineering, 37(4), 509–525.

Schmid, R. F., Bernard, R. M., Borokhovski, E., Tamim, R., Abrami, P. C., Wade, C. A., . . . Lowerison, G. (2009). Technology’s effect on achievement in higher education: A Stage I meta-analysis of classroom applications. Journal of Computing in Higher Education, 21(2), 95–109.

Serials Solutions. (n.d.). Ulrichsweb [Web page]. Retrieved from http://www.serialssolutions.com/en/services/ulrichs/ulrichsweb

Shavelson, R., & Towne, L. (2002). Scientific research in education. Washington, DC: National Academies Press.

Slavin, R. E. (1984). Meta-analysis in education: How has it been used? Educational Researcher, 13(8), 6–15.

Springer, L., Stanne, M., & Donovan, S. (1999). Effects of small group learning on undergraduates in science, mathematics, engineering and technology: A meta-analysis. Review of Educational Research, 69(1), 21–52.

Streveler, R. A., Borrego, M., & Smith, K. A. (2007). Moving from the scholarship of teaching and learning to educational research: An example from engineering. In D. R. Robertson & L. B. Nilson (Eds.), To Improve the Academy (Vol. 25, pp. 139–149). Boston, MA: Anker.

Streveler, R. A., Litzinger, T. A., Miller, R. L., & Steif, P. S. (2008). Learning conceptual knowledge in the engineering sciences: Overview and future research directions. Journal of Engineering Education, 97(3), 279–294.

Thomas, J., & Harden, A. (2008). Methods for the thematic synthesis of qualitative research in systematic reviews. BMC Medical Research Methodology, 8(45), 1–10. doi: 10.1186/1471-2288-8-45

Thompson, L. A. (2001). Grey literature in engineering. Science & Technology Libraries,19(3-4), 57–73. doi: 10.1300/J122v19n03_05

Tondeur, J., Van Braak, J., Sang, G., Voogt, J., Fisser, P., & Ottenbreit-Leftwich, A. (2012). Preparing pre-service teachers to integrate technology in education: A synthesis of qualitative evidence. Computers & Education, 59(1), 134–144.

Tranfield, D., Denyer, D., & Smart, P. (2003). Towards a methodology for developing evidence-informed management knowledge by means of systematic review. British Journal of Management, 14(3), 207–222. doi: 10.1111/1467-8551.00375

Tricco, A. C., Tetzlaff, J., & Moher, D. (2011). The art and science of knowledge synthesis. Journal of Clinical Epidemiology, 64(1), 11–20.

U.S. Department of Education. (2007). Report of the Academic Competitiveness Council. Washington, DC: Author.

Wankat, P. C. (1999). An analysis of the articles in the Journal of Engineering Education. Journal of Engineering Education, 88(1), 37–42.

Wankat, P. C. (2004). Analysis of the first ten years of the Journal of Engineering Education. Journal of Engineering Education, 93(1), 13–21.

Wankat, P. C., Felder, R. M., Smith, K. A., & Oreovicz, F. S. (2002). The scholarship of teaching and learning in engineering. In M. T. Huber & S. P. Morreale (Eds.), Disciplinary styles in the scholarship of teaching and learning: Exploring common ground (pp. 217–237). Sterling, VA: Stylus.

What Works Clearinghouse. (2011). Procedures and standards handbook. Washington, DC: U.S. Department of Education, Institute of Education Sciences.

Whitin, K., & Sheppard, S. (2004). Taking stock: An analysis of the publishing record as represented by the Journal of Engineering Education. Journal of Engineering Education, 93(1), 5–12.

Authors

Maura Borrego is an associate dean and associate professor of engineering education at Virginia Tech, 660 McBryde Hall (0218), Blacksburg, VA 24061; [email protected].

Margaret J. Foster is an assistant professor in the Medical Sciences Library at Texas A&M University, College Station, TX 77843-4462; [email protected].

Jeffrey E. Froyd is a TEES research professor at Texas A&M University, College Station, TX 77843-3127; [email protected].
