Investigating the Cognitive Affective Engagement Model of Learning From Multiple Texts: A Structural Equation Modeling Approach

Alexandra List, The Pennsylvania State University, State College, USA

Reading Research Quarterly, 0(0) pp. 1–37 | doi:10.1002/rrq.361 © 2020 International Literacy Association.

ABSTRACT

Given limited work modeling the role of individual difference factors and processing variables in students’ learning from multiple texts, the author evaluates such a model. In particular, the model analyzed examines the relation between cognitive (i.e., habits with regard to information evaluation) and affective (i.e., interest) individual difference factors and multiple-text outcomes (i.e., integrated mental model development and intertext model development, as facets of multiple-text integration), as mediated by students’ processing of multiple texts (i.e., time on texts, engagement in cross-textual elaboration). Interest and time devoted to text access were found to have a direct effect on integrated mental model formation, whereas time and students’ engagement in cross-textual elaboration had a direct effect on intertext model development. Additionally, time on texts mediated the relation between both individual difference factors and the integration-related outcomes examined. Implications for theory development and research on learning from multiple texts are discussed.

Learning from multiple texts is both an essential academic skill, necessary for students to understand complex topics such as climate change, overpopulation, and the harms and benefits of social media use, and a critical real-world competency, necessary for students’ success in the information age (Braasch, McCabe, & Daniel, 2016; Goldman & Scardamalia, 2013; List & Alexander, 2017a; Strømsø, Bråten, & Britt, 2010). Learning from multiple texts refers both to the processing required to understand and synthesize information presented across multiple texts and to the outcomes of such processing (e.g., construction of an integrated mental model of multiple texts). These processes are involved when individuals seek to decide on a political candidate, consider the benefits and drawbacks of a course of treatment, or even compare restaurant menus and reviews to decide where to go for dinner. Both multiple-text processing and the resulting outcomes require that students select and coordinate a wealth of information, engage in deliberative and deep-level strategy use, and make metacognitive and epistemic judgments about information quality and their own understanding. In short, learning from multiple texts is complex and cognitively demanding. Yet, such learning is motivationally taxing as well, with students requiring topic interest and task investment to engage in the demanding cognitive processes that learning from multiple texts requires. In this study, based on the cognitive affective engagement model (CAEM) of multiple-source use (List & Alexander, 2017b, 2018a), I examined the role of both cognitive and affective or motivational individual difference factors in students’ learning from multiple texts.

Individual Difference Factors in Learning From Multiple Texts

A variety of individual difference factors have been investigated to determine the extent to which different students may be stymied or engaged by the complexity that learning from multiple texts presents. Individual difference factors are a somewhat reliable or stable set of learner characteristics that may be directly assessed, independent of a specific learning task; expected to vary across learners; associated with and distinguishable from one another; and meaningfully related to learners’ processing and performance across a variety of tasks (Friedman & Miyake, 2017; Perfors & Kidd, 2018; Tavor et al., 2016). Individual difference factors examined have included prior knowledge, epistemic beliefs, and argumentative reasoning skills, with these found to result in differences in task performance, both independently and in interaction (Bråten & Strømsø, 2006a; Burin, Barreyro, Saux, & Irrazábal, 2015; Ferguson & Bråten, 2013; Mason, Ariasi, & Boldrin, 2011).

Nevertheless, the bulk of prior work examining the role of individual difference factors in students’ learning from multiple texts has focused on factors that are cold or cognitive, rather than warm or affective, in nature. List and Alexander (2017b, 2018a) suggested that in addition to focusing on the cognitive factors impacting multiple-text use, affective or motivational factors ought to be considered as well. In particular, List and Alexander introduced the CAEM to capture the joint role of cognitive and affective factors in students’ multiple-text comprehension and integration. The CAEM suggests that students’ interactions with multiple texts, including both processing (e.g., time on texts, strategy use) and performance outcomes, are the result of students’ assumption of one of four default stances, or general orientations, toward task completion. The default stances that students adopt are distinguishable according to their positioning along two dimensions reflecting students’ dispositions toward multiple-text task completion. The first dimension, affective engagement, comprises students’ motivation for task completion, including interest in and attitudes toward the topic of the task. This dimension has been examined to the most limited extent in prior research. The second dimension, behavioral dispositions, captures the extent to which students are habituated or predisposed to engage in a variety of deep-level multiple-text use strategies, including information evaluation and integration. The role of these two sets of factors is introduced in the CAEM as shaping students’ processing and learning from multiple texts (for a more detailed discussion of default stances, see List & Alexander, 2017b, 2018a; for an empirical analysis of CAEM default stances, see Strømsø, Bråten, & Brante, 2020).

The CAEM, although based in prior empirical research, has yet to be fully investigated. This study constitutes an initial step in validating the CAEM by considering the role of individual difference factors, both cognitive (i.e., students’ habits with regard to information evaluation) and affective (i.e., interest), in students’ learning from multiple texts. As suggested by the CAEM, students’ relative positioning according to these factors (i.e., habits with regard to information evaluation, interest) should impact both processing variables (i.e., strategy use, time on texts) and performance. I expected performance in this study to reflect students’ learning from multiple texts.

Learning From Multiple Texts

In this study, as in prior work, learning from multiple texts was considered to result in multiple-text integration on the part of learners. In prior work, integration has been examined in three distinct ways: as students’ association of text-based information (1) with their prior knowledge (i.e., elaboration), (2) with other information presented in the same texts (i.e., intratextual integration), or (3) with information presented in another, seemingly unrelated, text or texts (i.e., intertextual integration). Although students have been found to manifest all three of these types of integration when learning from multiple texts (Anmarkrud, Bråten, & Strømsø, 2014; Du & List, 2020; List, Du, & Lee, 2020; Wolfe & Goldman, 2005), in this study, I specifically focused on intertextual integration or students’ connection formation across texts because students have been found to experience particular challenges with intertextual integration (Britt & Aglinskas, 2002; Britt & Sommer, 2004; Cerdán & Vidal-Abarca, 2008; List, Du, Wang, & Lee, 2019) and because such integration has been viewed as an outcome unique to students’ learning from multiple texts rather than from a single text.

Integration, within the context of learning from multiple texts, has primarily been conceptualized in terms of the documents model framework (Britt, Perfetti, Sandak, & Rouet, 1999; Perfetti, Rouet, & Britt, 1999), which suggests that integrating multiple texts involves two primary processes. First, students need to construct a single coherent representation of the common story or situation described across texts; this is done by linking redundant and complementary information presented across texts. Second, students need to tag information coming from individual texts to source(s) of origin (i.e., to answer the question, Who said what?). This latter process allows students to deal with cross-textual conflict by comparing the source trustworthiness of conflicting pieces of information to decide which to privilege or believe. Moreover, relating specific pieces of information to source(s) of origin allows these to be interpreted in light of author perspective (Barzilai & Weinstock, 2020; List, 2020a) and allows for cross-textual connections to be formed (e.g., recognizing that two authors agree or disagree with each other). These multiple-text integration functions are accomplished by students constructing two cognitive representations of multiple texts: the integrated mental model and the intertext model. The integrated mental model is a conceptualization of the central issue or situation described across texts. The intertext model is a structural representation of multiple texts that captures both the relations among them and the links between content in texts and sources of origin (e.g., author).

Although identifying important information and points of commonality across texts in the integrated mental model may be similar to the process of single-text comprehension, carried out across multiple texts, tagging information from texts to sources of origin in the intertext model represents a process unique to learning from multiple texts. To start, such tagging allows explicit conflicts to be resolved by comparing source trustworthiness. Moreover, this allows distinct but not necessarily conflicting points of view to be separately represented and understood as, potentially, the result of authors’ differing perspectives. For instance, students learning about a complex topic, such as trophy hunting, may expect distinct information to come from different domain perspectives (e.g., economic, ecological) or from the perspective of those engaged in trophy hunting vis-à-vis those whose communities host trophy hunters (List, 2020b). Further, students should expect not only to encounter differing perspectives but also for these perspectives to be key to appropriately interpreting the information introduced. Indeed, the understanding that different sources are likely to present different information in different ways and that such differentiation needs to be examined, and potentially resolved, in any cognitive representation of multiple texts may be understood as the fundamental feature of learning from multiple texts. That is, rather than just the assimilation of complementary and coherent information presented across texts in an additive fashion, learning from multiple texts is much more likely to involve processes such as interpretation, corroboration, evaluation, and reconciliation.

Assessment of Comprehension and Integration

Students’ learning from multiple texts has been assessed in a variety of ways, including through students’ completion of various objective measures tapping integrated mental model and intertext model construction. The three most commonly used objective measures of learning from multiple texts have been the sentence verification task, the intratextual inference verification task, and the intertextual inference verification task (Bråten, Strømsø, & Britt, 2009; Gil, Bråten, Vidal-Abarca, & Strømsø, 2010; Strømsø et al., 2010). The sentence verification task asks students to determine whether a variety of sentences, some accurate (i.e., originals, paraphrases) and others not (i.e., meaning changes, distractors), include the same information as was introduced across multiple texts. As such, this task can best be said to represent students’ surface-level comprehension or basic understanding of information presented within particular texts. The intratextual inference verification task asks students to identify statements, created by linking disparate ideas introduced within the same text, as valid or invalid in nature. The intratextual inference verification task constitutes a measure of deep-level, single-text comprehension (Strømsø, Bråten, & Samuelstuen, 2008). Finally, the intertextual inference verification task asks students to determine the veracity of statements generated by accurately or inaccurately linking statements across disparate texts. Thus, the intertextual inference verification task may be said to tap students’ multiple-text integration and, with its focus on content-based inferencing across texts, to target integrated mental model formation, in particular.

In addition to measures tapping students’ integrated mental model formation, some measures have further tapped intertext model development and, especially, students’ linking of source information (e.g., author) with texts’ content. For instance, Braasch, Rouet, Vibert, and Britt (2012) used a cued source recall measure to assess students’ linking of content with source information from texts (i.e., recall of who said what), alongside students’ citation use in writing. Elsewhere, source–content integration tasks have been used to tap intertext model development, with students asked to match a text-based statement with one of four sources of origin (Stang Lund, Bråten, Brandmo, Brante, & Strømsø, 2019).

These various measures, tapping both integrated mental model and intertext model development and used extensively in prior work, have all been found to be associated with one another to various extents (Bråten et al., 2009; Bråten, Strømsø, & Samuelstuen, 2008; Hagen, Braasch, & Bråten, 2014; Strømsø et al., 2008). Moreover, students have been found to struggle with these various measures, indicating the importance of considering individual difference factors and process variables in understanding students’ learning from multiple texts. In this study investigating the CAEM, I considered the role of cognitive and affective factors in students’ integrated mental model and intertext model development and the extent to which this is mediated by students’ processing of multiple texts (i.e., deep-level strategy use, time devoted to text access).

More generally, this study contributes to prior work seeking to comprehensively model students’ learning from multiple texts. Robust models, considering individual difference factors, process variables, and multiple-text performance outcomes, have rarely been examined thus far in understanding students’ learning from multiple texts, with three notable exceptions.

Modeling Students’ Learning From Multiple Texts

Three key studies have sought to comprehensively model students’ process of multiple-text use. First, Stang Lund et al. (2019) examined the role of individual difference factors (i.e., prior knowledge, interest, gender) and text presentation format (i.e., with information presented across multiple texts or as one text) on students’ learning from multiple texts, with learning assessed via a conflict verification task and a source–content integration measure. The conflict verification task, tapping integrated mental model development, asked students to determine whether a series of statements were consistent with or conflicted with other information they had read. The source–content integration measure, corresponding to intertext model development, asked students to match statements from texts to sources of origin or to indicate that the information did not come from any of the texts provided. Using path analysis, Stang Lund et al. found that text presentation format, as well as gender and prior knowledge, directly predicted source–content integration performance. Moreover, they found that presentation format, as well as prior knowledge, predicted conflict verification performance, which then served as a mediator for source–content integration. The path connecting interest to conflict verification performance did not rise to significance; however, interest was positively associated with gender and negatively associated with format of text presentation.

In the present study, like Stang Lund et al. (2019), I looked at integration outcomes tapping both integrated mental model and intertext model construction. However, rather than examining conflict verification performance as mediating the relation between individual difference factors and source–content integration, I examined the role of processing variables as mediators of a host of multiple-text outcomes. Further, like Stang Lund et al., I examined a variety of individual difference factors (i.e., interest, habits with regard to information evaluation) in association with students’ processing and multiple-text learning performance.

Kobayashi (2009) similarly examined the role of prior knowledge and college experience, as individual difference factors, and students’ summary notes, as a processing variable, on their recall of information in individual texts (i.e., recall of intratextual arguments) and comprehension of intertextual relations. Whereas Kobayashi examined intratextual recall as a mediating factor, students’ use of summary notes was examined as an exogenous predictor, like prior knowledge and college experience. In fact, processing variables mediating the relation between individual difference factors and multiple-text outcomes have, to my knowledge, only been examined in one prior study.

In particular, Bråten, Anmarkrud, Brandmo, and Strømsø (2014) examined the direct and indirect effects of individual difference factors on students’ processing of multiple texts and, ultimately, on multiple-text comprehension. Four individual difference factors were examined: prior knowledge, need for cognition, epistemic beliefs about the need for knowledge to be justified by multiple sources, and individual interest. Processing variables examined included situational interest, deep-level strategy use, and effort, as assessed via the time students devoted to multiple-text access. Multiple-text comprehension, asking students to consider or integrate disparate perspectives when responding to a series of open-ended questions, was examined as the outcome variable of interest and was directly predicted only by prior knowledge. Students’ deep-level strategy use was found to mediate the association between prior knowledge, epistemic beliefs, and need for cognition and multiple-text comprehension performance. Effort devoted to multiple-text use mediated the relation between individual interest and epistemic beliefs and multiple-text comprehension. In sum, findings from Bråten et al.’s study point to the mediating role of processing variables in students’ comprehension of multiple texts. Moreover, their work suggested that a variety of individual difference factors, both cognitive and affective, have an effect on students’ multiple-text task performance.

The Present Study in Relation to Prior Work Modeling Learning From Multiple Texts

This article complements and expands work by Stang Lund et al. (2019), Kobayashi (2009), and Bråten et al. (2014) in at least three ways. First, like Stang Lund et al. and Kobayashi, I examined the associations among a variety of multiple-text-related outcomes, including measures of intertext and integrated mental model construction, as facets of integration. Moreover, like Bråten et al., I included processing variables as mediating the relation between individual difference factors and students’ multiple-text task performance. Finally, like Stang Lund et al. and Bråten et al., I included both cognitive and affective individual difference factors in predicting multiple-text processing and performance. As such, this study models the associations among cognitive and affective individual difference factors and multiple-text task performance, assessed via measures of integration and mediated by multiple-text processing variables, all within a single model (see Figure 1). The individual difference factors examined in this study include interest (List, Stephens, & Alexander, 2019; Strømsø et al., 2010) and students’ reported habits with regard to strategy use (Bråten & Strømsø, 2011; Firetto & Van Meter, 2018; List & Alexander, 2018b). These have been found to be associated with multiple-text learning in prior work.

In addition to these individual difference factors, I also examined two primary mediators impacting students’ learning from multiple texts: time on texts and cross-textual elaborative strategy use. Both of these have been investigated in prior work, with time on texts considered to be an indicator of effort and engagement and with cross-textual elaboration reflecting depth of strategy use. List, Stephens, and Alexander (2019) found that the time students devoted to text access mediated the relation between their situational interest and the number of text-based (i.e., evidentiary) justifications included in their writing. Hagen et al. (2014) found that students’ reports of engagement in cross-textual elaboration were negatively associated with students directly paraphrasing text-based information in their notes, rather than transforming it. Likewise, List, Du, et al. (2019) found that students composing better responses based on multiple texts (i.e., with more integration and citation use) also reported higher levels of cross-textual strategy use, a finding confirmed by think-aloud data (see also Anmarkrud et al., 2014).

I examined individual difference factors and processing variables, the latter as mediators, in predicting students’ multiple-text task performance. This analytic approach was further grounded in the CAEM in two primary ways (List & Alexander, 2017b, 2018a, 2019). First, I specifically examined the role of topic interest (i.e., intrinsic interest in texts’ content; List, Stephens, & Alexander, 2019; Schiefele, 1999) and students’ habits with regard to information evaluation and integration in learning from multiple texts. These two factors, in particular, correspond to the affective engagement and behavioral dispositions dimensions that dictate students’ default stances, or orientations, toward multiple-text task completion, as described in the CAEM. Second, as suggested by the CAEM and List and Alexander’s (2019) more recent work, the default stances that students adopt shape the nature of both their multiple-text processing and task performance. These two aspects are modeled in this study through the mediators and performance outcomes examined and the relations among them. In doing so, this study constitutes an important initial empirical examination of the CAEM.

Also, to further inform prior work, I partially replicate analyses by Bråten et al. (2014) in Appendix A. Although the core focus of this study was on validating the CAEM, supplemental analyses are presented in Appendix A to further replicate earlier work examining additional contributors (i.e., need for cognition, students’ beliefs regarding the need for justification by multiple texts) to students’ learning from multiple texts.

FIGURE 1 Theorized Model
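To make the structure of the theorized model concrete, the sketch below shows how a path model of this general form could be specified with the lavaan package in R, the software named later in the Method section. This is a minimal illustration, not the study’s actual syntax: the variable names (interest, eval_habits, time_texts, cross_elab, imm, itm) and the data frame dat are hypothetical placeholders.

```r
library(lavaan)

# Hypothetical sketch of a mediation model following Figure 1:
# two individual difference factors -> two process mediators -> two outcomes.
model <- '
  # mediators regressed on the individual difference factors
  time_texts ~ a1 * interest + a2 * eval_habits
  cross_elab ~ a3 * interest + a4 * eval_habits

  # outcomes regressed on mediators and on the individual difference factors
  imm ~ b1 * time_texts + b2 * cross_elab + c1 * interest + c2 * eval_habits
  itm ~ b3 * time_texts + b4 * cross_elab + c3 * interest + c4 * eval_habits

  # example indirect effect: interest -> time on texts -> integrated mental model
  ind_interest_imm := a1 * b1
'
fit <- sem(model, data = dat)
summary(fit, fit.measures = TRUE, standardized = TRUE)
```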


Method

Participants

Participants were 351 undergraduate students enrolled in a large university in the Northeastern United States (mean [M] age = 19.54 years, standard deviation [SD] = 1.39 years). Two participants were excluded because they only completed the individual difference measures and were not presented with any of the multiple texts. The sample was majority female (71.51%, n = 251; male: 27.92%, n = 98; two students did not report gender). The sample was also majority White (68.38%, n = 240), with 13.68% (n = 48) of students reporting Asian ethnicity, 5.70% (n = 20) identifying as Hispanic/Latino, and 3.42% (n = 12) reporting African American ethnicity. Moreover, 2.56% (n = 9) of students identified their race/ethnicity as other, and 5.41% (n = 19) of students identified as biracial or mixed race; three students did not report race/ethnicity. Students reported a variety of class standings and were pursuing a variety of majors in the social and natural sciences.

Students were recruited from five courses in educational psychology, biology, and chemistry; I was the instructor for one of these courses. To minimize any pressure that students in my course may have felt to participate, two measures were taken. First, students completed the study online and had no direct contact with me during participation. Second, students were not assigned extra credit for participation (and I did not check which students participated) until the end of the semester, once final course grades had been determined. See Table 1 for a comparison of indicators across gender, major, and year in school. Participants were recruited via an announcement either emailed to them or posted to their course site and received extra credit for study participation.

Procedures

This study had three primary parts. First, students were asked to complete individual difference measures assessing prior knowledge, interest, and habits with regard to information evaluation and integration. Additional individual difference factors collected (e.g., attitudes, need for cognition, justification by multiple texts) are described and analyzed in Appendixes A and B. Then, students were asked to complete a multiple-text task. This involved reading a set of four texts addressing a controversial topic (i.e., trophy hunting) and completing a variety of outcome measures tapping integrated mental model and intertext model development. Finally, students completed a number of posttask measures, including reporting strategy use on the Multiple-Text Strategy Inventory (Bråten & Strømsø, 2011). All study materials were presented electronically via Qualtrics. Because the study was fully online, participants completed study materials at a time and location of their choosing. The full study took participants approximately one hour to complete.

Measures

Consistent with Figure 1, I examined three types of measures, reflecting individual difference factors, process variables, and performance outcomes. Whereas individual difference factors were assessed prior to task completion, process variables and performance outcomes were assessed within the context of a multiple-text task. Six individual differences were assessed in this study: prior knowledge, interest, habits with regard to information evaluation, attitudes, need for cognition, and justification by multiple texts. The individual difference factors corresponding to the CAEM are introduced in the Method section, and attitudes (Appendix B), need for cognition (Appendix A), and justification by multiple texts (Appendix A) are discussed in the appendixes where these factors are analyzed.

Prior Knowledge

Prior knowledge was assessed via a researcher-developed seven-item multiple-choice measure, based on key terms and concepts found across the four trophy hunting texts. For instance, a sample item was

In trophy hunting, the trophy is:
(a) the prize awarded to the hunter who was most successful
(b) the hunting license awarded to the hunter who bids the highest amount
(c) the part of the animal (e.g., the horn) that the hunter keeps as a souvenir
(d) the qualifying points a hunter is required to earn before hunting big game

with (c) constituting the correct answer. In addition to the four answer choices, students also had the option to select “I don’t know” in response to each multiple-choice question. Responses were scored as correct or incorrect, with the “I don’t know” option and incorrect responses assigned a 0. This was done so students’ total scores reflected the number of items they responded to correctly.

The Cronbach’s alpha reliability for the seven-item prior knowledge measure was .44. This particularly low level of reliability may stem from a number of sources: the limited number of items on the measure, the limited extent to which this item set may correspond to a unidimensional construct (i.e., rather than to information presented across four different texts), and the relatively conservative nature of Cronbach’s alpha as a measure of reliability (McNeish, 2018). Moreover, earlier work has found that prior knowledge measures administered to low-knowledge samples have lower than desired levels of reliability (Bråten & Strømsø, 2006b; Peterson & Alexander, 2020). This latter explanation for the low alpha level achieved is bolstered by the fact that when this measure was administered at posttest as a measure of comprehension, its reliability increased somewhat to .58. Nevertheless, due to the low reliability of this measure, it was excluded from further analyses.
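For readers unfamiliar with this kind of scoring and reliability check, the R sketch below illustrates one way it could be done; it is not the study’s actual code, and the item columns (pk1 through pk7), the answer key, and the data frame dat are hypothetical.

```r
library(psych)

# Hypothetical answer key for the seven multiple-choice items.
key <- c(pk1 = "c", pk2 = "a", pk3 = "d", pk4 = "b", pk5 = "c", pk6 = "a", pk7 = "b")

items <- dat[, names(key)]  # raw responses, e.g., "a"-"d" or "dk" for "I don't know"

# Score each item 1 if correct; incorrect and "I don't know" responses both score 0.
scored <- as.data.frame(mapply(function(resp, ans) as.integer(resp == ans), items, key))

dat$prior_knowledge <- rowSums(scored)  # total number of items answered correctly
psych::alpha(scored)                    # Cronbach's alpha (plus alpha if item deleted)
```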


TABLE 1
Performance Across Demographic Groups

Individual difference factors
  Prior knowledge | Gender: Female 2.20 (1.33), Male 2.21 (1.28) | Major: Education 2.04 (1.27), Biology 2.37 (1.36), Chemistry 1.70 (1.07), Other 2.07 (1.00), Undecided 2.18 (1.40) | Year in school: 1st 2.17 (1.20), 2nd 2.08 (1.34), 3rd 2.29 (1.35), 4th 2.28 (1.33), 5th 2.40 (1.58)
  Interest | Gender: Female 3.29** (0.95), Male 2.98** (0.96) | Major: Education 3.12 (0.98), Biology 3.24 (0.93), Chemistry 3.01 (1.05), Other 3.55 (1.01), Undecided 3.55 (0.95) | Year in school: 1st 3.12 (1.00), 2nd 3.17 (0.99), 3rd 3.25 (0.95), 4th 3.37 (0.88), 5th 2.98 (0.92)
  Habits in evaluation and integration | Gender: Female 3.86 (0.74), Male 3.86 (0.73) | Major: Education 3.71 (0.81), Biology 3.92 (0.68), Chemistry 4.01 (0.68), Other 3.94 (0.76), Undecided 3.73 (0.95) | Year in school: 1st 3.82 (0.79), 2nd 3.82 (0.78), 3rd 3.89 (0.69), 4th 3.89 (0.72), 5th 4.23 (0.57)

Process factors
  Time on texts | Gender: Female 4.23 (4.15), Male 2.76 (2.49) | Major: Education 3.97 (4.42), Biology 3.84 (3.62), Chemistry 2.82 (3.34), Other 3.95 (3.23), Undecided 4.19 (3.60) | Year in school: 1st 4.32 (3.86), 2nd 3.97 (4.01), 3rd 3.43 (3.43), 4th 3.94 (4.91), 5th 2.78 (2.56)
  Log(time on texts) | Gender: Female 0.41 (0.47), Male 0.27 (0.42) | Major: Education 0.37 (0.46), Biology 0.38 (0.46), Chemistry 0.24 (0.42), Other 0.42 (0.45), Undecided 0.49 (0.36) | Year in school: 1st 0.43 (0.47), 2nd 0.38 (0.46), 3rd 0.33 (0.46), 4th 0.38 (0.44), 5th 0.30 (0.36)
  Cross-textual elaboration | Gender: Female 3.61 (0.69), Male 3.50 (0.75) | Major: Education 3.43 (0.72), Biology 3.66 (0.69), Chemistry 3.60 (0.78), Other 3.54 (0.67), Undecided 3.45 (0.84) | Year in school: 1st 3.44 (0.74), 2nd 3.61 (0.74), 3rd 3.61 (0.67), 4th 3.72 (0.69), 5th 3.67 (0.76)

Performance measures
  Integrated mental model | Gender: Female 0.50 (0.24), Male 0.52 (0.23) | Major: Education 0.48 (0.23), Biology 0.54** (0.24), Chemistry 0.37** (0.27), Other 0.57 (0.22), Undecided 0.49 (0.24) | Year in school: 1st 0.51 (0.22), 2nd 0.51 (0.24), 3rd 0.51 (0.25), 4th 0.47 (0.24), 5th 0.54 (0.27)
  Intertext model | Gender: Female 0.40 (0.25), Male 0.40 (0.24) | Major: Education 0.37 (0.23), Biology 0.43** (0.25), Chemistry 0.25** (0.19), Other 0.47 (0.25), Undecided 0.41 (0.24) | Year in school: 1st 0.39 (0.23), 2nd 0.36 (0.25), 3rd 0.43 (0.25), 4th 0.38 (0.24), 5th 0.45 (0.17)

Note. Values are M (SD). Statistically significant differences are marked with asterisks. **p < .01.


Topic Interest

Topic interest was assessed by asking students to rate six terms related to trophy hunting, the topic of the texts. Specifically, students were asked to “please rate how interested you are in topics related to trophy hunting,” using a 5-point Likert-type scale, ranging from 1 (not at all interested) to 5 (very interested). The topics that students were asked to rate were hunting for food, hunting for sport, wildlife conservation, animal rights, economic development in Africa, and ecology. Interest items were developed to be relevant to the topic of the study and to reflect the perspectives introduced across the four study texts (i.e., for and against trophy hunting and related to economic and ecological concerns). The full six-item scale had a Cronbach’s alpha reliability of .77, with reliability increasing to .80 when the two items associated with hunting were excluded (i.e., hunting for sport, hunting for food), based on alpha-if-item-deleted metrics. Only four interest items were used in all subsequent analyses.

Information Evaluation

The scale for habits with regard to information evaluation consisted of six items capturing the frequency with which students engaged the skills and practices associated with information evaluation and integration. Although a number of related measures were examined (e.g., Hargittai, Fullerton, Menchen-Trevino, & Thomas, 2010; Strømsø & Bråten, 2010), this was a researcher-designed scale created to specifically tap source evaluation and corroboration. These are the deep-level multiple-text use strategies expressly identified in the CAEM as associated with students’ integration of multiple texts. Specifically, students were asked to rate, on a 5-point scale, whether they strongly disagreed or strongly agreed with items such as “When reading or researching on the Internet I try to: decide whether texts are trustworthy/evaluate the evidence that is presented/compare sources to one another.” The Cronbach’s alpha reliability for the six-item scale was .86.

Multiple-Text Task

The multiple-text task entailed students reading four texts on the topic of trophy hunting and completing a variety of performance measures tapping multiple-text integration.

Texts

Students were provided with four partially complementary and partially conflicting texts on the topic of trophy hunting. Trophy hunting was selected as the topic of the multiple-text task for three primary reasons. First, it was a complex and controversial topic about which both conflicting and complementary perspectives could be introduced (i.e., supporting or opposing trophy hunting). Second, this represented an interdisciplinary topic about which different domain perspectives (i.e., economic, ecological) could be presented, across texts. Specifically, I sought to identify a topic that would parallel the common cross-disciplinary tensions between economic development and ecological conservation but that would be unfamiliar to students. Third, this topic was chosen because it was expected to be reactive with learners’ various individual difference factors. Specifically, this topic was accessible to students (i.e., students were able to infer the meaning of the term trophy hunting) but not likely to be supported by a great deal of conceptual knowledge, requiring students to learn new information from the texts provided. Because of its accessibility, students were expected to be able to form attitudes toward this topic. Indeed, information about this topic included the counterintuitive argument that trophy hunting, in fact, raises significant funds for animal conservation, an argument that may have sparked students’ interest in the topic or challenged their initial beliefs.

Prior work on students’ learning from multiple texts has, generally, constructed texts to be either conflicting or consistent in nature (Firetto, 2020). In a review of studies examining students’ multiple-text integration, Primor and Katzir (2018) found that 42 studies presented students with conflicting texts, whereas 11 studies introduced students to complementary information. Building on this prior work, the texts created for this study were systematically constructed to simultaneously be both complementary and conflicting in nature (see Appendix C). That is, the texts created varied in adopting an economic or ecological perspective on trophy hunting and in presenting arguments for or against the practice. For instance, one text (i.e., economic perspective in favor of trophy hunting) discussed the ability to raise money for conservation through trophy hunting. A second, conflicting, text contrasted this to the money that could be raised through ecotourism (i.e., economic perspective opposing trophy hunting). Still a third, complementary text argued that money raised allowed local areas to conserve land for animal habitats (i.e., ecological perspective in favor of trophy hunting). In this way, each text developed was connected to two of the other texts in both a consistent and a conflicting fashion. This approach to text construction stands in contrast to prior work that primarily organized text sets around a single, central conflict (Barzilai, Zohar, & Mor-Hagani, 2018; Firetto, 2020; Primor & Katzir, 2018). A variety of both complementary and conflicting relations were built into the texts used in this study to more robustly examine learners’ integration or connection formation across texts. Texts were further written to be parallel to one another in structure, each including two distinct arguments about trophy hunting and providing corresponding quantitative evidence.

The texts used in this study were constructed through a three-step process. First, I compiled information about trophy hunting from empirical work and articles in the popular press (e.g., Creel et al., 2016; Cruise, 2015; Dickman, 2018; Di Minin, Leader-Williams, & Bradshaw, 2016; Treves et al., 2019). I used this information to construct four long-form trophy hunting texts following the same complementary/conflicting structure used in the study (i.e., introducing economic and ecological perspectives, for and against trophy hunting). In long form, each text included a central claim and four pieces of supporting evidence. In the second phase of development, these long-form texts were iteratively reviewed, at least three separate times, by a team of four research assistants who provided feedback on the quality of the information included, the texts’ parallel structure, and clarity and wording. Texts were revised between each round of feedback and ultimately reduced to include a central claim and two pieces of supporting evidence that overlapped with other texts in the document set. The selection of supporting evidence to include in these shortened texts was done to most explicitly convey the points of view intended and to most directly overlap with the complementary and conflicting evidence introduced in other texts. Texts were reviewed one final time by research assistants to ensure that they were in final form.

In the last stage of development, the final texts were piloted via Amazon’s Mechanical Turk. In this pilot, texts were segmented to include the central claim and only one piece of supporting evidence, resulting in eight subtexts. This was done to isolate whether any segments of texts were outliers with regard to trustworthiness or readability. To reduce carryover effects, participants were randomly assigned to read only one of the eight subtexts constructed, such that each subtext was rated by between 16 and 31 separate respondents. Respondents were asked to rate how trustworthy, convincing, and easy to understand they considered each text to be on a 5-point Likert-type scale, ranging from 1 (not at all) to 5 (very). As can be seen in Table 2, texts were rated by respondents as both more than moderately trustworthy and convincing and as quite easy to understand. Text segments were also rated consistently with one another.

The perspective of each text, supporting or opposing trophy hunting and introducing economic or ecological arguments on this issue, was introduced in the title (e.g., “The Economic Benefits of Trophy Hunting”). Texts were attributed to expert authors (e.g., “Dr. Kevin Redmond, Professor of Business, Columbia University”) and identified as published in reputable venues in the popular press (e.g., The New York Times, The Washington Post). All authors were introduced as male, as gender has been found to impact students’ trustworthiness perceptions (Armstrong & McAdams, 2009; Brann & Himes, 2010; Flanagin & Metzger, 2003). Information about each text is summarized in Table 3. Texts were comparable in length to those used in prior work (Firetto, 2020). Texts presenting economic or ecological views, for and against trophy hunting, were introduced to students in one of eight different orders: (1) pro economic view (pro econ), con economic view (con econ), pro ecological view (pro ecol), con ecological view (con ecol); (2) con econ, pro econ, con ecol, pro ecol; (3) pro ecol, con ecol, pro econ, con econ; (4) con ecol, pro ecol, con econ, pro econ; (5) pro econ, pro ecol, con econ, con ecol; (6) pro ecol, pro econ, con ecol, con econ; (7) con econ, con ecol, pro econ, pro ecol; or (8) con ecol, con econ, pro ecol, pro econ. Texts ranged in length from 256 to 285 words and had an average Flesch–Kincaid grade level of 16, indicating that these were appropriate for use with an undergraduate audience.
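As a concrete illustration of this counterbalancing (an assumed implementation, not the study’s actual code), the R sketch below lists the eight presentation orders just described and randomly assigns one to each participant; the shorthand labels and the sample-size variable are placeholders.

```r
# The eight presentation orders described above (pro/con economic and ecological texts).
orders <- list(
  c("pro_econ", "con_econ", "pro_ecol", "con_ecol"),
  c("con_econ", "pro_econ", "con_ecol", "pro_ecol"),
  c("pro_ecol", "con_ecol", "pro_econ", "con_econ"),
  c("con_ecol", "pro_ecol", "con_econ", "pro_econ"),
  c("pro_econ", "pro_ecol", "con_econ", "con_ecol"),
  c("pro_ecol", "pro_econ", "con_ecol", "con_econ"),
  c("con_econ", "con_ecol", "pro_econ", "pro_ecol"),
  c("con_ecol", "con_econ", "pro_ecol", "pro_econ")
)

set.seed(1)                            # reproducible assignment
n <- 351                               # sample size reported in the Method section
order_id <- sample(seq_along(orders), n, replace = TRUE)
presentation <- orders[order_id]       # text order shown to each participant
```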

Process Measures

I examined two process measures in association with students’ learning from multiple texts: the total time students devoted to text access and students’ self-reports of strategy use. Time on texts was collected during the course of multiple-text use, and strategy use was self-reported following task completion.

TABLE 2
Pilot Ratings of the Study’s Texts

Subtext | Number of raters | Trustworthy: M (SD) | Convincing: M (SD) | Easy to understand: M (SD)
“Introduction” | 31 | 3.94 (0.63) | 3.94 (0.89) | 4.68 (0.48)
“The Economic Benefits of Trophy Hunting”: Segment 1 | 22 | 3.18 (1.01) | 3.05 (1.25) | 4.36 (0.90)
“The Economic Benefits of Trophy Hunting”: Segment 2 | 24 | 3.42 (0.97) | 3.46 (1.10) | 4.46 (0.88)
“The Economic Harms of Trophy Hunting”: Segment 1 | 23 | 4.00 (0.80) | 4.13 (1.14) | 4.43 (1.08)
“The Economic Harms of Trophy Hunting”: Segment 2 | 26 | 3.92 (0.63) | 4.23 (0.76) | 4.58 (0.64)
“The Ecological Benefits of Trophy Hunting”: Segment 1 | 16 | 3.25 (1.29) | 3.00 (1.46) | 4.75 (0.58)
“The Ecological Benefits of Trophy Hunting”: Segment 2 | 20 | 3.50 (0.76) | 3.80 (1.06) | 4.35 (0.75)
“The Ecological Harms of Trophy Hunting”: Segment 1 | 22 | 3.73 (1.03) | 3.73 (1.24) | 4.45 (0.74)
“The Ecological Harms of Trophy Hunting”: Segment 2 | 20 | 3.60 (0.60) | 3.65 (1.09) | 4.40 (0.82)

Note. M = mean; SD = standard deviation.


TABLE 3
Summary of the Study’s Texts

Text 1
  Title: “The Economic Benefits of Trophy Hunting”
  Author: Dr. Howard Brown
  Author information: Professor of Economics, Cornell University
  Publisher: The New York Times
  Claim 1: Trophy hunting raises substantial money for animal conservation.
  Claim 2: Trophy hunting also benefits local communities in Africa, contributing to rural development.
  Length: 256 words
  Flesch–Kincaid grade level: 16.6

Text 2
  Title: “The Economic Harms of Trophy Hunting”
  Author: Dr. Kevin Redmond
  Author information: Professor of Business, Columbia University
  Publisher: The Washington Post
  Claim 1: The basic economic argument against trophy hunting is that, over time, preserving animals is much more financially beneficial than hunting them.
  Claim 2: While a common claim is that trophy hunting raises substantial funds for impoverished African nations, in fact, the trophy hunting industry is quite small, relative to the broader African tourism industry.
  Length: 251 words
  Flesch–Kincaid grade level: 17.9

Text 3
  Title: “The Ecological Argument for Trophy Hunting”
  Author: Dr. Matthew Powell
  Author information: Professor of Ecology, Duke University
  Publisher: Chicago Tribune
  Claim 1: The largest threat to wildlife today is not from trophy hunting, but rather from habitat loss due to farming and land development.
  Claim 2: Those involved in trophy hunting have a vested interest in ensuring that trophy hunting does not harm the overall sustainability of wildlife populations.
  Length: 271 words
  Flesch–Kincaid grade level: 14.7

Text 4
  Title: “The Ecological Harms of Trophy Hunting”
  Author: Dr. Daniel Jamison
  Author information: Professor of Biology, Brown University
  Publisher: Los Angeles Times
  Claim 1: Fundamentally, killing animals through trophy hunting reduces biodiversity (i.e., the number of plant and animal species in an area), accelerating the pace of extinction.
  Claim 2: Over time, trophy hunting further results in the “survival of the weak,” in evolutionary terms.
  Length: 285 words
  Flesch–Kincaid grade level: 14.8


Time on Texts

Time on texts was recorded via log data (i.e., recorded by Qualtrics, in association with each text accessed and then totaled) and has commonly been used as a measure of students’ task engagement, effort expenditure, and depth of processing when learning from multiple texts (Bråten et al., 2014; List, Stephens, & Alexander, 2019). Students accessed texts for an average of 3.80 minutes (SD = 3.80 minutes).
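To illustrate how such timing data might be prepared (a sketch under assumptions, not the study’s actual processing), the R snippet below totals hypothetical per-text timing columns exported from the survey platform and applies a log transformation of the kind reported alongside raw time in Table 1; the column names, the minutes conversion, and the choice of log base are all assumptions.

```r
# Hypothetical per-text timing columns (in seconds) from the survey export.
time_cols <- c("t_text1", "t_text2", "t_text3", "t_text4")

dat$time_on_texts <- rowSums(dat[, time_cols]) / 60  # total minutes across the four texts
dat$log_time      <- log10(dat$time_on_texts + 1)    # example log transform to reduce skew
```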

Multiple-Text Strategy Inventory

Following multiple-text task completion, students were asked to report their strategy use during multiple-text use via the cross-textual elaboration subscale of the Multiple-Text Strategy Inventory (Bråten & Strømsø, 2011). This subscale reflects students’ efforts to compare and contrast information presented across texts. A sample item was “I tried to understand the issues concerning trophy hunting by comparing the content of the different texts.” Due to concerns over length, only four items from the scale (i.e., those with the strongest factor loadings as per Bråten & Strømsø, 2011) were used in this study, with a Cronbach’s alpha reliability of .72. Because the scale was abridged from that used by Bråten and Strømsø (2011), I ran a confirmatory factor analysis using the lavaan package in R (Rosseel, 2012) to ensure appropriate factor loadings. The model was found to have strong data fit, χ2(2) = 5.46, p = .07. In particular, the model had a comparative fit index (CFI) of 0.99, a root mean square error of approximation (RMSEA) of 0.07 (90% confidence interval [CI] [0.00, 0.14]), and a standardized root mean square residual (SRMR) of 0.02. Each of the items loaded statistically significantly onto the latent factor (ps < .001).
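The article reports this check only at the level of fit statistics; as a rough sketch of what such a one-factor confirmatory factor analysis looks like in lavaan (with hypothetical item names mts1 through mts4 standing in for the four retained items), one might write:

```r
library(lavaan)

# One-factor CFA for the four retained cross-textual elaboration items
# (item names are placeholders, not the inventory's actual labels).
cfa_model <- '
  cross_elab =~ mts1 + mts2 + mts3 + mts4
'
cfa_fit <- cfa(cfa_model, data = dat)

# Fit indices of the kind reported above: chi-square, CFI, RMSEA (with CI), SRMR.
fitMeasures(cfa_fit, c("chisq", "df", "pvalue", "cfi", "rmsea",
                       "rmsea.ci.lower", "rmsea.ci.upper", "srmr"))
summary(cfa_fit, standardized = TRUE)  # inspect item loadings and their significance
```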

Performance Outcomes

I used four measures, tapping two underlying constructs (i.e., integrated mental model and intertext model development), to assess multiple-text integration. Two different measures, similar to those used in prior work, were used as metrics for integrated mental model development (Bråten et al., 2009; Gil et al., 2010; Strømsø et al., 2008): a statement verification task and a cross-textual integration task. I likewise used two different measures to assess intertext model development: a statement source match task and a source differentiation task. Consistent with prior work examining students’ learning from multiple texts, I developed all four measures using statements appearing in the trophy hunting texts, composed for the purposes of this study (e.g., Bråten et al., 2009; Cheong, Zhu, Li, & Wen, 2019; Hagen et al., 2014). Prior to their use in this study, measures were piloted with a sample of online participants using Amazon Mechanical Turk to ensure both reliability and that participants’ responses had an appropriate degree of variability in performance. The Cronbach’s alpha reliabilities ranged from .62 for the source differentiation task (reliability increased to .69 in study 2) to .81 for the sentence verification task. Performance likewise ranged from 56% accuracy for the statement source match task to 71% accuracy for the sentence verification task, indicating appropriate variability (i.e., the lack of a ceiling effect).

Based on measures of task difficulty, established through pilot testing, students were randomly assigned to complete either the sentence verification and source differentiation tasks (n = 168) or the cross-textual integration and statement source match tasks (n = 183), in counterbalanced order. This resulted in each student completing a measure tapping integrated mental model development and intertext model development.

Two separate measures each of integrated mental model development and intertext model development were used in this study. This was done to reflect the range of similar measures used in prior work (e.g., Stadtler, Scharrer, & Bromme, 2011; Strømsø et al., 2010) and because each of these measures tapped somewhat different aspects of integrated mental model and intertext model formation. For instance, whereas the statement source match task examined students’ abilities to link information back to sources of origin (i.e., develop source–content links), the source differentiation task required students to further engage in source separation or recognize whether statements introduced were intratextual or intertextual in nature. Participants were not asked to complete all four measures due to concerns regarding study length and possible carryover effects. Because each measure included a variable number of items, scores were converted to percentage accuracy. Students then received a percentage accuracy score for measures tapping integrated mental model (sentence verification or cross-textual integration task) and intertext model (statement source match or source differentiation task) construction. All measures, along with sample items, are summarized in Table 4.

For the main analyses presented in this article, I collapsed students’ sentence verification and cross-textual integration task performance and statement source match task and source differentiation task performance, as reflective of integrated mental model and intertext model development, respectively, with these different outcome variables simultaneously modeled. This is consistent with prior work on both reading comprehension (Fesel, Segers, & Verhoeven, 2018) and learning from multiple texts (Brand-Gruwel, Wopereis, & Walraven, 2009; Maier, Richter, & Britt, 2018; Schoor et al., 2020) that has likewise randomly assigned students to complete parallel measures to allow multiple aspects of performance to be efficiently assessed. A model separately describing performance on each set of measures (i.e., sentence verification and statement source match tasks, cross-textual integration and source differentiation tasks) is presented in Appendix D. I ran this model using the multigroups analysis function (i.e., using the group argument) in lavaan. Groups were defined as those completing the sentence verification and source differentiation tasks vis-à-vis the cross-textual integration and statement source match tasks.
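As a hedged sketch of what this multigroup specification looks like in lavaan (the grouping variable measure_set, the model object, and the data frame dat are hypothetical placeholders; the actual Appendix D model is more elaborate), one might write:

```r
library(lavaan)

# dat$measure_set is assumed to code which pair of outcome measures a student
# completed (e.g., "SV_SD" vs. "CTI_SSM"); the same structural model is then
# estimated separately for the two groups.
fit_groups <- sem(model, data = dat, group = "measure_set")
summary(fit_groups, fit.measures = TRUE, standardized = TRUE)
```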

Sentence Verification Task
The sentence verification task comprised 14 statements, eight drawn from the four trophy hunting texts and six not included in the texts provided. Students were asked to "please select whether each statement below was included in one of the texts you read or not included in the texts you read." Students could also select an "I don't remember" option. This option and incorrect responses received a 0, such that students' scores reflected the total number of items that they answered correctly. The "I don't remember" option was included because prior work has found the inclusion of such an option to reduce guessing (Muijtjens, van Mameren, Hoogenboom, Evers, & van der Vleuten, 1999; Zhang, 2013). Although statements on the sentence verification task were each drawn from only a single text, best corresponding to single-text comprehension, I nevertheless considered this measure to underlie integrated mental model construction (Kobayashi, 2009). The 14-item measure had a Cronbach's alpha reliability of .74.
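As a concrete illustration of this scoring rule, the sketch below (in R, with entirely hypothetical item and response labels) assigns 0 to incorrect and "I don't remember" responses, 1 to correct responses, and converts the total to the percentage accuracy score used in analyses.

```r
# Hypothetical scoring sketch: correct = 1; incorrect or "I don't remember" = 0.
# Totals are converted to percentage accuracy because measures differ in length.
score_measure <- function(responses, key) {
  correct <- ifelse(is.na(responses) | responses == "I don't remember",
                    0L, as.integer(responses == key))
  c(total = sum(correct), pct_accuracy = 100 * sum(correct) / length(key))
}

# Example: a 14-item verification task with one "I don't remember" response
key <- rep(c("Included", "Not included"), length.out = 14)
responses <- replace(key, 11, "I don't remember")
score_measure(responses, key)
```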

Statement Source Match Task
The statement source match task was a 12-item measure that asked students to match statements drawn from texts to their sources of origin. Specifically, students were instructed to "please verify which text the statement appeared in." Students also had the option to select "I don't remember" in association with each text, with such responses likewise receiving a score of 0, such that students' total scores reflected the number of items they responded to correctly. The scale had a Cronbach's alpha reliability of .73.

Cross-Textual Integration Task
The cross-textual integration task comprised 12 items correctly or incorrectly integrating or connecting statements presented across texts. Whereas six of the items included accurate instances of integration, the other six items reflected incorrect linkages across texts. Statements from different texts were connected via a linking term (e.g., and, because, so that) in all uppercase letters. Specifically, participants were instructed to "please determine whether each statement is correct or incorrect based on information provided in the texts." As with the other measures, an "I don't remember" option was also provided and assigned a 0 when selected. Students had to identify whether each statement reflected an appropriate or inappropriate instance of cross-textual integration. The Cronbach's alpha reliability for the 12-item scale was .72.

TABLE 4 Summary of Integration Measures

Measure: Sentence verification task. Focus: Integrated mental model development. Number of items: 14. Directions: "Please select whether each statement below was included in one of the texts you read or was not included in the texts you read." Sample statement: "A 2016 study from the University of Minnesota found that, on average, hunting a lion cost between $24,000 and $71,000, while hunting a white rhinoceros costs $125,000 or more." Mean (SD): 7.51 (3.31). Percentage accuracy (SD): 53.64% (23.65%).

Measure: Source differentiation task. Focus: Intertext model development. Number of items: 10. Directions: "For each statement below, please verify whether the information in the statement came from the same text or from different texts." Sample statement: "Trophy hunting raises substantial money for animal conservation AND benefits local communities in Africa, contributing to rural development." Mean (SD): 3.05 (1.74). Percentage accuracy (SD): 43.64% (24.81%).

Measure: Cross-textual integration task. Focus: Integrated mental model development. Number of items: 12. Directions: "Please determine whether each statement is correct or incorrect based on information provided in the texts." Sample statement: "Trophy hunting raises substantial money for animal conservation BUT the industry is quite small relative to the broader African tourism industry." Mean (SD): 5.75 (2.90). Percentage accuracy (SD): 47.94% (24.19%).

Measure: Statement source match task. Focus: Intertext model development. Number of items: 12. Directions: "For each statement below, please verify which text the statement appeared in. If you do not remember a statement, please select 'I don't remember.'" Sample statement: "A 2013 study from the United Nations' International Monetary Fund found villages that sold hunting licenses to safari operators had 15% higher household incomes." Mean (SD): 4.38 (2.82). Percentage accuracy (SD): 36.52% (23.47%).

Note. SD = standard deviation.

Source Differentiation Task
The source differentiation task comprised 10 items reflecting instances of intratextual and intertextual integration or linkages within and across texts. Whereas four of the items reflected intratextual integration, the other six items represented intertextual integration or links across two of the texts. In particular, students were instructed to "please verify whether the information in the statement came from the same text or from different texts." To reduce guessing, an "I don't remember" option was also provided and assigned a 0 in scoring. The reliability for the 10-item scale was .44; excluding three items resulted in a Cronbach's alpha reliability of .50.

Data Analysis
Descriptives and correlations among all measures of interest are presented in Tables 5 and 6. I evaluated the fit of the theorized model presented in Figure 1 using the lavaan package in R (Rosseel, 2012). I used the maximum likelihood estimator with robust standard errors to correct for violations of multivariate nonnormality. Nevertheless, all of the independent and dependent variables in the model were found to be approximately normal (see values for skewness and kurtosis in Table 7). Outliers were not removed because all of the measures used in analyses had restricted ranges (e.g., reflecting students' ratings of Likert-type items and performance on various objective measures

of integrated mental model and intertext model development). The exception to this was the time variable, which demonstrated considerable non-normality. In response to this, I removed eight outliers (2.28% of the sample) from the data set; I defined outliers as students who spent longer than 1.5 standard deviations above the mean on reading the study texts. Moreover, I applied a logarithmic transformation to the time on texts variable to correct for issues of non-normality, with the transformed variable used in all subsequent analyses. I used listwise deletion to handle missing data because only four cases were found to have missing values (1.14% of the sample). As suggested by the CAEM, the focal model examined in this study (see Figure 1) uses interest and students' habits with regard to information evaluation and integration as individual difference factors predicting processing and multiple-text task performance. However, analyses including additional individual difference factors are introduced in appendixes: analyses using attitudes (Appendix B), need for cognition (Appendix A), and justifications for knowing by multiple texts (Appendix A) as individual difference factors.
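The data preparation steps described above can be expressed compactly in R; the sketch below assumes a data frame df with a hypothetical total_time column and is meant only to illustrate the sequence of decisions (outlier removal, log transformation, listwise deletion).

```r
# Sketch of the data preparation described above; df and total_time are hypothetical.
# 1. Remove time-on-texts outliers (more than 1.5 SD above the mean).
cutoff <- mean(df$total_time, na.rm = TRUE) + 1.5 * sd(df$total_time, na.rm = TRUE)
df <- df[df$total_time <= cutoff, ]

# 2. Apply a logarithmic transformation to reduce non-normality.
df$log_time <- log(df$total_time)

# 3. Listwise deletion of the small number of cases with missing values.
df <- na.omit(df)
```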

Results
The overall model was found to have strong fit, χ2(67) = 113.77, p = .001. In particular, the model had a CFI of 0.97, an RMSEA of 0.05 (90% CI [0.03, 0.06]), and an SRMR of 0.04. I determined guidelines for acceptable model fit based on standards from Hooper, Coughlan, and Mullen (2008) and Schreiber, Nora, Stage, Barlow, and King (2006).
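For reference, these fit indices can be retrieved from a fitted lavaan object with fitMeasures(); the call below is a sketch assuming a fitted model object named fit, and with the MLR estimator robust variants (e.g., cfi.robust, rmsea.robust) are also available.

```r
# Sketch: extracting the reported fit indices from a fitted lavaan model object.
fitMeasures(fit, c("chisq", "df", "pvalue", "cfi", "rmsea",
                   "rmsea.ci.lower", "rmsea.ci.upper", "srmr"))
```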

TABLE 5 Descriptives and Correlations Between Key Variables

Variable Mean (standard deviation) 1 2 3 4 5 6 7 8

Individual difference factors

1. Prior knowledge 2.21 (1.32) —

2. Interest 3.20 (0.96) .15** —

3. Information evaluation 3.86 (0.74) .16** .19*** —

Multiple-text processing variables

4. Total time 3.80 (3.80) .09 .15** .07 —

5. Log(Total Time) 0.37 (0.46) .18*** .19*** .16** .85*** —

6. Cross-textual elaboration 3.58 (0.71) .17*** .25*** .31*** .16** .17*** —

Multiple-text outcomes

7. Integrated mental model 0.51 (0.24) .34*** .21*** .11* .36*** .47*** .19*** —

8. Intertext model 0.40 (0.24) .28*** .14* .16** .31*** .41*** .20*** .50*** —

*p < .05. **p < .01. ***p < .001.


In reporting results, I first describe loadings onto each latent construct in the measurement model; then, results are introduced in terms of the direct and indirect paths predicting each outcome measure of interest (i.e., integrated mental model and intertext model development). Finally, additional relations in the model are described. A model with significant direct paths represented is shown in Figure 2. Direct and indirect paths are summarized in Table 8.

Factor Loadings in the Measurement Model

Interest
Each of the four items loaded statistically significantly onto the interest factor (.61–.89, ps < .001).

Information Evaluation
Each of the six items loaded statistically significantly onto the information evaluation factor (.66–.79, ps < .001).

Integrated Mental Model Development

Direct Effects
Four direct effects (i.e., interest, students' habits with regard to information evaluation, total time, cross-textual elaboration) on integrated mental model development were examined in the model. However, only the time that students devoted to text access (β = 0.44, p < .001) had a statistically significant direct effect on integrated mental

model development, with interest having a marginally significant direct effect (β = 0.11, p = .05).

Indirect Effects
Four mediating paths were examined in the model (i.e., interest via time on texts, interest via cross-textual elaboration, information evaluation via time on texts, information evaluation via cross-textual elaboration) as predicting students' integrated mental model development. Two of these indirect paths were statistically significant. First, interest predicted integrated mental model development, as mediated by time (β = 0.09, p < .01). Second, the relation between students' habits with regard to information evaluation and integrated mental model development was also statistically significantly mediated by time (β = 0.06, p < .05).

In total, 24.3% of the variance in integrated mental model development was explained by the direct and indirect effects in the model, suggesting a large effect.
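Indirect (mediated) effects of this kind are commonly obtained in lavaan by labeling the component paths and defining their products with the := operator; the sketch below illustrates this for the interest and information evaluation paths through time, again with hypothetical variable names rather than the study's actual ones.

```r
# Sketch of defining mediated effects in lavaan via labeled paths and ":=";
# variable names (log_time, imm, interest, evaluation) are hypothetical.
med_model <- '
  log_time ~ a1 * interest + a2 * evaluation
  imm      ~ b1 * log_time + c1 * interest + c2 * evaluation

  # indirect effects on integrated mental model development through time
  ind_interest   := a1 * b1
  ind_evaluation := a2 * b1

  # total effect of interest (direct plus indirect)
  total_interest := c1 + a1 * b1
'
med_fit <- sem(med_model, data = df, estimator = "MLR")
# Delta-method standard errors by default; use se = "bootstrap" in sem() for
# bootstrapped confidence intervals on the defined parameters.
parameterEstimates(med_fit, standardized = TRUE)
```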

Intertext Model Development

Direct Effects
Four direct effects were further examined as predicting intertext model development. The time students devoted to text access (β = 0.37, p < .001) was found to have a statistically significant direct effect on intertext model development, with students' engagement in cross-textual elaboration (β = 0.10, p = .05) having a marginally significant direct effect.

TABLE 6 Correlations Between Key Variables by Specific Measures of Integration

Variable 1 2 3 4 5 6 7. Cross-textual integration task 8. Statement source match task

Individual difference factors

1. Prior knowledge — .18* .16* .09 .15* .23** .33*** .32***

2. Interest .13 — .23** .13 .17* .32*** .16* .16*

3. Information evaluation .14 .16* — .10 .20** .40*** .15 .17*

Multiple-text processing variables

4. Total time .10 .18* .04 — .86*** .15* .31*** .37***

5. Log(Total Time) .22** .22** .12 .87*** — .12 .40 .48***

6. Cross-textual elaboration .13 .17* .23** .17* .24** — .20** .19*

Multiple-text outcomes

7. Sentence verification task .34*** .25*** .05 .48*** .56*** .19* — .56***

8. Source differentiation task .21** .12 .13 .27*** .34*** .24** .43*** —

Note. Sentence verification and source differentiation task performance is below the diagonal; cross-textual integration and statement source match task performance is above the diagonal. *p < .05. **p < .01. ***p < .001.


Indirect Effects
Additionally, two of the four possible mediating effects in the model were found to be statistically significant. The effects of both interest (β = 0.08, p < .01) and students' habits with regard to information evaluation (β = 0.05, p < .05) on intertext model development were found to be statistically significantly mediated by the time students devoted to text access.

In total, 19.0% of variance in intertext model development was explained by the direct and indirect effects in the model, indicating a medium effect.

Direct Effects of Individual Difference Factors on Process Variables

Total Time
Total time on text access was statistically significantly predicted by interest (β = 0.20, p < .001) and by students'

habits with regard to information evaluation (β  =  0.13, p < .05). The variance explained in time devoted to text access by the model was R2 = .07.

Cross-Textual Elaboration
Cross-textual elaboration was statistically significantly predicted both by interest (β = 0.18, p < .01) and by students' habits with regard to information evaluation (β = 0.30, p < .001). The variance explained in cross-textual elaboration was R2 = .15.
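The proportions of variance explained reported for these endogenous variables can be read directly from a fitted lavaan object; the one-line sketch below assumes the hypothetical fit object used in the earlier sketches.

```r
# Sketch: R-squared values for each endogenous variable in the fitted model.
lavInspect(fit, "rsquare")
```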

Covariances
Interest and students' habits with regard to information evaluation statistically significantly covaried with each other (0.23, p = .001). Integrated mental model development statistically significantly covaried with intertext model development (0.37, p < .001).

Discussion
In investigating the CAEM in this study, I evaluated the role of individual difference factors and processing variables on students' learning from multiple texts. Two primary sets of integration outcomes were examined: measures of integrated mental model and intertext model development. Integrated mental model construction was predicted by students' interest in the topic of the task and by the time students devoted to text access, with time further mediating the association between both topic interest and students' habits with regard to information evaluation and integrated mental model development. Intertext model construction was directly predicted by both the time that students devoted to text access and students' engagement in cross-textual elaboration, with time on texts further mediating the relation between intertext model development and students' topic interest and habits with regard to information evaluation. In total, 10 direct paths relating individual difference factors, processing variables, and integration outcomes were hypothesized in the model. I found support for eight of these 10 paths and discuss each of these in turn. Eight indirect paths were also examined, with support found for only four of these. Results expected but not found in the model are also discussed.

First, both of the individual difference factors examined in this study (i.e., topic interest, students' habits with regard to information evaluation) were found to be associated with processing variables, including the time students devoted to multiple-text access and their engagement in cross-textual elaboration. I consider this to be a focal finding, given that one of the central tenets of the CAEM is that both cognitive (i.e., students' habits with regard to information evaluation) and affective (i.e., interest) factors impact

TABLE 7 Values for Skewness and Kurtosis

Variable Skewness Kurtosis

Individual difference factors
Prior knowledge 0.06 (0.13) −0.83 (0.26)
Interest −0.25 (0.13) −0.36 (0.26)
Information evaluation −0.40 (0.13) −0.01 (0.26)
Attitude certainty −0.34 (0.13) −0.89 (0.26)
Need for cognition −0.20 (0.12) −0.27 (0.26)
Justification by multiple texts −0.14 (0.13) −0.59 (0.26)

Processing variables
Total time on texts (a) 1.99 (0.13) 5.10 (0.26)
Log(Time on Texts) −0.24 (0.13) −0.63 (0.26)
Cross-textual elaboration −0.24 (0.13) 0.16 (0.26)

Performance outcomes
Sentence verification task −0.27 (0.19) −0.56 (0.38)
Cross-textual integration task −0.37 (0.18) −0.50 (0.36)
Statement source match task −0.01 (0.18) −0.94 (0.36)
Source differentiation task 0.37 (0.19) −0.21 (0.38)

Note. Variables were considered to be approximately normal using ±1 as a cutoff for skewness and ±2 as a cutoff for kurtosis. (a) Because the time variable deviated significantly from normality, eight outliers were removed from the data, and the time on texts variable was logarithmically transformed.


students’ processing or strategy use when learning from multiple texts (List & Alexander, 2017b, 2018a). Second, the processing variables examined in this study and time on texts, in particular, were significantly associated with students’ integrated mental model and intertext model development. Again, these findings are supported both by ideas in the CAEM (List & Alexander, 2017b, 2018a) and by broader empirical work linking students’ multiple-text processing with task performance (e.g., An markrud et al., 2014; Du & List, 2020; List & Alexander, 2019).

Despite these findings, which are generally consistent with the CAEM, I was specifically intrigued that interest only directly predicted integrated mental model development, not intertext model development, and that cross-textual elaboration predicted intertext model development, not integrated mental model development. This pattern of findings may be understood according to the distinct facets of integration that are captured by integrated mental model construction vis-à-vis intertext model construction (Britt et al., 1999; Perfetti et al., 1999). Integrated mental model construction corresponds to students developing a unified and coherent understanding of a common topic, in this case trophy hunting, described across texts. Intertext model construction reflects students' formation of structural relations across texts and associating of content from texts with sources of origin. Given the different aspects of integration captured by these two models, their associations with somewhat different predictors begin to make sense. In particular, integrated mental model

construction, focused on the content introduced across texts, can best be served by students' interest in the topic of the task. That is, students more interested in aspects of trophy hunting may be more motivated to develop an understanding of the issues involved. As such, the association between interest and integrated mental model development should further be explored within the context of multiple-text tasks that are more affectively engaging (e.g., interesting) for learners.

An intertext model of multiple texts may require a somewhat complementary set of processes to develop. That is, whereas integrated mental model formation may emerge somewhat organically through students' multiple-text comprehension, intertext model development likely requires more deliberate and effortful construction on the part of learners. In particular, intertext model formation requires students to actively form novel, higher order relations across texts and tag information from texts to sources of origin. Doing so requires a more demanding and complex set of strategic processes than those involved in only allowing an increasingly rich or elaborated mental model of content from across texts to form, as happens during integrated mental model construction. In other words, intertext model development requires that students deliberately organize and, in considering sources, evaluate the content otherwise only assimilated into the integrated mental models that they form. Given these requirements, it is understandable that the cross-textual elaboration dimension of the Multiple-Text Strategy

FIGURE 2 Full Model With Statistically Significant Direct Paths and Standardized Loadings


TABLE 8 Direct and Indirect Paths in the Full Model Predicting Students’ Integration Performance Based on Individual Difference Factors and Processing Variables

Predictor B Standard error (B) β 95% confidence interval p

Integrated mental model development

Direct paths

Interest 0.03 0.01 0.11 [−0.000, 0.05] .05

Habits in information evaluation −0.01 0.02 −0.02 [−0.043, 0.03] .73

Time 0.23 0.03 0.44 [0.18, 0.29] .00

Cross-textual elaboration 0.03 0.02 0.09 [−0.004, 0.06] .08

Indirect paths

Total indirect interest 0.02 0.01 0.10 [0.01, 0.04] .00

• Interest → Time 0.02 0.01 0.09 [0.008, 0.03] .00

• Interest → Cross-textual elaboration 0.00 0.00 0.02 [−0.001, 0.01] .15

Total indirect habits in information evaluation 0.03 0.01 0.09 [0.007, 0.05] .01

• Habits in information evaluation → Time 0.02 0.01 0.06 [0.001, 0.04] .04

• Habits in information evaluation → Cross-textual elaboration 0.01 0.01 0.03 [−0.001, 0.02] .09

Total effects

Interest 0.05 0.01 0.22 [0.02, 0.08] .00

Habits in information evaluation 0.02 0.02 0.07 [−0.017, 0.06] .27

Intertext model development

Direct paths

Interest 0.01 0.01 0.05 [−0.02, 0.04] .43

Habits in information evaluation 0.02 0.02 0.06 [−0.02, 0.06] .26

Time 0.20 0.03 0.37 [0.15, 0.25] .00

Cross-textual elaboration 0.04 0.02 0.10 [−0.001, 0.07] .05

Indirect paths

Total indirect interest 0.02 0.01 0.09 [0.010, 0.03] .00

• Interest → Time 0.02 0.01 0.08 [0.006, 0.03] .00

• Interest → Cross-textual elaboration 0.00 0.00 0.02 [−0.001, 0.01] .10

Total indirect habits in information evaluation 0.03 0.01 0.08 [0.008, 0.05] .01

• Habits in information evaluation → Time 0.02 0.01 0.05 [0.001, 0.03] .04

• Habits in information evaluation → Cross-textual elaboration 0.01 0.01 0.03 [−0.001, 0.02] .08

Total effects

Interest 0.03 0.01 0.14 [0.004, 0.06] .03

Habits in information evaluation 0.05 0.02 0.14 [0.009, 0.09] .02


Inventory capturing students’ deliberate attention to and reasoning about consistency and conflict across texts was uniquely associated with intertext model construction, not with integrated mental model development, in this study.

At the same time, these interpretations require substantiation via further replication, as the confidence intervals for both the path connecting interest to integrated mental model development and the path connecting cross-textual elaboration to intertext model development included zero, and interest and cross-textual elaboration were directly associated with integration outcomes in only one of the measure-specific models examined in Appendix D. Still, it is important to note that students' engagement in cross-textual elaboration was predicted by both topic interest and reports of habits with regard to information evaluation. The association between cross-textual elaboration and students' habits with regard to information evaluation is a logical one. That is, students' reports of habitual engagement in information evaluation may indicate their knowledge of and familiarity with deep-level strategies for multiple-text use, making it more likely that they would deploy these when completing a particular multiple-text task. The association between interest and cross-textual elaboration is a less obvious one; nevertheless, this association may be interpreted as suggesting that, given the demanding nature of cross-textual elaborative strategy use, interest, as a motivational variable, is required to drive or merit such effortful and strategic engagement on the part of learners.

Time on Texts
Time on texts emerged in this study as having both a direct effect on integrated mental model and intertext model development and as mediating the effect of individual difference factors on these outcomes. The importance of the time on texts variable in this study may be understood in a number of ways. First, time on texts may serve as an operationalization of the construct of behavioral engagement introduced by Bråten, Brante, and Strømsø (2018). Bråten et al. defined behavioral engagement as students' overt or directly observable participation in learning activities (e.g., persistence in text access). Second, time on texts, particularly as it was associated with interest, may reflect students' immersion in or affective engagement with the information provided. Finally, time on texts may reflect students' depth of strategy use during reading, as has been suggested in prior work (Bråten et al., 2014; List, Stephens, & Alexander, 2019). Regardless of its specific interpretation, time on texts emerged as an essential mediator in this study, connecting individual difference factors with students' ultimate task performance. Understanding time on texts as a mediator is central to conceptualizing the CAEM, which suggests

that although various cognitive and affective individual difference factors may set the stage for deeper level processing during multiple-text learning, such processing, or actual strategy enactment during multiple-text use, is nevertheless necessary for successful task performance.

Missing Direct Path: Students’ Habits With Regard to Information EvaluationWhereas interest was found to directly predict integrated mental model development, albeit marginally, students’ habits with regard to information evaluation were not directly associated with either measure of integration (i.e., neither integrated mental model nor intertext model development). At the same time, habits with regard to information evaluation were associated with both the time students devoted to text access and their cross- textual elaboration during task completion, with time devoted to text access, in particular, significantly mediat-ing the relation between this individual difference factor and integration performance. This mediation path indi-cates the importance of modeling students’ learning from multiple texts based on both (a) individual difference fac-tors that differentially predispose students toward task completion, and (b) the extent to which students draw on or activate these during learning. That is, results from this study suggest that although students’ habits with regard to information evaluation are important for multiple-text integration, their importance is only reflected in their being associated with students’ higher propensity for actual strategy use during learning. Still, it is equally important to recognize that strategy use during task com-pletion (e.g., students’ engagement in cross-textual elabo-ration) was explained to a large extent by students’ habits with regard to information evaluation. This suggests the need to develop the competency for information evalua-tion in learners, in both a task-embedded and a task-independent fashion.

Implications for the CAEM
This study contributes to the literature by validating at least three of the theoretical conjectures postulated in the CAEM. First, both affective engagement (i.e., interest) and students' habits with regard to information evaluation were found to be associated with performance on a multiple-text task. In particular, whereas interest directly predicted integrated mental model formation, both interest and students' habits with regard to information evaluation were associated with both measures of integration, as mediated by the effects of time on texts. These results validate the unique contribution of both affective and cognitive factors to students' learning from multiple texts. Second, these findings underscore the CAEM's proposition that a variety of individual difference factors lead to behavioral and strategic differences in multiple-text


processing and, ultimately, drive differences in multiple-text task performance. Indeed, students' engagement in strategies corresponding to cross-textual elaboration predicted intertext model development, whereas time devoted to text access predicted both of the outcome measures examined in this study. Moreover, the individual difference factors modeled, namely, interest and students' habits with regard to information evaluation, significantly predicted both processing measures investigated. Altogether, these results underscore theoretical conceptions of multiple-text use as involving a set of learner characteristics contributing to differences in processing and resulting in different levels of task performance (List & Alexander, 2017b, 2018a; Rouet & Britt, 2011).

Finally, as suggested by the CAEM, the contribution of interest and students' habits with regard to source evaluation should not be considered in isolation; rather, given their strong association, these dimensions of individual difference should be considered in relation to one another in understanding students' engagement with multiple texts. More generally informing both the CAEM and the documents model framework, used to conceptualize students' mental models constructed based on multiple texts, the integration measures used in this study were found to be associated with, yet distinguishable from, one another. That is, although students' performance on objective measures tapping integrated mental model construction and intertext model construction was found to be strongly associated, these facets of integration were nevertheless predicted by somewhat different sets of individual difference factors and processing variables, pointing to their discrete nature as aspects of the multidimensional construct that is integration.

All told, this study contributes to the broader literature of learning from multiple texts in at least three ways. First, this study contributes to a relatively limited set of analyses that have aimed to model the complexity associated with learning from multiple texts. Theoretical models conceptualize the process of learning from multiple texts as encompassing multiple-text use behaviors and strategies and multiple-text task performance, with these jointly impacted by a diverse set of individual difference factors (List & Alexander, 2017b, 2018a, 2019; Rouet & Britt, 2011). Yet, these three aspects of multiple-text use have rarely been considered in conjunction with one another, with studies more commonly focusing on individual differences in multiple-text task performance (Ferguson & Bråten, 2013; Gil et al., 2010; Strømsø & Barzilai, 2018) or on the association among different processing variables and outcome measures of interest (Bråten & Strømsø, 2011; Hagen et al., 2014; Kobayashi, 2009). In this study, all three of these dimensions of multiple-text task completion were considered in a comprehensive model of processing.

Second, this study contributes to the burgeoning literature examining the role of affective factors and behavioral engagement in learning from multiple texts (Bråten et al., 2018; List, Stephens, & Alexander, 2019). In further documenting the role of topic interest in both cross-textual elaboration and integrated mental model construction, this study pushes the literature to move beyond considering only cognitive factors in understanding students' learning from multiple texts. Finally, and perhaps most important, this study is unique in the range of multiple-text learning outcomes considered. Complementing and expanding on measures used in prior work, this study included distinct assessments of integrated mental model and intertext model construction. Moreover, in modeling these using both individual difference factors and processing variables, it demonstrates how these capture complementary yet distinct aspects of learning from texts.

Educational Implications
Preliminary results from this study point to a number of educational implications for literacy instruction. First, given the role of interest in learning from multiple texts, as demonstrated in this study, it is clear that teachers should be strategic and purposeful in the topics they select when introducing students to tasks requiring multiple texts. This is in line with more general implications of the CAEM, as well as prior work (Wigfield & Guthrie, 1997), which suggest that students' degree of topic interest, as well as other motivational factors, matter a great deal for multiple-text task completion.

Second, this study separately modeled two outcomes associated with multiple-text integration: integrated mental model and intertext model development. Whereas integrated mental model development corresponded to students forming a unified and coherent understanding of a particular topic, intertext model development reflected students' structural understanding of information as coming from specific sources of origin and cross-source connection formation. Whereas integrated mental model construction seems integral to learning from multiple texts, the purpose of intertext model construction may be less clear. Nevertheless, understanding information as coming from distinct sources and as shaped by these, and forming connections across sources, remain critical processes for learning from multiple texts and multiple-text integration. For instance, in this study, students were presented with four texts on the topic of trophy hunting. Fully understanding this complex topic required not only synthesizing the content included in texts but also recognizing that perspectives on trophy hunting conflict with one another and that there are multiple relevant domain perspectives that could be used as lenses on this topic. This demonstrates that considering source in the


interpretation and evaluation of information is critical when students use multiple texts to learn about complex topics. To demonstrate this importance, teachers should introduce students to multiple texts that are distinct not only in the information they present but also in the perspectives on this information that they offer (e.g., Kiili, Coiro, & Hämäläinen, 2016), such as asking students to read texts written from the perspective of abolitionists vis-à-vis slave owners during the Civil War (Monte-Sano, 2011). In doing so, teachers can demonstrate to students the importance of considering source, source–content, and source–source links in learning from multiple texts.

As a final point, intertext model formation was predicted by students' engagement in cross-textual elaboration. Cross-textual elaboration was, in turn, predicted by students' reports of habits with regard to information evaluation. This points to the need to provide students with not only tasks that allow cross-textual connections to be formed but also direct instruction in strategies for forming such connections when learning from multiple texts (Afflerbach & Cho, 2009; List, 2020b).

Limitations
Despite the strengths of this study, a number of limitations must be acknowledged. First, although I sought to validate the CAEM, which identifies the role of affective factors in promoting students' learning from multiple texts, only a limited number of affective factors (e.g., interest, attitudes) were considered in analyses. An important direction for future work is to consider the role of other affective and motivational individual difference factors in processing (e.g., self-efficacy, enjoyment) and to consider a variety of attitude-related constructs, beyond attitude certainty, in understanding the complex relation between attitudes and information processing.

It is also important to consider these affective factors in relation to various text characteristics (e.g., length, readability). This is especially the case given that text coherence and comprehensibility, in interaction with individual difference factors, have been found to be key determinants of students' task performance when reading single texts (McNamara, Ozuru, & Floyd, 2017; Rawson & Dunlosky, 2002; Yano, Long, & Ross, 1994). Moreover, additional work is needed to consider which aspects of texts (e.g., their number, length, or positionality in relation to one another) may contribute to multiple-text task performance. Although the texts used in this study were comparable in length to those used in prior work (Firetto, 2020), they were nevertheless fairly short relative to the types of academic texts that undergraduate students may typically be asked to understand and integrate (e.g., Bråten & Strømsø, 2010; Lehmann, Rott, & Schmidt-Borcherding, 2019). Considering the cognitive and affective factors identified in the CAEM, it may be, in part,

that text length increases the intertextual integration and motivational (i.e., persistence) demands associated with learning from multiple texts.

Second, two separate measures each of integrated mental model development and intertext model development were used in this study and collapsed in analyses (with these measures analyzed separately in Appendix D). On the one hand, I considered this methodological decision to be necessary given the length of the study and the potential redundancy of items across measures. On the other hand, the extent to which these measures can truly be collapsed remains an open question, and in fact, when these measures were separately modeled, somewhat different paths emerged than what was found in the sample as a whole (see Appendix D). Future work should consider creating more comprehensive measures to tap integrated mental model and intertext model development. For instance, a single instrument could be developed to capture those facets of integrated mental model development that were separately assessed in this study via the sentence verification and cross-textual integration tasks.

Third, the sample in this study was predominantly female and predominantly White. Although the sample was representative of students enrolled in the courses sampled in this study (e.g., educational psychology, biology), this still raises questions regarding sample representativeness of university students as a whole (Henrich, Heine, & Norenzayan, 2010). Moreover, the gender makeup of the sample might have been expected to make participants particularly uninterested in the topic of the task (i.e., trophy hunting), although women in this study in fact reported statistically significantly higher levels of topic interest (M = 3.30, SD = 0.95) than their male counterparts (M = 2.97, SD = 0.96), F(1, 347) = 8.54, p < .01, η2 = .02, indicating a small effect. More generally, gender differences have been found in males' and females' performance on multiple-text tasks, in particular (Bråten & Strømsø, 2006b; Stang Lund et al., 2019), and on reading tasks more generally (Denton et al., 2015), suggesting the need to further explore gender in learning from multiple texts. Additionally, no research, to my knowledge, has examined racial differences in multiple-text task performance (although national differences have been examined; Strømsø, Bråten, Anmarkrud, & Ferguson, 2016). This is an area that requires further exploration, especially given some findings on racial differences in Black and White users' source evaluations (Appiah, 2003, 2004).

Fourth, this study replicated investigations of learning from multiple texts carried out with Norwegian secondary students (Bråten et al., 2014; Stang Lund et al., 2019). Doubtless, there exist many similarities between the United States and Norway as economically developed and technologically sophisticated nations, wherein students are expected to navigate the complexity inherent in learning from multiple texts, particularly online (OECD, 2019). Nevertheless, there are likely important cultural (e.g.,


internet access, media environments) differences across these two countries (Zevin, 2003), and their impact on multiple-text task performance requires further investigation in future work.

There were also developmental differences between the secondary students used in prior work (Bråten et al., 2014; Stang Lund et al., 2019) and the undergraduate sample examined in the present study. Generally, late adolescence and young adulthood have been understood as the developmental stages during which students may be expected to develop the epistemic sophistication to learn from multiple texts (e.g., appreciating a multitude of perspectives, evaluating knowledge sources; King & Kitchener, 2004; Kuhn, Cheney, & Weinstock, 2000). Moreover, at least within the United States, curricular differences between high school and college may contribute to differences between students at these educational levels. As compared with high school, for many students, college is the first time they are asked to make sense of multiple sources of various quality, potentially in conflict with one another, rather than learning from the authorless yet authoritative voice of the textbook (Rouet, Britt, Mason, & Perfetti, 1996). These curricular differences may result in students at the college level being more adept at multiple-text comprehension and integration (Kobayashi, 2009; Wiley, Griffin, Steffens, & Britt, 2020). Although it is only possible to speculate when comparing samples at different developmental stages and embedded within different cultural settings, these differences suggest the need to further consider such factors in students' learning from multiple texts.

Fifth, the low reliability of the measures used in this study represents a major limitation, particularly with regard to the source differentiation task used in the focal analyses in this study. These measures may have had unacceptably low reliabilities for a variety of reasons. For one, the low reliabilities may be attributable to a floor effect due to individuals' generally low levels of knowledge about and interest in the topic of the task and low levels of integration-related task performance. For another, due to concerns over length and between-item redundancy, the measures used in this study were shorter than those used in prior work, further reducing reliability. For instance, Strømsø et al. (2010) included 20 items on their intertextual inference verification task and found this measure to only have a reliability of .58. For comparison, the source differentiation task used in this study only had 10 items, with this reduction in length attributable to the use of only four study texts, in contrast to the seven texts used by Strømsø et al. (2010). Finally, the extent to which the measures represented unidimensional indexes remains an open question. For instance, the prior knowledge measure included multiple-choice items reflecting key terms in the study's texts. Likewise, performance measures, including the source differentiation task, represented students' understanding of four different texts and the associations among

them. This means that these measures may not have, in fact, corresponded to a unidimensional construct but rather to multiple facets of understanding.
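For readers wishing to probe such reliability estimates, one common option (not necessarily the exact procedure used here) is the alpha() function in the psych package, which also reports what alpha would be if each item were dropped; the item and data frame names below are hypothetical.

```r
# Sketch: Cronbach's alpha for a set of scored (0/1) items using the psych package.
library(psych)

sd_items <- df[, paste0("sd_item", 1:10)]  # hypothetical source differentiation items
rel <- psych::alpha(sd_items)
rel$total$raw_alpha  # overall Cronbach's alpha for the 10-item scale
rel$alpha.drop       # alpha if each item were removed, one row per item
```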

Finally, and on a related methodological note, all of the performance measures used in this study included an "I don't know" option among the answer choices available to students. I made this methodological decision to reduce students' guessing (Zhang, 2013). The need to reduce guessing was a particular concern in this study given that a number of the measures used only had two answer choices available to students (e.g., "in the texts I read" and "not in the texts I read" options on the sentence verification task). Nevertheless, the inclusion of an "I don't know" option has also been found to reduce instrument reliability, at least to some extent (Muijtjens et al., 1999). This reduction in reliability and the substantial number of students selecting the "I don't know" option across measures require revisiting this methodological decision in future work. In particular, on average, 25.76% of students selected the "I don't know" option for at least one item on the sentence verification task; 24.05% of students selected this option at least once when responding to the cross-textual integration task; 25.50% of students selected this option at least once for items on the source differentiation task; and 16.30% of students selected this option at least once in response to the statement source match task. In prior work, when including the "I don't know" option on assessments, students have also received penalties for incorrect responses (Muijtjens et al., 1999). This was not done in this study, as students' performance across measures was already found to be fairly low. This scoring option (with points reduced for incorrect responses) should also be considered in future work.
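As a point of reference, one common correction-for-guessing scheme (not the scoring used in this study) awards 1 point for a correct response, 0 for "I don't know," and subtracts 1/(k − 1) for an incorrect response, where k is the number of answer choices; the sketch below illustrates it in R.

```r
# Sketch of a standard correction-for-guessing scheme (not used in this study):
# correct = 1, "I don't know" = 0, incorrect = -1 / (k - 1), with k answer choices.
score_with_penalty <- function(outcome, k) {
  switch(outcome,
         correct   = 1,
         dont_know = 0,
         incorrect = -1 / (k - 1))
}

# With k = 2 choices, an incorrect response costs a full point.
sum(sapply(c("correct", "incorrect", "dont_know", "correct"),
           score_with_penalty, k = 2))
```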

Conclusions
A number of conclusions regarding the nature of learning from multiple texts may be drawn based on the models examined in this study. First, interest was found to have a direct relation with integrated mental model development, whereas the relation between students' habits with regard to information evaluation and integration outcomes was an indirect one. Second, time devoted to text access, potentially reflective of deep-level strategy use, was a direct predictor and a mediator of both sets of integration outcomes examined. Third, comparing across integration measures, interest was found to contribute to integrated mental model development. At the same time, engagement in cross-textual elaboration, or the relational consideration of texts, was specifically needed for intertext model construction, or for students to develop a structural understanding of the relations among multiple texts. Across these three conclusions, I comprehensively link individual difference factors, processing metrics, and


multiple-text task performance in a way that has largely been only theorized in prior work.

REFERENCES

Afflerbach, P., & Cho, B.-Y. (2009). Identifying and describing constructively responsive comprehension strategies in new and traditional forms of reading. In S.E. Israel & G.G. Duffy (Eds.), Handbook of research on reading comprehension (pp. 69–90). New York, NY: Routledge.

Anmarkrud, Ø., Bråten, I., & Strømsø, H.I. (2014). Multiple-documents literacy: Strategic processing, source awareness, and argumentation when reading multiple conflicting documents. Learning and Individual Differences, 30, 64–76. https://doi.org/10.1016/j.lindif.2013.01.007

Appiah, O. (2003). Americans online: Differences in surfing and evaluating race-targeted web sites by black and white users. Journal of Broadcasting & Electronic Media, 47(4), 537–555. https://doi.org/10.1207/s15506878jobem4704_4

Appiah, O. (2004). Effects of ethnic identification on web browsers’ attitudes toward and navigational patterns on race-targeted sites. Communication Research, 31(3), 312–337. https://doi.org/10.1177/ 00936 50203 261515

Armstrong, C.L., & McAdams, M.J. (2009). Blogs of information: How gender cues and individual motivations influence perceptions of credibility. Journal of Computer-Mediated Communication, 14(3), 435–456. https://doi.org/10.1111/j.1083-6101.2009.01448.x

Barzilai, S., & Weinstock, M. (2020). Beyond trustworthiness: Comprehending multiple source perspectives. In P. Van Meter, A. List, D. Lombardi, & P. Kendeou (Eds.), Handbook of learning from multiple representations and perspectives (pp. 123–140). New York, NY: Routledge.

Barzilai, S., Zohar, A.R., & Mor-Hagani, S. (2018). Promoting integra-tion of multiple texts: A review of instructional approaches and practices. Educational Psychology Review, 30(3), 973–999. https://doi.org/10.1007/s10648-018-9436-8

Braasch, J.L., McCabe, R.M., & Daniel, F. (2016). Content integration across multiple documents reduces memory for sources. Reading and Writing, 29(8), 1571–1598. https://doi.org/10.1007/s11145-015- 9609-5

Braasch, J.L., Rouet, J.-F., Vibert, N., & Britt, M.A. (2012). Readers’ use of source information in text comprehension. Memory & Cognition, 40(3), 450–465. https://doi.org/10.3758/s13421-011-0160-6

Brand-Gruwel, S., Wopereis, I., & Walraven, A. (2009). A descriptive model of information problem solving while using internet. Computers & Education, 53(4), 1207–1217. https://doi.org/10.1016/j.compe du.2009.06.004

Brann, M., & Himes, K.L. (2010). Perceived credibility of male versus female television newscasters. Communication Research Reports, 27(3), 243–252. https://doi.org/10.1080/08824 09100 3737869

Bråten, I., Anmarkrud, Ø., Brandmo, C., & Strømsø, H.I. (2014). Developing and testing a model of direct and indirect relationships between individual differences, processing, and multiple-text comprehension. Learning and Instruction, 30, 9–24. https://doi.org/10.1016/j.learninstruc.2013.11.002

Bråten, I., Brante, E.W., & Strømsø, H.I. (2018). What really matters: The role of behavioural engagement in multiple document literacy tasks. Journal of Research in Reading, 41(4), 680–699. https://doi.org/10.1111/1467-9817.12247

Bråten, I., & Strømsø, H.I. (2006a). Effects of personal epistemology on the understanding of multiple texts. Reading Psychology, 27(5), 457–484. https://doi.org/10.1080/02702 71060 0848031

Bråten, I., & Strømsø, H.I. (2006b). Epistemological beliefs, interest, and gender as predictors of internet-based learning activities. Computers in Human Behavior, 22(6), 1027–1042. https://doi.org/10.1016/j.chb. 2004.03.026

Bråten, I., & Strømsø, H.I. (2010). When law students read multiple documents about global warming: Examining the role of topic-specific beliefs about the nature of knowledge and knowing. Instructional Science, 38(6), 635–657. https://doi.org/10.1007/s11251-008-9091-4

Bråten, I., & Strømsø, H.I. (2011). Measuring strategic processing when students read multiple texts. Metacognition and Learning, 6(2), 111–130. https://doi.org/10.1007/s11409-011-9075-7

Bråten, I., Strømsø, H.I., & Britt, M.A. (2009). Trust matters: Examining the role of source evaluation in students’ construction of meaning within and across multiple texts. Reading Research Quarterly, 44(1), 6–28. https://doi.org/10.1598/RRQ.44.1.1

Bråten, I., Strømsø, H.I., & Samuelstuen, M.S. (2008). Are sophisticated students always better? The role of topic-specific personal epistemol-ogy in the understanding of multiple expository texts. Contemporary Educational Psychology, 33(4), 814–840. https://doi.org/10.1016/j. cedps ych.2008.02.001

Britt, M.A., & Aglinskas, C. (2002). Improving students’ ability to iden-tify and use source information. Cognition and Instruction, 20(4), 485–522. https://doi.org/10.1207/S1532 690XC I2004_2

Britt, M.A., Perfetti, C.A., Sandak, R., & Rouet, J.-F. (1999). Content integration and source separation in learning from multiple texts. In S.R. Goldman, A.C. Graesser, & P. van den Broek (Eds.), Narrative comprehension, causality, and coherence: Essays in honor of Tom Trabasso (pp. 209–233). Mahwah, NJ: Erlbaum.

Britt, M.A., & Sommer, J. (2004). Facilitating textual integration with macro-structure focusing tasks. Reading Psychology, 25(4), 313–339. https://doi.org/10.1080/02702 71049 0522658

Burin, D.I., Barreyro, J.P., Saux, G., & Irrazábal, N.C. (2015). Navigation and comprehension of digital expository texts: Hypertext structure, previous domain knowledge, and working memory capacity. Elec-tronic Journal of Research in Educational Psychology, 13(3), 529–550. https://doi.org/10.14204/ ejrep.37.14136

Cacioppo, J.T., Petty, R.E., & Kao, C.F. (1984). The efficient assessment of need for cognition. Journal of Personality Assessment, 48(3), 306–307. https://doi.org/10.1207/s1532 7752j pa4803_13

Cerdán, R., & Vidal-Abarca, E. (2008). The effects of tasks on integrating information from multiple documents. Journal of Educational Psychology, 100(1), 209–222. https://doi.org/10.1037/0022-0663.100.1.209

Cheong, C.M., Zhu, X., Li, G.Y., & Wen, H. (2019). Effects of intertextual processing on L2 integrated writing. Journal of Second Language Writing, 44, 63–75. https://doi.org/10.1016/j.jslw.2019.03.004

Creel, S., M’soka, J., Dröge, E., Rosenblatt, E., Becker, M.S., Matandiko, W., & Simpamba, T. (2016). Assessing the sustainability of African lion trophy hunting, with recommendations for policy. Ecological Applications, 26(7), 2347–2357. https://doi.org/10.1002/eap.1377

Cruise, A. (2015, November 17). Is trophy hunting helping save African elephants? National Geographic. Retrieved from: https://www.natio nalge ograp hic.com/news/2015/11/151715-conse rvati on-trophy-hunti ng-eleph ants-tusks-poach ing-zimba bwe-namib ia/

Denton, C.A., Wolters, C.A., York, M.J., Swanson, E., Kulesz, P.A., & Francis, D.J. (2015). Adolescents’ use of reading comprehension strategies: Differences related to reading proficiency, grade level, and gender. Learning and Individual Differences, 37, 81–95. https://doi.org/10.1016/j.lindif.2014.11.016

Dickman, A. (2018, January 4). Ending trophy hunting could actu-ally  be worse for the endangered species. CNN. Retrieved from https://www.cnn.com/2017/11/24/opini ons/trophy-hunti ng-decli ne- of-speci es-opini on-dickm an/index.html

Di Minin, E., Leader-Williams, N., & Bradshaw, C.J. (2016). Banning trophy hunting will exacerbate biodiversity loss. Trends in Ecology & Evolution, 31(2), 99–102. https://doi.org/10.1016/j.tree.2015.12. 006

Du, H., & List, A. (2020). Researching and writing based on multiple texts. Learning and Instruction, 66, article 101297. https://doi.org/ 10.1016/j.learn instr uc.2019.101297

Page 23: Investigating the Cognitive Affective Engagement Model of ......Integration, within the context of learning from multi-ple texts, has primarily been conceptualized in terms of the

Investigating the Cognitive Affective Engagement Model of Learning From Multiple Texts | 23

Ferguson, L.E., & Bråten, I. (2013). Student profiles of knowledge and epistemic beliefs: Changes and relations to multiple-text comprehen-sion. Learning and Instruction, 25, 49–61. https://doi.org/10.1016/j.learn instr uc.2012.11.003

Fesel, S.S., Segers, E., & Verhoeven, L. (2018). Individual variation in children’s reading comprehension across digital text types. Journal of Research in Reading, 41(1), 106–121. https://doi.org/10.1111/1467- 9817.12098

Firetto, C.M. (2020). Learning from multiple complementary perspec-tives. In P. Van Meter, A. List, D. Lombardi, & P. Kendeou (Eds.), Handbook of learning from multiple representations and perspectives (pp. 223–244). New York, NY: Routledge.

Firetto, C.M., & Van Meter, P.N. (2018). Inspiring integration in college students reading multiple biology texts. Learning and Individual Differences, 65, 123–134. https://doi.org/10.1016/j.lindif.2018.05.011

Flanagin, A.J., & Metzger, M.J. (2003). The perceived credibility of per-sonal web page information as influenced by the sex of the source. Computers in Human Behavior, 19(6), 683–701. https://doi.org/10.1016/ S0747-5632(03)00021-9

Friedman, N.P., & Miyake, A. (2017). Unity and diversity of executive functions: Individual differences as a window on cognitive structure. Cortex, 86, 186–204. https://doi.org/10.1016/j.cortex.2016.04.023

Gil, L., Bråten, I., Vidal-Abarca, E., & Strømsø, H.I. (2010). Summary versus argument tasks when working with multiple documents: Which is better for whom? Contemporary Educational Psychology, 35(3), 157–173. https://doi.org/10.1016/j.cedpsych.2009.11.002

Goldman, S.R., & Scardamalia, M. (2013). Preface for the special issue Multiple Document Comprehension. Cognition and Instruction, 31(2), 121. https://doi.org/10.1080/07370008.2013.773224

Hagen, Å.M., Braasch, J.L., & Bråten, I. (2014). Relationships between spontaneous note-taking, self-reported strategies and comprehension when reading multiple texts in different task conditions. Journal of Research in Reading, 37(S1), S141–S157. https://doi.org/10.1111/j.1467-9817.2012.01536.x

Hargittai, E., Fullerton, L., Menchen-Trevino, E., & Thomas, K.Y. (2010). Trust online: Young adults’ evaluation of web content. International Journal of Communication, 4, 468–494.

Henrich, J., Heine, S.J., & Norenzayan, A. (2010). Most people are not WEIRD. Nature, 466, 29. https://doi.org/10.1038/466029a

Hooper, D., Coughlan, J., & Mullen, M. (2008). Structural equation modelling: Guidelines for determining model fit. Electronic Journal of Business Research Methods, 6(1), 53–60.

Kiili, C., Coiro, J., & Hämäläinen, J. (2016). An online inquiry tool to support the exploration of controversial issues on the internet. The Journal of Literacy and Technology, 17(1/2), 31–52.

King, P.M., & Kitchener, K.S. (2004). Reflective judgment: Theory and research on the development of epistemic assumptions through adulthood. Educational Psychologist, 39(1), 5–18. https://doi.org/10.1207/s15326985ep3901_2

Kobayashi, K. (2009). The influence of topic knowledge, external strategy use, and college experience on students’ comprehension of controversial texts. Learning and Individual Differences, 19(1), 130–134. https://doi.org/10.1016/j.lindif.2008.06.001

Kuhn, D., Cheney, R., & Weinstock, M. (2000). The development of epistemological understanding. Cognitive Development, 15(3), 309–328. https://doi.org/10.1016/S0885-2014(00)00030-7

Lehmann, T., Rott, B., & Schmidt-Borcherding, F. (2019). Promoting pre-service teachers’ integration of professional knowledge: Effects of writing tasks and prompts on learning from multiple documents. Instructional Science, 47(1), 99–126. https://doi.org/10.1007/s11251-018-9472-2

List, A. (2020a). Knowledge as perspective: From domain perspective learning to interdisciplinary understanding. In P. Van Meter, A. List, D. Lombardi, & P. Kendeou (Eds.), Handbook of learning from multiple representations and perspectives (pp. 164–190). New York, NY: Routledge.

List, A. (2020b). Six questions regarding strategy use when learning from multiple texts. In D.L. Dinsmore, L.K. Fryer, & M.M. Parkinson (Eds.), Handbook of strategies and strategic processing (pp. 119–140). New York, NY: Routledge.

List, A., & Alexander, P.A. (2017a). Analyzing and integrating models of multiple text comprehension. Educational Psychologist, 52(3), 143–147. https://doi.org/10.1080/00461520.2017.1328309

List, A., & Alexander, P.A. (2017b). Cognitive affective engagement model of multiple source use. Educational Psychologist, 52(3), 182–199. https://doi.org/10.1080/00461520.2017.1329014

List, A., & Alexander, P.A. (2018a). Cold and warm perspectives on the cognitive affective engagement model of multiple source use. In J.L.G. Braasch, I. Bråten, & M.T. McCrudden (Eds.), Handbook of multiple source use (pp. 34–54). New York, NY: Routledge.

List, A., & Alexander, P.A. (2018b). Corroborating students’ self-reports of source evaluation. Behaviour & Information Technology, 37(3), 198–216. https://doi.org/10.1080/0144929X.2018.1430849

List, A., & Alexander, P.A. (2019). Toward an integrated framework of multiple text use. Educational Psychologist, 54(1), 20–39. https://doi.org/10.1080/00461520.2018.1505514

List, A., Du, H., & Lee, H.Y. (2020). How do students integrate multiple texts? An investigation of top-down processing. European Journal of Psychology of Education. Advance online publication. https://doi.org/10.1007/s10212-020-00497-y

List, A., Du, H., Wang, Y., & Lee, H.Y. (2019). Toward a typology of integration: Examining the documents model framework. Contemporary Educational Psychology, 58, 228–242. https://doi.org/10.1016/j.cedpsych.2019.03.003

List, A., Stephens, L.A., & Alexander, P.A. (2019). Examining interest throughout multiple text use. Reading and Writing, 32(2), 307–333. https://doi.org/10.1007/s11145-018-9863-4

Maier, J., Richter, T., & Britt, M.A. (2018). Cognitive processes underlying the text-belief consistency effect: An eye-movement study. Applied Cognitive Psychology, 32(2), 171–185. https://doi.org/10.1002/acp.3391

Mason, L., Ariasi, N., & Boldrin, A. (2011). Epistemic beliefs in action: Spontaneous reflections about knowledge and knowing during online information searching and their influence on learning. Learning and Instruction, 21(1), 137–151. https://doi.org/10.1016/j.learninstruc.2010.01.001

McCrudden, M.T., & Barnes, A. (2016). Differences in student reasoning about belief-relevant arguments: A mixed methods study. Metacognition and Learning, 11(3), 275–303. https://doi.org/10.1007/s11409-015-9148-0

McNamara, D.S., Ozuru, Y., & Floyd, R.G. (2017). Comprehension challenges in the fourth grade: The roles of text cohesion, text genre, and readers’ prior knowledge. International Electronic Journal of Elementary Education, 4(1), 229–257.

McNeish, D. (2018). Thanks coefficient alpha, we’ll take it from here. Psychological Methods, 23(3), 412–433. https://doi.org/10.1037/met0000144

Monte-Sano, C. (2011). Beyond reading comprehension and summary: Learning to read and write in history by focusing on evidence, perspective, and interpretation. Curriculum Inquiry, 41(2), 212–249. https://doi.org/10.1111/j.1467-873X.2011.00547.x

Muijtjens, A.M.M., van Mameren, H., Hoogenboom, R.J.I., Evers, J.L.H., & van der Vleuten, C.P.M. (1999). The effect of a “don’t know” option on test scores: Number-right and formula scoring compared. Medical Education, 33(4), 267–275. https://doi.org/10.1046/j.1365-2923.1999.00292.x

OECD. (2019). PISA 2018 results (volume I): What students know and can do. Paris, France: Author.


Perfetti, C.A., Rouet, J.-F., & Britt, M.A. (1999). Toward a theory of documents representation. In H. van Oostendorp & S.R. Goldman (Eds.), The construction of mental representations during reading (pp. 99–122). Hillsdale, NJ: Erlbaum.

Perfors, A., & Kidd, E. (2018). What drives individual differences in statistical learning? The role of perceptual fluency and familiarity. Retrieved from https://psyarxiv.com/7jvx8/

Peterson, E.G., & Alexander, P.A. (2020). Navigating print and digital sources: Students’ selection, use, and integration of multiple sources across mediums. Journal of Experimental Education, 88(1), 27–46. https://doi.org/10.1080/00220973.2018.1496058

Primor, L., & Katzir, T. (2018). Measuring multiple text integration: A review. Frontiers in Psychology, 9, article 2294. https://doi.org/10.3389/fpsyg.2018.02294

Rawson, K.A., & Dunlosky, J. (2002). Are performance predictions for text based on ease of processing? Journal of Experimental Psychology: Learning, Memory, and Cognition, 28(1), 69–80. https://doi.org/10.1037/0278-7393.28.1.69

Rosseel, Y. (2012). Lavaan: An R package for structural equation modeling and more. Journal of Statistical Software, 48(2), 1–36. https://doi.org/10.18637/jss.v048.i02

Rouet, J.-F., & Britt, M.A. (2011). Relevance processes in multiple document comprehension. In M.T. McCrudden, J.P. Magliano, & G. Schraw (Eds.), Text relevance and learning from text (pp. 19–52). Charlotte, NC: Information Age.

Rouet, J.-F., Britt, M.A., Mason, R.A., & Perfetti, C.A. (1996). Using multiple sources of evidence to reason about history. Journal of Educational Psychology, 88(3), 478–493. https://doi.org/10.1037/0022-0663.88.3.478

Schiefele, U. (1999). Interest and learning from text. Scientific Studies of Reading, 3(3), 257–279. https://doi.org/10.1207/s1532799xssr0303_4

Schoor, C., Hahnel, C., Mahlow, N., Klagges, J., Kroehne, U., Goldhammer, F., & Artelt, C. (2020). Multiple document comprehension of university students. In O. Zlatkin-Troitscanskaia, H.A. Pant, M. Toepper, & C. Lautenbach (Eds.), Student learning in German higher education (pp. 221–240). Wiesbaden, Germany: Springer.

Schreiber, J.B., Nora, A., Stage, F.K., Barlow, E.A., & King, J. (2006). Reporting structural equation modeling and confirmatory factor analysis results: A review. The Journal of Educational Research, 99(6), 323–338. https://doi.org/10.3200/JOER.99.6.323-338

Stadtler, M., Scharrer, L., & Bromme, R. (2011). How reading goals and rhetorical signals influence recipients’ recognition of intertextual conflicts. In L. Carlson, C. Hoelscher, & T.F. Shipley (Eds.), Proceedings of the 33rd annual conference of the Cognitive Science Society (pp. 1346–1351). Austin, TX: Cognitive Science Society.

Stang Lund, E., Bråten, I., Brandmo, C., Brante, E.W., & Strømsø, H.I. (2019). Direct and indirect effects of textual and individual factors on source-content integration when reading about a socio-scientific issue. Reading and Writing, 32(2), 335–356. https://doi.org/10.1007/s11145-018-9868-z

Strømsø, H.I., & Barzilai, S. (2018). Individual differences in multiple document comprehension. In J.L.G. Braasch, I. Bråten, & M.T. McCrudden (Eds.), Handbook of multiple source use (pp. 99–116). New York, NY: Routledge.

Strømsø, H.I., & Bråten, I. (2010). The role of personal epistemology in the self-regulation of internet-based learning. Metacognition and Learning, 5(1), 91–111. https://doi.org/10.1007/s11409-009-9043-7

Strømsø, H.I., Bråten, I., Anmarkrud, Ø., & Ferguson, L.E. (2016). Beliefs about justification for knowing when ethnic majority and ethnic minority students read multiple conflicting documents. Educational Psychology, 36(4), 638–657. https://doi.org/10.1080/01443410.2014.920080

Strømsø, H.I., Bråten, I., & Brante, E.W. (2020). Profiles of warm engagement and cold evaluation in multiple-document comprehension. Reading and Writing. Advance online publication. https://doi.org/10.1007/s11145-020-10041-5

Strømsø, H.I., Bråten, I., & Britt, M.A. (2010). Reading multiple texts about climate change: The relationship between memory for sources and text comprehension. Learning and Instruction, 20(3), 192–204. https://doi.org/10.1016/j.learninstruc.2009.02.001

Strømsø, H.I., Bråten, I., & Samuelstuen, M.S. (2008). Dimensions of topic-specific epistemological beliefs as predictors of multiple text understanding. Learning and Instruction, 18(6), 513–527. https://doi.org/10.1016/j.learninstruc.2007.11.001

Tavor, I., Parker Jones, O., Mars, R.B., Smith, S.M., Behrens, T.E., & Jbabdi, S. (2016). Task-free MRI predicts individual differences in brain activity during task performance. Science, 352(6282), 216–220. https://doi.org/10.1126/science.aad8127

Treves, A., Santiago-Ávila, F.J., Popescu, V.D., Paquet, P.C., Lynn, W.S., Darimont, C.T., & Artelle, K.A. (2019). Trophy hunting: Insufficient evidence. Science, 366(6464), 435. https://doi.org/10.1126/science.aaz4389

Wigfield, A., & Guthrie, J.T. (1997). Relations of children’s motivation for reading to the amount and breadth of their reading. Journal of Educational Psychology, 89(3), 420–432. https://doi.org/10.1037/0022-0663.89.3.420

Wiley, J., Griffin, T.D., Steffens, B., & Britt, M.A. (2020). Epistemic beliefs about the value of integrating information across multiple documents in history. Learning and Instruction, 65, article 101266. https://doi.org/10.1016/j.learninstruc.2019.101266

Wolfe, M.B., & Goldman, S.R. (2005). Relations between adolescents’ text processing and reasoning. Cognition and Instruction, 23(4), 467–502. https://doi.org/10.1207/s1532690xci2304_2

Yano, Y., Long, M.H., & Ross, S. (1994). The effects of simplified and elaborated texts on foreign language reading comprehension. Language Learning, 44(2), 189–219. https://doi.org/10.1111/j.1467-1770.1994.tb01100.x

Zevin, J. (2003). Perceptions of national identity: How adolescents in the United States and Norway view their own and other countries. Social Studies, 94(5), 227–231. https://doi.org/10.1080/00377990309600211

Zhang, X. (2013). The I don’t know option in the vocabulary size test. TESOL Quarterly, 47(4), 790–811. https://doi.org/10.1002/tesq.98

Submitted August 20, 2019; final revision received June 17, 2020

Accepted June 17, 2020

ALEXANDRA LIST is an assistant professor in the Department of Educational Psychology, Counseling, and Special Education at The Pennsylvania State University, State College, USA; email [email protected]. Her work focuses on students’ learning from multiple sources of information (e.g., when researching on the internet) and the higher order cognitive processes involved, including source evaluation and integration.


APPENDIX A

Interest, Need for Cognition, and Epistemic Beliefs in Students’ Learning From Multiple Texts

Replicating analyses by Bråten et al. (2014), I ran a latent path model to examine the associations among individual difference factors, processing variables, and multiple-text task performance. Individual difference factors included were topic interest, need for cognition, and students’ endorsements of epistemic beliefs associated with the need to justify knowledge based on multiple texts. Processing factors included were time devoted to text access, referred to by Bråten et al. as an indicator of effort, and students’ reports of engagement in cross-textual elaboration. Outcome measures examined were of integrated mental model and intertext model development.

These analyses differed from Bråten et al.’s (2014) model in four ways. First, this study modeled individual difference factors as latent, rather than observed, variables. Second, this study did not include prior knowledge because of limitations in the reliability of this measure. Third, this study did not have a measure of situational interest as a processing factor. Fourth, this study examined an integration-focused set of multiple-text learning outcomes, including measures of integrated mental model and intertext model development. The theorized model evaluated is presented in Figure A1.
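For readers who want a concrete sense of how a latent path model of this kind can be specified, the sketch below expresses the Figure A1 structure in lavaan-style model syntax using the Python semopy package. This is an illustration only, not the analysis code used for the study (the reference list cites the lavaan R package; Rosseel, 2012), and all variable and indicator names are placeholder assumptions.

```python
import pandas as pd
import semopy

# Lavaan-style syntax mirroring the Figure A1 structure: three latent individual
# difference factors predict two processing variables, which in turn predict the
# two integration outcomes. Indicator names (int1, nfc1, jus1, ...) and indicator
# counts are placeholders, except the six need for cognition items reported above.
MODEL_DESC = """
interest      =~ int1 + int2 + int3 + int4
need_cog      =~ nfc1 + nfc2 + nfc3 + nfc4 + nfc5 + nfc6
justification =~ jus1 + jus2 + jus3 + jus4

time_on_texts ~ interest + need_cog + justification
elaboration   ~ interest + need_cog + justification

integrated_model ~ time_on_texts + elaboration
intertext_model  ~ time_on_texts + elaboration
"""

def fit_theorized_model(data: pd.DataFrame):
    """Fit the latent path model to a data frame whose columns match the indicator names."""
    model = semopy.Model(MODEL_DESC)
    model.fit(data)
    return model.inspect()  # parameter estimates, standard errors, and p-values
```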

Method

All measures used were the same as those reported in the full study, with two additional individual difference factors examined.

Need for Cognition

Need for cognition was a six-item measure based on Cacioppo, Petty, and Kao (1984). Only six items (e.g., “I prefer complex to simple problems”; “I like situations that require a lot of thinking”) were taken from the 18-item measure (i.e., those with the highest loadings), due to concerns over study length. Students rated each item on a 5-point Likert-type scale ranging from 1 (strongly disagree) to 5 (strongly agree). The six-item scale had a Cronbach’s alpha reliability of .88.

Justifications for Knowing by Multiple Texts

Students’ endorsement of epistemic beliefs that knowledge ought to be justified via multiple texts was assessed using

FIGURE A1 Theorized Model Based on Bråten et al. (2014)


four items from Ferguson and Bråten’s (2013) Justification by Multiple Sources beliefs subscale. Items included “Just one source is never enough to decide what is right” and “To be able to trust information in texts, I have to check various sources.” Each item was scored on a 5-point Likert-type scale ranging from 1 (strongly disagree) to 5 (strongly agree). The Cronbach’s alpha reliability for the four-item scale was .74.
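As a point of reference for the reliabilities reported above (.88 for the six need for cognition items and .74 for the four justification items), a minimal sketch of how Cronbach’s alpha can be computed from an item-by-respondent matrix is shown below. The response data are simulated placeholders, not the study data, so the printed value will not reproduce the reported reliabilities.

```python
import numpy as np

def cronbach_alpha(items) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Simulated 5-point Likert responses to six items (placeholder data only;
# substitute the actual item responses to obtain the reported reliabilities)
rng = np.random.default_rng(0)
simulated_responses = rng.integers(1, 6, size=(350, 6))
print(round(cronbach_alpha(simulated_responses), 2))
```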

Results: Model Replicating Bråten et al. (2014)

The model using individual difference factors and processing variables to predict multiple-text task performance was found to have less than desired data fit, χ2(119) = 249.30, p < .001. Model fit indexes were CFI = 0.94, RMSEA = 0.06 (90% CI [0.05, 0.07]), SRMR = 0.05. In particular, the CFI index fell below the desired value of greater than 0.95 (Hooper et al., 2008). Statistically significant direct paths and standardized loadings are presented in Figure A2. Direct and indirect paths are summarized in Table A1.
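The RMSEA value reported above can be recovered, to rounding, from the model chi-square and degrees of freedom. The sketch below assumes a sample size of roughly 351 (the stance-group counts reported in Appendix B), which is an assumption rather than a value stated in this appendix; CFI is omitted because it additionally requires the baseline model chi-square, which is not reported.

```python
import math

def rmsea(chisq: float, df: int, n: int) -> float:
    """Point estimate of RMSEA from the model chi-square, degrees of freedom, and sample size."""
    return math.sqrt(max(chisq - df, 0.0) / (df * (n - 1)))

# Fit values reported for the model replicating Bråten et al. (2014); n = 351 is an assumption
print(round(rmsea(249.30, 119, 351), 2))  # ~0.06, matching the reported RMSEA
```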

Measurement Model

All indicators were found to load statistically significantly onto latent factors corresponding to interest (.61–.89, ps < .001), need for cognition (.68–.85, ps < .001), and beliefs about the need for justification by multiple texts (.52–.73, ps < .001).

Mediators

Time on texts was statistically significantly predicted by students’ interest (β = 0.24, p < .001). Engagement in cross-textual elaboration was statistically significantly predicted by all of the individual difference factors in the model: interest (β = 0.16, p < .05), need for cognition (β = 0.22, p < .01), and beliefs about the need to justify knowledge via multiple texts (β = 0.23, p < .01). R2 for time on texts was .06, whereas R2 for students’ reports of cross-textual elaboration was .19.

Integrated Mental Model Development

Direct Paths
Integrated mental model development was only statistically significantly and directly predicted by the amount of time students devoted to text access (β = 0.44, p < .001).

Indirect Paths
There was also a statistically significant path between interest and integrated mental model development, mediated by the amount of time students devoted to text access (β = 0.11, p < .001).

In total, 24.4% of the variance in integrated mental model development was explained by the model.
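The mediated coefficient reported above is, to rounding, the product of its two component standardized paths (interest to time on texts, and time on texts to integrated mental model development). The arithmetic check below uses the rounded estimates reported in this appendix, so it only approximately reproduces the reported value.

```python
# Rounded standardized estimates reported in this appendix
interest_to_time = 0.24          # interest -> time on texts
time_to_integrated_model = 0.44  # time on texts -> integrated mental model development

# Product-of-coefficients estimate of the indirect (mediated) effect
indirect_effect = interest_to_time * time_to_integrated_model
print(round(indirect_effect, 2))  # ~0.11, in line with the reported indirect path of 0.11
```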

FIGURE A2 Model Based on Bråten et al. (2014), With Statistically Significant Direct Paths and Standardized Loadings


Intertext Model Development

Direct Paths
Intertext model development was only directly predicted by the amount of time students devoted to text access (β = 0.39, p < .001), with none of the other predictors having a statistically significant direct effect in the model (ps > .12).

Indirect Paths
The only statistically significant indirect path in the model was the effect of interest on students’ intertext model development, which was mediated by the amount of time students devoted to text access (β = 0.09, p < .001).

The variance in intertext model development explained by the model was R2 = .19.

TABLE A1 Direct and Indirect Paths for the Model Replicating Bråten et al. (2014), With Need for Cognition and Beliefs About the Need for Justification by Multiple Texts

Predictor   B   Standard error (B)   β   95% confidence interval   p

Integrated mental model development

Direct paths

Time 0.24 0.03 0.46 [0.19, 0.29] .00

Cross-textual elaboration 0.04 0.02 0.11 [0.005, 0.07] .15

Indirect paths

Total indirect interest 0.03 0.01 0.13 [0.014, 0.04] .00

• Interest → Time 0.02 0.01 0.11 [0.011, 0.04] .00

• Interest → Cross-textual elaboration 0.00 0.00 0.01 [−0.001, 0.01] .22

Total indirect need for cognition −0.01 0.01 −0.02 [−0.023, 0.01] .58

• Need for cognition → Time −0.01 0.01 −0.04 [−0.028, 0.004] .15

• Need for cognition → Cross-textual elaboration 0.01 0.00 0.02 [−0.001, 0.01] .20

Total indirect justification by multiple texts 0.03 0.02 0.06 [−0.002, 0.06] .10

• Justification by multiple texts → Time 0.02 0.01 0.03 [−0.010, 0.04] .24

• Justification by multiple texts → Cross-textual elaboration 0.01 0.01 0.02 [−0.001, 0.02] .21

Intertext model development

Direct paths

Time 0.21 0.03 0.39 [0.16, 0.26] .00

Cross-textual elaboration 0.05 0.02 0.14 [0.012, 0.08] .12

Indirect paths

Total indirect interest 0.03 0.01 0.12 [0.014, 0.04] .00

• Interest → Time 0.02 0.01 0.10 [0.010, 0.03] .00

• Interest → Cross-textual elaboration 0.01 0.00 0.02 [−0.001, 0.01] .08

Total indirect need for cognition −0.002 0.01 −0.01 [−0.017, 0.01] .85

• Need for cognition → Time −0.01 0.01 −0.04 [−0.024, 0.003] .14

• Need for cognition → Cross-textual elaboration 0.01 0.00 0.03 [0.000, 0.02] .047

Total indirect justification by multiple texts 0.03 0.01 0.06 [0.001, 0.06] .04

• Justification by multiple texts → Time 0.01 0.01 0.03 [−0.009, 0.04] .23

• Justification by multiple texts → Cross-textual elaboration 0.02 0.01 0.03 [−0.001, 0.03] .07


Covariances

Interest was statistically significantly associated with both need for cognition (0.26, p < .001) and students’ beliefs regarding the need for justification by multiple texts (0.18, p = .01). Need for cognition was statistically significantly associated with the need for justification by multiple texts (0.39, p < .001). Integrated mental model construction and intertext model development were statistically significantly associated with each other (0.37, p < .001).

Discussion

Despite somewhat poorer model fit, these results largely mirror findings from Bråten et al. (2014). In particular, in this model, interest, need for cognition, and students’ beliefs regarding the need for justification by multiple texts were all associated with students’ deep-level strategy use, with students’ beliefs and need for cognition associated with deep-level strategy use in Bråten et al.’s model. Further, in the present model, only interest was found to be associated with the time students devoted to text access, whereas both interest and students’ beliefs about the need for justification by multiple texts were associated with time on task in Bråten et al.’s study. Centrally, in this study and in Bråten et al.’s work, both the time that students devoted to text access (i.e., termed effort by Bråten et al.) and students’ engagement in deep-level strategy use significantly predicted students’ learning from multiple texts.

APPENDIX B

Attitudes, Interest, and Habits With Regard to Information Evaluation in Students’ Learning From Multiple Texts

In addition to examining interest as a measure of affective engagement, students’ attitudes toward trophy hunting and attitude certainty were also examined in association with multiple-text task performance. Specifically, a model including individual difference factors (i.e., topic interest, information evaluation, attitudes), processing measures as mediators (i.e., time devoted to text access, reports of cross-textual elaboration), and performance indicators (i.e., intertext model and integrated mental model development) was evaluated. The theoretical model evaluated is presented in Figure B1, and model fit, with direct paths shown, is presented in Figure B2. Direct and indirect paths are summarized in Table B1.

Measures

All measures were the same as those reported in the full study, with the addition of attitudes included in this model. Attitudes were assessed in two primary ways.

Attitude Stance
First, students were asked to take a stance with regard to trophy hunting by responding to the multiple-choice question, “Do you support trophy hunting?” In the sample, 5.98% (n = 21) selected “Yes, I support trophy hunting”; 60.40% (n = 212) selected “No, I don’t support trophy hunting”; 6.84% (n = 24) of participants reported that they could not decide; and 26.78% (n = 94) selected “I don’t know enough to decide.”

Attitude Certainty
In addition to reporting their attitude stance, participants were further asked to rate three items corresponding to attitude certainty (i.e., “how sure are you,” “how confident are you,” “how strongly do you feel”). Students rated attitude certainty items on a 5-point Likert-type scale, ranging from 1 (not at all) to 5 (very), with attitude certainty able to be rated by students endorsing a variety of attitude stances. That is, students who were strongly in support of trophy hunting and students who were strongly opposed to trophy hunting were expected to strongly endorse these items, alike, and to be comparably reactive to the texts provided, as they included information both in support of and in opposition to trophy hunting. In contrast, students without a predefined attitude stance (i.e., selecting the “I cannot decide” or “I don’t know enough to decide” options) were expected to be more moderate in their ratings of attitude certainty.

These expected associations between attitude stance and attitude certainty were borne out in an analysis of variance that found average attitude certainty to statistically significantly differ in association with stance, F(3, 348) = 80.28, p < .001, η2 = .41. As expected, post hoc analyses


using Tukey’s honestly significant difference test found students supporting (M = 3.54, SD = 0.89) and opposing (M = 3.96, SD = 0.96) trophy hunting to have statistically significantly higher degrees of attitude certainty than students selecting the “I cannot decide” (M = 2.42, SD = 0.84) or “I don’t know enough to decide” (M = 2.17, SD = 1.09) options (ps < .001). Because attitude certainty functioned separately from students’ chosen attitude stance (i.e., students both supporting and opposing trophy hunting had similarly high ratings of attitude certainty), I used this measure in analyses. Reliability for the three-item attitude certainty scale was .94.
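For readers interested in how this kind of stance-group comparison can be run, the sketch below performs pairwise Tukey comparisons with statsmodels on simulated certainty ratings whose group sizes, means, and standard deviations mirror those reported above. The data are placeholders, not the study data, so the exact test statistics will differ.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Simulated attitude-certainty ratings (1-5) mirroring the reported group sizes, means, and SDs
rng = np.random.default_rng(1)
groups = {
    "support": (21, 3.54, 0.89),
    "oppose": (212, 3.96, 0.96),
    "cannot decide": (24, 2.42, 0.84),
    "don't know enough": (94, 2.17, 1.09),
}
frames = []
for stance, (n, mean, sd) in groups.items():
    ratings = np.clip(rng.normal(mean, sd, size=n), 1, 5)
    frames.append(pd.DataFrame({"stance": stance, "certainty": ratings}))
data = pd.concat(frames, ignore_index=True)

# Pairwise Tukey HSD comparisons of mean attitude certainty across stance groups
print(pairwise_tukeyhsd(endog=data["certainty"], groups=data["stance"], alpha=0.05))
```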

FIGURE B1 Theorized Model With Attitudes as an Individual Difference Factor, With Statistically Significant Paths and Standardized Loadings

FIGURE B2 Model Including Attitude Certainty, With Statistically Significant Direct Paths and Standardized Loadings


TABLE B1 Direct and Indirect Paths for the Model Including Attitudes

Predictor   B   Standard error (B)   β   95% confidence interval   p

Integrated mental model development

Direct paths

Interest 0.02 0.01 0.09 [−0.006, 0.05] .13

Habits in information evaluation −0.01 0.02 −0.02 [−0.043, 0.03] .73

Attitudes 0.01 0.01 −0.05 [−0.010, 0.03] .34

Time 0.23 0.03 0.33 [0.18, 0.29] .00

Cross-textual elaboration 0.03 0.02 0.08 [−0.007, 0.06] .12

Indirect paths

Total indirect interest 0.02 0.01 0.10 [0.008, 0.04] .00

• Interest → Time 0.02 0.01 0.09 [0.007, 0.03] .00

• Interest → Cross-textual elaboration 0.00 0.00 0.01 [−0.002, 0.01] .26

Total indirect habits in information evaluation 0.03 0.01 0.08 [0.006, 0.05] .01

• Habits in information evaluation → Time 0.02 0.01 0.06 [0.001, 0.04] .04

• Habits in information evaluation → Cross-textual elaboration 0.00 0.01 0.02 [−0.002, 0.02] .13

Total indirect attitudes 0.00 0.01 0.01 [−0.009, 0.01] .32

• Attitudes → Time −0.001 0.01 −0.003 [−0.010, 0.01] .90

• Attitudes → Cross-textual elaboration 0.00 0.00 0.01 [−0.001, 0.01] .21

Total effects

Interest 0.04 0.02 0.19 [0.014, 0.07] .00

Habits in information evaluation 0.02 0.02 0.06 [−0.018, 0.06] .29

Attitudes 0.01 0.01 0.06 [−0.011, 0.04] .32

Intertext model development

Direct paths

Interest 0.01 0.02 0.03 [−0.023, 0.04] .70

Habits in information evaluation 0.02 0.02 0.06 [−0.016, 0.06] .26

Attitudes 0.01 0.01 0.06 [−0.008, 0.03] .26

Time 0.20 0.03 0.38 [0.15, 0.25] .00

Cross-textual elaboration 0.03 0.02 0.10 [−0.004, 0.07] .08

Indirect paths

Total indirect interest 0.02 0.01 0.09 [0.008, 0.03] .00

• Interest → Time 0.02 0.01 0.08 [0.006, 0.03] .00

• Interest → Cross-textual elaboration 0.00 0.00 0.01 [−0.001, 0.01] .22

Total indirect habits in information evaluation 0.03 0.01 0.08 [0.007, 0.05] .01

• Habits in information evaluation → Time 0.02 0.01 0.05 [0.001, 0.03] .04

• Habits in information evaluation → Cross-textual elaboration 0.01 0.01 0.03 [−0.002, 0.02] .11

Total indirect attitudes 0.00 0.01 0.01 [−0.007, 0.01] .64

• Attitudes → Time −0.00 0.00 −0.00 [−0.009, 0.01] .90

• Attitudes → Cross-textual elaboration 0.00 0.00 0.02 [−0.001, 0.01] .18

Total effects

Interest 0.03 0.02 0.11 [−0.005, 0.06] .10

Habits in information evaluation 0.05 0.02 0.14 [0.008, 0.09] .02

Attitudes 0.01 0.01 0.07 [−0.008, 0.04] .21


Results for the Model Including Attitude Certainty

The model including attitudes and other individual difference factors and processing variables to predict multiple-text task performance was found to have good data fit, χ2(103) = 174.06, p < .001. Model fit indexes were CFI = 0.97, RMSEA = 0.05 (90% CI [0.03, 0.06]), SRMR = 0.04. Statistically significant direct paths and standardized loadings are presented in Figure B2.

Measurement Model

All indicators were found to load statistically significantly onto latent factors corresponding to interest (.61–.88, ps < .001), engagement in information evaluation (.66–.79, ps < .001), and attitude certainty (.88–.97, ps < .001).

Mediators

Time on texts was statistically significantly predicted by both students’ interest (β = 0.20, p < .01) and their habits with regard to information evaluation (β = 0.13, p < .05). Students’ engagement of strategies reflecting cross-textual elaboration was predicted by their attitudes (β = 0.16, p < .05) and habits with regard to information evaluation (β = 0.29, p < .001).

Integrated Mental Model Development

Direct Paths
Integrated mental model development was only statistically significantly predicted by the amount of time students devoted to text access (β = 0.44, p < .001). None of the other factors had a statistically significant direct effect on integrated mental model development (ps > .12).

Indirect Paths
There were additionally two statistically significant mediating paths in the model. In particular, the effects of interest (β = 0.09, p < .01) and students’ habits with regard to information evaluation (β = 0.06, p < .05) on integrated mental model development were statistically significantly mediated by time devoted to text access.

In total, 24.6% of the variance in integrated mental model development was explained by the model.
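The confidence intervals reported for indirect paths in Tables A1 and B1 bound a product term (the path into the mediator times the path out of it). One common way to obtain such an interval is the percentile bootstrap; whether a bootstrap or a delta-method interval was used here is not stated, so the sketch below is only a generic illustration of the logic, run on simulated placeholder data with ordinary regressions rather than the latent model.

```python
import numpy as np

def bootstrap_indirect_ci(x, m, y, n_boot=2000, seed=0):
    """Percentile bootstrap CI for the indirect effect of x on y through m (product of a and b paths)."""
    rng = np.random.default_rng(seed)
    x, m, y = map(np.asarray, (x, m, y))
    n = len(x)
    products = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)
        xb, mb, yb = x[idx], m[idx], y[idx]
        a = np.polyfit(xb, mb, 1)[0]  # x -> m path
        # m -> y path, controlling for x (first coefficient of [m, x, intercept])
        b = np.linalg.lstsq(np.column_stack([mb, xb, np.ones(n)]), yb, rcond=None)[0][0]
        products.append(a * b)
    return np.percentile(products, [2.5, 97.5])

# Simulated placeholder data loosely mimicking interest -> time on texts -> integration
rng = np.random.default_rng(1)
interest = rng.normal(size=351)
time_on_texts = 0.2 * interest + rng.normal(size=351)
integration = 0.4 * time_on_texts + rng.normal(size=351)
print(bootstrap_indirect_ci(interest, time_on_texts, integration))
```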

Intertext Model Development

Direct Paths
Intertext model development was also only predicted by the amount of time students devoted to text access (β = 0.38, p < .001). No other direct paths were statistically significant (ps > .08).

Indirect Paths
Again, the effects of interest (β = 0.08, p < .01) and students’ habits with regard to information evaluation (β = 0.05, p < .05) on intertext model development were mediated by the amount of time students devoted to text access.

In total, 19.3% of the variance in intertext model development was explained by the model.

Covariances

Interest was statistically significantly associated with students’ attitudes (.41, p < .001) and habits with regard to information evaluation (.23, p = .001). Attitudes were also statistically significantly associated with students’ habits with regard to information evaluation (.14, p < .05). Integrated mental model construction and intertext model development were associated with each other (.37, p < .001).

Discussion

Although the model demonstrated good fit, students’ attitude certainty was not directly associated with any of the outcomes of interest, although it was statistically significantly associated with students’ engagement in cross-textual elaboration. As suggested by the CAEM, it may be that holding stronger attitudes on a particular topic is associated with greater strategy use (i.e., engagement in cross-textual elaboration) when students are presented with conflicting texts on a controversial topic. For instance, stronger attitudes may lead students to scrutinize and critically evaluate texts conflicting with their own points of view (McCrudden & Barnes, 2016).



APPENDIX C

Representation of Arguments in Study Texts

The Economic Benefits of Trophy Hunting The Economic Harms of Trophy Hunting

The Ecological Argument for Trophy Hunting

A. Trophy hunting raises substantial money for animal conservation....In total, trophy hunting generates $200 million annually for wildlife conservation in sub-Saharan Africa. Such a substantial sum of money for wildlife conservation could not otherwise be raised without the exorbitant fees that Westerners pay for trophy hunting licenses.

A. While a common claim is that trophy hunting raises substantial funds for impoverished African nations, in fact, the trophy hunting industry is quite small, relative to the broader African tourism industry. For instance, in 2014, researchers from Princeton University found that while 8,500 trophy hunters visited South Africa each year, a staggering 9.5 million tourists did so. The tourism industry provides many more local jobs at hotels, restaurants, and wildlife parks than does trophy hunting. To the extent that the tourism industry rests on wildlife conservation, it can raise substantially more funds than trophy hunting, while those working in the tourism industry become personally and financially invested in wildlife conservation.

B. Trophy hunting also benefits local communities in Africa, contributing to rural development. Wildlife is the property of rural villages in the surrounding areas. In Zimbabwe, rural district councils allow villages to sell hunting licenses to safari operators, granting them access to their wildlife. Safari operators, in turn, sell opportunities for trophy hunting to wealthy foreigners, primarily from the United States....Selling access to their wildlife, via trophy hunting licenses, is often one of the few ways that rural villages are able to make money.

B. The largest threat to wildlife today is not from trophy hunting, but rather from habitat loss due to farming and land development....Trophy hunting is one of the few activities that makes it profitable enough for rural communities in Africa to preserve grasslands for wild animals, by creating an economic incentive for land conservation.


APPENDIX D

Multigroup Effects Model With Outcome Measures Separately Examined

In addition to evaluating a model in which measures of integrated mental model and intertext model development are collapsed, I also evaluated a multigroup effects model, where students’ performance on the sentence verification task, as a measure of integrated mental model development, and the source differentiation task, as an indicator of intertext model formation, was evaluated separately from a second group of students, completing the cross-textual integration task, as a measure of integrated mental model development, and the statement source match task, as a measure of intertext model formation. All other measures and paths were identical to those examined in the collapsed model (see Figure D1 for the theoretical model evaluated). Models for each group, with statistically significant direct paths shown, are presented separately in Figures D2 and D3. Direct and indirect paths across groups are summarized in Table D1.

Overall Model Fit

The overall model had good data fit, χ2(134) = 217.76, p < .001; group 1 χ2 = 89.68; group 2 χ2 = 128.08. Model fit indexes were CFI = 0.95, RMSEA = 0.06 (90% CI [0.05, 0.08]), and SRMR = 0.05.
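In a multigroup model, the overall chi-square is the sum of the group-specific contributions, as the quick check below confirms for the values reported above.

```python
# Group-specific chi-square contributions reported above
group1_chisq = 89.68
group2_chisq = 128.08

overall_chisq = group1_chisq + group2_chisq
print(round(overall_chisq, 2))  # 217.76, matching the reported overall chi-square(134)
```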

Group 1: Predicting Sentence Verification and Source Differentiation Task Performance

As in the full model, all indicators were found to load statistically significantly onto interest (.68–.84, ps < .001) and students’ habits with regard to information evaluation (.68–.78, ps < .001).

Mediators

Total time was only predicted by students’ interest (β = 0.24, p < .01), whereas cross-textual elaboration was predicted by students’ habits with regard to information evaluation (β = 0.22, p < .05) but not by interest (p > .13). In total, R2 was .07 for the time devoted to text access and .08 for students’ engagement in cross-textual elaboration.

FIGURE D1 Theorized Multigroup Model

Note. CTI = cross-textual integration task; SDT = source differentiation task; SSM = statement source match task; SVT = sentence verification task.


FIGURE D2 Model Predicting Sentence Verification Task and Source Differentiation Task, With Statistically Significant Direct Paths and Standardized Loadings

Note. SDT = source differentiation task; SVT = sentence verification task.

FIGURE D3 Model Predicting Cross-Textual Integration and Statement Source Match Task, With Statistically Significant Direct Paths and Standardized Loadings

Note. CTI = cross-textual integration task; SSM = statement source match task.


TABLE D1 Summary of Direct and Indirect Effects for Sentence Verification Task (SVT) and Source Differentiation Task (SDT) Performance (Group 1) and Cross-Textual Integration (CTI) and Statement Source Match (SSM) Task Performance (Group 2)

Within each row, values are B, standard error (B), β, 95% confidence interval, and p; Group 1 (SVT and SDT) values appear before the vertical bar, and Group 2 (CTI and SSM) values appear after it.

Integrated mental model development

Direct paths

Interest 0.03 0.02 0.14 [0.003, 0.07] .03 | 0.02 0.02 0.09 [−0.019, 0.05] .35

Habits in information evaluation −0.02 0.02 −0.06 [−0.069, 0.03] .36 | 0.00 0.03 0.00 [−0.05, 0.05] .98

Time 0.29 0.04 0.53 [0.22, 0.36] .00 | 0.19 0.04 0.19 [0.11, 0.27] .00

Cross-textual elaboration 0.02 0.02 0.05 [−0.025, 0.06] .45 | 0.04 0.03 0.04 [−0.009, 0.10] .11

Indirect paths

Total indirect interest 0.03 0.01 0.13 [0.009, 0.05] .01 | 0.02 0.01 0.09 [0.003, 0.03] .02

• Interest → Time 0.03 0.01 0.13 [0.008, 0.05] .01 | 0.01 0.01 0.06 [−0.001, 0.03] .06

• Interest → Cross-textual elaboration 0.00 0.00 0.01 [−0.003, 0.01] .49 | 0.01 0.00 0.03 [−0.003, 0.01] .19

Total indirect habits in information evaluation 0.02 0.02 0.05 [−0.018, 0.06] .32 | 0.04 0.01 0.12 [0.009, 0.06] .01

• Habits in information evaluation → Time 0.02 0.02 0.04 [−0.020, 0.05] .41 | 0.02 0.01 0.07 [0.000, 0.04] .05

• Habits in information evaluation → Cross-textual elaboration 0.00 0.01 0.01 [−0.007, 0.01] .46 | 0.02 0.01 0.05 [−0.004, 0.03] .12

Intertext model development

Direct paths

Interest 0.01 0.02 0.03 [−0.037, 0.05] .79 | 0.02 0.02 0.08 [−0.014, 0.05] .30

Habits in information evaluation 0.02 0.03 0.06 [−0.036, 0.08] .43 | 0.01 0.02 0.02 [−0.039, 0.05] .79

Time 0.17 0.04 0.29 [0.08, 0.25] .00 | 0.22 0.04 0.45 [0.153, 0.29] .00

Cross-textual elaboration 0.05 0.02 0.15 [0.005, 0.10] .03 | 0.03 0.03 0.10 [−0.024, 0.09] .25

Indirect paths

Total indirect interest 0.02 0.01 0.09 [0.005, 0.04] .01 | 0.02 0.01 0.09 [0.003, 0.04] .02

• Interest → Time 0.02 0.01 0.07 [0.003, 0.03] .02 | 0.02 0.01 0.08 [−0.000, 0.03] .053

• Interest → Cross-textual elaboration 0.01 0.00 0.02 [−0.003, 0.01] .20 | 0.00 0.00 0.02 [−0.004, 0.01] .30

Total indirect habits in information evaluation 0.02 0.01 0.06 [−0.003, 0.05] .09 | 0.04 0.02 0.12 [0.006, 0.07] .02

• Habits in information evaluation → Time 0.01 0.01 0.02 [−0.012, 0.03] .41 | 0.02 0.01 0.08 [0.000, 0.05] .046

• Habits in information evaluation → Cross-textual elaboration 0.01 0.01 0.03 [−0.004, 0.03] .13 | 0.01 0.01 0.04 [−0.009, 0.03] .27


Sentence Verification Task Performance

Direct Effects
Performance on the sentence verification task was statistically significantly predicted by interest (β = 0.14, p < .05) and the amount of time students devoted to text access (β = 0.53, p < .001).

Indirect Effects
Four mediating paths were examined, but only one of these was statistically significant. In particular, the effect of interest on integrated mental model development was statistically significantly mediated by the time students devoted to text access (β = 0.13, p < .01).

In total, 33.5% of variance in sentence verification task performance was explained by the model.

Source Differentiation Performance

Direct Effects
The time students devoted to multiple-text access (β = 0.29, p < .001) and students’ engagement of strategies reflecting cross-textual elaboration (β = 0.15, p < .05) were statistically significant direct predictors of source differentiation performance.

Indirect Effects
Four mediating paths were examined, in association with source differentiation performance, but only one of these was found to be statistically significant. That is, interest was statistically significantly associated with source differentiation task performance, as mediated via time devoted to text access (β = 0.07, p < .05).

In total, 13.0% of variance in source differentiation task performance was explained by the model.

Covariances

Interest was not statistically significantly associated with students’ habits with regard to information evaluation (.21, p = .06). Integrated mental model development was statistically significantly associated with intertext model development (.31, p < .001).

Group 2: Predicting Cross-Textual Integration and Statement Source Match Task Performance

As in the full model, all indicators loaded statistically significantly onto factors corresponding to interest (.52–.94, ps < .001) and habits with regard to information evaluation (.66–.79, ps < .001).



Time on texts was marginally significantly predicted by interest (β = 0.17, p = .05) and statistically significantly predicted by students’ habits with regard to information evaluation (β = 0.18, p < .05), with R2 = .08. Students’ engagement in cross-textual elaboration was predicted by interest (β = 0.20, p < .05) and students’ habits with regard to information evaluation (β = 0.40, p < .001), with R2 = .23.

Cross-Textual Integration Task Performance

Direct Effects
Cross-textual integration task performance was directly predicted only by the time students devoted to text access (β = 0.37, p < .001). None of the other direct paths examined in the model were statistically significant (ps > .11).

Indirect Effects
Four possible indirect paths were also examined, with only one of these found to be statistically significant. In particular, the effect of students’ engagement in information evaluation on cross-textual integration task performance was found to be statistically significantly mediated by the time students devoted to text access (β = 0.07, p < .05).

In total, 19.3% of the variance in cross-textual integration task performance was explained by the model.

Statement Source Match Task Performance

Direct Effects
Statement source match task performance was only directly predicted by the amount of time students devoted to text access (β = 0.45, p < .001), with none of the other direct effects statistically significant in the model.

Indirect Effects
There were two statistically significant mediating paths in the model. That is, the marginally significant effect of interest (β = 0.08, p = .05) and the statistically significant effect of students’ habits with regard to information evaluation (β = 0.08, p < .05) on statement source match task performance were mediated by the time students devoted to text access.

In total, 25.9% of the variance in statement source match task performance was explained by the model.

Covariances

Interest was statistically significantly associated with students’ habits with regard to information evaluation (.26, p < .01). Integrated mental model development was statistically significantly associated with intertext model development (.44, p < .001).

Discussion

Results from these analyses partially mirrored those found in the full study. First, whereas interest directly predicted integrated mental model development for group 1, this path did not rise to significance for group 2. Second, as in the full model, time was found to have an important direct and mediating effect on both students’ integrated mental model development and their intertext model development. Third, in analyses for group 1, students’ engagement in cross-textual elaboration statistically significantly predicted intertext model development, but this path was not found to be statistically significant for group 2. On a more general level, the analyses conducted across groups supported the general conclusions suggested by the CAEM. Individual difference factors were associated with a variety of processing variables; processing variables were generally found to predict integration performance.