
REPRESENTATION AND PROCESSING OF OBJECTS AND OBJECT FEATURES

IN VISUAL WORKING MEMORY

by

David Fencsik

A dissertation submitted in partial fulfillment
of the requirements for the degree of

Doctor of Philosophy (Psychology)

in The University of Michigan
2003

Doctoral Committee:

Professor David E. Meyer, Chair
Associate Professor William J. Gehring
Professor David E. Kieras
Professor Patricia A. Reuter-Lorenz


© David Fencsik 2003

All Rights Reserved


To Alissa


ACKNOWLEDGEMENTS

There are many people who have helped me get to this point, and I owe them all a great

deal of gratitude.

First and foremost, my sister Susanna, without whom none of this would have been

possible—for always arguing with me, even on those rare occasions when I was right. My

parents, for giving me all the time I needed to learn things in my own way. The much

loved and sorely missed Isaac, for the long walks and my first data-set.

I must give thanks to my committee members, David Kieras and Patti Reuter-Lorenz,

for their time and effort, and for ensuring that I set the bar higher. My advisors, Bill

Gehring and Dave Meyer, for all the guidance, patience, and the occasional kick in the

pants. The many fantastic teachers from elementary school through high school who

taught me so enthusiastically: Rebecca Mayeno, Lee Tempkin, Karla Herndon, Susan

Groves, David Bye, and Eric Anderson. My college professors: Glenn Meyer, Arthur

Shapiro, and especially Erik Nilsen, for giving me a push in the right direction.

The many research assistants who helped me collect all this data: Emily, Larry, Jenn,

Jessica, Robin, Stephanie, and Rachel. Shane, for seven years of free math consulting.

The whole rest of the Brain, Cognition, and Action Lab.

I thank my fabulous daughter, Maya, for constantly reminding me of what is important

in life. And last, but certainly not least, my wife, Alissa, who not only put up with this

adventure, but embraced it wholeheartedly—I could not have made it without you.


TABLE OF CONTENTS

DEDICATION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ii

ACKNOWLEDGEMENTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii

LIST OF FIGURES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vi

LIST OF TABLES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii

LIST OF APPENDICES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii

CHAPTER

I. INTRODUCTION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

1.1 Visual Representation . . . . . . . . . . . . . . . . . . . . 2
1.2 Visual Storage . . . . . . . . . . . . . . . . . . . . 4
1.3 A Processing Model of Performance for the Change-Detection Task . . . . . . . . . . . . . . . . . . . . 8
1.3.1 The Visual Change-Detection Task . . . . . . . . . . . . . . . . . . . . 8
1.3.2 Stages of Processing . . . . . . . . . . . . . . . . . . . . 9
1.3.3 Consequences of the Present Assumptions . . . . . . . . . . . . . . . . . . . . 11
1.4 Quantitative Predictions Based on the Model . . . . . . . . . . . . . . . . . . . . 12
1.5 Testing the Object-File Hypothesis . . . . . . . . . . . . . . . . . . . . 14

II. EXPERIMENT 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

2.1 Method . . . . . . . . . . . . . . . . . . . . 18
2.1.1 Participants . . . . . . . . . . . . . . . . . . . . 18
2.1.2 Apparatus . . . . . . . . . . . . . . . . . . . . 18
2.1.3 Stimuli . . . . . . . . . . . . . . . . . . . . 18
2.1.4 Procedure . . . . . . . . . . . . . . . . . . . . 19
2.1.5 Design . . . . . . . . . . . . . . . . . . . . 19
2.2 Results . . . . . . . . . . . . . . . . . . . . 19
2.3 Model Fit . . . . . . . . . . . . . . . . . . . . 21
2.4 Discussion . . . . . . . . . . . . . . . . . . . . 22

III. EXPERIMENT 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25

3.1 Method . . . . . . . . . . . . . . . . . . . . 26
3.1.1 Participants . . . . . . . . . . . . . . . . . . . . 26
3.1.2 Apparatus . . . . . . . . . . . . . . . . . . . . 26
3.1.3 Stimuli . . . . . . . . . . . . . . . . . . . . 26
3.1.4 Procedure . . . . . . . . . . . . . . . . . . . . 27
3.1.5 Design . . . . . . . . . . . . . . . . . . . . 27

3.2 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28


3.3 Model Fit . . . . . . . . . . . . . . . . . . . . 28
3.4 Discussion . . . . . . . . . . . . . . . . . . . . 29

IV. EXPERIMENT 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32

4.1 Method . . . . . . . . . . . . . . . . . . . . 33
4.1.1 Participants . . . . . . . . . . . . . . . . . . . . 33
4.1.2 Stimuli . . . . . . . . . . . . . . . . . . . . 33
4.1.3 Procedure . . . . . . . . . . . . . . . . . . . . 33
4.1.4 Design . . . . . . . . . . . . . . . . . . . . 34
4.2 Results . . . . . . . . . . . . . . . . . . . . 34
4.3 Model Fit . . . . . . . . . . . . . . . . . . . . 36
4.4 Discussion . . . . . . . . . . . . . . . . . . . . 37

V. EXPERIMENT 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39

5.1 Predictions of the Independent Feature-Store Hypothesis . . . . . . . . . . . . . . . . . . . . 41
5.2 Predictions of the Object-File Hypothesis with Unreliable Coding . . . . . . . . . . . . . . . . . . . . 43
5.3 Method . . . . . . . . . . . . . . . . . . . . 44
5.3.1 Participants . . . . . . . . . . . . . . . . . . . . 44
5.3.2 Stimuli . . . . . . . . . . . . . . . . . . . . 44
5.3.3 Procedure . . . . . . . . . . . . . . . . . . . . 45
5.3.4 Design . . . . . . . . . . . . . . . . . . . . 45
5.4 Results . . . . . . . . . . . . . . . . . . . . 46
5.5 Model Fits . . . . . . . . . . . . . . . . . . . . 46
5.6 Discussion . . . . . . . . . . . . . . . . . . . . 48

VI. GENERAL DISCUSSION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51

6.1 The Object-File Theory of Visual Working Memory . . . . . . . . . . . . . . . . . . . . 52
6.1.1 How Does Information Enter Visual Working Memory? . . . . . . . . . . . . . . . . . . . . 53
6.1.2 What Type of Information is Stored in Visual Working Memory? . . . . . . . . . . . . . . . . . . . . 53
6.1.3 How Much is Stored in Visual Working Memory? . . . . . . . . . . . . . . . . . . . . 53
6.1.4 How is Information in Visual Working Memory Organized? . . . . . . . . . . . . . . . . . . . . 54
6.1.5 How is Information in Visual Working Memory Accessed? . . . . . . . . . . . . . . . . . . . . 54
6.1.6 How is Information in Visual Working Memory Maintained? . . . . . . . . . . . . . . . . . . . . 54
6.1.7 What is the Duration of Information in Visual Working Memory? . . . . . . . . . . . . . . . . . . . . 54
6.1.8 How is Information Lost from Visual Working Memory? . . . . . . . . . . . . . . . . . . . . 55
6.1.9 Summary . . . . . . . . . . . . . . . . . . . . 55
6.2 Evaluating the Object-File Theory . . . . . . . . . . . . . . . . . . . . 56
6.2.1 Application of Theory to Present Results . . . . . . . . . . . . . . . . . . . . 56
6.2.2 Application of Theory to Results from Past Studies of Visual Working Memory . . . . . . . . . . . . . . . . . . . . 57
6.3 Directions for Future Research . . . . . . . . . . . . . . . . . . . . 63
6.4 Conclusion . . . . . . . . . . . . . . . . . . . . 64

APPENDICES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65

REFERENCES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86


LIST OF FIGURES

Figure

1.1 Illustration of the change-detection task . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

1.2 Quantitative predictions of the change-detection model . . . . . . . . . . . . . . . . . . . 14

2.1 Experiment 1 trial procedure and change-types . . . . . . . . . . . . . . . . . . . . . . . . 17

2.2 Experiment 1 accuracy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

2.3 Experiment 1 model fit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

3.1 Experiment 2 accuracy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

3.2 Experiment 2 model fit for substitute trials . . . . . . . . . . . . . . . . . . . . . . . . . . 30

3.3 Experiment 2 model fit for interchange trials . . . . . . . . . . . . . . . . . . . . . . . . . 31

4.1 Experiment 3 accuracy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35

4.2 Comparison between Experiment 2 and Experiment 3 . . . . . . . . . . . . . . . . . . . . 36

4.3 Experiment 3 model fit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37

5.1 Trial-types presented in Experiment 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41

5.2 Illustration of the independent feature-store hypothesis . . . . . . . . . . . . . . . . . . . 42

5.3 Shapes presented in Experiment 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45

5.4 Experiment 4 accuracy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47

5.5 Comparison between Experiment 3 and Experiment 4 . . . . . . . . . . . . . . . . . . . . 48

5.6 Experiment 4 model fit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49


LIST OF TABLES

Table

5.1 Capacity estimates from all four experiments . . . . . . . . . . . . . . . . . . . . . . . . 50

C.1 Estimated parameters from Experiment 1 model fits . . . . . . . . . . . . . . . . . . . . . 76

C.2 Observed and predicted accuracy for Experiment 1 . . . . . . . . . . . . . . . . . . . . . 77

C.3 Goodness-of-fit measures from Experiment 1 . . . . . . . . . . . . . . . . . . . . . . . . 77

D.1 Estimated parameters from Experiment 2 model fits . . . . . . . . . . . . . . . . . . . . . 79

D.2 Observed and predicted accuracy for Experiment 2 . . . . . . . . . . . . . . . . . . . . . 80

D.3 Goodness-of-fit measures from Experiment 2 . . . . . . . . . . . . . . . . . . . . . . . . 81

E.1 Estimated parameters from Experiment 3 model fits . . . . . . . . . . . . . . . . . . . . . 82

E.2 Observed and predicted accuracy for Experiment 3 . . . . . . . . . . . . . . . . . . . . . 83

E.3 Goodness-of-fit measures from Experiment 3 . . . . . . . . . . . . . . . . . . . . . . . . 83

F.1 Estimated parameters from Experiment 4 model fits . . . . . . . . . . . . . . . . . . . . . 85

F.2 Observed and predicted accuracy for Experiment 4 . . . . . . . . . . . . . . . . . . . . . 85

F.3 Goodness-of-fit measures from Experiment 4 . . . . . . . . . . . . . . . . . . . . . . . . 85


LIST OF APPENDICES

Appendix

A. Derivation of the Change-Detection Model for Experiments 1–3 . . . . . . . . . . . . . . . . 66

B. Derivation of the Change-Detection Model for Experiment 4 . . . . . . . . . . . . . . . . . . . . 68
B.1 Preliminaries . . . . . . . . . . . . . . . . . . . . 68
B.2 Color-Interchange and Shape-Interchange Trials . . . . . . . . . . . . . . . . . . . . 70
B.3 Object-Interchange Trials . . . . . . . . . . . . . . . . . . . . 71
B.4 Dual-Feature-Interchange Trials . . . . . . . . . . . . . . . . . . . . 72
B.5 Summary . . . . . . . . . . . . . . . . . . . . 74

C. Model Fit for Experiment 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75

D. Model Fit for Experiment 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78

E. Model Fit for Experiment 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82

F. Model Fit for Experiment 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84


CHAPTER I

INTRODUCTION

Pilots keep track of a dizzying amount of visual information while operating commer-

cial and military aircraft. Air traffic controllers on the ground must remember the contents

of their radar displays while executing multiple peripheral tasks. Automobile drivers must

remember the positions of other cars on the road while planning their route and attending

to road signs, traffic lights, and dashboard instruments. Our ability to perform these tasks

suggests that we have mechanisms for retaining visual information as it appears, changes,

disappears, and reappears in the environment. The psychological mechanisms that store

information while it is relevant to the task at hand are referred to as working memory

(WM). The standard theory of WM includes separate stores for verbal information and

visual information (Baddeley, 1986).

In this dissertation, I will investigate the representations and processes that are used

for visual WM. In the following sections, I will introduce hypotheses about the possible

forms of visual representation, and then present research suggesting that there is visual

storage for such representations. Recent research regarding visual WM has been limited

to claims mostly about the form of the stored representations, without consideration of

the processes that support storage. To improve upon this, I will present a model of perfor-

mance in the change-detection task, which has been used frequently in visual WM research


(e.g., Pashler, 1988; Phillips, 1974; Vogel, Woodman, & Luck, 2001). The model spec-

ifies assumptions about the processes required to perform this task, provides quantitative

predictions regarding response accuracy in the task, and allows me to estimate the amount

of information retained from a brief visual display. In the next four chapters, I will present

a series of experiments in order to test the assumptions of the model and make inferences

about the representations stored in visual WM.

1.1 Visual Representation

Before discussing storage of visual information, I will address the ways in which we

may represent displayed information while it is visible in the environment. Doing so will

help us subsequently to characterize the possible contents of visual WM.

The world consists of objects that are perceived individually within a hierarchical or-

ganization: We see a chair, a tree, or a building as an individual, unitary thing. Most real

objects consist of separable parts, each part being itself a distinct object. For example, a

car is an object, but so are its windows, doors, and passengers—when changing a flat tire,

we view every single lug nut as a distinct and important object.

However, in the laboratory, one may use stimuli that cannot be separated into sub-

objects: a blue square, for example, cannot be readily perceived as more than one thing.

Yet even a blue square has separable components, such as color and shape, which can

change independently. Color, shape, and other object components such as size, orientation,

and location, are referred to as feature dimensions, and any particular object has values on

each feature dimension (e.g., “blue” color, “square” shape).

We can describe a stimulus such as a blue square as a collection of feature values

with a common location. Correspondingly, we can treat the various features of an object

as separate mental representations—that is, we can think about an object’s color without


thinking about its shape. Certain features (e.g., color, orientation) are extracted quite early

in visual processing (Van Essen & Anderson, 1990) and are primitive components in the

construction of internal object representations (Treisman & Gelade, 1980). Nevertheless,

as an object changes (e.g., by moving or changing color), we can maintain a sense of that

object as the same thing that it was before—that is, we may perceive the same object token,

even if the physical stimulus changes in some way.

Kahneman, Treisman, and Gibbs (1992) showed that people can associate a set of fea-

tures with a particular object and maintain this association as the object moves or disap-

pears temporarily. In one experiment, they showed participants two common shapes (e.g.,

a square and a triangle) on a display, each containing a different letter. The letters dis-

appeared and the shapes moved to different spatial locations—the movement was smooth

and continuous, so participants could track each shape to its final position. After a short

delay, a target letter appeared in one of the shapes, and participants had to name it as

quickly as they could. If the target matched the letter that had originally appeared in that

shape, participants were faster at naming it than if it matched the letter that had appeared in

the other shape or if the target was a new letter. This result, along with others reported by

Kahneman et al., suggests that the letter identities were somehow linked to the shapes in

which they originally appeared, and that the links were maintained as the objects moved.

Consequently, Kahneman et al. (1992) proposed that people form object files, which

are abstract schemas for an object and allow its features to be stored together as a single

representation. In essence, an object file may allow us to “chunk” the component features

of a real-world object (Miller, 1956): we can then represent the chunk as a single unit and

readily access its components. When we represent a single feature of an object (e.g., its

color) we can only refer to that feature by its value. However, when we chunk that feature

into an object file, we can then refer to each of the features as part of the object and link


those features to other components of that object. This process of linking features together

in an object file is referred to as binding.

In summary, visual information can be represented in several different forms: raw vi-

sual images, perceptual features, and object files. These are temporary representations of

stimuli in the environment: they can be created and modified as needed, and may dis-

appear when they are no longer relevant. In the next section, I will discuss how these

representations might be stored in visual WM.

1.2 Visual Storage

Presumably, when an object is physically present in the environment and visible, it is

available for processing as needed—either it is already represented in visual WM, or it can

be encoded readily. If an object is visible, its representation in visual WM is supported.

When an object disappears from the environment, its representation becomes unsupported:

if the representation decays, there is no way to regenerate it. In order to prevent decay of

unsupported representations, there must be processes that maintain the contents of visual

WM. It is also possible for an unsupported representation to be unmaintained, which

means that it could decay.¹

¹ The distinction between maintained and unmaintained applies only to unsupported representations. One could say that all supported representations are maintained externally, but supported representations cannot be unmaintained.

There is evidence for multiple types of visual memory. One is a detailed but very short-

lived store for raw visual input, the visual sensory store. It has been studied by Sperling

(1960), who presented participants with a display containing 12 letters arranged in three

rows of 4 letters each. Display exposure duration varied from 5–500 ms, with no effect

on results. Immediately after display offset, participants heard a tone indicating one of the

three rows, and participants reported the letters from that row. Recall accuracy was 76%,

suggesting that participants had access to approximately 9 letters when the tone sounded.


Increasing the delay between display offset and tone onset monotonically decreased recall

accuracy, down to a limit of about 4 letters at a delay of 500 ms. The 9 letters must not have

been available for long; otherwise recall would have remained high at longer tone delays.

The results suggest the existence of a high-capacity, rapidly-decaying memory system. Its

purpose is to store visual input briefly while other processes transfer parts of it to more

durable memory systems (Coltheart, 1984; Sperling, 1969). It cannot retain unsupported

visual representations.

Other researchers have found evidence of longer-term visual storage. Phillips (1974)

used a change-detection task in which participants had to remember partially-filled check-

erboard patterns during a blank inter-stimulus interval (ISI). After the ISI, a second check-

erboard pattern appeared, and participants determined whether or not the two patterns

were identical. Performance was nearly perfect at ISIs of 20 ms and steadily declined as

ISI increased. Phillips’ results showed a double dissociation: increasing the size of each

checkerboard pattern only affected performance at longer ISIs, while a visual mask only

affected performance at shorter ISIs. These results suggest that performance at short ISIs

was mediated by a different memory system than at longer ISIs. Presumably, visual sen-

sory memory is still intact at short ISIs, but a longer-duration store is needed for retention

during longer ISIs. Phillips also presented evidence that the checkerboard patterns were

difficult to describe verbally, so it seems reasonable to assume that the longer-duration

store was visual in nature.

Vogel et al. (2001) presented more evidence for a longer-duration visual store (see also

Luck & Vogel, 1997). They used a change-detection task with displays consisting of col-

ored squares, as illustrated in Figure 1.1. On a trial in their experiments, a display was

presented for about 100 ms, next the screen remained blank for 900 ms, and then a sec-

ond display appeared that was either identical to the first or differed in the color of one


Figure 1.1: Illustration of the visual change-detection task used by several researchers (e.g., Vogel, Woodman, & Luck, 2001). Display-1 (100 ms) is followed by a blank ISI (900 ms) and Display-2 (2000 ms), after which the participant responds “same” or “different”. Note: Stimuli are not drawn to scale.

square (e.g., in Figure 1.1, the stimulus on the lower right has changed color). Participants

determined whether or not the displays were identical. Vogel et al. showed that response

accuracy declined as the number of squares in each display increased, suggesting that

there was some limited capacity system for retaining the squares from the first display.

Further analysis suggested that participants retained approximately four items’ worth of

information from the first display. Requiring participants to remember two digits while

performing the change-detection task had no effect on performance, suggesting that stor-

age was not verbal.² Thus, there is some evidence that unsupported visual representations

can be retained, and that retention is in the form of a visual code—such retention would

be mediated by visual WM.

Furthermore, Vogel et al. (2001) described another change-detection experiment in

which participants had to detect changes in an object’s color or its orientation. In one

condition, participants knew that only the color could change; in another condition, they

knew that only the orientation could change; in a third condition, participants could not

predict which feature would change, so they had to retain both color and orientation infor-

mation in order to detect changes. Performance was identical across all three conditions.

That is, performance depended on the number of objects in each display, but not on the

number of features that needed to be remembered.

² Participants were asked to rehearse the two digits quietly. In a separate experiment, Vogel et al. (2001) showed that the two-digit load reduced accuracy in a verbal memory task.

To account for their results, Vogel et al. (2001) propose the object-file hypothesis:

that visual WM stores about four object files, which are supposedly used to perform the

change-detection task. Their proposal makes sense in principle: Object files are compact

units of visual information, so once created, it would be efficient to retain them. The

evidence presented by Vogel et al. is certainly consistent with this hypothesis. Other re-

searchers also claim to have found supporting evidence for stored object files (Irwin &

Andrews, 1996; Wheeler & Treisman, 2002; Xu, 2002).

However, some authors have presented evidence that they claim shows no retention of

bound object information (Isenberg, Nissen, & Marchak, 1990; Nissen, 1985; Stefurak

& Boynton, 1986). For example, Nissen (1985) showed that recall of one feature was

independent of recall of another feature from the same object. On the surface, this suggests

that an object’s features were not stored together. Yet such independent feature storage is

not necessarily antithetical to the object-file hypothesis. Alternatively, one could assume

that features are stored in object files, but are encoded independently and imperfectly.

Under these assumptions, storage of one feature does not guarantee storage of another

feature, even if the two are stored together in an object file.

In general, evaluating these contradictory claims is difficult. The hypothesis being

debated is simply that WM stores object files. However, assumptions regarding the pro-

cesses that act upon these files are also needed to evaluate this hypothesis. Performing

any memory task requires at least the encoding, retention, and retrieval of information.

The change-detection task is further complicated in that it requires a comparison between

two sets of information. Results from these tasks cannot be interpreted accurately without

specifying both the representations and the processes that act upon them.

Therefore, further work is needed to determine what types of information are stored in


visual WM and how they are processed. In this dissertation, I will describe research that

tests the object-file hypothesis based on a model that makes specific assumptions regarding

the processes underlying task performance. In order to compare my results with those of

previous researchers, these experiments make use of the change-detection task. In the

following section, I will specify a model of performance for that task.

1.3 A Processing Model of Performance for the Change-Detection Task

In this section, I will present a model of performance for the change-detection task.

The goal of doing so is to spell out my assumptions about task performance with the hope

of justifying the change-detection task as a practical method for studying visual WM.

Assuming the first goal is satisfied, a second goal is to specify a theoretical framework

within which I can interpret results from the change-detection task and make inferences

about the representations that visual WM contains.

In presenting my model, I will focus on the processes necessary to store and use infor-

mation relevant to the change-detection task. I will avoid making assumptions regarding

the form of representation stored or the capacity in visual WM, since the purpose of my ex-

periments is to infer these. The processes cannot be fully specified without also specifying

the representations upon which they act, but consideration of the processes first will help

determine whether the change-detection task is worthwhile. For now, I will assume that

the contents of visual WM include one or more types of visual representation (i.e., object

features, bound location-feature pairs, and object files) like those described in Section 1.1.

These will be referred to generically as “items”.

1.3.1 The Visual Change-Detection Task

To review, a trial in the visual change-detection task consists of three phases (see Fig-

ure 1.1): the first display appears briefly, the screen remains blank during the ISI, then


the second display appears and remains visible until the participant responds. On “differ-

ent” trials, one or more of the stimuli in the first display will have changed in the second

display: I refer to the to-be-changed stimuli as the targets.

1.3.2 Stages of Processing

There are three stages of processing involved in performing the change-detection task,

paralleling the three phases of the task itself: First, the participant must encode and store

items from the first display. Second, the stored items must be maintained during the ISI.

Third, the stored items must be retrieved from memory and compared to those in the

second display. Participants can detect changes between the displays only if they stored at

least one of the targets.

Encoding Stage

The encoding stage depends on which representations are used in visual WM. For the

moment, I only assume that encoding is rapid and perfect, such that there are multiple

independent items stored in visual WM after a brief display.

Maintenance Stage

Once the first display disappears, the items in WM become unsupported and must be

maintained or they will decay. I assume there is a limited number of items that can be

maintained during the ISI. Therefore, there must be a selection process that determines

what to maintain. Ideally, representations of one or more targets would be selected for

maintenance, but the targets are unknown. Thus, selection can be viewed as random.

Unmaintained items are assumed to decay rapidly and completely. Storage of individual

items is assumed to be independent, so decay of one item has no effect on retention of other

items. Maintained items remain in visual WM during the ISI with no loss of information.


Comparison Stage

When the second display appears, the stored items in WM from the first display must

be compared to the stimuli in the second display. There are at least three ways this could

be accomplished: One possibility is that the contents of WM are immediately replaced

by the stimuli in the second display—essentially, the maintained items become supported

again and are updated immediately with new information. A second possibility is that

comparison requires simultaneous maintenance of items from the first display and the

corresponding items from the second display. A third possibility is that the maintained

items from the first display are compared with supported items from the second display.

The first possibility is rather uninteresting: the old and new stimuli must be compared

before overwriting occurs, and this comparison process needs to be explained. The sec-

ond possibility would be worrisome for those attempting to estimate WM capacity from

change-detection accuracy, since half of the maintained items would be dropped in order

to maintain comparison stimuli. In the General Discussion, I will provide evidence that

this hypothesis is incorrect. Until then, I argue that the comparison process assumed by

the second possibility is unnecessarily inefficient: according to it, limited capacity must

be committed to maintaining supported items. Instead, I will adopt the third possibility,

which assumes that the maintained items from the first display are compared to supported

items from the second display, without loss of maintained items.

Under the third possibility, the comparison process consists of three sub-stages: selec-

tion of maintained items, selection of supported items, and comparison. First, maintained

items are selected for comparison one at a time. I assume that selection is serial and does

not interfere with maintenance. Second, for each maintained item, a supported item is se-

lected for comparison to it; this sub-stage is discussed further below. Third, the maintained

item is compared to the supported item. I assume that any difference between the two items


is readily detected and that comparing two items does not interfere with maintenance.

The second sub-stage of the comparison process has as its input the currently selected

maintained item. It searches through the supported items (i.e., those representing stimuli

in the second display) in order to find an appropriate one to compare with its input. If

the spatial location of the supported item is known, then there is sufficient information to

locate the correct stimulus from the second display. If this is the case, then selection of

the comparison item is assumed to occur immediately, and does not interfere with other

items in visual WM. On the other hand, if the maintained item does not include location-

information, or locations are unreliable (e.g., stimuli change locations between displays),

then the supported items must be searched in order to find the best comparison—which is

“best” depends on the task. Such a search is serial and participants may opt to shorten it

by using imperfect heuristics for determining what qualifies as “best”. Furthermore, the

search requires selecting supported items from a variety of spatial locations, which may

interfere with the maintenance of other information in memory (Awh, Jonides, & Reuter-

Lorenz, 1998).

1.3.3 Consequences of the Present Assumptions

In the change-detection task, the process of comparing information between displays is

rather complicated. However, given the assumptions I have made, it may proceed without

disturbing the maintained items in memory under certain circumstances. Namely, if the

locations of maintained items and supported items are known, then the second sub-stage

of the comparison process should not interfere with maintenance. Therefore, change-

detection performance may be useful for studying visual WM.

The model I presented allows for systematic interpretations of the results of change-

detection experiments. Additionally, it allows me to derive quantitative predictions re-

garding performance in the task, as described in the next section.


1.4 Quantitative Predictions Based on the Model

In this section, I will briefly describe quantitative predictions that the present processing

model makes for the change-detection task. The predictions are based on the assumptions

I have described above, and ones presented by Pashler (1988).

The model assumes that, when the first display disappears, participants randomly select

a limited number of items for maintenance. The number of items that can be maintained

is the capacity of visual WM. On “different” trials, some of the items in the first display

are targets: If any target is maintained, then a change will be detected when the second

display appears. If no changes are detected in any of the maintained items, then either no

targets were maintained or no changes occurred, but the participant cannot know which is

the case. Thus, if no changes are detected, participants will guess with some probability

that a change occurred. This will lead to errors on “same” trials, but will also increase

accuracy on “different” trials.

As a result, the model predicts change-detection accuracy given stimulus numerosity,

N, the number of targets, t, capacity, k, and guessing rate, g. The predicted hit rate, H (i.e.,

the probability of responding “different” given that a change occurred), is

(1.1)    H = g + (1 − g) · ( 1 − ∏_{i=0}^{t−1} (N − k − i) / (N − i) )

unless k ≥ N − t + 1, in which case H = 1. The derivation of Equation 1.1 is presented in

Appendix A. Generally, hit rate is predicted to increase as k or t increase, and decrease as

N increases. The correct-rejection rate (i.e., the probability of responding “same” given

that no change occurred) is simply CR = 1 − g, since the only way to make an error on

“same” trials is by guessing incorrectly (there is nothing to fail to detect).
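To make Equation 1.1 concrete, the following R sketch (my own illustration under the model's assumptions; predicted_hit_rate is a hypothetical name, not code from this dissertation) computes the predicted hit rate:

```r
# Predicted hit rate from Equation 1.1, given numerosity N, number of targets t,
# capacity k, and guessing rate g. The correct-rejection rate is simply 1 - g.
predicted_hit_rate <- function(N, t, k, g) {
  if (k >= N - t + 1) return(1)          # every possible selection of k items contains a target
  i <- 0:(t - 1)
  miss <- prod((N - k - i) / (N - i))    # probability that no target is among the k maintained items
  g + (1 - g) * (1 - miss)
}

# Example: 6 squares, 1 target, capacity 3, guessing rate .05 gives H = .525
predicted_hit_rate(N = 6, t = 1, k = 3, g = .05)
```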

Both k and g are treated as free parameters in fitting the model to observed data. In

keeping with Pashler (1988) and Vogel et al. (2001), I assume that g varies with numeros-

ity, so there is in fact one g_i for each level i of numerosity in an experiment.³ However, I

assume that each participant has only one value of k for storing one type of representation.

Given that k and g_i are assumed to vary across individuals, I fit the model separately to

each participant’s data; figures and reported parameter estimates are averaged across the

individual fits. In an experiment with m levels of numerosity, there will be m + 1 free pa-

rameters, and there will be at least 2m observed data points per participant (from “same”

and “different” trials), leaving at least 2m − (m + 1) = m − 1 degrees of freedom to test the

model’s goodness-of-fit.

The model predicts that, for each participant, Equation 1.1 will fit observed data well

with a single value of k. By fitting the model, I can test its assumptions and

estimate k. A bad fit means that some of the model's assumptions are incorrect. Figure 1.2

illustrates the quantitative predictions of the model with respect to hit rate as a function of

numerosity. Figure 1.2A shows predicted hit rate under different values of k, with t and g

fixed—the plot illustrates how k can be varied to find the best fit to the data. Figure 1.2B

shows predicted hit rate under different values of t, with k and g fixed—the plot shows the

improvement predicted with each additional target.
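As a concrete check, the curve family in Figure 1.2A can be regenerated with the hypothetical predicted_hit_rate sketch given after Equation 1.1 (again my own illustration, not the dissertation's plotting code):

```r
# Hit rate as a function of numerosity (1-10) for k = 1,...,5, with t = 1 and g = .05,
# i.e., the family of curves shown in Figure 1.2A.
N <- 1:10
hit_by_k <- sapply(1:5, function(k) sapply(N, predicted_hit_rate, t = 1, k = k, g = .05))
matplot(N, hit_by_k, type = "l", ylim = c(0, 1), xlab = "Numerosity", ylab = "Hit Rate")
```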

Note that, as stated thus far, the model makes no claims about the type of information

stored in WM. It estimates how many items' worth of information must be stored in order

to obtain the observed accuracy (after correcting for guessing). The content of each “item”

must be inferred from experimental manipulations.

Note also that a participant’s actual capacity must be an integer. However, I will treat

the estimated k as a real number: participants may not always use all available capacity,

so the estimated capacity is an average. Additionally, there may be partial information

storage, which I will address in Experiment 4.

³ I could assume a systematic relationship between N and g (e.g., a two-parameter linear model) in order to reduce the number of free parameters. This would require additional assumptions in the model and is not necessary given the number of degrees of freedom in the data. Additionally, g_i is constrained by accuracy on “same” trials.


Figure 1.2: Illustration of the quantitative predictions of the change-detection model for hit rate as a function of numerosity. A: Predicted accuracy for k = 1, . . . , 5, from bottom to top, with fixed values of t = 1 and g = .05. B: Predicted accuracy for t = 1, 2, 3, from bottom to top, with fixed values of k = 3 and g = .05. (k is capacity, t is the number of targets, and g is the guessing rate.)

1.5 Testing the Object-File Hypothesis

The purpose of introducing the present processing model and its quantitative predic-

tions is to provide a basis for testing the object-file hypothesis. The hypothesis claims that

WM stores object files independently of one another. This allows further specification of

the processes that encode, retain, and retrieve information in visual WM.

An object file stores all the known features for one particular object as a single unit.

The features include color, shape, and spatial location. Object-file creation is assumed to

be rapid, so visual WM can hold a large, potentially unlimited number of supported object

files. The encoding of features into an object file is currently assumed to be perfect: if

an object file exists in WM, then all the object’s features are stored with it. Under the

assumptions of the processing model, only a limited number of unsupported object files

can be maintained concurrently. Unmaintained object files decay rapidly and completely.


The model also assumes that the object files in memory are stored independently of one

another. Retrieval of an object file provides access to all of its contents, so comparison

between a maintained object file and a supported one can proceed as described in the

model.

In this dissertation, I will present four change-detection experiments that test the object-

file hypothesis based on the model I have proposed. Each experiment will test the contents

of visual WM during the change-detection task by using a variety of change types. The

results will also allow for tests of the goodness-of-fit of the quantitative predictions made

by the model and estimates of the number of items stored in WM.


CHAPTER II

EXPERIMENT 1

The primary goal of Experiment 1 was to test the object-file hypothesis’ assertion that

visual WM stores only object files. If participants store only object files, then they should

always have access to both the color and the location of a stimulus. Thus, I tested par-

ticipants in a change-detection task with changes that either required knowing only which

colors had been presented, or required knowing where each color had been.

The change-detection task used in Experiment 1 was similar to that used by Vogel et al.

(2001). Each display contained a number of colored squares, and changes were introduced

by changing the colors of one or two randomly selected squares. On blocks of substitute

trials, the color of one square changed to a color that was not present elsewhere in the

display: detecting such a change required participants to remember only the colors in the

first display. On blocks of interchange trials, the colors of two squares changed places:

detecting such a change required participants to remember the colors and their locations,

since the set of colors did not change between displays. The procedure for each change-

type is illustrated in Figure 2.1.

Under the assumptions of the object-file hypothesis, participants in Experiment 1 al-

ways stored combined color-location information, so they used the same set of informa-

tion to detect changes on both substitute and interchange trials. Thus, the only difference


Figure 2.1: Illustration of the trial procedure and change-types (substitute, interchange, and same) used in Experiment 1. Display-1 (177 ms) is followed by a blank ISI (897 ms) and Display-2 (2000 ms or until response). Note: Stimuli are not drawn to scale and were not outlined in the experiment.

between substitute and interchange trials was in the number of targets—one on substi-

tute trials and two on interchange trials—so accuracy should be lower on substitute trials

relative to interchange trials.

In contrast to the object-file hypothesis, one alternative hypothesis assumes that visual

WM stores only unbound color-features. This hypothesis predicts that participants should

be unable to detect changes on interchange trials. A second alternative hypothesis assumes

that participants have access to both object files and unbound feature information. This hy-

pothesis predicts that accuracy on substitute trials should be higher than expected, since

participants could use at least two sources of information to detect a new color. Experi-

ment 1 enables these alternative hypotheses to be tested versus the object-file hypothesis.

There were also two secondary goals in Experiment 1: first, to test the assumptions

of my processing model by comparing its quantitative predictions to observed data, and

second, to measure the capacity of visual WM. If the results are consistent with the predic-

tions of the object-file hypothesis, then the model should accurately predict the difference


in accuracy on substitute and interchange trials as simply a difference between detecting

one target versus detecting two targets (see Figure 1.2B). Additionally, the model should

be able to fit the observed data with only one capacity parameter.

2.1 Method

2.1.1 Participants

Eight students from the University of Michigan participated in this experiment. All

were right-handed, had normal or corrected-to-normal vision, and passed a color blindness

test (H-R-R Pseudoisochromatic Plates, Richmond International, Inc., Boca Raton, FL).

Participants received $7.50 per hour plus a bonus for accurate performance.

2.1.2 Apparatus

Stimuli were presented on a Sony Trinitron 17-in. monitor connected to a Dell Optiplex

GX1 computer. Participants sat 80 cm from the monitor with their heads on a chin-rest.

The monitor refresh duration was approximately 11.8 ms (84.7 Hz); all trial event dura-

tions are reported in multiples of it. The experiment was programmed and run using the

E-Prime software suite (Schneider, Eschman, & Zuccolotto, 2002a, 2002b). Responses

were collected from a custom-built response box connected through the computer’s serial

port, with software detection of responses (Morris, 1992).

2.1.3 Stimuli

Visual displays were generated prior to each trial. The displays were composed of 2–8

colored squares measuring 0.65° on each side. The squares were placed randomly within

a 9.8° × 7.3° rectangle such that the center-to-center distance between any two squares

was at least 2.0°. The color of each square was selected randomly (without replacement)

from the set of black, blue, brown, cyan, green, orange, red, violet, white, and yellow.

Stimuli were presented against a light gray background. On 50% of the trials, two identical


displays were generated (“same” trials). On the remaining trials, the second display was

modified from the first in one of two ways: on substitute trials, by changing the color

of a randomly selected square to one of the unused colors; and on interchange trials, by

switching the colors of two randomly selected squares.
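One way to satisfy the placement and color constraints described above is sketched below in R (my own illustration; the experiment itself was programmed in E-Prime, and the names here are hypothetical):

```r
# Generate one display of n colored squares: random positions with at least
# 2.0 deg center-to-center spacing inside a 9.8 x 7.3 deg region, and colors
# drawn without replacement from the ten colors listed above.
colors <- c("black", "blue", "brown", "cyan", "green",
            "orange", "red", "violet", "white", "yellow")

generate_display <- function(n) {
  pos <- matrix(numeric(0), ncol = 2)
  while (nrow(pos) < n) {
    cand <- c(runif(1, 0, 9.8), runif(1, 0, 7.3))        # candidate square center
    if (nrow(pos) == 0 ||
        all(sqrt(colSums((t(pos) - cand)^2)) >= 2.0))    # enforce minimum spacing
      pos <- rbind(pos, cand)
  }
  data.frame(x = pos[, 1], y = pos[, 2], color = sample(colors, n))
}
```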

2.1.4 Procedure

Each trial proceeded as follows: a 100 ms tone onset simultaneously with a fixation

cross; after 507 ms, the fixation disappeared and display-1 was presented for 177 ms; the

screen remained blank for 897 ms during the ISI; finally, display-2 was presented until the

participant responded or 2000 ms passed. The participant’s task was to determine whether

the two displays were the “same” or “different”. Participants responded by pressing a

button with the index finger of either hand; side of response was counterbalanced across

participants. Participants received points for correct responses and lost points for errors.

2.1.5 Design

Interchange and substitute trials were blocked. Participants were informed of the change-

type at the beginning of each trial block. Blocks of each type were grouped together;

participants performed four blocks of one type, then switched to the other type. Each par-

ticipant ran in four sessions on separate days, with the first session used for practice. After

removing the first session and the initial practice blocks from sessions 2–4, there were 48

trials at each combination of match (“same” vs. “different”), change-type, and numerosity.

2.2 Results

Mean proportion of correct responses is plotted by numerosity in Figure 2.2. I used

arcsine-transformed accuracy data (y′ = arcsin √y) in all statistical analyses in order to

compensate for unequal variances present in binomial data (Hogg & Craig, 1995); figures


Figure 2.2: Mean proportion of correct responses as a function of numerosity for Experiment 1. Triangles indicate “same” trials and circles indicate “different” trials. Filled points indicate substitute trials and open points indicate interchange trials.

and model fits are based on the untransformed results. I submitted the transformed accu-

racy data to a 2 (match) × 2 (change-type) × 7 (numerosity) repeated-measures ANOVA.

The main effects of all three factors were reliable. Accuracy decreased with increasing

numerosity; F(6,42) = 107.67, MSE = .0073, p < .001. Accuracy on “different” trials

(.84) was lower than on “same” trials (.92); F(1,7) = 17.02, MSE = .032, p = .0044.

Accuracy on interchange trials (.87) was lower than accuracy on substitute trials (.89);

F(1,7) = 7.10, MSE = .012, p = .032. The interaction between match and numerosity

was reliable, reflecting that accuracy on “different” trials declined faster with numerosity

than on “same” trials; F(6,42) = 18.09, MSE = .0060, p < .001. No other interactions

were reliable (p > .2).


2.3 Model Fit

The model described by Equation 1.1 predicts that accuracy should increase with the

number of targets. However, performance in Experiment 1 was worse on interchange trials

with two targets than on substitute trials with one target. This suggests that different types

of information may be used to detect changes on substitute and interchange trials, with

different capacities for each type (see Wheeler & Treisman, 2002). Therefore I fit the ob-

served data with two separate sets of capacity and guessing parameters: one for substitute

trials and one for interchange trials. Given that the information needed to detect changes

on interchange trials (bound color-locations) is a subset of the information needed on sub-

stitute trials (bound color-locations and unbound color features), I expect the estimated

visual WM capacity for interchange trials to be lower than for substitute trials.

To fit the models, I used the R statistical package (Ihaka & Gentleman, 1996), which

includes a non-linear minimization procedure (nlm) that searches a parameter space to

minimize an arbitrary function (Dennis & Schnabel, 1983). I used the nlm function to

minimize the sum of squared errors between the observed and predicted data. I calculated a

separate fit for each combination of participant and change-type, with 8 free parameters for

each fit: 1 capacity (k) and 7 guessing parameters (g_i, i = 2, 3, . . . , 8). Each change-type

included 14 observations—7 “same” and 7 “different”—resulting in 14 − 8 = 6 degrees of

freedom for each individual fit. The model fits and parameter estimates reported below are

averaged across the individual fits.
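The general form of this fitting procedure is sketched below in R. This is not the dissertation's actual code: the observed proportions are made-up placeholders, and the prediction equations are a simplified stand-in for Equation 1.1 (detection probability min(k/n, 1) for a single changed item among n, plus guessing), used only to show how nlm can minimize the sum-squared error over one capacity parameter and per-numerosity guessing rates.

    ## Hypothetical observed proportions correct at numerosities 2-8
    numerosity <- 2:8
    obs_same <- c(.98, .97, .95, .94, .92, .90, .89)   # "same" trials (illustrative)
    obs_diff <- c(.97, .94, .89, .83, .77, .72, .68)   # "different" trials (illustrative)

    ## Sum-squared error between observed and predicted accuracy.
    ## par[1] is the capacity k; par[2:8] are logits of the seven guessing rates.
    sse <- function(par) {
      k <- par[1]
      g <- plogis(par[2:8])            # keep guessing rates in (0, 1)
      d <- pmin(k / numerosity, 1)     # simplified detection probability
      pred_diff <- d + (1 - d) * g     # hit: detect the change, or guess "different"
      pred_same <- 1 - g               # correct rejection: do not guess "different"
      sum((pred_diff - obs_diff)^2 + (pred_same - obs_same)^2)
    }

    ## One fit with 8 free parameters: 1 capacity and 7 guessing rates
    fit <- nlm(sse, p = c(3, rep(0, 7)))
    k_hat <- fit$estimate[1]               # estimated capacity
    g_hat <- plogis(fit$estimate[2:8])     # estimated guessing rates

In the actual analyses, the predicted accuracies come from Equation 1.1, and a separate fit of this kind is computed for each participant and change-type.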

Figure 2.3A shows the averaged observed and predicted data for the substitute trial blocks; R² = .96, RMSE = .027, and k = 4.53 ± 0.37 (RMSE = root mean-squared error; the range indicated by "±" is the 95% confidence interval). Figure 2.3B shows the averaged observed and predicted data for the interchange trial blocks; R² = .97, RMSE = .018,

Figure 2.3: Mean observed and predicted proportion of correct responses as a function of numerosity for Experiment 1. Triangles indicate "same" trials and circles indicate "different" trials. Solid lines indicate observed data and dashed lines indicate predicted data based on the fitted model. A: Substitute trials. B: Interchange trials.

and k = 2.66 ± 0.41. Note that the seven guessing parameters are almost entirely determined by the "same" accuracy: fitting just the "different" trials with a single free parameter yields a fit almost identical to the full model. Generally, the guessing rate (i.e.,

responding “different” when guessing) increased with numerosity, and the fit of the model

was reasonably good. Further details of the fit are provided in Appendix C.

2.4 Discussion

The object-file hypothesis, in its current state, predicts that accuracy should have been

higher on the interchange trials than on the substitute trials. Since the results are clearly

inconsistent with this, the hypothesis must be modified or cast aside. That participants

were able to detect changes successfully on interchange trials shows that they must have

retained some color-location information, which is consistent with the object-file hypoth-

esis. The superiority of performance on substitute trials may reflect the presence of extra


color features that remain after object files decay.

Therefore, I will modify the object-file hypothesis in order to accommodate the results

of Experiment 1: instead of all-or-none object-file decay, I propose a decay process that

deletes individual features one at a time. Under this feature-based decay assumption, the

features of unmaintained object files will decay individually. Thus, after a 900 ms interval,

there may be partially-decayed object files still in WM, some containing just color features,

and others containing just location features. The partially-decayed object files can be

retrieved and any intact color-features used as input for the comparison process. Then,

the supported object files from the second display will be searched for matching colors: if

none are found, a change must have occurred.

Since it is unlikely that both a color feature and a location feature will remain in an

unmaintained object file, partially-decayed files cannot be used to detect changes on in-

terchange trials. Thus, accuracy will be higher on substitute trials than on interchange

trials (assuming an equal number of targets). Under the feature-based decay assumption,

capacity estimates from interchange trials reflect the number of maintained object files,

and capacity estimates from substitute trials are contaminated by unmaintained, partially-

decayed object files.

The quantitative predictions of the change-detection model provide a reasonable fit to

the observed data. However, the model tends to over-predict response accuracy at low

numerosities because it cannot account for errors when capacity is larger than numerosity.

Furthermore, I could not test the fit of the model under different numbers of targets (t),

since the difference between substitute and interchange performance seemed to involve

more than just t. Fitting the model to different levels of t would provide a better test of the

model’s assumptions. Such a fit will be further explored in Experiment 2.

The average capacity estimate for Experiment 1 was lower than the estimate reported


by Vogel et al. (2001): they found capacity to be approximately 3.40 ± 0.81 items in their

second experiment. However, Vogel et al.’s estimates are derived from substitute trials—

the only type of trial used in their studies. The average capacity estimate from substitute

trials in the present experiment was about 4.53 ± 0.37 items, reliably greater than Vogel

et al.’s estimates. Under the object-file hypothesis with the feature-based decay assump-

tion, these estimates are contaminated by partially-decayed object files. Nevertheless, the

difference between my findings and theirs is a cause for concern. One potential expla-

nation is that participants in the present experiment received extensive practice and had

financial incentives to perform well. In particular, the incentives may have encouraged

them to “cheat” by storing some information verbally; this possibility will be investigated

in Experiment 3. However, it is also possible that participants in previous studies were not

motivated to perform as well as possible, so the estimates of their capacities were too low.

The average capacity estimate based on data from interchange trials was approximately

2.66. Under the object-file hypothesis, participants must have been able to retain at least

3 object files, otherwise capacity would have been less than or equal to 2. However, the

estimated capacity does not equal 3, so it cannot reflect the actual capacity of visual WM—

participants could not have stored 2.66 object files. One possible explanation is that the

present model incorrectly assumes that each object file in visual WM contains complete

information about an object. In fact, each object file may contain only partial information,

so the present capacity estimates may underestimate the actual object-file capacity. This

possibility will be further explored in Experiment 4.

Given the results of Experiment 1, I modified the object-file hypothesis by assuming

feature-based decay instead of all-or-none object-file decay. Furthermore, the capacity

estimates indicate retention of about 2.66 object files, so I propose that participants can

maintain up to 3 object files in visual WM.


CHAPTER III

EXPERIMENT 2

Experiment 2 expanded the design of Experiment 1 in order to better test the fit of my

model and the previous conclusions drawn from it. In Experiment 1, participants could

use more information to detect changes on substitute trials than on interchange trials, so

I could not make a direct comparison between the two change-types. In Experiment 2, I

added conditions so I could measure the effect of additional targets within each change-

type.

Experiment 2 included four trial-types: substitute-one, substitute-two, interchange-two,

and interchange-three, where “substitute” and “interchange” refer to the change-type and

“one”, “two”, and “three” refer to the number of targets on “different” trials. I will refer to

the second manipulation as “change-size”. Thus, for both substitute and interchange trials,

I could compare two levels of change-size to determine whether the increase in accuracy

was as predicted by the model. Presumably, the only difference between substitute-one

and substitute-two trials was in the number of targets: Participants had to know which

colors were present in both, but they had a higher likelihood of storing a target in the

latter condition. Similarly, on interchange-two and interchange-three trials, participants

had to know where each color was, but had more targets available in the latter condition.

Generally, increasing the number of targets on a “different” trial should make the changes


easier to detect within each change-type.

Experiment 2 also provided a test of the object-file hypothesis with the feature-based

decay assumption. According to it, participants should retain a limited number of main-

tained object files that allow them to detect changes on interchange trials, and also some

partially-decayed object files with intact color-features that allow them to detect changes

on substitute trials.

3.1 Method

The method for Experiment 2 was identical to Experiment 1 (see Section 2.1), except

for the modifications described below.

3.1.1 Participants

A new group of 10 students from the University of Michigan participated in this study.

None of them had participated in the previous experiment.

3.1.2 Apparatus

The experiment-running computer was upgraded to a Dell Optiplex GX300.

3.1.3 Stimuli

There were three modifications to the stimulus displays. First, on “different” trials,

changes were generated in one of four ways: (a) on substitute-one trials, by changing the

color of one randomly selected stimulus to a color not already present in the display; (b)

on substitute-two trials, by changing the colors of two randomly selected stimuli to two

different colors not already present in the display; (c) on interchange-two trials, by swap-

ping the colors of two randomly selected stimuli; and (d) on interchange-three trials, by

shuffling the colors of three randomly selected stimuli such that all three changed color but

no new colors were introduced into the display. Second, the smallest numerosity was three


to allow for enough stimuli in the interchange-three condition, and the largest numerosity

was eight to allow for two new colors in the substitute-two condition. Third, the display

area was reduced to 4.5° × 4.5° of visual angle in order to reduce the spread of stimuli.

3.1.4 Procedure

The new computer (see Section 3.1.2) had a refresh duration of approximately 13.33 ms

(75.04 Hz). Exposure durations were adjusted to yield integral numbers of frames for each

display phase: The fixation cross was presented for 507 ms, display-1 for 200 ms, and the

ISI for 906 ms.
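(At the 13.33 ms refresh duration, these values correspond to approximately 38, 15, and 68 frames for the fixation cross, display-1, and the ISI, respectively.)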

3.1.5 Design

All factors were manipulated within-block such that there were 4 (trial-type) × 2 (match) × 6 (numerosity) = 48 trials per block. Each block was split into two halves

with a short break between each half. Participants were not informed about the different

trial-types. They were instructed to detect any change in the display.

Each participant completed four experimental sessions. The first session and the ini-

tial practice blocks from sessions 2–4 were dropped from the analysis, resulting in a to-

tal of 30 trials at each combination of trial-type, match, and numerosity. Note that the

within-block manipulation of trial-type means that “same” trials cannot be grouped by

trial-type. Note also that there are four trial-types, but they can be treated as a 2 × 2 design with change-type (substitute vs. interchange) and change-size (small: substitute-one and interchange-two; large: substitute-two and interchange-three) as factors. The 2 × 2 design is not exactly balanced with respect to the number of targets, but does allow

separate tests for the effects of each factor.


3.2 Results

Mean proportion of correct responses is plotted by numerosity in Figure 3.1. Since

the “different” trials are of primary interest and there is no way to distinguish “same”

trials across trial-types, only the "different" trials were analyzed. I submitted the arc-sine transformed data from "different" trials to a 2 (change-type) × 2 (change-size) × 6 (numerosity) repeated measures ANOVA. The main effect of change-type was marginally reliable, with lower accuracy on interchange trials (.84) than on substitute trials (.88); F(1,9) = 4.71, MSE = .074, p = .058. The effect of change-size was reliable, with lower accuracy on trials with a small number of targets (substitute-one and interchange-two; .79) than on trials with a large number of targets (substitute-two and interchange-three; .92); F(1,9) = 164.30, MSE = .019, p < .001. Planned comparisons showed that accuracy on interchange-two trials (.77) was reliably lower than accuracy on interchange-three trials (.90); F(1,9) = 102.30, MSE = .012, p < .001. Similarly, accuracy on substitute-one trials (.81) was reliably lower than accuracy on substitute-two trials (.94); F(1,9) = 86.48, MSE = .023, p < .001. Finally, accuracy on interchange-two trials (.77) was reliably lower than accuracy on substitute-two trials (.94); F(1,9) = 49.43, MSE = .057, p < .001.

3.3 Model Fit

The model was fit using the same method described in Section 2.3. One fit was obtained

for each combination of participant and change-type; the “same” trials were used for both

change-types. Each fit included 1 capacity parameter and 6 guessing parameters, and was fit to 18 observed data points (the "same" trials and the two change-size conditions at each of 6 numerosities), resulting in 18 − 7 = 11 degrees of freedom per participant. The model fits

and parameter estimates reported below are averaged across the individual fits.

Figure 3.2 shows the averaged observed and predicted data for the substitute trials;

Figure 3.1: Mean proportion of correct responses as a function of numerosity for Experiment 2. The points indicate results from "same" (open triangles), substitute-two (filled squares), interchange-three (open squares), substitute-one (filled circles), and interchange-two (open circles) trials.

R² = .98, RMSE = .018, and k = 4.56 ± 0.66. Figure 3.3 shows the averaged observed and predicted data for the interchange trials; R² = .96, RMSE = .024, and k = 2.57 ± 0.53.

Again, notice that the six guessing parameters in each fit are almost entirely determined

by the “same” accuracy. Further details of the fit are provided in Appendix D.

3.4 Discussion

The primary goal of Experiment 2 was to test the assumptions of the present processing

model by fitting it to a diverse set of data. The model fit accuracy on both substitute and

interchange trials reasonably well, although it again over-predicted accuracy at low nu-

merosities. However, the predicted and observed accuracies are consistently parallel on

“different” trials in both Figure 3.2 and Figure 3.3. This demonstrates that the increase in

accuracy due to change-size is correctly predicted by the model. The reasonable fit sug-

Figure 3.2: Mean observed and predicted proportion of correct responses on substitute trials as a function of numerosity for Experiment 2. The points indicate results from "same" (open triangles), substitute-two (filled squares), and substitute-one (filled circles) trials. Solid lines indicate observed data and dashed lines indicate predicted data based on the fitted model.

gests that the assumptions of the processing model may accurately describe performance

on the change-detection task.

The capacity estimates provided by the model for Experiment 2 are virtually identical to

those for Experiment 1. The mean estimate derived from interchange trials is 2.66 ± 0.41 for Experiment 1 and 2.57 ± 0.53 for Experiment 2. This confirms the assumption that

visual WM can maintain approximately 3 object files. The mean estimate derived from

substitute trials again suggests about 2 additional color-features remaining from partially-

decayed object files, under the feature-based decay assumption.

Figure 3.3: Mean observed and predicted proportion of correct responses on interchange trials as a function of numerosity for Experiment 2. The points indicate results from "same" (open triangles), interchange-three (open squares), and interchange-two (open circles) trials. Solid lines indicate observed data and dashed lines indicate predicted data based on the fitted model.


CHAPTER IV

EXPERIMENT 3

Experiment 3 tested whether participants were verbally encoding stimuli in the change-

detection task. Since the first two experiments did not control for verbal encoding, par-

ticipants may have relied on verbal WM to store information from the first display. It

is possible, though unlikely, that they used verbal WM exclusively. Alternately, they may

have supplemented visual WM by verbally encoding some information. Vogel et al. (2001)

argue that verbal encoding would be difficult given that the first display is only present for

100–200 ms. Additionally, it is difficult to imagine how one can code color and location

information verbally. However, verbally encoding information from just one object would

contaminate the results of the previous experiments, and such partial verbalization cannot

be ruled out by these arguments.

In order to reduce the likelihood of verbal encoding, Experiment 3 required participants

to perform an articulatory suppression task while also performing the change-detection

task. The assumption is that, if participants are encoding some stimuli verbally, then they

must be recoding visual information into verbal information, which requires (possibly

covert) articulation of color names and spatial locations. Given the further assumption that

participants cannot articulate two things at the same time, an articulatory suppression task

will eliminate verbal encoding. Therefore, on half of the trial blocks, participants recited


a string of nonsense syllables (“bee-bay-bah-boo”) during each trial. The experimenter

listened to participants during the block in order to ensure that they spoke continuously on

every trial from the onset of the fixation cross until they responded.

Under the object-file hypothesis with the feature-based decay assumption, performance

on substitute trials reflects both maintained object files and unmaintained, partially-decayed

object files. Since I am primarily interested in studying maintained information, Experi-

ment 3 included only interchange-two and interchange-three trials. If participants rely on

verbal WM to store bound color-location information, then the articulatory suppression

task should reduce change-detection accuracy and estimated capacity. Furthermore, the

processing model should fit data with the articulatory suppression task better than without

it, since information in verbal WM is unlikely to satisfy the assumptions of the model.

4.1 Method

Experiment 3 was identical to Experiment 2 (see Section 3.1) except for the changes

described below.

4.1.1 Participants

A new group of eight University of Michigan students participated in this study.

4.1.2 Stimuli

There were two trial-types: interchange-two and interchange-three (see Section 3.1.3

for more information). The range of numerosities was changed to 5–10, since no additional

colors were needed for substitute trials.

4.1.3 Procedure

Participants were required to perform an articulatory suppression task on certain trial

blocks. For the suppression task, participants had to recite the string “bee-bay-bah-boo”


out loud during each trial at a rate of about 3–4 syllables per second. During the first ses-

sion, participants received practice with the articulatory suppression task, with the goal of

speaking continuously throughout a trial and pausing only between their response and the

onset of the fixation cross. The precise rate of articulation was not considered important,

as long as participants spoke continuously, without stopping or making errors.

4.1.4 Design

During the first session, participants received several trial blocks of practice with the

change-detection task alone. After these blocks and throughout the remaining sessions,

the suppression task was required on every other block, with the remaining blocks serving

as a control. Participants were told at the beginning of each block whether or not they had

to perform the suppression task. Block order was counterbalanced across participants.

Each participant took part in four experimental sessions. The first session and the initial

practice blocks from the last three sessions were dropped from the analysis, leaving 33

trials at each combination of change-size, match, numerosity, and suppression condition.

4.2 Results

Mean proportion of correct responses is plotted by numerosity in Figure 4.1. The ar-

ticulatory suppression task seems to have had no overall effect on performance in the

change-detection task. I submitted the arc-sine transformed data from “different” trials

to a 2 (suppression condition) × 2 (change-size) × 6 (numerosity) repeated measures ANOVA.¹ Accuracy on suppression trials (.674) was not reliably different from accuracy on no-suppression trials (.678); F(1,7) = 0.18, MSE = .0072, p = .68. Furthermore, no interaction involving suppression reached significance (ps > 0.7). Accuracy on interchange-two trials (.59) was reliably lower than accuracy on interchange-three trials (.76); F(1,7) = 243.08, MSE = .0074, p < .001.

¹ Including the "same" trials reduced the significance of the statistical tests slightly, but did not change the overall pattern of results. There appears to be no reliable difference in guessing rate between conditions.

Figure 4.1: Mean proportion of correct responses as a function of numerosity for Experiment 3. The points indicate results from "same" (triangles), interchange-three (squares), and interchange-two (circles) trials. Open points indicate results from suppression blocks and filled points indicate results from no-suppression blocks.


As a second check on the effects of articulatory suppression, I compared the results of

Experiment 3 to those of Experiment 2, which had no articulatory suppression task. Both

experiments included interchange-two and interchange-three trials. Mean proportion of

correct responses for both experiments is shown in Figure 4.2. The figure shows accuracy

from “same”, interchange-three, and interchange-two trials, with the Experiment 3 results

collapsed across suppression conditions. The plot suggests that participants in Experi-

ment 3 performed worse than those in Experiment 2.

I submitted the combined, arc-sine transformed data to a 2 (experiment) × 2 (change-size) × 4 (numerosity) repeated measures ANOVA, with experiment treated as a between-

subjects factor. I only included the numerosities common to both experiments (5–8), col-

Figure 4.2: Mean proportion of correct responses as a function of numerosity for Experiment 2 and Experiment 3. The points indicate results from "same" (triangles), interchange-three (squares), and interchange-two (circles) trials. Dashed lines indicate results from Experiment 2 and solid lines indicate results from Experiment 3 (collapsed across suppression conditions).

lapsed across suppression conditions in Experiment 3, and removed all “same” trials—

including only one or the other suppression condition and/or including “same” trials did

not influence the results. The difference in accuracy between Experiment 2 (.78) and Ex-

periment 3 (.73) was marginally reliable; F(1,16) = 2.35, MSE = .10, p (one-tailed) = .07. None of the interactions involving experiment were reliable (ps > .5).

4.3 Model Fit

I fit the model to the results of Experiment 3 using the same method described in Sec-

tion 2.3. I fit accuracy on “same”, interchange-two, and interchange-three trials collapsed

across suppression condition. I also fit separate models to each suppression condition with

similar outcomes. Each participant contributed 18 observations, and each fit included 1

Figure 4.3: Mean observed and predicted proportion of correct responses as a function of numerosity for Experiment 3. The points indicate results from "same" (triangles), interchange-three (squares), and interchange-two (circles) trials. Solid lines indicate observed data and dashed lines indicate predicted data based on the fitted model. The observed data are collapsed across suppression conditions.

capacity parameter and 6 guessing parameters, for a total of 18 − 7 = 11 degrees of free-

dom per participant. The model fits and parameter estimates reported below are averaged

across the individual fits.

Figure 4.3 shows the averaged observed and predicted data for Experiment 3; R² = .99, RMSE = .016, and k = 2.25 ± 0.30. Further details of the fit are provided in Appendix E.

4.4 Discussion

If participants used verbal encoding to perform the change-detection task during Exper-

iment 3, then the articulatory suppression task should have disrupted their performance.

The lack of any reliable difference suggests that verbal encoding did not play a role in this

experiment. However, participants performed slightly worse in Experiment 3 than in Ex-


periment 2, which did not include a verbal control. Some of the participants in the first two

experiments may have used a verbal encoding strategy to supplement visual storage. In

Experiment 3, the articulatory suppression task apparently led participants to never adopt

such a strategy, even on trial blocks with no suppression task.

There is further evidence for the possibility of verbal encoding in Experiment 1 and

Experiment 2. The capacity estimates from those experiments were slightly higher than in

Experiment 3 (2.66 and 2.57 vs. 2.25), consistent with supplementary verbal storage. The

confidence intervals from the previous experiments are also slightly larger (0.41 and 0.53

vs. 0.30). The differences between the previous experiments and this one are small and

not reliable. However, the possibility that participants supplement their visual memory

is a cause for concern. Future experiments should take precautions to discourage verbal

encoding: The articulatory suppression task used in Experiment 3 seems effective in at

least reducing the amount of verbally stored information.

Another intriguing finding from Experiment 3 is that the model fit the observed results

better than in previous experiments. This is also consistent with my proposal that previous

experiments were contaminated with verbal encoding. Participants may have selectively

used verbal encoding at different numerosities, which would render a single-capacity fit

inappropriate. Additionally, it is unlikely that the properties of visual WM assumed by

my model apply to verbal WM. Since verbal encoding was less likely in Experiment 3,

it provides a better test of the model than did the earlier experiments. While the capacity

estimates are slightly lower, they are still consistent with a maintenance limit of three

object files.


CHAPTER V

EXPERIMENT 4

In Experiment 4, I attempted to expand my tests for the types of information partici-

pants must store to detect changes. In the previous experiments, participants could detect

interchanges by retaining only bound pairs of color and location, or by retaining repre-

sentations that combined color, location, and shape. The object-file hypothesis claims that

visual WM stores object files that contain all the information relevant to a particular object.

Therefore, I need to test whether participants have access to more than just color-location

bindings.

In Experiment 4, stimuli varied on two dimensions—color and shape—and changes

could occur in one or both dimensions. One goal of Experiment 4 was to establish that

visual information other than color can be bound to location. If participants can detect

interchanges involving the shape of objects, then they must be able to bind the shape to

spatial location.

Another goal of Experiment 4 was to determine whether color-location and shape-

location information are stored together in object files. According to the object-file hy-

pothesis, visual WM stores about three object files. For example, if a red triangle, a blue

circle, and a green diamond appear in the environment, visual WM can maintain object

files for all three, with each object file containing the color, shape, and location of one


object. Such a storage system is consistent with the findings of the first three experiments.

However, there is an alternative hypothesis that can explain the previous findings just

as well. The independent feature-store hypothesis proposes that visual WM consists of a

set of independent, feature-specific stores. According to this hypothesis, each store codes

one feature-dimension, retaining bound pairs of feature values and locations. The stores

encode and retain features independently of one another. In the example from the previous

paragraph, visual WM would store "red at location i", "blue at location j", "triangle at location i", etc.

Both hypotheses make specific predictions regarding performance in Experiment 4, but

before discussing the predictions, I will introduce the methodology. Experiment 4 used a

change-detection task similar to the first three experiments. The chief difference was that

the objects in each display varied in both color and shape.

Figure 5.1 illustrates the four change-types used in this experiment (in figures, I ab-

breviate the names of the four change-types by removing “interchange”). On the left side

of the figure is a sample first display and on the right side are four samples of a second

display, one for each change-type. On “different” trials, changes were generated by inter-

changing feature values between two objects. The interchanges involved either two colors

(color-interchange), two shapes (shape-interchange), or both. In the last case, the color

and shape interchange could either both involve the same two objects, or they could in-

volve different pairs of objects. When the color and shape change involved a single pair

of objects, then it appeared as if one object switched places with another object—this is

called anobject-interchangetrial. When the color and shape changes involved different

pairs of objects, then a total of four objects changed with respect to one feature each—this

is called a dual-feature-interchange trial.

Figure 5.1: Illustration of the trial-types presented in Experiment 4. The first display is shown on the left, and four possible change-types (shape, color, dual-feature, and object interchanges) are shown on the right. Note: Stimuli are not drawn to scale and were not outlined in the experiment; arrows are for illustration only.

5.1 Predictions of the Independent Feature-Store Hypothesis

The independent feature-store hypothesis differs from the object-file hypothesis in its

assumptions regarding encoding, storage, and maintenance of visual information. It as-

sumes that an object’s features are encoded into one or more of several feature-specific

stores along with the object’s location. Thus, there is a store for retaining bound color-

location pairs and another store for retaining bound shape-location pairs. The hypothesis

also assumes a capacity limit on the number of unsupported features that can be main-

tained at any one time. The capacity limits may differ between stores. Most importantly,

the hypothesis assumes that in each feature store, retention processes select features for

maintenance independently of other feature-stores. Thus, the maintained features in one

store will not correlate with the maintained features in another store. This is illustrated in

Figure 5.2: the diamond in the lower-left is represented in both stores, but the other three

objects are represented only in one store each. In other respects, the independent feature-

store hypothesis is compatible with the change-detection model presented in Section 1.3.

Figure 5.2: Illustration of the independent feature-store hypothesis. On the left side is a sample display; on the right side are two feature-specific stores, one for bound color-location pairs, the other for bound shape-location pairs. Each store is shown containing unsupported, maintained feature-location pairs.

The independent feature-store hypothesis can account for performance on interchange

trials in the first three experiments. According to it, participants in those experiments

searched the color-location store to detect changes on interchange trials. The predictions

of the independent feature-store hypothesis are identical to those made by the object-file

hypothesis for my earlier experiments.

The hypothesis also makes specific predictions regarding performance in Experiment 4.

These predictions depend on the number of targets available on a trial of any particular

change-type. Under the independent feature-store hypothesis, there are two color targets

on color-interchange trials and two shape targets on shape-interchange trials. The proba-

bility of detecting a color-interchange is the probability of storing one of the color targets

and the probability of detecting a shape-interchange is the probability of storing one of

the shape targets. On object-interchange and dual-feature-interchange trials, there are two

color targets and two shape targets. Thus, the probability of detecting a change on object-

interchange and dual-feature-interchange trials is just the probability of storing either one


of the color targets or either one of the shape targets or some combination of these.

Therefore, under the assumptions of the independent feature-store hypothesis, hit rate

(i.e., the probability of responding correctly on “different” trials) on object-interchange

trials should equal the hit rate on dual-feature-interchange trials. Furthermore, the hit rates

for both of these change-types should equal the union of hit rates on color-interchange and

shape-interchange trials:

(5.1)    Hdual-feature = Hobject = Hcolor + Hshape − Hcolor·Hshape

where Hi is the hit rate for i-interchange trials.
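As an illustrative calculation, taking the mean "different"-trial accuracies reported below in Section 5.4 as rough estimates of the hit rates (Hcolor ≈ .66 and Hshape ≈ .56), the independence prediction would be .66 + .56 − (.66)(.56) ≈ .85 for both object-interchange and dual-feature-interchange trials. The dashed "Independence Prediction" line in Figure 5.4 shows this prediction computed at each numerosity.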

5.2 Predictions of the Object-File Hypothesis with Unreliable Coding

The object-file hypothesis, as described in Section 1.5, assumes that visual WM retains

all of an object’s features as a single chunk, with multiple features per object using no more

capacity than a single feature per object. If an object file is supported or maintained, all

of its features must be accessible. Thus, the object-file hypothesis predicts that accuracy

on color-interchange trials should equal accuracy on shape-interchange trials. This is an

unreasonable prediction: the colors used in Experiment 4 may be easier to perceive than

the shapes in brief presentations. If this is the case, then accuracy on color-interchange

trials will be greater than accuracy on shape-interchange trials (Nissen, 1985; Wheeler &

Treisman, 2002).

To account for potential differences in perceiving different features, I will modify the

object-file hypothesis by adding an unreliable coding assumption. Under this assumption,

the encoding of an object’s features is imperfect and does not necessarily result in success-

ful storage of the feature. Each feature of one object will be successfully encoded with

discrimination probability d; otherwise, that feature is lost. For simplicity, each participant is assumed to have one d for each feature dimension: for Experiment 4, each participant


has one d for color (dc) and one for shape (ds).

The object-file hypothesis with unreliable coding makes assumptions about the types of

targets available on “different” trials of each change-type. There are two targets on color-

interchange trials, and one of their color-features must be retained. Similarly, there are

two targets on shape-interchange trials, and one of their shape-features must be retained.

Again, there are two targets on object-interchange trials, and either a color or a shape fea-

ture must be retained for one of them. There are four targets on dual-feature-interchange

trials, and a color or shape must be retained from one of them.

These assumptions lead to quantitative predictions regarding accuracy in Experiment 4,

as specified in Appendix B. In general, the modified object-file hypothesis predicts that

accuracy on dual-feature-interchange trials should be much higher than on any other trial-

type, with accuracy on object-interchange trials below dual-feature-interchange trials and

above color-interchange and shape-interchange trials.

5.3 Method

Experiment 4 was identical to Experiment 2 (see Section 3.1) except for the changes

described below.

5.3.1 Participants

A new group of ten students participated in Experiment 4.

5.3.2 Stimuli

Stimuli in Experiment 4 varied in both color and shape. The ten colors were the same

as in previous experiments, and the ten shapes are shown in Figure 5.3. Each shape was

drawn so that its maximum height (i.e., the difference between its highest and lowest

points) was approximately 0.65° when viewed at a distance of 80 cm. On each trial, every


Figure 5.3: Shapes presented in Experiment 4.

stimulus was randomly assigned a color and a shape (both without replacement) such that

there was no repetition of a color or a shape within any one display. Each display consisted

of 4, 6, or 8 stimuli.

Changes on “different” trials were generated in one of four ways (see Figure 5.1): (a)

on color-interchange trials, by swapping the colors of two randomly selected objects; (b) on shape-interchange trials, by swapping the shapes of two randomly selected objects; (c) on object-interchange trials, by swapping both the colors and shapes of two randomly selected objects; and (d) on dual-feature-interchange trials, by swapping the colors of two

randomly selected objects and the shapes of two different randomly selected objects. (On

object-interchange trials, two objects changed on both feature dimensions, and on dual-

feature-interchange trials, a total of four objects changed on one feature dimension each.)

5.3.3 Procedure

On every trial, participants performed an articulatory suppression task identical to the

one used in Experiment 3.

5.3.4 Design

All factors were manipulated within-block such that there were 4 (change-type) × 2 (match) × 3 (numerosity) = 24 trials per block. Each participant completed three exper-

imental sessions. The first session and the initial blocks of the last two sessions were

dropped from the analysis, resulting in a total of 34 observations at each combination of


change-type, match, and numerosity.

5.4 Results

Mean proportion of correct responses is plotted by numerosity in Figure 5.4 (the dashed

line is discussed below). I submitted the arc-sine transformed data from "different" trials to a 4 (change-type) × 3 (numerosity) repeated measures ANOVA. The main effect of numerosity was reliable; F(2,18) = 83.12, MSE = .011, p < .001. The main effect of change-type was also reliable; F(3,27) = 73.37, MSE = .011, p < .001. I conducted planned comparisons among the change-types. These revealed that accuracy on each change-type was reliably higher than on the change-type immediately below it in the following order: dual-feature-interchange (.87) > object-interchange (.74) > color-interchange (.66) > shape-interchange (.56); F(1,9) = 33.46, 12.16, 16.57, MSE = .014, .013, .011, ps < .01, respectively.

I compared the results of Experiment 4 with those of Experiment 3. Figure 5.5 plots

accuracy from the object-interchange and color-interchange trials of Experiment 4 along

with accuracy from the interchange-two trials from Experiment 3 (suppression condition

only). Performance in Experiment 3 was virtually identical to performance in the color-

interchange condition of Experiment 4.

5.5 Model Fits

The independent feature-store hypothesis makes quantitative predictions regarding ac-

curacy on object-interchange and dual-feature-interchange trials relative to accuracy on

color-interchange and shape-interchange trials, as specified in Equation 5.1. The predic-

tion for the object-interchange and dual-feature-interchange trials is shown by the dashed

line labelled “Independence Prediction” in Figure 5.4. Note that accuracy on both object-

interchange and dual-feature-interchange trials are predicted to equal the values indicated

Figure 5.4: Mean proportion of correct responses as a function of numerosity for Experiment 4. The points indicate results from "same" (open triangles), dual-feature-interchange ("+"), object-interchange ("×"), color-interchange (squares), and shape-interchange (circles) trials. The dashed line indicates accuracy predicted under the independent feature-store hypothesis for object-interchange and dual-feature-interchange trials (see text).

by the dashed line. However, observed accuracy on object-interchange trials was reliably

lower than on dual-feature-interchange trials and appears to have been much lower than

predicted by the independent feature-store hypothesis. These results are clearly inconsis-

tent with the independent feature-store hypothesis, so it can be rejected.

Given this outcome, I next fit a model based on the object-file hypothesis with unre-

liable coding, as outlined in Section 5.2. Derivations of the equations used in this model

are presented in Appendix B. I fit accuracy from “same” trials and all four change-types.

I calculated a separate fit for each participant, with 5 conditions × 3 numerosities = 15 observations per participant. Each fit involved 6 free parameters—1 capacity (k), 2 discrimination probabilities (dc and ds), and 3 guessing rates. This left a total of 15 − 6 = 9

Figure 5.5: Mean proportion of correct responses as a function of numerosity for Experiment 3 and Experiment 4. For Experiment 3 (dashed line), the circles indicate results from interchange-two trials on suppression blocks. For Experiment 4 (solid lines), the points indicate results from object-interchange ("×") and color-interchange (squares) trials.

degrees of freedom per participant. The model fits and parameter estimates reported below

are averaged across the individual fits.

Figure 5.6 shows the averaged observed and predicted data for Experiment 4; R² = .99, RMSE = .015, k = 2.99 ± 0.74, dc = .75 ± .10, and ds = .59 ± .07. Further details of the

fit are provided in Appendix F.

5.6 Discussion

Based on the results of Experiment 4, I can reject the independent feature-store hy-

pothesis. Therefore, there is probably a dependence between the features of each object in

visual WM, consistent with the object-file hypothesis. The nearly identical performance

between Experiment 3 and Experiment 4 (see Figure 5.5) shows that storing multiple fea-

Figure 5.6: Mean observed and predicted proportion of correct responses as a function of numerosity for Experiment 4. The points indicate results from "same" (open triangles), dual-feature-interchange ("+"), object-interchange ("×"), color-interchange (squares), and shape-interchange (circles) trials. Solid lines indicate observed data and dashed lines indicate predicted data based on the fitted model.

tures per object does not increase the demands on visual WM.

Accuracy predicted by the model based on the object-file hypothesis with unreliable

coding was remarkably consistent with observed results. The model was able to fit the data

with a single capacity parameter, consistent with the hypothesis that there is one store for

retaining visual information. The estimates of discrimination probability for Experiment 4

suggest that encoding the shapes used in the experiment was reliably more difficult than

encoding the colors. Overall, Experiment 4 provides support for the object-file hypothesis

with unreliable coding.

The capacity estimates were higher in Experiment 4 than in previous experiments. For

example, estimated capacity was 2.25 ± 0.30 for Experiment 3 (which also included an ar-


                                Corrected Capacity Estimates
    Experiment 1                            3.56
    Experiment 2                            3.44
    Experiment 3                            3.01
    Experiment 4 (uncorrected)              2.99

Table 5.1: Mean capacity estimates (k) from each of the four experiments. For Experiments 1, 2, and 3, the "corrected" capacity estimates are shown (k*/dc, where k* is the uncorrected capacity estimate and dc is the discrimination probability for Experiment 4); for Experiment 4, the uncorrected capacity estimate is shown. (See text for more details.)

ticulatory suppression task), compared with 2.99 ± 0.74 for Experiment 4. The reason for

the discrepancy is that the capacity estimates for Experiment 4 were derived from a model

that accounts for retention of partial information (i.e., the unreliable coding assumption).

Earlier estimates were derived from a model that assumed perfect color-feature encod-

ing, so the capacity estimates confounded capacity and discrimination probability—that

is, in earlier experiments, the reported "capacity" actually estimated k* = k·dc. To better compare the previous experiments to Experiment 4, I can "correct" the estimated capacity from the first three experiments by calculating k*/dc. Assuming that the discrimination probability for Experiment 4 is equal to that for the first three experiments—this seems reasonable given that the set of colors was identical in all experiments—I can correct k for the first three experiments using the value of dc from Experiment 4.
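For example, dividing the interchange-based estimate from Experiment 1 (k* = 2.66) by dc = .75 gives a corrected capacity of roughly 2.66/.75 ≈ 3.55 items, which agrees with the corrected value of 3.56 in Table 5.1 up to rounding of the averaged estimates.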

The average capacity for all four experiments is presented in Table 5.1; the corrected

capacity (i.e., the average k* divided by the average dc from Experiment 4) is shown for

the first three experiments, and the uncorrected capacity is shown for Experiment 4. An

ANOVA conducted on individual capacity estimates revealed no reliable differences be-

tween the four experiments. However, capacity was slightly higher for Experiments 1

and 2 relative to Experiments 3 and 4, presumably due to contamination from verbal en-

coding. For Experiment 3 and Experiment 4, in which there was little or no contamination

from verbal encoding, the capacity estimates are almost exactly three items, consistent

with a maintenance limit of three object files.


CHAPTER VI

GENERAL DISCUSSION

The goal of this dissertation has been to study how information is represented and

processed in visual WM. As discussed in the Introduction, visual information may be rep-

resented in several different ways while it is supported: raw images, primitive perceptual

features, feature-location pairs, and object files. I have proposed that visual WM stores

object files, and that these files are the most durable form of representation once an object

disappears from view.

I have reported four experiments to test a precise model based on the object-file hypoth-

esis. In Experiment 1, I found that participants had access to both bound color-location

information and unbound color information. In Experiment 2, I confirmed this finding

and showed evidence that color-location bindings for distinct objects are stored indepen-

dently of each other. In Experiment 3, I found that participants may have verbally coded

some stimuli in the first two experiments, but when such coding was discouraged, per-

formance was more consistent with predictions of the object-file hypothesis than in the

first two experiments. Finally, in Experiment 4, I found that participants did not store

the different features of an object separately from each other: Rather, my model revealed

that object-based storage with unreliable coding of feature information explained the re-

sults remarkably well. All of my findings are consistent with the object-file hypothesis, as


proposed here.

In this chapter, I will describe the present theory of visual WM in more detail and

evaluate it in light of past and present findings. I will then discuss some areas that I

believe require further study.

6.1 The Object-File Theory of Visual Working Memory

In this section, I will present a theory of visual WM, based on the object-file hypothesis,

as I currently view it. Many of my claims are only partially supported by my own results,

and others are purely speculative: I present these claims in order to suggest areas in which

more research is needed. Purely speculative claims are indicated with a “∗” symbol. The

description here builds on that presented in the Introduction with two modifications: the

feature-based decay assumption from Section 2.4 and the unreliable-coding assumption

from Section 5.2. I present the theory by answering a series of questions that a complete

theory of visual WM must answer:

1. How does information enter visual WM?

2. What type of information is stored there?

3. How much is stored (i.e., what is visual WM’s capacity)?

4. How is information in visual WM organized?

5. How is it accessed?

6. How is it maintained?

7. What is its duration?

8. How is it lost from memory?


6.1.1 How Does Information Enter Visual Working Memory?

According to the present theory, when an object appears in the environment, an object

file is created in visual WM to store information about it. Object-file creation is assumed

to occur rapidly and in parallel, so that a large number of object files can be created con-

currently from a brief display. Initially, an object file only contains the object’s location,

but other features are encoded as they become available. Perceptual features do not neces-

sarily become available at the same time. This means that object-file formation is likely an

ongoing process. Objects in the environment can also change (e.g., change color, move),

so their object files can be updated while the objects are physically present. As an object's features become

available, they are encoded into the object file. Feature encoding is assumed to be noisy,

so there is some probability that a feature will not be stored correctly with the object file.

However, encoding of location is relatively fast and accurate, so a supported or maintained

object file will typically contain useful location information.

6.1.2 What Type of Information is Stored in Visual Working Memory?

According to the theory, the basic unit of visual WM is an object file. Each object file

contains all of the visual features of each object that have been successfully encoded. The

features include color, shape, and location. Visual WM holds both supported (visible) and

unsupported (not visible) visual information, and both types of information are stored as

object files.

6.1.3 How Much is Stored in Visual Working Memory?

The theory assumes that visual WM can hold an unlimited number of supported object

files—that is, as long as an object is visible in the environment, an object-file representa-

tion of it may exist in visual WM. However, only about three unsupported object files can

be maintained simultaneously in visual WM.


6.1.4 How is Information in Visual Working Memory Organized?

The theory assumes that each object file is stored independently of other object files in

visual WM. This means that loss of one object file from WM does not lead to the loss of

other object files. I claim that both supported and unsupported object files are retained in

one storage system.

6.1.5 How is Information in Visual Working Memory Accessed?

According to the theory, object files in memory can be readily retrieved. Retrieval of

an object file provides access to all of the features it contains. Retrieval is assumed not

to interfere with other object files in memory. Object files can be retrieved based on their

location, but not based on their contents: For example, if a red object needs to be recalled,

each object file in memory must be retrieved one at a time, and its contents inspected to

determine whether its color-feature is red.∗

6.1.6 How is Information in Visual Working Memory Maintained?

Supported object files do not need to be maintained. Unsupported object files are inter-

nally maintained by a rehearsal process, and the observed capacity limit is due to a limit in

the number of object files that can be rehearsed repeatedly without loss of their contents.∗

6.1.7 What is the Duration of Information in Visual Working Memory?

An object file that is supported or maintained will remain in memory indefinitely. Sup-

ported object files remain in memory as long as the stimuli they represent are visible.

Unsupported object files remain in memory as long as the rehearsal mechanism is able to

maintain them.


6.1.8 How is Information Lost from Visual Working Memory?

Object files that are not supported or maintained will be lost from memory. If mainte-

nance capacity is exceeded, then one or more object files must be dropped. Alternatively, if

an object file is no longer needed, then maintenance of it can cease.

The theory claims that loss occurs through feature decay (e.g., an object’s location dis-

appears from the object file). Thus, an unmaintained object file will remain in memory but

lose its features—eventually, it will contain no useful information. Features are assumed

to decay independently of each other and feature decay occurs in an all-or-none manner.

Presumably, once all features have decayed from an object-file, it disappears (or is deleted)

from memory. Currently, I make no claims about the time-course of feature decay.

It is possible that certain events can disrupt visual WM.∗ For example, new visual

information may replace old information. If the new information is potentially relevant

to the task, some old object files in memory may be dropped from rehearsal in order to

maintain some of the new ones. If the information is irrelevant to the task, it should not

interfere with maintained object files: This is because the role of maintenance is to retain

relevant information, so it should be able to protect its contents. Unmaintained items are

susceptible to interference from irrelevant information.

6.1.9 Summary

In this section, I have presented a tentative theory of visual WM based on the storage of

object files. The stored representations include an unlimited number of supported object

files and a limited number of unsupported object files. Unsupported object files must be

maintained by rehearsal or their features will decay.


6.2 Evaluating the Object-File Theory

The object-file theory is only worthwhile if it can explain and predict real data. The goal

of the studies presented in this dissertation has been to investigate the form and capacity

of visual WM storage, so one would hope that the theory can explain the results. The

following subsection shows that this hope has been realized.

6.2.1 Application of Theory to Present Results

In Experiments 1 and 2, participants had to detect substitutions and interchanges involv-

ing the colors of stimuli. Under the assumptions of the object-file theory, they maintained

about three object files during the ISI; the other object files generated from the first display

either decayed completely, or lost some feature information (e.g., color, shape, and loca-

tion). When the second display appeared, participants could retrieve stored object files to

check certain locations and determine whether the colors there had changed—this allowed

detection of either color interchanges or substitutions. They could also use any partially

decayed object files that retained color but not location information to determine whether

that color was still present somewhere in the display—this allowed detection of color sub-

stitutions but not interchanges. Thus, performance on substitute trials should be superior

to performance on interchange trials, as my experiments revealed.

I fit the model for the change-detection task to data from both interchange and substitute

trials, and it fit reasonably well. In Experiment 2, it captured the effect of change-size on

accuracy quite well. The capacity estimates were consistent with retention of at least three

object files plus one or two partially decayed object files with intact color-features.

The results of Experiment 3 suggested that participants may have occasionally used

verbal coding in Experiments 1 and 2. With this covert verbalization reduced or eliminated

in Experiment 3, one would expect to find results that are still consistent with the object-file


theory. In the first two experiments, the model over-predicted accuracy at low numerosities

and under-predicted at high numerosities. However, for Experiment 3, the mathematical

model fit the data much better, closely capturing the effects of numerosity and change-size

on accuracy. Capacity estimates were slightly reduced, but still consistent with a limit of

at least three object files.

In Experiment 4, participants had to detect interchanges involving either one or two

features—color and shape. Under the assumptions of the object-file theory, WM can store

two features per object just as easily as it can store one feature per object. However,

encoding each feature may be imperfect, so some object files may contain only one feature,

and others may contain no features (except for location). Thus, detection of color and

shape interchanges would depend on the relative probability of encoding each feature.

Given a single capacity, different encoding probabilities for each feature dimension, and

guessing rate parameters, the object-file theory makes quantitative predictions regarding

performance in Experiment 4. These predictions are well matched by the observed results.

The revised capacity estimates suggested retention of three object files. Thus, the object-

file theory can account for all of the results reported in this dissertation.

6.2.2 Application of Theory to Results from Past Studies of Visual Working Memory

Previous research by other investigators has also examined performance in tasks re-

quiring the retention and retrieval of unsupported information in visual WM. In this sub-

section, I will examine the results of some of these studies. Here, I will show how the

object-file theory explains the results, and, in some cases, how they bear on the assump-

tions of the theory.


Vogel, Woodman, and Luck (2001)

As mentioned in the Introduction, Vogel et al. (2001) used a change-detection task to

test the hypothesis that visual WM contains object files (see Section 1.2). They found

that detection of changes with multi-featured objects was no more difficult than detection

of changes with single-featured objects. Their results are consistent with the predictions

of the present theory: storing additional features within an object file does not increase

the demand on visual WM. Participants could retrieve any maintained object file and

determine whether there were changes on any number of feature dimensions.

Wheeler and Treisman (2002)

Wheeler and Treisman (2002) conducted a change-detection experiment using both

substitute and interchange trial blocks, similar to Experiment 1 in this dissertation. They

found that detection of two changes on interchange trials was more difficult than detection

of two changes on substitute trials, supporting my conclusions in Experiment 1 and Exper-

iment 2. In another experiment, Wheeler and Treisman used a change-detection task with

stimuli that varied in color and shape. On both “same” and “different” trials of this exper-

iment, all of the stimuli moved to previously unoccupied locations in the second display.

In one block type, changes were generated by interchanging the colors of two shapes.

Under the assumptions of the object-file theory, each object file in visual WM had to

be compared with half of the objects in the second display (on average) in order to find

an appropriate comparison, since the objects were no longer in the same locations. The

theory claims that searching a display interferes with rehearsal, so fewer (only one or two)

object files may be retained under these conditions—this was a speculative claim. Thus,

interchange accuracy should be reduced in this experiment relative to ones in which no

object moved: this is what Wheeler and Treisman (2002) report.1

1 Wheeler and Treisman (2002) also reported some results that are difficult for the object-file theory to explain. For example, they found that change-detection accuracy on substitute trials in an experiment with stimuli that changed locations was equal to accuracy in an experiment with stimuli that stayed in the same locations—the theory predicts a decrease in substitute accuracy when stimuli change locations. However, there are methodological differences between Wheeler and Treisman's studies and mine (e.g., amount of practice and incentives), and their unexpected results are based on between-subject comparisons, so further research is needed.

Jiang, Olson, and Chun (2000)

Jiang, Olson, and Chun (2000) reported a series of studies investigating the effects of

irrelevant color and location changes on change-detection accuracy. On each trial, one

stimulus in the second display was cued, and participants only had to judge whether that

stimulus had changed—the other stimuli served as distracters. Participants had to detect

either color substitutions or location changes (i.e., movement to a previously empty loca-

tion).

In one experiment, Jiang et al. (2000) showed that changing the colors of all stimuli had

no effect on the accuracy of detecting location changes. This demonstration is consistent

with the present theory. Once an object file has been retrieved from memory, its location

can be checked to determine whether a physical object exists at that location. Thus, loca-

tion comparisons would not be influenced by other features in an object file, just as Jiang

et al. found.

In another experiment, Jiang et al. (2000) showed that detection of color substitu-

tions was worse when all the stimuli changed locations haphazardly than when no stimuli

moved. However, there was no effect on accuracy when stimuli were moved by scaling

up the distance between them—that is, the stimuli moved such that their relative locations

were preserved. Performance under this condition was similar to performance under the

condition with no movement at all.

Again, these results are consistent with the present theory. As with Wheeler and Treis-

man’s (2002) results, the process of finding a physical stimulus that corresponds to one in

memory may interfere with maintenance when the locations in memory do not match the

locations in the display, supporting a speculative claim made by the theory. Nevertheless,


preservation of relative object locations may provide enough information to rapidly find

the appropriate object in the environment, as Jiang et al. (2000) showed.

Saiki (2003)

Saiki (2003) conducted a series of change-detection experiments in which three colored

circles, arranged in a triangular pattern, rotated around a central fixation point.2 In most of

these experiments, Saiki presented a sequence of ten frames in which the stimuli repeatedly

flashed on and off, with each successive exposure displacing them by 30◦, 45◦, or 60◦ from

their previous positions—the stimuli appeared to be spinning on a disk illuminated by a

stroboscope. Participants had to detect color interchanges that occurred from one display

to the next in the middle of the sequence. The object-file theory predicts that participants

can maintain three object files in memory, but the process of determining which object file

corresponds to which stimulus in each frame takes time and interferes with rehearsal. This

is especially a problem in the 60◦ condition, since the direction of rotation is impossible

to determine from the stimulus locations alone. The object-file theory therefore predicts

that accuracy should decrease with increasing angular displacement—that is, the further

the stimuli move with each frame, the harder it is to match object files to objects—which

is what Saiki found.

2 The display conditions used by Saiki (2003) are difficult to describe verbally. The article is published in an online journal that includes animated sample trials. These can be found on the Internet at http://journalofvision.org/3/1/2/.

Nissen (1985)

Nissen (1985) studied participants’ memory for object features using a cued-recall task.

In each of her experiments, she presented participants with a brief (< 200 ms) display

containing four stimuli arranged above, below, to the left, and to the right of fixation. Each

stimulus was formed with one of four colors and one of four shapes; thus, each stimulus

had three features: color, shape, and location. The stimulus display was replaced by a

mask and a visually presented word that indicated a location or color. According to the

object-file theory, participants in Nissen’s experiments stored some of the objects in visual

WM and then searched memory for an object with a feature value that matched the cue. On

trials when participants maintained the relevant (cued) object file, they should have been

able to recover any two features (e.g., shape, location) when cued with any one feature

(e.g., color). Nissen found that this was the case.

Two interesting findings from Nissen’s (1985) experiments were that (a) recall of shape

based on a location cue was independent of recall of color, and (b) recall of shape based on

a color cue was dependent on correct recall of location. The first finding is consistent with

independent encoding of each feature from an object: successfully encoding an object’s

color is unrelated to successfully encoding its shape.3 The second finding supports the

assumption that an object’s location is always encoded in an object file, but other features

may not be. If participants maintained an object file with the cued color, then they will

be able to recall its location; however, the shape feature may not have been successfully

encoded with the object file.

3 The object-file theory predicts that there should be some correlation between the features of an object: Recall should be perfect for all the features of a maintained object file (assuming the features were encoded successfully), but poor for the features of an unmaintained object file. However, this correlation should only be apparent when there are a large number of objects, and Nissen (1985) only presented four objects in her experiments. Intriguingly, Monheit and Johnston (1994) replicated Nissen's studies with six objects and found dependence between object features.

Irwin and Andrews (1996)

Using a partial-report technique, Irwin and Andrews (1996) studied retention of visual

information across eye movements (saccades). In one of their experiments, participants

had to memorize a set of six colored letters presented for about 300–500 ms. Display offset

was triggered by saccade initiation. Following the saccade, a location cue appeared, and

participants had to recall which letter and color had been presented at the cued location.

According to the object-file theory, about three object files containing color and let-

ter identity were maintained during the saccade. Once the cue appeared, memory was

searched for an object file with a location that matched the cue’s location. If the object file

was found, then its features, if available, were recalled. If no corresponding object files

were available, then a reasonable guess could be generated based on the unmaintained,

partially-decayed object files that still contained color or letter information.

There are four aspects of Irwin and Andrews’s (1996) results that are relevant to these

predictions of the object-file theory. First, participants performed above chance, so they

were able to store and retain the locations of at least some of the colors and letters across

fixations. Second, recall of color and letter identity were dependent on one another (e.g.,

correct recall of a color was more likely if the letter was recalled correctly than if the

letter was not recalled correctly), consistent with the claim that storage is based on object

files, not on features (see Footnote 3). Third, overall accuracy suggested retention of about

three object files after relatively long delays (750 ms), which agrees with my estimates.

Fourth, analysis of errors showed that incorrectly recalled colors and letters were more

likely than chance to come from the array—that is, if participants responded incorrectly,

the incorrect response was from another item in the memory set. This supports the proposal

that, if participants could not find a maintained object file that corresponded with the cued

location, then they guessed from among any features contained in partially-decayed object

files.

The capacity estimates reported by Irwin and Andrews (1996) support those from my

experiments. As described in Section 1.3, the comparison stage of change-detection per-

formance could interfere with maintenance, rendering capacity estimates too low. In Ir-

win and Andrews’s cued-recall study, there was no second display, yet their estimates

match mine. Thus, the comparison process probably does not interfere with maintenance

(assuming the objects do not shift locations), and capacity estimates derived from the


change-detection task are not too low.

6.3 Directions for Future Research

Overall, the object-file theory seems to be able to explain a variety of results. In my pre-

sentation of the object-file theory, I made many claims which are only partially supported

by experimental evidence and some which are completely untested. This suggests several

directions for future research. In this section, I will discuss some of these directions and

how they might inform the theory.

One direction will be to seek confirmation for these results in other tasks. The object-

file theory can be used to derive specific predictions for a variety of tasks: successful tests

of these predictions would provide strong evidence for the theory. One possibility is to

use a matching task (e.g., Sternberg, 1969; Wheeler & Treisman, 2002), which is similar

to a change-detection task except that there is only one comparison object. Another is

to replicate cued-recall studies like those used by Irwin and Andrews (1996) and Nissen

(1985), but with stimuli and conditions similar to those of the experiments I have reported.

A second direction will be to test the assumption of unreliable coding. Experimental

manipulations that affect perceptual discriminability—such as exposure duration or stim-

ulus contrast—should selectively influence encoding probability and not WM capacity.

Additionally, manipulating the similarity of one set of features (e.g., by making all the

colors different shades of blue) should influence only that feature's encoding probability.

A third direction will be to test the theory’s claims regarding object-file decay. I as-

serted that maintained object files remain in memory indefinitely, while unmaintained ones

undergo feature decay. Thus, change-detection accuracy on interchange trials should de-

cline with ISI up until around 500–1000 ms, and then remain constant at longer ISIs (if

rehearsal is allowed). Accuracy on substitute trials should decline more slowly with ISI as


more features decay, eventually dropping to the level of interchange trials.

A fourth direction will be to test the theory’s predictions regarding resistance to in-

terference. I predicted that irrelevant visual disruption (e.g., a visual mask) should not

interfere with maintained object files in WM. However, directing attention to items in a

display should interfere with rehearsal. One could test change-detection performance with

a variety of tasks presented during the retention interval—e.g., a fixation cross, a choice-

reaction-time task with a visual stimulus. I predict that a stimulus requiring a response, or

one providing information about the second display, will reduce capacity estimates relative

to current estimates. Stimuli that are irrelevant should not affect performance.

6.4 Conclusion

I have presented a detailed and explicit theory of visual WM. It claims that the basic

units for visual storage are object files, and it specifies what an object file is and how it

is formed, maintained, used, and lost from memory. The theory can be used to derive

quantitative predictions regarding performance in a variety of experimental tasks. I have

described a model of performance for the change-detection task and presented a series of

experiments to test its predictions. In addition to supporting the predictions of the model,

the experiments provided estimates of the number of object files that can be maintained

and the fidelity with which they are encoded into memory. Further tests of the theory, if

successful, will bolster it and help to specify its untested assumptions.


APPENDICES


APPENDIX A

Derivation of the Change-Detection Model for Experiments 1–3

In this Appendix, I will generalize the change-detection model presented by Pashler

(1988) for cases in which more than one stimulus object changes on “different” trials. The

assumptions of the model are described in Sections 1.3 and 1.4. Briefly, if any of the

objects in memory are targets (i.e., they change in the second display), then the change

will be detected. If no change is detected, then participants guess with probability g that a change occurred, and with probability 1 − g that no change occurred. Hence, the hit rate (i.e., the probability of responding "different" given that a change occurred) will be

(A.1)   H = p + (1 − p)g = g + (1 − g)p

where p is the probability of detecting a change given that one occurred. Since the only reason for mistakenly reporting a change is through guessing, the correct-rejection rate (i.e., the probability of responding "same" given that no change occurred) will be

(A.2)   CR = 1 − g

In Equation A.1, p is the probability of retaining one of the target objects in memory during the ISI—that is, the probability of selecting one of the targets for maintenance when the first display disappears. The calculation of p can be simplified by calculating 1 − p, which is the probability of failing to maintain any of the targets. On a trial, there are N stimulus objects, and up to k of them can be maintained. On a "different" trial, there are t target objects in the display. Thus, the probability of failing to maintain any targets is

(A.3)   1 - p = \binom{N-t}{k} \bigg/ \binom{N}{k}
              = \frac{(N-t)!}{k!\,(N-t-k)!} \bigg/ \frac{N!}{k!\,(N-k)!}
              = \frac{(N-k)!}{(N-k-t)!} \bigg/ \frac{N!}{(N-t)!}
              = \frac{(N-k)(N-k-1)\cdots(N-k-t+1)}{N(N-1)\cdots(N-t+1)}
              = \prod_{i=0}^{t-1} \frac{N-k-i}{N-i}

The smallest numerator, N − k − (t − 1), must be greater than or equal to 0; otherwise the probability may be negative (this also ensures that the denominator is greater than zero). When this condition does not hold, k is large enough that a target will always be maintained. Hence, if k ≥ N − t + 1, then 1 − p = 0.

Solving Equation A.3 for p and incorporating it into Equation A.1 yields

H = g + (1 - g)\left(1 - \prod_{i=0}^{t-1} \frac{N-k-i}{N-i}\right)

unless k ≥ N − t + 1, in which case H = 1. This is presented as Equation 1.1 on page 12. Note that when t = 1, Equation 1.1 reduces to

H = g + (1 - g)\left(1 - \prod_{i=0}^{0} \frac{N-k-i}{N-i}\right)
  = g + (1 - g)\left(\frac{N}{N} - \frac{N-k}{N}\right)
  = g + (1 - g)\,\frac{k}{N}

which is the hit-rate formula presented by Pashler (1988) for detecting a single change.
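To make this concrete, the following minimal R sketch (R being the statistical package used for the model fits in the appendices that follow) computes the predicted hit rate from N, k, t, and g. It is illustrative only: the function name hit_rate and the example values are mine, not part of the original analysis code.

# Illustrative R sketch of the generalized change-detection model: predicted
# hit rate for numerosity N, capacity k, t changed objects, and guessing rate g
# (Equations A.1 and A.3).
hit_rate <- function(N, k, t, g) {
  if (k >= N - t + 1) return(1)                        # a target is always maintained
  miss <- prod((N - k - 0:(t - 1)) / (N - 0:(t - 1)))  # 1 - p, Equation A.3
  g + (1 - g) * (1 - miss)
}

# With t = 1 the prediction reduces to Pashler's (1988) g + (1 - g) * k / N:
hit_rate(N = 8, k = 3, t = 1, g = 0.15)                # 0.46875
0.15 + (1 - 0.15) * 3 / 8                              # 0.46875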


APPENDIX B

Derivation of the Change-Detection Model for Experiment 4

In this Appendix, I will generalize the model described in Appendix A to account

for performance in Experiment 4 under the assumptions of the object-file hypothesis with

unreliable feature-coding. The assumptions of this model are described in Section 5.2. The

derivations of these formulae are rather elaborate, so I will begin with some preliminary

explanations.

B.1 Preliminaries

In Experiment 4, the hit-rate for each change-type (i.e., the probability of responding "different" given that a particular type of change occurred) is given by Equation A.1. That is, the probability of responding correctly depends on the probability of storing one of the target features or, having failed to store any target features, guessing with probability g that a change occurred. The correct-rejection rate (i.e., the probability of responding "same" given that no change occurred) is given by Equation A.2.

In order to detect a change on “different” trials, one or more target features must be

stored. Under the assumptions of the object-file hypothesis with unreliable feature-coding,

successful storage requires both correctly encoding the target feature(s) and selecting some

object file(s) for maintenance that contains the correctly encoded target feature(s).


In general, the probability of storing a feature f from object i is

(B.1)   P(store feature f from object i) = P(encode f_i ∩ maintain i)
                                         = P(encode f) P(maintain i)

This assumes that feature encoding and selection for maintenance are independent of one another.

The probability of satisfying the first requirement—encoding the target feature(s)—is simply

(B.2)   P(encode f) = d_f

where f = c, s for colors and shapes, respectively.

The probability of satisfying the second requirement—maintaining one or more of the

target object files—is more complicated. For example, if there are three target objects

in a display, maintaining object files for either one, two, or three of them will satisfy

the second requirement. In general, any u of the t target object files can be maintained (u = 1, 2, 3, ...), and overall accuracy is a combination of the probability of maintaining any one target object file, any two target object files, and so on.

If only one target object file is maintained, then the probability of having encoded its target feature f is d_f; if two target object files are maintained, then the probability of having encoded both of their target features is d_f^2; for three target object files, the probability is d_f^3; and so on. In order to find the probability of maintaining u of the t target object files and correctly encoding some of their features, I need to specify the probability of maintaining a particular subset of u target object files without concern for the other t − u targets or the other N − t non-target object files: The product of this probability with d_f^u gives the probability of maintaining and encoding all u target object files and their target features.


The probability of maintaining u object files, given maintenance capacity k and object numerosity N, is

(B.3)   P(maintain u targets | k, N) = \binom{u}{u} \binom{N-u}{k-u} \bigg/ \binom{N}{k}
                                     = 1 \cdot \frac{(N-u)!}{(k-u)!\,(N-u-k+u)!} \bigg/ \frac{N!}{k!\,(N-k)!}
                                     = \frac{(N-u)!}{(k-u)!\,(N-k)!} \cdot \frac{k!\,(N-k)!}{N!}
                                     = \frac{k!}{(k-u)!} \cdot \frac{(N-u)!}{N!}
                                     = \frac{k(k-1)\cdots(k-u+1)}{N(N-1)\cdots(N-u+1)}
                                     = \prod_{i=0}^{u-1} \frac{k-i}{N-i}

Thus, the probability of maintaining one target object file is simply k/N, the probability of maintaining two targets is

\frac{k(k-1)}{N(N-1)}

and so on.
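As a quick check on Equation B.3, the following R lines (illustrative only; the function name is mine) compute the probability that u particular target object files are among the k object files maintained out of N.

# Illustrative R sketch of Equation B.3.
p_maintain <- function(u, k, N) {
  if (u == 0) return(1)
  prod((k - 0:(u - 1)) / (N - 0:(u - 1)))
}

p_maintain(1, k = 3, N = 6)   # k/N            = 0.5
p_maintain(2, k = 3, N = 6)   # k(k-1)/(N(N-1)) = 0.2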

If both requirements are satisfied, then at least one target feature will be stored in visual

WM and the change will be detected. In the remainder of this Appendix, I will specify the

probability of storing at least one of the target features for each of the four change-types

used in Experiment 4. In equations, I will shorten the names for the four change-types by

removing “interchange” from each name.

B.2 Color-Interchange and Shape-Interchange Trials

On color-interchange trials, two colors swap places. Under the assumptions of the

object-file hypothesis, detection of a color-interchange requires storage of either one or


both color features:

p_color = P(store color 1 ∪ store color 2)
        = P(store color 1) + P(store color 2) − P(store color 1 ∩ store color 2)

where "store color i" refers to the event that the ith target color is successfully stored. From Equations B.1, B.2, and B.3, I know that the probability of storing one particular target color is

P(store color 1) = P(store color 2)
                 = P(encode c_1) P(maintain 1)
                 = \frac{k}{N} d_c

The probability of storing two target colors is

P(store color 1 ∩ store color 2) = P(encode c_1 ∩ encode c_2) P(maintain 1 ∩ maintain 2)
                                 = \frac{k(k-1)}{N(N-1)} d_c^2

Note that P(encode c_1 ∩ encode c_2) = d_c^2 because encoding one feature is assumed to be independent of encoding other features. Thus, the probability of detecting a change on color-interchange trials is

(B.4)   p_color = \frac{2k}{N} d_c − \frac{k(k-1)}{N(N-1)} d_c^2

The probability of detecting changes on shape-interchange trials is derived in the same manner as for color-interchange trials:

(B.5)   p_shape = \frac{2k}{N} d_s − \frac{k(k-1)}{N(N-1)} d_s^2

B.3 Object-Interchange Trials

On object-interchange trials, both the color and shape of two target objects swap locations. Under the assumptions of the object-file hypothesis, detection of a change on an object-interchange trial requires storage of either the color or shape (or both) of either one or both target objects:

p_object = P(store features 1 ∪ store features 2)
         = P(store features 1) + P(store features 2) − P(store features 1 ∩ store features 2)

where "store features i" refers to the event that either the color or shape (or both) of the ith target object is successfully stored. The probability of storing the features of one target object is

P(store features 1) = P(store features 2)
                    = P(encode c_1 ∪ encode s_1) P(maintain 1)
                    = \frac{k}{N} (d_c + d_s − d_c d_s)

Again, feature-encoding is assumed to be independent, so

P(encode c ∪ encode s) = P(encode c) + P(encode s) − P(encode c ∩ encode s)
                       = d_c + d_s − d_c d_s

The probability of storing either one or both features for both of the target objects is

P(store features 1 ∩ store features 2) = P(encode c_1 ∪ encode s_1) P(encode c_2 ∪ encode s_2) P(maintain 1 ∩ maintain 2)
                                       = \frac{k(k-1)}{N(N-1)} (d_c + d_s − d_c d_s)^2

Therefore,

(B.6)   p_object = \frac{2k}{N} (d_c + d_s − d_c d_s) − \frac{k(k-1)}{N(N-1)} (d_c + d_s − d_c d_s)^2

B.4 Dual-Feature-Interchange Trials

On dual-feature-interchange trials, the colors of two objects and the shapes of two

other objects swap locations—see Figure 5.1 on page 41 for an illustration. Under the


assumptions of the object-file hypothesis, detection of a dual-feature-interchange requires storage of either of the two colors that changed, or either of the two shapes that changed:

p_feature = P(store color 1 ∪ store color 2 ∪ store shape 3 ∪ store shape 4)
          = P(C_1 ∪ C_2 ∪ S_3 ∪ S_4)

where "store color i" refers to the event that the color of the ith target object is successfully stored and "store shape i" refers to the event that the shape of the ith target object is successfully stored. For brevity, I denote these four events such that C_i is equivalent to "store color i" and S_i is equivalent to "store shape i". Then the probability of detecting a change on dual-feature-interchange trials is

(B.7)   p_feature = P(C_1) + P(C_2) + P(S_3) + P(S_4)
                  − P(C_1 C_2) − P(C_1 S_3) − P(C_1 S_4) − P(C_2 S_3) − P(C_2 S_4) − P(S_3 S_4)
                  + P(C_1 C_2 S_3) + P(C_1 C_2 S_4) + P(C_1 S_3 S_4) + P(C_2 S_3 S_4)
                  − P(C_1 C_2 S_3 S_4)

We have already established that the probability of storing one target feature, f_1, is

\frac{k}{N} d_{f_1}

The probability of storing two target features, f_1 and f_2, is

\frac{k(k-1)}{N(N-1)} d_{f_1} d_{f_2}

The probability of storing three target features, f_1, f_2, and f_3, is

\frac{k(k-1)(k-2)}{N(N-1)(N-2)} d_{f_1} d_{f_2} d_{f_3}

The probability of storing all four target features, f_1, f_2, f_3, and f_4, is

\frac{k(k-1)(k-2)(k-3)}{N(N-1)(N-2)(N-3)} d_{f_1} d_{f_2} d_{f_3} d_{f_4}

Incorporating the above four formulae into Equation B.7 yields

(B.8)   p_feature = \frac{k}{N} (2 d_c + 2 d_s)
                  − \frac{k(k-1)}{N(N-1)} (d_c^2 + 4 d_c d_s + d_s^2)
                  + \frac{k(k-1)(k-2)}{N(N-1)(N-2)} (2 d_c^2 d_s + 2 d_c d_s^2)
                  − \frac{k(k-1)(k-2)(k-3)}{N(N-1)(N-2)(N-3)} d_c^2 d_s^2

B.5 Summary

Equations B.4, B.5, B.6, and B.8 are the probabilities of detecting each of the change-types in Experiment 4 predicted under the object-file hypothesis with unreliable feature-coding. While these equations are rather unwieldy, they specify the probability of detecting a change on the four change-types across numerosity using only three parameters—k, d_c, and d_s. Plugging each into Equation A.1 gives the quantitative predictions of the object-file hypothesis for hit rates in Experiment 4.
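For concreteness, the following R sketch (illustrative only; the function names and the parameter values in the example calls are mine, not taken from the original analysis code) evaluates Equations B.4, B.5, B.6, and B.8 and plugs them into Equation A.1.

# Illustrative R sketch of the Experiment 4 predictions.
m1 <- function(k, N) k / N
m2 <- function(k, N) k * (k - 1) / (N * (N - 1))
m3 <- function(k, N) k * (k - 1) * (k - 2) / (N * (N - 1) * (N - 2))
m4 <- function(k, N) k * (k - 1) * (k - 2) * (k - 3) /
                     (N * (N - 1) * (N - 2) * (N - 3))

p_color  <- function(k, N, dc) 2 * m1(k, N) * dc - m2(k, N) * dc^2          # B.4
p_shape  <- function(k, N, ds) 2 * m1(k, N) * ds - m2(k, N) * ds^2          # B.5
p_object <- function(k, N, dc, ds) {                                        # B.6
  e <- dc + ds - dc * ds
  2 * m1(k, N) * e - m2(k, N) * e^2
}
p_feature <- function(k, N, dc, ds) {                                       # B.8
  m1(k, N) * (2 * dc + 2 * ds) -
    m2(k, N) * (dc^2 + 4 * dc * ds + ds^2) +
    m3(k, N) * (2 * dc^2 * ds + 2 * dc * ds^2) -
    m4(k, N) * (dc^2 * ds^2)
}
hit <- function(p, g) g + (1 - g) * p                                       # A.1

# Example with arbitrary illustrative values (k = 3, d_c = 0.8, d_s = 0.6,
# g = 0.1) at numerosity N = 8:
hit(p_color(3, 8, 0.8), 0.1)
hit(p_shape(3, 8, 0.6), 0.1)
hit(p_object(3, 8, 0.8, 0.6), 0.1)
hit(p_feature(3, 8, 0.8, 0.6), 0.1)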


APPENDIX C

Model Fit for Experiment 1

The model was fit using the nlm function in the R statistical package (Ihaka & Gentleman, 1996). The nlm function takes two arguments, a function to minimize and a set of adjustable parameters. I gave it a function that first calculates the predicted accuracy given the current parameter values, and then calculates the sum-squared-error (SSE) between the predicted and observed values. The parameter values were a vector containing the capacity (k) and the seven guessing parameters (g_i, i = 2, 3, ..., 8). I selected data and minimized the SSE separately for each participant. I fit two separate models, one to the substitute data, and one to the interchange data.
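The sketch below shows the general shape of such an objective function and the nlm call; it is illustrative only. The argument names and the data objects (obs_same, obs_diff, same_acc, diff_acc) are placeholders, and hit_rate() refers to the function sketched at the end of Appendix A rather than to the original analysis code.

# Illustrative SSE objective for one participant: params[1] is the capacity k,
# the remaining elements are the guessing rates, one per numerosity.
sse <- function(params, numerosities, obs_same, obs_diff, t) {
  k <- params[1]
  g <- params[-1]
  pred_diff <- mapply(function(N, gi) hit_rate(N, k, t, gi), numerosities, g)
  pred_same <- 1 - g                    # correct-rejection rate, Equation A.2
  sum((obs_diff - pred_diff)^2) + sum((obs_same - pred_same)^2)
}

# fit <- nlm(sse, p = c(3, rep(0.1, 7)), numerosities = 2:8,
#            obs_same = same_acc, obs_diff = diff_acc, t = 2)
# fit$estimate   # fitted k and g_2, ..., g_8; fit$minimum is the SSE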

Table C.1 shows the individual and averaged capacity estimates and guessing rates for the substitute blocks and the interchange blocks separately. The nlm procedure performs an unconstrained search through the parameter space, so it is possible for it to find values that are outside the bounds of the model. In particular, there is no constraint that the guessing probabilities (g_i) remain in the range [0, 1]. In Table C.1, participant 4 has a guessing rate of −0.01 at numerosity 7 (g_7) on substitute trials: this is a slight deviation, and all the other guessing probabilities fall into a reasonable range.

Table C.2 shows mean observed and predicted probability correct from each fit (these are the data plotted in Figure 2.3 on page 22). Table C.3 shows the SSE, root-mean-squared-error (RMSE), and R² for each participant. Note that the averages reported here


are not the same as those reported in the text: For the text, I calculated the RMSE and R² for the averaged fits (i.e., the fits shown in Table C.2).

Substitute
Participant      k     g2     g3     g4     g5     g6     g7     g8
1             4.37   0.02   0.04   0.02   0.11   0.16   0.16   0.19
2             4.87   0.00   0.04   0.04   0.08   0.09   0.08   0.13
3             4.33   0.04   0.00   0.04   0.03   0.06   0.20   0.11
4             5.20   0.02   0.00   0.00   0.00   0.03  -0.01   0.06
5             4.41   0.00   0.02   0.02   0.07   0.19   0.12   0.11
6             4.21   0.02   0.00   0.06   0.20   0.25   0.34   0.29
7             4.98   0.00   0.02   0.02   0.04   0.14   0.11   0.13
8             3.86   0.02   0.00   0.00   0.09   0.06   0.08   0.20
Mean          4.53   0.02   0.02   0.03   0.08   0.12   0.14   0.15
95% CI        0.37   0.01   0.02   0.02   0.05   0.06   0.09   0.06

Interchange
Participant      k     g2     g3     g4     g5     g6     g7     g8
1             2.29   0.02   0.06   0.08   0.18   0.18   0.11   0.19
2             3.12   0.06   0.04   0.06   0.07   0.03   0.03   0.10
3             2.39   0.02   0.02   0.09   0.05   0.07   0.05   0.04
4             3.17   0.00   0.08   0.04   0.08   0.16   0.11   0.15
5             2.07   0.02   0.06   0.21   0.17   0.14   0.19   0.16
6             2.22   0.04   0.04   0.18   0.20   0.19   0.24   0.27
7             3.36   0.02   0.02   0.00   0.04   0.08   0.07   0.11
8             2.64   0.00   0.06   0.04   0.07   0.10   0.12   0.11
Mean          2.66   0.02   0.05   0.09   0.11   0.12   0.11   0.14
95% CI        0.41   0.02   0.02   0.06   0.06   0.05   0.06   0.06

Table C.1: Estimated parameters for each participant from Experiment 1 model fits. (95% CI: 95% confidence interval.)


                          Numerosity
Match             2      3      4      5      6      7      8
Substitute
Same
  Observed      0.98   0.98   0.97   0.92   0.89   0.87   0.87
  Predicted     0.98   0.98   0.97   0.92   0.88   0.86   0.85
Different
  Observed      0.99   0.97   0.95   0.85   0.81   0.72   0.68
  Predicted     1.00   1.00   1.00   0.91   0.79   0.70   0.63
Interchange
Same
  Observed      0.98   0.95   0.91   0.90   0.89   0.88   0.86
  Predicted     0.98   0.95   0.91   0.89   0.88   0.89   0.86
Different
  Observed      0.99   0.96   0.91   0.87   0.79   0.67   0.63
  Predicted     1.00   1.00   0.95   0.85   0.77   0.69   0.64

Table C.2: Mean observed and predicted probability correct for Experiment 1 as a function of match and numerosity for substitute and interchange trials.

Participant     SSE    RMSE     R²
Substitute
1             0.059   0.065   0.693
2             0.012   0.030   0.925
3             0.019   0.036   0.924
4             0.036   0.051   0.791
5             0.024   0.042   0.896
6             0.016   0.033   0.940
7             0.022   0.039   0.885
8             0.007   0.022   0.978
Mean          0.024   0.040   0.879
95% CI        0.014   0.011   0.078
Interchange
1             0.032   0.048   0.923
2             0.008   0.024   0.971
3             0.015   0.033   0.978
4             0.002   0.011   0.993
5             0.016   0.033   0.970
6             0.014   0.032   0.964
7             0.012   0.030   0.939
8             0.017   0.034   0.956
Mean          0.015   0.031   0.962
95% CI        0.007   0.009   0.019

Table C.3: Goodness-of-fit measures for each participant from Experiment 1 model fits, shown separately for substitute and interchange trials. (SSE: sum-squared-error; RMSE: root-mean-squared-error; 95% CI: 95% confidence interval.)


APPENDIX D

Model Fit for Experiment 2

Table D.1 shows the individual and averaged capacity estimates and guessing rates

for the substitute and interchange trials separately. Table D.2 shows mean observed and predicted probability correct from each fit (these are the data plotted in Figure 3.2 and Figure 3.3). Table D.3 shows the SSE, RMSE, and R² for each participant.


Substitute
Participant      k     g3     g4     g5     g6     g7     g8
1             5.55   0.01   0.07   0.08   0.04   0.05   0.06
2             4.37   0.03   0.02   0.06   0.07   0.12   0.09
3             5.79   0.00   0.03   0.02   0.03   0.09   0.07
4             4.76   0.01   0.02   0.04   0.03   0.06   0.09
5             3.88   0.01   0.01   0.03   0.04   0.14   0.03
6             5.66   0.02   0.01   0.04   0.05   0.07   0.04
7             4.96   0.00   0.03   0.05   0.09   0.07   0.08
8             3.79   0.02   0.07   0.05   0.11   0.10   0.17
9             3.68   0.02   0.08   0.11   0.08   0.19   0.15
10            3.15   0.01   0.03   0.05   0.02   0.02  -0.05
Mean          4.56   0.01   0.04   0.05   0.06   0.09   0.07
95% CI        0.66   0.01   0.02   0.02   0.02   0.03   0.04

Interchange
Participant      k     g3     g4     g5     g6     g7     g8
1             2.45   0.01   0.06   0.08   0.02   0.05   0.12
2             2.62   0.03   0.02   0.05   0.08   0.15   0.12
3             4.12   0.00   0.03   0.02   0.04   0.09   0.04
4             2.67   0.01   0.02   0.04   0.03   0.10   0.03
5             2.79   0.01   0.01   0.01   0.01   0.08   0.11
6             3.14   0.02   0.01   0.03   0.04   0.10   0.04
7             2.60   0.00   0.03   0.03   0.10   0.08   0.09
8             1.67   0.02   0.06   0.04   0.05   0.14   0.18
9             1.54   0.01   0.10   0.08   0.10   0.20   0.19
10            2.08   0.01   0.02   0.04   0.03   0.04  -0.01
Mean          2.57   0.01   0.04   0.04   0.05   0.10   0.09
95% CI        0.53   0.01   0.02   0.02   0.02   0.03   0.05

Table D.1: Estimated parameters for each participant from Experiment 2 model fits. (95% CI: 95% confidence interval.)


                          Numerosity
Change-Type            3      4      5      6      7      8
Substitute
Same
  Observed           0.99   0.96   0.95   0.95   0.92   0.93
  Predicted          0.99   0.96   0.95   0.94   0.91   0.93
Substitute-One
  Observed           0.97   0.91   0.85   0.78   0.69   0.63
  Predicted          1.00   0.96   0.88   0.77   0.68   0.60
Substitute-Two
  Observed           1.00   0.99   0.97   0.94   0.92   0.84
  Predicted          1.00   1.00   0.99   0.95   0.91   0.85
Interchange
Same
  Observed           0.99   0.96   0.95   0.95   0.92   0.93
  Predicted          0.99   0.96   0.96   0.95   0.90   0.91
Interchange-Two
  Observed           0.94   0.87   0.77   0.71   0.70   0.63
  Predicted          0.98   0.91   0.81   0.72   0.67   0.61
Interchange-Three
  Observed           0.99   0.97   0.92   0.88   0.85   0.79
  Predicted          1.00   0.99   0.95   0.88   0.82   0.76

Table D.2: Mean observed and predicted probability correct for Experiment 2 as a function of change-type and numerosity for substitute and interchange trials.


Participant     SSE    RMSE     R²
Substitute
1             0.012   0.026   0.881
2             0.012   0.026   0.966
3             0.030   0.041   0.652
4             0.031   0.041   0.845
5             0.070   0.062   0.824
6             0.016   0.030   0.860
7             0.025   0.038   0.860
8             0.119   0.081   0.654
9             0.047   0.051   0.859
10            0.022   0.035   0.975
Mean          0.038   0.043   0.838
95% CI        0.024   0.012   0.078
Interchange
1             0.077   0.066   0.731
2             0.012   0.025   0.945
3             0.024   0.037   0.734
4             0.028   0.040   0.890
5             0.058   0.057   0.719
6             0.051   0.053   0.709
7             0.049   0.052   0.791
8             0.043   0.049   0.914
9             0.062   0.059   0.866
10            0.027   0.039   0.948
Mean          0.043   0.048   0.825
95% CI        0.014   0.009   0.070

Table D.3: Goodness-of-fit measures for each participant from Experiment 2 model fits, shown separately for substitute and interchange trials. (SSE: sum-squared-error; RMSE: root-mean-squared-error; 95% CI: 95% confidence interval.)


APPENDIX E

Model Fit for Experiment 3

For Experiment 3, I fit one model to all of the data, collapsing across the suppression

and no-suppression conditions. Table E.1 shows the individual and averaged capacity estimates and guessing rates. Table E.2 shows mean observed and predicted probability correct (these are the data plotted in Figure 4.3 on page 37). Table E.3 shows the SSE, RMSE, and R² for each participant.

Participant      k     g5     g6     g7     g8     g9    g10
1             2.71   0.01   0.01   0.04   0.06   0.06   0.04
2             2.04   0.02   0.09   0.08   0.08   0.09   0.10
3             2.03   0.11   0.09   0.17   0.12   0.16   0.09
4             2.66   0.06   0.03   0.04   0.05   0.04   0.08
5             2.31   0.06   0.07   0.04   0.05   0.09   0.09
6             1.63   0.01   0.04   0.02   0.08   0.11   0.10
7             2.37   0.02   0.05   0.14   0.06   0.12   0.08
8             2.21   0.02   0.07   0.07   0.06   0.05   0.11
Mean          2.24   0.04   0.06   0.08   0.07   0.09   0.09
95% CI        0.30   0.03   0.02   0.04   0.02   0.03   0.02

Table E.1: Estimated parameters for each participant from Experiment 3 model fits. (95% CI: 95% confidence interval.)


                          Numerosity
Change-Type            5      6      7      8      9     10
Same
  Observed           0.95   0.94   0.93   0.94   0.92   0.93
  Predicted          0.96   0.94   0.92   0.93   0.91   0.91
Interchange-Two
  Observed           0.72   0.66   0.62   0.55   0.54   0.47
  Predicted          0.76   0.67   0.60   0.54   0.51   0.47
Interchange-Three
  Observed           0.91   0.85   0.79   0.72   0.65   0.64
  Predicted          0.93   0.85   0.78   0.71   0.66   0.62

Table E.2: Mean observed and predicted probability correct for Experiment 3 as a function of change-type and numerosity.

Participant     SSE    RMSE     R²
1             0.034   0.044   0.919
2             0.020   0.034   0.962
3             0.025   0.038   0.939
4             0.026   0.038   0.937
5             0.021   0.034   0.961
6             0.030   0.040   0.965
7             0.033   0.043   0.923
8             0.040   0.047   0.923
Mean          0.029   0.040   0.941
95% CI        0.006   0.004   0.016

Table E.3: Goodness-of-fit measures for each participant from Experiment 3 model fits. (SSE: sum-squared-error; RMSE: root-mean-squared-error; 95% CI: 95% confidence interval.)


APPENDIX F

Model Fit for Experiment 4

Table F.1 shows the individual and averaged capacity estimates, discrimination probabilities, and guessing rates for the Experiment 4 model fit. Table F.2 shows mean observed and predicted probability correct (these are the data plotted in Figure 5.6 on page 49). Table F.3 shows the SSE, RMSE, and R² for each participant.


Participant      k     dc     ds     g4     g6     g8
1             4.83   0.47   0.55   0.09   0.17   0.11
2             2.86   0.86   0.59   0.06   0.09   0.28
3             1.56   0.97   0.66   0.08   0.03   0.03
4             3.49   0.77   0.64   0.04   0.15   0.00
5             2.42   0.81   0.69   0.00   0.14   0.15
6             3.84   0.67   0.53   0.06   0.07   0.23
7             3.12   0.67   0.66   0.01   0.09   0.14
8             1.90   0.84   0.72   0.06   0.06   0.09
9             2.01   0.77   0.49   0.11   0.09   0.18
10            3.90   0.63   0.40   0.07   0.06   0.04
Mean          2.99   0.75   0.59   0.06   0.09   0.12
95% CI        0.74   0.10   0.07   0.03   0.03   0.06

Table F.1: Estimated parameters for each participant from Experiment 4 model fits. (95% CI: 95% confidence interval.)

                               Numerosity
Change-Type                    4      6      8
Same
  Observed                   0.93   0.89   0.89
  Predicted                  0.94   0.91   0.88
Dual-Feature-Interchange
  Observed                   0.97   0.87   0.75
  Predicted                  0.96   0.85   0.74
Object-Interchange
  Observed                   0.88   0.70   0.64
  Predicted                  0.90   0.73   0.62
Color-Interchange
  Observed                   0.80   0.64   0.54
  Predicted                  0.81   0.64   0.54
Shape-Interchange
  Observed                   0.67   0.54   0.48
  Predicted                  0.70   0.55   0.47

Table F.2: Mean observed and predicted probability correct for Experiment 4 as a function of change-type and numerosity.

Participant     SSE    RMSE     R²
1             0.028   0.043   0.925
2             0.046   0.055   0.837
3             0.072   0.069   0.919
4             0.024   0.040   0.929
5             0.120   0.089   0.738
6             0.017   0.033   0.933
7             0.077   0.071   0.783
8             0.040   0.052   0.936
9             0.061   0.064   0.894
10            0.014   0.031   0.973
Mean          0.050   0.055   0.887
95% CI        0.024   0.013   0.054

Table F.3: Goodness-of-fit measures for each participant from Experiment 4 model fit. (SSE: sum-squared-error; RMSE: root-mean-squared-error; 95% CI: 95% confidence interval.)

REFERENCES

Awh, E., Jonides, J., & Reuter-Lorenz, P. A. (1998). Rehearsal in spatial working memory. Journal of Experimental Psychology: Human Perception and Performance, 24, 780–790.

Baddeley, A. D. (1986). Working Memory. Oxford, UK: Clarendon.

Coltheart, M. (1984). Sensory memory: A tutorial review. In H. Bouma & D. G. Bouwhuis (Eds.), Attention and Performance X: Control of Language Processes (pp. 259–285). Hillsdale, NJ: Erlbaum.

Dennis, J. E., & Schnabel, R. B. (1983). Numerical Methods for Unconstrained Optimization and Nonlinear Equations. Englewood Cliffs, NJ: Prentice-Hall.

Hogg, R. V., & Craig, A. T. (1995). Introduction to Mathematical Statistics (5th ed.). Upper Saddle River, NJ: Prentice-Hall.

Ihaka, R., & Gentleman, R. (1996). R: A language for data analysis and graphics. Journal of Computational and Graphical Statistics, 5, 299–314.

Irwin, D. E., & Andrews, R. V. (1996). Integration and accumulation of information across saccadic eye movements. In T. Inui & J. L. McClelland (Eds.), Attention and Performance XVI: Information Integration in Perception and Communication (pp. 125–155). Cambridge, MA: MIT Press.

Isenberg, L., Nissen, M. J., & Marchak, L. C. (1990). Attentional processes and the independence of color and orientation. Journal of Experimental Psychology: Human Perception and Performance, 16, 869–878.

Jiang, Y., Olson, I. R., & Chun, M. M. (2000). Organization of visual short-term memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 26, 683–702.

Kahneman, D., Treisman, A., & Gibbs, B. J. (1992). The reviewing of object files: Object-specific integration of information. Cognitive Psychology, 24, 175–219.

Luck, S. J., & Vogel, E. K. (1997). The capacity of visual working memory for features and conjunctions. Nature, 390, 279–281.

Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63, 81–97.

Monheit, M. A., & Johnston, J. C. (1994). Spatial attention to arrays of multidimensional objects. Journal of Experimental Psychology: Human Perception and Performance, 20, 691–708.

Morris, C. C. (1992). Using the IBM-compatible microcomputer's serial port as an input-output interface. Behavior Research Methods, Instruments, & Computers, 24, 456–460.

Nissen, M. J. (1985). Accessing features and objects: Is location special? In M. I. Posner & O. S. Marin (Eds.), Attention and Performance XI (pp. 205–219). Hillsdale, NJ: Erlbaum.

Pashler, H. (1988). Familiarity and visual change detection. Perception & Psychophysics, 44, 369–378.

Phillips, W. A. (1974). On the distinction between sensory storage and short-term visual memory. Perception & Psychophysics, 16, 283–290.

Saiki, J. (2003). Feature binding in object-file representations of multiple moving items. Journal of Vision, 3, 6–21.

Schneider, W., Eschman, A., & Zuccolotto, A. (2002a). E-Prime Reference Guide. Pittsburgh, PA: Psychology Software Tools, Inc.

Schneider, W., Eschman, A., & Zuccolotto, A. (2002b). E-Prime User's Guide. Pittsburgh, PA: Psychology Software Tools, Inc.

Sperling, G. (1960). The information available in brief visual presentations. Psychological Monographs, 74, (11, Whole No. 498).

Sperling, G. (1969). Successive approximations to a model for short-term memory. In R. N. Haber (Ed.), Information-Processing Approaches to Visual Perception (pp. 32–37). New York: Holt, Rinehart, and Winston.

Stefurak, D. L., & Boynton, R. M. (1986). Independence of memory for categorically different colors and shapes. Perception & Psychophysics, 39, 164–174.

Sternberg, S. (1969). Memory scanning: Mental processes revealed by reaction-time experiments. American Scientist, 57, 421–457.

Treisman, A. M., & Gelade, G. (1980). A feature-integration theory of attention. Cognitive Psychology, 12, 97–136.

Van Essen, D. C., & Anderson, C. H. (1990). Information processing strategies and pathways in the primate retina and visual cortex. In S. F. Zornetzer, J. L. Davis, & C. Lau (Eds.), An Introduction to Neural and Electronic Networks (pp. 43–72). Orlando, FL: Academic Press.

Vogel, E. K., Woodman, G. F., & Luck, S. J. (2001). Storage of features, conjunctions, and objects in visual working memory. Journal of Experimental Psychology: Human Perception and Performance, 27, 92–114.

Wheeler, M. E., & Treisman, A. M. (2002). Binding in short-term visual memory. Journal of Experimental Psychology: General, 131, 48–64.

Xu, Y. (2002). Limitations of object-based feature encoding in visual short-term memory. Journal of Experimental Psychology: Human Perception and Performance, 28, 458–468.


ABSTRACT

REPRESENTATION AND PROCESSING OF OBJECTS AND OBJECT FEATURES

IN VISUAL WORKING MEMORY

by

David Fencsik

Chair: David E. Meyer

Past studies on visual working memory (WM) have inspired the hypothesis that people

use WM to temporarily retain a limited amount of visual information from their environ-

ment for current task performance (e.g., Vogel, Woodman, & Luck, 2001). According to

this hypothesis, information that is retained is stored in the form of independent object

files, each containing all the features of a single object. However, the results of these past

studies are difficult to interpret, and the assumptions of the object-file hypothesis have not

been carefully tested. In this dissertation, I present a model of performance for a visual

change-detection task based on the premise that visual WM stores object files. This model

makes quantitative predictions regarding performance in the task and provides estimates

of visual WM capacity. I then describe new experiments that test several assumptions

of the model. The qualitative results of the experiments and the quantitative consistency

between predicted and observed data suggest that people have independent storage for

approximately three object files in visual WM.