Vision and Color Meeting: Vision Sessions

Optical Society of America and University of California Irvine Vision and Color meeting

October 13th – 15th

UCI Student Center

Vision Sessions October 13th and 14th, 2001

Saturday

A. Noise limitations in early vision

AV1. Response nonlinearity and internal noise revealed by psychophysical means. LEONID L. KONTSEVICH, CHIEN-CHUNG CHEN AND CHRISTOPHER W. TYLER, Smith-Kettlewell Eye Research Institute, 2318 Fillmore Street, San Francisco, CA 94115—Our theoretical analysis shows how the separate nonlinearities of signal transduction and its accompanying noise can be derived from the measurements of incremental and decremental thresholds over a wide range of contrasts. The resulting nonlinearity estimates represent a signature of the visual processing stage that limits the human observer's performance. The analysis was applied to contrast discrimination thresholds measured for sustained and transient Gabor patches with a 3 cycle/deg spatial carrier. In both cases the predominant noise was found to be multiplicative, with a power exponent of 0.76 - 0.85, and the source of this noise was preceded by an accelerating signal transducer with power 2 - 2.7. These exponents combine to account for the classic compressive power of about 0.4 for the signal-to-noise ratio in contrast discrimination. Similarity between the nonlinearities for sustained and transient stimuli suggests that for our stimuli transient and sustained signals are processed in the same pathway. The estimated transducer acceleration suggests that there is a direct computation of contrast energy in the visual cortex.

AV2. Detecting and discriminating signals in correlated noise and statistically defined backgrounds. ARTHUR BURGESS, Radiology Dept., Brigham and Women's Hospital, Harvard Medical School, Boston, MA 02115—Detection and discrimination of lesions (signals) in medical images is limited by imaging system noise and normal patient structure. Insight into the effects of these limiting factors can be gained by doing carefully designed experiments with a variety of decision tasks. The results are then used to develop and evaluate models of human visual signal detection, discrimination and identification. The modelling approach is based on statistical decision theory, which allows one to specify optimum algorithms and calculate ideal Bayesian observer performance for simple, precisely defined tasks. It should be noted that this is a study of high-level decision tasks requiring the use of prior knowledge; it is distinct from the study of mechanisms in early stages of the visual system. First, performance of tasks limited by uncorrelated (white) noise will be discussed. Human efficiency for these tasks is surprisingly high - typically 20 to 60% relative to the ideal observer. Then, performance of tasks using correlated noise and simulated, statistically-defined backgrounds will be discussed. The goal was to determine whether humans can compensate for correlations (prewhiten). Three classes of models have been investigated - (1) nonprewhitening, (2) partially prewhitening (due to spatial frequency channels) and (3) fully prewhitening. Partial prewhitening models agree best with human results. Finally, investigations using signals and backgrounds obtained from real medical images such as cardiac angiograms and mammograms will be described.

AV3. Thresholds and Noise. THEODORE E. COHN AND ERIC HORNSTEIN, Visual Detection Lab, U. C. Berkeley, Berkeley, CA 94720-2020 and Dept. of Ophthalmology, UCSF, San Francisco, CA 94143-0730—Question: How does human observer performance compare to ideal observer performance? We show two situations where an internal noise leads to significant departure from ideal observer behavior. Consider the visual threshold phenomena of Weber's, Ricco's and Bloch's Laws. These three laws depart markedly from the ideal observer counterparts of de Vries-Rose, Piper's and Pieron's Laws, referred to as the square-root laws of quantum fluctuation theory. Consider small, relatively brief uniform spots of light. The square-root laws describing threshold as a function of area, duration and background are often observed but never simultaneously. This latter fact alone has perhaps discouraged a generation of psychophysicists from attaching significance to the occurrence of these laws. But it may be equally discouraging that linear laws like those of Bloch and Ricco are often said to be laws of "complete summation". We show how simple, physiologically plausible degradations of ideal observer structure, related to summation limitations, can cause Ricco's and Bloch's Laws. The key to understanding is the existence of more than one source of noise. Next, consider stimuli that vary sinusoidally in time (or in space). Psychophysical determination of the contrast threshold is thought to reflect a filter function. We measured the temporal contrast sensitivity function (TCSF) in a photoreceptor using an adaptation of a
psychophysical staircase procedure. We show that the TCSF does not mirror the filter at low background where more than one source of noise exists.

AV4. Some Classification Image Methodology Issues. ALBERT J. AHUMADA, JR., NASA Ames Research Center—This paper will review the classification image method in conjunction with the frozen noise method for estimating observer internal noise. Together these methods can estimate linear observer classification functions and test specific hypotheses about these functions. Simplified derivations of near-optimal weight estimates will be presented. Simulations of linear channel models using nonlinear combination rules will be used to demonstrate that the shapes of the channels are not necessarily revealed by the method.

B. Optimal observers in perception and cognition

BV1. Bayesian Natural Selection and the Evolution of Optimal Detection. W.S. GEISLER AND R.L. DIEHL, University of Texas, Austin, TX, USA—There can be little doubt that the statistical properties of the environment drive the design of perceptual systems, and thus there has been a great deal of interest recently in measuring the statistics of natural images. We show that Bayesian statistical decision theory provides the appropriate framework for exploring the formal link between the statistics of the environment and the evolving genome. The Bayesian framework is summarized by the fundamental equation below, which describes how the expected number of organisms carrying allele vector a at time t+1 is related to the number of organisms carrying that allele vector at time t, the prior probability of a state of the environment at time t, p(ω; t), the likelihood of a stimulus given the state of the environment, p(s | ω), the likelihood of a response given the stimulus, p_a(r | s), and the growth factor (1 + birth rate - death rate) given the response and the state of the environment, γ_a(r, ω):

O_a(t+1) = Σ_ω Σ_s Σ_r O_a(t) p(ω; t) p(s | ω) p_a(r | s) γ_a(r, ω)

We demonstrate with examples how this Bayesian approach neatly divides natural selection into appropriate parts, which can be measured individually and then combined to model the whole process. We also formulate a maximum-fitness ideal observer and demonstrate some of the circumstances under which natural selection achieves ideal performance.
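
For readers who want to see the bookkeeping behind the equation above, the sketch below iterates it for a toy environment with discrete states, stimuli, responses and allele vectors. All sizes, probabilities and growth factors here are made up for illustration; they are not taken from Geisler and Diehl's work.

```python
import numpy as np

# Toy problem: 2 allele vectors, 3 environmental states, 4 stimuli, 2 responses.
rng = np.random.default_rng(0)
n_a, n_w, n_s, n_r = 2, 3, 4, 2

O = np.array([100.0, 100.0])                                # O_a(t): organisms per allele vector
p_w = np.array([0.5, 0.3, 0.2])                             # p(omega; t): prior over states
p_s_given_w = rng.dirichlet(np.ones(n_s), size=n_w)         # p(s | omega), each row sums to 1
p_r_given_s = rng.dirichlet(np.ones(n_r), size=(n_a, n_s))  # p_a(r | s)
gamma = 1.0 + rng.uniform(-0.1, 0.3, size=(n_a, n_r, n_w))  # gamma_a(r, omega): growth factor

def step(O):
    """One generation of the fundamental equation:
    O_a(t+1) = sum_w sum_s sum_r O_a(t) p(w; t) p(s|w) p_a(r|s) gamma_a(r, w)."""
    O_next = np.zeros_like(O)
    for a in range(n_a):
        for w in range(n_w):
            for s in range(n_s):
                for r in range(n_r):
                    O_next[a] += (O[a] * p_w[w] * p_s_given_w[w, s]
                                  * p_r_given_s[a, s, r] * gamma[a, r, w])
    return O_next

for t in range(5):
    O = step(O)
print(O)  # the allele vector whose responses earn the larger expected growth factor pulls ahead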

BV2. Visual Statistics Pollution: should we worry? BENJAMIN T. BACKUS, Dept. of Psychology, University of Pennsylvania—Visually puzzling stimuli are fascinating.

Perhaps this is because the visual system needs time to learn the subtle distinctions that underlie the construction of accurate visual percepts. It seems plausible that novel stimuli, for example those that violate previously reliable statistical contingencies, would engage specific learning processes within the visual system. An example of a puzzling, fascinating stimulus is a crisp image projected onto the sidewalk that attracts attention because it looks like a property of the sidewalk surface rather than the illuminant. Another is the printed fuzzy "shadow" that evokes three-dimensionality for lettering. Older and more familiar statistically anomalous anthropogenic objects include photographs, movies, brightly colored objects that are neither flowers nor fruit nor poisonous tree frogs, holograms on credit cards, etc. The concern is this: what used to be statistically reliable cues have been rendered (so to speak) ambiguous. Commercial exploitation of visual cues to evoke nonveridical percepts is likely to continue, because the resulting stimuli are interesting to look at. As a consequence, it may be getting harder to build reliable percepts. Is visual statistics pollution (VSP) already degrading perceptual performance? How can we quantify the threat? A theoretical ideal observer that uses Bayesian inference provides, at least in principle, a means for determining whether visual stimuli are objectively confusable. Experimentation may be needed to determine the actual vulnerability of the visual system to various sorts of VSP.

BV3. Mixture models and the probabilistic structure of monocular depth cues. DAVID C. KNILL, Center for Visual Science, University of Rochester—Monocular cues to depth derive their informativeness from a combination of perspective projection and prior constraints on the way scenes in the world are structured. For many cues, the appropriate priors are best described as mixture models; that is, the relevant parameters of an object or scene are created by one of a set of discrete generative models (some surface contours are planar cuts through surfaces, some are geodesic, etc.). Different generative models lead to different priors, which, in turn, determine the geometrical structures for cues. This simple observation has important ramifications for our understanding of structure-from-X problems. In particular, it raises to the forefront the question of how the visual system selects, based on image data, the appropriate prior to use for cue interpretation. I will discuss an analysis of the structure of the posterior distribution, p(3D structure | Image data), in the presence of mixed priors which reveals a number of fundamental features of the prior selection problem. The results show how the information provided by a cue can, by itself, reliably determine the appropriate prior to use when interpreting the cue. It also provides an objective and quantitative approach to characterizing many non-linear cue interactions which can arise in a multiple cue context. I will illustrate the analysis as applied to the problem of
estimating surface shape from texture, which suggests a number of novel predictions for human perception.

BV4. Comparison of two weighted integration models of the cueing task: optimal Bayesian and linear. STEVEN S. SHIMOZAKI, MIGUEL P. ECKSTEIN, Department of Psychology, University of California, Santa Barbara, CA, 93106 AND CRAIG K. ABBEY, Department of Biomedical Engineering, University of California, Davis, CA, 95616—The standard cueing task involves detecting a signal appearing at one of two locations, with a precue giving the probable signal location (Posner, Quart. J. Exp. Psych., 32, 3-25). Typically, a cueing effect is found, in which a valid cue (signal at cued location) leads to better performance than an invalid cue (signal at uncued location). Recently, two models have been presented with a weighted integration of information across the cued and uncued locations. One model proposes a linear combination (Kinchla, et al., Perc. Psychophys., 57, 441-450), while the second employs an optimal Bayesian combination (Shimozaki, et al., Invest. Ophth. Vis. Sci., 42, S867). They represent a challenge to common explanations of the cueing effect, such as limited attentional capacity, as both exhibit a cueing effect without changing the perceptual quality at the cued and uncued locations. To compare the two models, two observers performed a cued discrimination (80% validity, 150 msec precue) of a signal (σ = 12.4 arc min visual angle) presented for 50 msec in white noise (σ = 2.93 cd/m2, mean = 25.0 cd/m2) over a contrast range of 0.78% to 16.4% (on a pedestal of 6.25%). Overall, the Bayesian model gave a better fit (Bayesian, χ2(36) = 4.79; Linear, χ2(36) = 69.4), as the linear model overestimated the cueing effect at higher contrasts. Thus, the Bayesian model appears to be the more suitable model over a larger range of signal contrasts.

BV5. Decay of visual memory is due to decreased signal, not increased noise. JASON M. GOLD, Indiana University, USA; ROBERT SEKULER, Brandeis University, USA; RICHARD F. MURRAY, University Of Toronto, Canada; ALLISON B. SEKULER, PATRICK J. BENNETT, McMaster University, Canada—Memory for visual patterns decays with the passage of time. This decay could be due to a reduction in internal signal strength, an increase in internal variability, or both. To distinguish among these possibilities, we estimated the effect of time delay on internal noise and signal strength by measuring observers' equivalent input noise, response consistency, and calculation efficiency in a visual pattern matching task. The task required observers to perform same/different discriminations with pairs of randomly generated 2-4 c/image band-pass filtered noise fields embedded in Gaussian white noise and separated by either a short (100 ms) or long (2000 ms) time delay. Increasing the time delay between stimuli had no effect on additive internal noise but reduced calculation efficiency by about 200%.

The consistency of responses between two passes through the stimulus set in high contrast external noise did not change with time delay, demonstrating that the passage of time had no effect on signal-dependent internal noise. These results imply that visual memory's decay over time is due only to a reduction in internal signal strength. In a subsequent experiment, response classification analyses revealed that at least part of this reduction was due to observers' greater reliance on uninformative frequencies outside of the stimulus pass-band at the longer time delay.

BV6. New methods for assessing the optimality of human observers. STANLEY A. KLEIN1, THOM CARNEY2, DENNIS M. LEVI3, SUKO TOYOFUKU1, 1U C Berkeley, 2Neurometrics Institute, 3University of Houston—Introduction. Noise added to stimuli allows one to assess the optimality of observer performance. Cross-correlating the observer's response with trial-by-trial noise produces a classification image (CI) corresponding to the observer's template. A more efficient linear regression method is described which can reduce the number of trials needed for the same quality CI as long as fewer than 20 parameters need to be estimated. Methods. We applied this new method to detection and discrimination of Gabor-like stimuli of different bandwidths and frequency components to measure the efficiency of spatial and spatial frequency summation under different scrutiny conditions. We also measured position discrimination of Gabor-like stimuli. The stimuli and noise were sums of up to 11 frequency components. Results. In most cases the CI shape differed from the ideal observer prediction. The amount of spatial summation (bandwidth of the CI) was sensitive to exposure duration and type of fixation cues. In a replication of the Graham-Nachmias subthreshold 1st plus 3rd harmonic summation experiment we found the amount of summation depended on the duration allowed for scrutiny. Anomalous CIs were found. The position task revealed that an inefficient peak detector was used rather than an ideal template. Conclusion. We measured classification images and efficiency of human observers in detection and position discrimination tasks. It was found that the templates that are used are often suboptimal for understandable reasons.

C. Perceptual Learning

CV1. Perceptual learning: Psychophysics and electrophysiology. MANFRED FAHLE, University of Bremen—Perceptual learning is any relatively permanent change in the perception of a stimulus following practice. Often, the learning is independent of conscious experience and does not lead to "knowing that" as in declarative memory but to implicit memory, to "knowing how". Similar to procedural learning, perceptual learning seems to directly modify the neuronal pathways active
during processing of the task, while not requiring an intermediate consolidation storage. Minute visual features, smaller than the diameter and spacing of foveal photoreceptors, can serve as extremely sensitive probes to monitor improvement through training in visual discrimination tasks. The results of recent investigations of perceptual learning demonstrate a surprising specificity of improvement through training for stimulus attributes such as orientation, position and exact type of task, as well as for the eye used during training. Control experiments rule out a significant contribution of motor components of learning, for example improving the quality of fixation and/or accommodation. The specificity of learning as well as the occurrence of short latency changes in cortically evoked responses over the human occipital cortex argue strongly in favour of an involvement of primary visual cortex in perceptual learning, as does perceptual learning in amnesic patients. On the other hand, perceptual learning is not just a feat of primary visual cortex, but involves massive top-down influences from higher cortical areas, since attention as well as error feedback exert strong influences on learning speed; and there is no transfer between different tasks that all use the same type of early orientation-specific filter.

CV2. Learning to See in Three Dimensions. ROBERT JACOBS, Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627-0268—Why is seeing the world in three dimensions so easy? We believe that this ease is due to the fact that the visual world is highly redundant; there are many cues to perceptual properties such as depth and shape. However, combining information from multiple cues in an effective manner is non-trivial. We argue that people must learn their cue combination strategies on the basis of experience. In particular, we address the question of whether or not observers can adapt their visual cue combination strategies on the basis of consistencies between visual and haptic (touch) percepts. Berkeley (1709), Piaget (1952), and many others speculated that people calibrate their interpretations of visual cues on the basis of their motor interactions with objects in the world. Despite the intuitive appeal of this hypothesis, it has never been adequately tested. Using a virtual reality environment, we have conducted three experiments whose results suggest that observers adapt their visual cue combination strategies based on correlations between visual and haptic percepts.

CV3. Retuning Perceptual Templates through Perceptual Learning. ZHONG-LIN LU, Laboratory of Brain Processes (LOBES), Department of Psychology, Univ. of Southern California, Los Angeles, CA 90089 AND BARBARA ANNE DOSHER, Memory, Attention and Perception (MAP) Laboratory, Department of Cognitive Science, University of California, Irvine, CA 92697—We applied the PTM approach1,2 to investigate the mechanisms and scale invariance properties of perceptual learning. Observers judged the orientation of a Gabor as tilted +8 or -8 deg from 45 deg in ten practice sessions. Using the external noise paradigm, one of eight levels of external noise was superimposed on the Gabor. Contrast thresholds were measured at two performance levels in each external noise condition. We observed substantial improvements in performance in high noise, with no significant improvement in zero/low external noise. PTM modeling identified a pure mechanism of perceptual template retuning. In a second experiment, we found that observers did not show any further performance improvement at a new, closer, viewing distance. A major theoretical strength of the PTM approach is the ability to distinguish different mechanisms of attention or perceptual learning. In the domain of visual attention, pure cases of external noise exclusion3,4 and stimulus enhancement1,4 have been documented. This study provides the first empirical demonstration of an isolable mechanism of perceptual learning. 1Lu, Z.-L., & Dosher, B. (1998). Vision Res., 38, 1183-1198. 2Dosher, B., & Lu, Z.-L. (1999). Vision Res., 39, 3197-3221. 3Dosher, B. & Lu, Z-L. (2000). Psychol Sci., 11, 139-146. 4Lu, Z.-L. & Dosher, B. (2000). JEP: Human Percep. Perform., 26, 1534-1548.

CV4. Neural and functional effects of long-term visual deprivation. I. FINE1, A. R. WADE2, A. A. BREWER2, G.M. BOYNTON3, B.A. WANDELL2 AND D.I.A. MACLEOD1, 1Psychology Department, UCSD, 2Stanford University, 3Salk Institute—We examined visual processing in a patient (MM) whose corneal damage led to severe contrast deprivation from age 3 to 43. Five months postoperatively MM's spatial resolution remained poor: his psychophysically measured resolution limit was 1.3 cpd. A year postoperatively this resolution limit improved to 2 cpd. Spatial tuning, measured in area V1 using fMRI, was similar. Unlike in normal observers, fMRI responses to counterphase gratings were weaker in V1 than in MT+, and responses were even weaker in higher areas (V2, V3, V4). Psychophysically, MM had little difficulty identifying simple shape cues, such as orientation and occlusion. However, he could not interpret more complex cues, such as illusory contours, perspective, and shape from shading. He also had difficulty identifying faces and objects. MM had less difficulty with complex motion cues, such as biological motion, KDE, and form from motion. Consistent with these results, responses in MT/MST were relatively normal, and retinotopic organization was relatively normal in V1. However, responses in higher visual areas were weak and disorganized, and faces and common objects did not produce the activity normally found in the fusiform gyrus. These results suggest that V1 and MT/MST were less susceptible to visual deprivation than other visual areas.
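
As a rough illustration of the external-noise logic used in CV3 above (and in EV1 below), the sketch that follows computes threshold contrast as a function of external noise for one common parameterization of a perceptual-template-model observer, and shows why retuning the template (reducing the impact of external noise) lowers thresholds only in high noise. The parameter values are illustrative, not the fitted values from those studies.

```python
import numpy as np

def ptm_threshold(N_ext, beta=1.0, gamma=2.0, N_add=0.01, N_mul=0.2, d_prime=1.5):
    """Threshold contrast for a PTM-style observer, assuming
    d' = (beta*c)**gamma / sqrt(N_ext**(2*gamma) + N_mul**2*(beta*c)**(2*gamma) + N_add**2)."""
    num = d_prime**2 * (N_ext**(2 * gamma) + N_add**2)
    den = 1.0 - d_prime**2 * N_mul**2      # must stay positive for a finite threshold
    return (num / den) ** (1.0 / (2 * gamma)) / beta

noise = np.linspace(0.0, 0.33, 8)          # external noise contrasts
before = ptm_threshold(noise)
after = ptm_threshold(0.6 * noise)         # template retuning modeled as filtering out
                                           # 40% of the external noise, and nothing else
print(np.round(before, 3))
print(np.round(after, 3))                  # identical at zero noise, lower in high noise
```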

CV5. Perceptual Learning in Peripheral Vision. SUSANA T.L. CHUNG, Indiana University, DENNIS M. LEVI, University of California at Berkeley, BOSCO S. TJAN, University of Southern California—Performance for a variety of visual tasks improves with practice. This "perceptual learning" occurs in foveal vision, as well as in peripheral vision. The purpose of this study was to determine the functional nature of perceptual learning in peripheral vision. To do so, we tracked changes in contrast thresholds for identifying single letters presented at 10° in the inferior visual field, over a period of six consecutive days. The letters (26 lowercase Times-Roman letters, subtending 1.7°) were embedded within static two-dimensional Gaussian luminance noise. We also measured observers' response consistency using a double-pass technique on Days 1, 3 and 6. Two additional blocks were tested on each of these days at high and low noise levels. These additional blocks were exact replicates of the corresponding block at the same noise contrast that was tested on the same day. Our results showed that following six days of training, all five observers showed a decrease in contrast thresholds, particularly at low noise levels. However, observers varied in the degree of improvement that could be attributed to a reduction in intrinsic noise (up to 60% reduction) or an increase in calculation efficiency (up to a factor of 2). We found no significant change in response consistency for any of our observers. We conclude that there are individual differences in the nature of learning in peripheral vision, with some observers learning selectively in low-noise conditions, while others learn uniformly across all conditions.

D. Optical factors in Visual Resolution

DV1. Retinal contrast losses and visual resolution with obliquely incident light. MATTHEW J. MCMAHON, University of Washington, Department of Biological Structure—Cones are much less efficient in capturing light with increasing angle of incidence. Under normal viewing conditions, this is accompanied by a large loss in visual performance. These experiments measured the extent to which the retina contributes to this decrease in performance. To isolate the optical effects of the retina, the anterior optics were bypassed by producing gratings directly on the retina with laser interference. This technique could produce the same pattern on the retina with various directions of incidence of the two interfering beams. To isolate the effective contrast at the level of the cones, we exploited the phenomenon of contrast-modulation flicker. When the contrast of a fringe pattern is rapidly altered between 0 and 100%, a local nonlinearity generates a brightness flicker which is proportional to the contrast seen by the cones. Our methods allowed us to examine the interaction between light and the cone mosaic at a very fine spatial scale. Contrast was slightly reduced with oblique incidence, but the reduction was too small to be an important factor in visual resolution. Apparently, light which has escaped from, or been scattered by, nearby cones does not significantly reduce contrast. These measurements show that the contribution of the retina to the large losses in visual performance with obliquely incident light is only minor. Previous interferometric and double-pass measurements of the optical quality of the eye have also provided information about retinal sources of contrast loss. The relationship between these results and our measurements will also be discussed.

DV2. Angular tuning properties of individual cone photoreceptors. AUSTIN ROORDA, University of Houston—The angular tuning properties of a series of individual cone photoreceptors are measured through a direct imaging method using an ophthalmoscope equipped with adaptive optics. Results show that there is disarray in the mosaic, but it is very small. This small amount of disarray accounts for less than 1% of the spread in the overall tuning measured for an ensemble of cones.

DV3. When Improving the Eye's Optics Makes Vision Worse. DAVID R. WILLIAMS, Center for Visual Science, University of Rochester, Rochester, NY 14450—This talk reviews what is known about the relationship between optical and neural factors in spatial vision. From interference fringe experiments, it is well known that improving the optical quality of normal eyes can disrupt vision due to aliasing. More recently, adaptive optics has provided a way to reduce the monochromatic aberrations in the eye so that not only gratings but also more complex stimuli can be viewed at supernormal retinal image contrasts. Such stimuli reveal additional reasons why improving the eye's optics can be deleterious. Chromatic aliasing is exacerbated. Moreover, chromatic fringes caused by chromatic aberration, while not especially noticeable in normal viewing, become obvious when monochromatic aberrations are reduced. The talk will also describe new evidence from grating and letter acuity experiments that post-receptoral mechanisms serving normal foveal vision may be ill-equipped to represent high spatial frequency, high contrast stimuli that lie outside the range of normal visual experience. These phenomena limit the value of correcting higher order aberrations with customized contact lenses and refractive surgery in normal eyes, though substantial improvements in visual performance could be achieved in eyes that have large amounts of higher order aberrations.

DV4. Does the retinal mosaic limit visual resolution? DONALD I.A. MACLEOD, JEFF JUDSON AND WILLIAM WIGFIELD, Psychology Department, UCSD—A barely resolvable grating target has a period of about twice the foveal cone spacing, allowing adjacent photoreceptors to be in register with light and dark bars. This has been
viewed as a "Nyquist limit" situation, where higher spatial frequencies could not be represented veridically but would be perceived only as aliases. The perceived luminance profile across such a grating may, however, be based on output from many rows of cones, as the extent of cortical receptive fields implies; this would make the effective sampling density and the sampling "limit" many times greater than the observed resolution limit. Data on sensitivity to fine-scale modulation of contrast along the bars of a 40 cpd grating indicate that spatial integration does indeed range, minimally, over several rows of cones. The retinal mosaic sampling limit on grating resolution, given the observed extent of spatial integration along the bars, is therefore hundreds of cycles per degree. In detecting non-uniformity of luminance in a single thin line, we find comparatively little sensitivity to stimulation of adjacent rows of cones; but in this case the resolution limit is only about 20 cpd, again well below the sampling limit. These results contradict the standard view that the retinal mosaic limits resolution. Instead, the limit (for interferometric targets) may reflect a lack of cortical receptive fields sensitive to high spatial frequencies, the pass-band of this neural spatial filter being roughly matched to that of the eye's optics (for external targets).

DV5. Predicting Visual Performance from Scalar Metrics of Optical Quality. XIN HONG, LARRY THIBOS, AND ARTHUR BRADLEY, School of Optometry, Indiana University, Bloomington, IN 47405—The relationships between a variety of scalar metrics of image quality and visual performance were examined in the Indiana Aberration Study, a comprehensive evaluation of optical aberrations in both eyes of 100 normal individuals. Monochromatic (633 nm) aberrations were measured with a Shack-Hartmann aberrometer under cycloplegia with the optimum spectacle correction in place (as determined by subjective refraction). Twelve different metrics of monochromatic optical quality were computed from three functional descriptions: wavefront aberration function (WAF), point-spread function (PSF), and modulation transfer function (MTF). Visual performance was evaluated by two tasks using letter charts illuminated with white light: high-contrast letter acuity and contrast sensitivity for small letters. Results indicate that some optical measures are more predictive of visual performance than others. As would be expected of a homogeneous population with little between-subject variability, the correlations are not high (R<0.25) and only about half of these are statistically significant (P<0.01). Correlations improved slightly (R<0.3) when our optical model of the eye's aberrations was enhanced to include cone apodization (i.e. the Stiles-Crawford effect) and longitudinal chromatic aberration. In this case the best predictors of visual performance were based on the width of the MTF and compactness of the PSF. No measures based on the WAF were considered because of the conceptual difficulty in formulating a polychromatic aberration function. Residual variance left unaccounted for by the best of these linear correlations can be accounted for by relatively small amounts of measurement error due to individual differences in factors such as chromatic aberration, Stiles-Crawford effect, pupil shape, and neural mechanisms.

E. Workshop on Visual Attention

EV1. Isolating Mechanisms of Spatial Attention. BARBARA ANNE DOSHER, University of California, Irvine, AND ZHONG-LIN LU, University of Southern California—Two separable mechanisms - stimulus enhancement and external noise exclusion - have been identified as contributing to visual-spatial attention effects in pre-cueing and cue validity paradigms [1,2]. These mechanisms correspond with signature patterns of performance in an external noise paradigm, and can be quantitatively evaluated in the context of a perceptual template model (PTM) [3,4]. Effects of visual attention are measured by improvements in contrast thresholds for discrimination in displays systematically varying in the presence of stimulus noise (noiseless to noisy). Attention effects depend upon the nature of the cue, the presence of stimulus noise or masking, and the visual load [2,3]. The analysis of attention effects as stimulus enhancement, external noise exclusion, or a mixture provides a systematic framework for understanding variations in the impact of directed attention. This analysis is extended to the evaluation of attention shared between multiple objects or across spatial positions. External noise exclusion is the primary mechanism of spatial attention, augmented with stimulus enhancement in some conditions. 1. Dosher, B., & Lu, Z.-L. (2000). Psychological Science, 11, 139-146. 2. Lu, Z.-L., & Dosher, B. (2000). Journal of Exp. Psychology: Human Perception & Performance, 26, 1534-1548. 3. Dosher, B., & Lu, Z.-L. (2000). Vision Research, 40, 1269-1292. 4. Lu, Z.-L., & Dosher, B. (1998). Vision Research, 38, 1183-1198.

EV2. Measuring the Trajectory of Visual Attention. GEORGE SPERLING, Department of Cognitive Sciences, University of California, Irvine, Irvine CA 92697-5100, and SHUI-I SHIH, Department of Psychology, University of Southampton, Southampton, UK SO17 1BJ—Suppose an observer is cued to report only one row of a 3x3 array of letters. How long does it take for attention to move from a fixation point to the cued row (i.e., for a gate to open to admit the cued letters to short-term memory)? Does attention move simultaneously to the entire row, or are letters admitted to memory from left-to-right as in reading? These questions were addressed in a paradigm in which observers viewed a rapid stream of 3x3 arrays until a tonal cue directed them to report one row. Analysis of the particular arrays from which letters were reported and of
the joint probabilities of reporting pairs of letters enabled derivation of the duration and variability of cue interpretation time and of the time course of the attention window. The window dynamics are independent of the distance moved. In brief, about 0.1 sec after the cue, an attention window opens for (minimally) 0.3 sec and admits information simultaneously from all the to-be-reported locations.

Sunday

F. Vision poster session

FV1. Facilitation of Subthreshold Contrasts by Means of Texture-Slant Discrimination. L.G. APPELBAUM, University of California, Irvine; Z.L. LU AND G. SPERLING, University of California, Irvine, University of Southern California, Los Angeles—Models describing texture-slant and motion-direction discrimination employ a quadrature principle in which the squared outputs of receptive fields with similar orientations are summed to determine slant or motion energy. A consequence of such quadrature models is that amplification of subthreshold stimuli can be observed in certain types of displays. In the current task, subjects discriminate between square-wave, luminance-defined gratings, oriented at +X and -X deg. Gratings are composed of successive rows that translate consistently by 90 degrees. Integration between odd and even rows is required for orientation information. In the control condition, the minimum contrast m_t is determined for +X vs. -X slant discrimination. In the amplification condition, even rows have contrast m_e, odd rows m_o. The quadrature model predicts that discrimination depends on the product m_e m_o. Therefore, large values of m_e combined with subthreshold values of m_o would still produce good slant discrimination. This is contrast amplification of odd rows. To determine threshold m_o, observers viewed gratings of 1.2, 2, and 2.5 c/deg; ±45, 22.5, and 11.3 deg slant; temporal durations of 50, 100, and 250 ms; and values of m_e ranging in contrast from m_t to 8%. Contrast amplification, m_t/m_o > 1, was found at all spatial frequencies and temporal durations with typical maximum amplification values of 3-5X. As in motion-direction discrimination, contrast amplification can be readily observed in slant discrimination, suggesting similar neural computations in both domains.
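
A small worked example of the product rule just described, under the assumption (not stated explicitly in the abstract) that the control condition reaches threshold when both row sets carry the common contrast m_t, so that the threshold product is m_t^2. The numbers are illustrative, not data from the study.

```python
def predicted_amplification(m_t, m_e):
    """Quadrature-model prediction: if performance depends on the product m_e * m_o,
    and the control threshold product is m_t**2, then the odd rows only need
    m_o = m_t**2 / m_e, giving an amplification factor m_t / m_o = m_e / m_t."""
    m_o = m_t**2 / m_e
    return m_o, m_t / m_o

# e.g. a control threshold of 2% contrast and an even-row contrast of 8%
m_o, amp = predicted_amplification(m_t=0.02, m_e=0.08)
print(f"threshold m_o = {m_o:.4f}, amplification = {amp:.1f}x")  # 0.0050, 4.0x
```

An amplification of 4X under these assumed numbers sits within the 3-5X maxima reported above.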

FV2. The flanker orientation effect on contrast discrimination. C. C. CHEN, The Neurometrics Institute AND C. W. TYLER, The Smith-Kettlewell Eye Research Institute—Purpose. To measure the effect of flanker orientation on contrast discrimination for a Gabor target in the dual masking paradigm. Method. In the dual masking paradigm, the observer's task was to detect a Gabor target superimposed on a Gabor pedestal in the presence of two flanking Gabors located 3 sigma away on the target's orientation axis. Flanker orientation varied from 0° to 90° from the target orientation. Result. With co-oriented flankers, the target threshold vs. pedestal contrast (TvC) function showed a cross-over effect of reduced threshold at low but increased threshold at high pedestal contrast. When the flanker orientation deviated from that of the target, the threshold decrement at low pedestal contrast was reduced, while a reduction of the threshold increment was not apparent until the flanker orientation was more than 60° from the target orientation. Conclusion. We fit a sensitivity modulation model (Chen & Tyler, 2001, Proc. Roy. Soc. Ser. B) to the data, suggesting that the flanker effects act multiplicatively on the excitatory and inhibitory terms of a divisive inhibition process. The model parameters showed the excitatory flanker effect decreasing rapidly as the flanker orientation deviated from the target orientation. The inhibitory effect also decreased with flanker orientation, but not as much.
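
The sketch below illustrates the kind of behavior FV2 describes: a divisive-inhibition transducer whose excitatory and inhibitory terms are each scaled by a multiplicative flanker factor. It is a generic toy model with made-up parameter values, not the fitted sensitivity modulation model of Chen & Tyler (2001); it simply shows how boosting the inhibitory term more than the excitatory one produces facilitation at low pedestal contrasts and masking at high pedestal contrasts.

```python
import numpy as np
from scipy.optimize import brentq

def response(c, Se=100.0, Si=100.0, p=2.4, q=2.0, z=1.0, ke=1.0, ki=1.0):
    """Divisive-inhibition transducer; ke and ki are multiplicative flanker
    factors on the excitatory and inhibitory terms (illustrative values)."""
    return ke * Se * c**p / (ki * Si * c**q + z)

def increment_threshold(pedestal, criterion=0.1, **kw):
    # smallest contrast increment whose response difference reaches the criterion
    f = lambda dc: response(pedestal + dc, **kw) - response(pedestal, **kw) - criterion
    return brentq(f, 1e-6, 1.0)

pedestals = [0.0, 0.01, 0.02, 0.05, 0.1, 0.2]
no_flankers = [increment_threshold(c) for c in pedestals]
co_oriented = [increment_threshold(c, ke=1.4, ki=2.0) for c in pedestals]
print(np.round(no_flankers, 4))   # dipper-shaped TvC function
print(np.round(co_oriented, 4))   # lower thresholds at low pedestals, higher at high pedestals
```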

FV3. Subcortical contributions to masked scene perception. AMANDA M. DAWSON, University of Arizona—The phenomenon of affective transfer between a masked, valenced prime and its target has recently been attributed to a separate, subcortical pathway that is constrained to respond to threatening stimuli (Ohman, 1998; Murphy & Zajonc, 1993; LeDoux, 1996). Alternatively, affective priming may be the result of automatic processes (Robinson, 1998). For example, repeating a masked stimulus may lead to an indiscriminate increase in processing fluency, which is then interpreted as preference for the repeated stimulus (Zajonc, 1992; Bornstein & D'Agostino, 1994; Seamon, 1995). To distinguish between these two accounts, we presented complex, affective scenes to subjects at varying rates of exposure. Subjects were unable to choose the previously presented stimulus in a 2AFC task when the stimulus was presented for 33 milliseconds with masking. These same brief stimuli, however, were correctly identified in an indirect memory task (simple preference forced-choice) when analyses were separated for positive and negative primes: negative primes were not preferred while positive primes were. This result does not support a simple, perceptual fluency account of masked, affective priming. Subjects also rated the emotional value of the primes' targets: optimally displayed Japanese kana characters. The kana ratings significantly reflected the valence of the affective scene primes, regardless of whether the prime was positively or negatively valenced, or whether it was above or below visual recognition thresholds. Such priming for non-threat stimuli is inconsistent with Ohman's (1998) formulation of an affective, subcortical processing system.

FV4. Measuring the spatial resolution of visual attention. J. GOBELL, C.H. TSENG, AND G. SPERLING, U C Irvine—Recent studies (e.g., Awh & Pashler, 2000) show that visual attention can be distributed among disjoint locations while the information from unattended locations in-between these locations can be ignored/suppressed. To investigate the spatial resolution limits of the distribution of visual attention, we use a search task in which observers must attend to a number of disjoint rows (or columns) (1 to 6) while ignoring the intervening rows (or columns). Each trial begins with an attentional instruction - a red-green square wave. Subjects must attend to the red (or green - randomly assigned to each observer) areas and ignore the oppositely colored areas. This red-green grating is presented continuously for 400 msec, and then gradually faded to the gray background color over the next 300 msec. A 12 x 12 array of discs appears on the gray background for 150 msec. Subjects must report the location of the target (a single larger dot on the attended color), while ignoring the 10 false targets (larger dots on the unattended color) and 133 distractors (small dots). As the spatial frequency of the red-green grating increases, accuracy falls from a high of approximately 60% to a low of below 20%. These results are used to derive a measure of observers' spatial resolution of attention.

FV5. Rivalry motion: A cue to cyclopean motion perception. HYUNGJUN KIM, ZHONG-LIN LU, AND GEORGE SPERLING, U C Irvine—Cyclopean motion is motion that is defined only by a moving stereoptically produced depth stimulus. Lu and Sperling (1995) suggested that the third-order motion system perceives cyclopean motion by tracking the figure (usually the foreground) versus the ground (background). A possible confound is that points on or near the horopter stimulate corresponding points in the two eyes, thereby producing fusion, whereas points away from the horopter stimulate noncorresponding points, thereby producing rivalry. The visual system could track the rivalry-fusion pattern. Here we determine the relative contribution of depth and rivalry motion to cyclopean motion. To produce pure rivalry motion, we use a dynamic random-dot stereogram (DRDS) with stripes that are perfectly correlated in the two eyes (fused) alternating with stripes that are completely uncorrelated (rivalrous) (O'Shea & Blake, 1987). To produce a pure stereoptic depth motion, the horopter is placed exactly in-between the far and near planes of a stereodepth grating. We find that the motion system not only senses the movement of a cyclopean fusion-rivalry grating but that rivalry motion is as strong as depth motion. When rivalry and depth motion are put into competition (by moving stimuli in opposite directions), apparent motion is cancelled. When rivalry and depth motion move together, apparent motion is enhanced. Conclusions: Rivalry motion is another kind of input to the third-order visual motion system, is a frequent confound in ordinary DRDS motion stimuli, and can be equal in strength to depth motion.

FV6. Anomalous movements of texture-defined contours. HIROYUKI ITO AND STUART ANSTIS, Dept of Psychology, UCSD, 9500 Gilman Drive, La Jolla CA 92093-0109—If the eyes move upwards, tracking a moving fingertip, the V-shaped pair of lines on the left appear to move inwards (shown by arrows), appropriate to the retinal stimulation. However, subjective contours, produced by an offset or dislocation in a grating of horizontal lines, appear to move outwards. We believe that neural blurring produces short strands of virtual "rope" obliquely linking the tips of the horizontal grating stripes, and that these strands stimulate motion detectors with oblique, elongated receptive fields.

FV7. Reading is controlled by magnocellular pathways: colored text read 30% slower than equiluminant grayscale text. T. LAWTON, Perception Dynamics Institute, 19937 Valley View Dr, Topanga, CA 90290—Immature magnocellular pathways have been proposed to explain the visual processing deficits experienced by dyslexic (poor) readers. If magnocellular pathways control reading, then colored text should be read slower, not faster, than grayscale text, since magnocellular pathways are essentially color-blind. Computerized reading rates were measured for text that was colored red, green, blue, and yellow, or was grayscale, each equated so that contrast was 100% and the mean luminance was 8 cd/m2. We measured reading rates for text that was either unfiltered or filtered to compensate for the child's reduced contrast sensitivity functions, compared to adults. 30 children in a public elementary school, half being dyslexic and half being normal readers, 5 each in grades 1, 2, and 3, were studied. The child read continuous text that never repeated from a simple entertaining story. Reading fluency was measured using a double staircase method that determined the speed needed to read 5 words correctly 79% of the time. Colored text was read 30% slower than equiluminant grayscale text, when reading either filtered or unfiltered colored text. This difference was significant, p ≤ 0.0000000002. These reduced reading rates did not differ between normal and dyslexic readers. Moreover, reading rates were faster when text had a low 8 cd/m2 mean luminance, compared to a high 67 cd/m2 mean luminance. Practicing a task that is optimal for activating
magnocellular pathways (left-right movement discrimination) increased reading rates 2-4 fold for both normal and dyslexic readers. This study provides more evidence that reading is controlled by magnocellular pathways, and not the high acuity parvocellular pathways.

FV8. Fundamental Aspect of Image Quality Metrics: Contrast Sensitivity on Background of Varied Relative Phase. ERIKO MIYAHARA & MARK D. FAIRCHILD, Rochester Institute of Technology—Purpose. In light of developing image quality metrics for a comprehensive multiscale model for image perception, we need to know human visual sensitivity to target stimuli presented upon a background. As the first step to determine such sensitivity, we measured the contrast sensitivity to a sinusoidal Gabor patch superimposed on a background Gabor with systematically varied relative phase between the target and the background patches. Methods. Contrast thresholds of the test sinusoidal gratings were measured by a spatial 4AFC procedure with two randomly interleaved staircases. The four fields contained the background sinusoidal grating Gabor, which subtended 6.4°, and only one of the four contained the test grating Gabor patch, which subtended 1.6°. The mean luminance was 125 cd/m2. White noise replaced the stimuli upon the observer's response to reduce aftereffects. The spatial frequency of the test grating was 8 cpd. The spatial frequency of the background grating was 1, 2, or 4 cpd and the contrast was fixed at 0.2. The relative phase of the test and the background grating was 0, π/2, or π. Eight observers with normal or corrected-to-normal visual acuity participated. Results. The contrast thresholds increased as the spatial frequency of the background increased. There was no effect of phase difference between the test and the background gratings upon contrast thresholds. Conclusions. The results are consistent with former studies on grating detection and masking. These imply that the relative phase between the test and the background does not have to be a parameter in developing a multiscale model for image perception.

FV9. No early pointwise non-linearity in shape discrimination. R. F. MURRAY1, P. J. BENNETT1,2, A. B. SEKULER1,2, (1) Psychology, University of Toronto, (2) Psychology, McMaster University—Purpose. Human observers are often modelled as linear discriminators in shape discrimination tasks. However, psychophysical and physiological studies indicate that the visual system has an early pointwise nonlinearity, perhaps in the cones. We formulated a simple generalization of the linear model, incorporating an unknown early pointwise nonlinearity (a Wiener-Hammerstein model), and we used a variant of reverse correlation to measure this nonlinearity. Methods. Observers performed two-alternative identification tasks in white noise: dot detection, orientation discrimination, face identification, and discriminations involving illusory and occluded contours. We computed classification images to determine what regions observers used in these tasks. For each pixel in these regions, we computed the probability of the observer giving one response or the other, for each possible contrast level at that pixel. We will show that this measures the early nonlinearity. Results. The probability of an observer giving a particular response was proportional to the contrast at each relevant pixel. That is, there was no early nonlinearity. This was true even in tasks involving illusory and occluded contours. Conclusions. (1) Either there is no transduction nonlinearity, or it is effectively undone during shape discrimination. This is consistent with Nam and Chubb's (2000) findings for judgements of texture luminance. (2) The visual system is optimized for an approximately Gaussian noise distribution. (3) Illusory and occluded contours are used like luminance contours in shape discrimination.

FV10. Object-directed attention increases signal strength in human visual cortex. FRANCESCA PEI AND ANTHONY M. NORCIA, Smith-Kettlewell Eye Research Institute—There is much current interest in whether attention increases the gain of neural mechanisms (signal enhancement) or acts by other means such as reducing noise/narrowing a tuning function. We used a direct measurement of signal strength, the human visual evoked potential, and an object-directed attention task to determine whether attention produces signal enhancement. The stimulus consisted of a series of 16 crosses spread across an 18.5 by 25 deg display. The vertical and horizontal bars oscillated at nearly incommensurate temporal frequencies (F1=2.43 and F2=3.01 Hz, respectively). Responses were isolated from the EEG by analyzing the distinct temporal harmonic components generated by the vertical (1F1, 2F1, 4F1) and the horizontal bars (1F2, 2F2, 4F2). In separate blocks of trials, the observers were instructed to attend to either the vertical or the horizontal component of the crosses. Since the attentional targets were superimposed, spatial attention effects were minimal. We find that attention increases second harmonic signal amplitude by 30 to 40%. Activity at the fourth harmonic is unaffected by attention. We interpret these results as indicating that attention to an object produces signal enhancement of some but not all visual evoked activity. We suggest that the 4th harmonic is generated at a stage/locus in the visual pathway earlier than the first stage at which attention can act. Our experiments show signal enhancements similar to those found by Morgan, Hansen and Hillyard (1996, PNAS), but indicate that spatial attention is not required for amplitude enhancement.

FV11. Flicker Adaptation from Subthreshold Modulation. SHERIF SHADY, DONALD I. A. MACLEOD, STEPHANIE L. CARPENTER AND SARA E. VIOLETT, Psychology Department, University of California, San Diego—Purpose: To measure the frequency response for
flicker adaptation (modulation gain control) in the human cone system. Methods: Modulation thresholds for a 30 Hz test flicker were obtained from two subjects, following pre-adaptation to high-frequency flicker. Exp 1. Adapting frequencies ranged from 20 to 50 Hz, at multiple modulation levels from 10% to 100%. "Equivalent modulations" across adapting frequencies were determined and used to derive the shape of the frequency response function for flicker adaptation. The conventional TCSF was also measured. Exp 2. For an adapting frequency of 30 Hz, modulation levels ranged from 10% down to 1% (well below the subjects' flicker thresholds). Results: The frequency response function for flicker adaptation was noticeably shallower than the TCSF for intermediate frequencies, with sensitivity between 20 and 30 Hz falling by about 30% vs. 100%, respectively. For high frequencies, 35-50 Hz, the two functions were roughly parallel. Pre-adaptation to invisible 30 Hz flicker, at modulations as low as 30% of threshold, produced small but significant elevations in threshold for a subsequent test flicker. Conclusions: Flicker adaptation in the human cone system is more sensitive to high frequencies than is conscious perception, and can be triggered by subthreshold (invisible) modulation. These results suggest a relatively early site for modulation gain control.

FV12. The effect of spatial frequency on the recognition of facial identity and expression. HISAAKI TABUCHI AND TADAYUKI TAYAMA, Department of Psychology, Hokkaido University, Sapporo, Japan—This study examined the effect of spatial frequency on the recognition of facial identity and expression. This experiment employed 8 standard facial images, which combined 4 different identities and 2 types of expression (normal or happy). Six band-pass filtered facial images were constructed from each standard face, producing a total of 48 facial images that were used as test stimuli. In each trial, a standard face was initially presented, followed by the parallel presentation of two test stimuli. Each participant performed three tasks. In the Identity task, participants were required to select one of the two test stimuli whose identity was the same as the standard face. In the Expression task, they chose the target whose expression was the same as the standard face. In the Categorization task, they could select either the identity or expression category, as one of the two stimuli had the same identity, while the other had the same expression as the standard face. The reaction time for the Identity and Expression tasks suggests that identity recognition was not possible from the high-spatial-frequency components, and expression recognition was not achievable from the low-spatial-frequency components. The results of the Categorization task showed that participants tended to choose a target based on identity rather than expression. This finding suggests that unattended processing of identity occurs prior to that for expression.
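
For readers unfamiliar with the kind of stimulus construction used in FV12, the sketch below builds band-pass filtered versions of an image with a hard-edged isotropic frequency mask. The band edges, image size, and filter shape are illustrative assumptions; the abstract does not specify the bands or the filter that were used.

```python
import numpy as np

def bandpass(image, low_cpi, high_cpi):
    """Keep spatial frequencies between low_cpi and high_cpi cycles/image."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    fy, fx = np.meshgrid(np.arange(h) - h // 2, np.arange(w) - w // 2, indexing="ij")
    radius = np.hypot(fx, fy)                    # radial frequency in cycles/image
    mask = (radius >= low_cpi) & (radius < high_cpi)
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * mask)))

face = np.random.rand(128, 128)                  # stand-in for a standard face image
bands = [(2, 4), (4, 8), (8, 16), (16, 32), (32, 64), (64, 128)]  # six one-octave bands (assumed)
filtered_faces = [bandpass(face, lo, hi) for lo, hi in bands]
```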

FV13. Revisiting Stereoptic Motion Standstill: Stereoptic Motion Processing Has Lower Temporal Resolution than Shape Processing. CHIA-HUEI TSENG, YUNGJUN KIM, JOETTA L. GOBELL, ZHONG-LIN LU*, AND GEORGE SPERLING, University of California, Irvine, CA 92697-5100 and *University of Southern California, CA 90089-1061—There are several perceptual experiences associated with objects in motion. As object speed increases, the perception ranges from no motion to smooth motion to blurred motion. Motion standstill, however, is a completely different motion percept, in which an observer perceives a pattern that is moving quite rapidly as motionless while its details remain clearly visible. Lu, Lesmes & Sperling (1999) propose that motion standstill occurs when the outputs of all motion systems are below threshold but other parallel visual systems (e.g., color, shape, texture, depth) still produce usable outputs. We revisited motion standstill in dynamic random-dot stereograms (DRDS), first reported by Julesz and Payne (1968), making three improvements to their paradigm: gratings moved continuously across the field instead of merely wobbling back and forth; motion temporal frequency was varied independently of the quality of the stereo input; and observers judged motion direction, motion speed, and the spatial frequency of the moving depth grating (to indicate that they perceived the depth pattern). For all observers, there was a range of relatively rapid movements in which pattern judgments were accurate but motion-direction judgments were at chance and the judged speed was zero. The results indicate that pattern information can be extracted from rapidly moving displays even when all motion systems fail. The motion system that detects stereomotion has a lower temporal resolution than the stereo-depth-shape system, and thus motion standstill can occur at high temporal frequencies (high velocities). FV14. Contrast sensitivity under the suppression phase of binocular rivalry for normal and strabismic observers. MIEKO YANAGISAWA AND KEIJI UCHIKAWA, Tokyo Institute of Technology—In binocular rivalry, a stimulus presented to one eye is suppressed while a different stimulus presented to the other eye is perceived at any given moment. This binocular suppression reflects the brain's selectivity for awareness. For strabismic observers, binocular suppression occurs continuously because misalignment of the eyes produces different retinal images. It has been debated whether the binocular suppression of strabismic observers differs from that of normal observers; this issue is important for understanding the development of this selectivity. In this study we measured contrast sensitivity for one eye in suppression, fusion, monocular, and binocular conditions. In the suppression condition a Gabor stimulus (2.5 c/d, 100% center contrast, 30 cd/m2
mean luminance) was presented to one eye as the dominant stimulus, and a test Gabor stimulus (1-8 c/d, variable contrast) was presented to the other eye for 160 ms. The contrast threshold for the test stimulus was measured. In the fusion condition a uniform stimulus at the mean luminance was presented to one eye and the test stimulus to the other eye. In the monocular condition only one eye was tested, with the other eye covered by an eye patch. In the binocular (control) condition the test stimulus was presented to both eyes. The results show that, for normal observers, contrast sensitivity was higher in the binocular and fusion conditions than in the suppression condition at all spatial frequencies. For the strabismic eye, however, contrast sensitivity was lowest in both the fusion and the suppression conditions. This suggests that binocular-rivalry suppression and strabismic suppression have different origins. G. Mid-level vision GV1. Contextual modulation effects in apparent contrast, contrast discriminations, and spatial frequency discriminations. LYNN OLZAK, Department of Psychology, Miami University of Ohio—The apparent contrast of a small central patch of modulated texture can be significantly reduced by the presence of a high-contrast modulated surround. My colleagues and I have previously reported that performance on both contrast and spatial frequency discrimination tasks can also be reduced by a high-contrast modulated surround. All of these effects have been attributed to gain-control processes operating laterally over space. Here, I investigate the extent to which the same processes underlie the three phenomena. In the current studies, I measured spatial frequency discrimination performance as a function of surround contrast and center-surround relative phase, and compared the results to previously obtained functions for the apparent contrast and contrast discrimination tasks. Results were nearly identical to those previously obtained with the contrast discrimination task, showing characteristic masking functions that depended upon the relative phase of center and surround. These functions differed substantially from results obtained with the apparent contrast measure, suggesting that different underlying processes mediate the effects found with performance measures and with perceptual measures. GV2. A model of brightness and darkness induction based on a neural filling-in mechanism. MICHAEL RUDD, University of Washington, Box 351525, Seattle, WA 98195-1525—Rudd & Arrington (1) used a brightness matching technique to measure the amount of darkness induction produced in a test disk as a function of the luminances of two concentric surround annuli. Their data were modeled with a five-parameter brightness matching
equation. It was shown that this brightness matching equation is predicted by a mechanistic model of darkness induction based on the filling in of darkness from borders. According to the model, borders generate spreading induction signals whose magnitudes are proportional to the log luminance ratios of the generating borders. These spreading signals "color in" regions between borders in a neural representation of the image. The magnitudes of the filled-in signals model subjective intensity. A key additional postulate of the model is that the induction signals spreading from a border are gated (i.e., partially blocked) as they cross other borders in the image; the amount of blocking is proportional to the log luminance ratio of the blocking border. A natural question that arises in the context of this model is how darkness induction signals combine quantitatively with brightness induction signals to determine overall subjective intensity. Rudd (2) has previously presented evidence for strong asymmetries between brightness and darkness induction. Here, new data will be presented that quantify these asymmetries. Implications of the brightness/lightness asymmetries for neural filling-in theory will be discussed. 1. Rudd, M. E., & Arrington, K. E. (in press). Vision Research. 2. Rudd, M. E. (2001). Vision Sciences Society meeting abstracts. GV3. Polarity Reversal Does Not Destroy the Poggendorff Illusion. BRUCE BRIDGEMAN, Department of Psychology, University of California, Santa Cruz, Santa Cruz, CA 95064—The only current physiology-based theory of visual illusions attributes the illusions to distortions in the activation of cortical receptive fields responsible for pattern processing. In the Poggendorff configuration used in our experiments, a diagonal line traverses a wide vertical bar; a lower-left to upper-right diagonal appears too high on the right side to align with the line segment defined on the left. The geometry of elongated receptive fields predicts that the illusion is due to a bias in the location of the receptive fields near the acute angle formed by the diagonal inducing line and an edge of the vertical bar. The best-responding low-spatial-frequency receptive field coding the diagonal line near the bar will be biased in the direction of the bar, because the dark bar activates the receptive fields responding to the dark diagonal line. The illusion should therefore diminish or even reverse if the inducing bar and the test line are of opposite polarity (black bar and white line on a gray background). In a test of this prediction, however, all subjects continued to show a Poggendorff illusion with opposite-polarity contrast. This result contradicts the receptive-field theory of the illusion. GV4. Metacontrast masking is specific to luminance polarity. STUART ANSTIS AND MARK BECKER, Dept. of Psychology, UCSD, 9500 Gilman Drive, La Jolla, CA 92093-0109—A 1° disk was flashed on a screen, followed after 40 ms by a snugly fitting annular mask. The disk luminance varied from 0% to 100% in 4% steps. The
ring-mask was either black (0%) or white (100%), and the surround was mid-gray (50%). Observers reported the apparent lightness of the masked disk by adjusting a matching disk. Results: On a gray surround, a black annular mask made all disks that were darker than the surround appear transparent, that is, of the same luminance as the surround (complete masking). The black ring had virtually no masking effect on disks that were lighter than the surround. Conversely, a white ring made all disks that were lighter than the surround appear to be the same luminance as the surround (complete masking), but had virtually no masking effect on disks that were darker than the surround. Conclusion: A ring that was a spatial decrement (darker than the surround) rendered invisible all decremental disks but had virtually no effect on incremental disks. Conversely, an incremental ring rendered invisible all disks that were spatial increments, but not those that were spatial decrements. We conclude that metacontrast masking occurs within, but not between, the visual ON and OFF pathways. GV5. Sensory processing delays measured with the eye-movement correlogram. JEFFREY B. MULLIGAN, NASA Ames Research Center—It is well known that visual sensory processing is slower for stimuli of low contrast, low luminance, or purely chromatic contrast. The eye-movement correlogram provides a precise way to quantify these delays. The subject attempts to maintain fixation on a target that moves randomly in one or two dimensions while eye position is monitored. Eye velocity is computed, and the smooth component is interpolated in the neighborhood of saccades. The resulting smooth eye velocity is cross-correlated with the stimulus velocity. Averaging across presentations with different motion signals reveals a signal that resembles an impulse response, which is well fit by the convolution of a Gaussian with an exponential. The location of the peak provides a simple measure of pursuit latency. Under the assumption that delays due to motor processing remain constant as stimulus features are varied, we can use this technique to measure increases in sensory latency. For example, we observe an increase in latency of approximately 20 milliseconds each time we halve the contrast of a small white spot. Even larger delays have been observed for equiluminant chromatic targets and for flicker-defined second-order motion targets. (A numerical sketch of this cross-correlation analysis appears after the final abstract below.) GV6. Predicting Symmetry Perception with a Contrast Energy Model. CHRISTOPHER W. TYLER, Smith-Kettlewell Eye Research Institute, 2318 Fillmore St., San Francisco, CA 94115—Rationale. Random-field patterns of reflection symmetry are perceived as a global organization extending well beyond the symmetry axis. The strength of the percept varies markedly among patterns of equal mathematical symmetry. The symmetry
of natural camouflage, such as tiger and zebra skins, is not mathematically exact. Such skin patterns are phase-random, meaning that they are undetectable to a standard symmetry-transform operator. Moreover, opposite-polarity symmetry is almost as detectable as same-polarity symmetry, eliminating both phase-specific and cross-correlation models as adequate descriptors of human symmetry processing. Methods. A global contrast-energy model of location-independent symmetry extraction was developed by defining the oriented contrast energy over the pattern and summing the matching energy patterns with their reflections over all axis displacements and axis orientations. The symmetry-energy prediction for each stimulus was correlated with psychophysical estimates of symmetry strength over 120 examples of patterns with 0, 1, 2 and 4 axes of symmetry, for four observers. The model had one free parameter, the exponent of the predicted symmetry-strength function. Results. Correlations of 0.93-0.95 were obtained between the psychophysical ratings and the symmetry-energy model, even when the analysis was restricted to patterns with perfect mathematical symmetry. Strength exponents varied from 0.5 to 0.7 across the four observers. We conclude that the symmetry-energy model is a useful approach to understanding the processing of global symmetry by human observers. GV7. Sensitivities to the Changing Structure of Moving and Stationary Images. JOSEPH S. LAPPIN, DUJE TADIN, AND ERIC J. WHITTIER, Vanderbilt University—Problem: Vision is very sensitive to the relative motions of spatially separate features. Is this multi-local sensitivity specific to moving images, or does it apply more generally to the spatial organization of other image changes that do not involve motion? Method: Three experiments evaluated spatial acuities and contrast sensitivities to image changes produced by (a) small-amplitude oscillatory motions of Gaussian blobs and (b) stationary oscillations with equivalent local contrast change (where the local contrast changes were bilaterally symmetric but otherwise identical to those produced by motion). Exp. 1 measured thresholds for detecting changes in a single blob. Exp. 2 measured discriminations of phase differences between the oscillations of a central blob and two flanking blobs. Exp. 3 estimated visual correlations between spatially separate signals produced by motion and by stationary oscillations. Results: (1) Motion was more detectable than stationary contrast oscillation, especially for larger blobs. Increasing blob size had little effect on motion acuities but a larger effect on stationary-oscillation thresholds. (2) Phase-discrimination thresholds were lower for motion than for stationary oscillations and were robust over increases in spatial separation and temporal frequency, whereas stationary-oscillation thresholds increased with separation and
temporal frequency. (3) Multiple motion signals were positively correlated, but visual signals for stationary changes were negatively correlated. Conclusion: Vision is more sensitive to moving than to stationary image changes. Moving patterns are visually coherent, but stationary oscillations are not.
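As referenced in GV5 above, the eye-movement correlogram analysis can be sketched numerically as follows: a simulated smooth eye-velocity trace is cross-correlated with the stimulus velocity, and the lag of the correlation peak serves as a pursuit-latency estimate. In this Python sketch the sampling rate, the 150 ms delay, the Gaussian-and-exponential kernel parameters, and the noise level are illustrative assumptions rather than values from the study, and the saccade-interpolation and trial-averaging steps described in the abstract are omitted.

# Minimal sketch of the eye-movement correlogram analysis in GV5:
# cross-correlate a (simulated) smooth eye-velocity trace with the stimulus
# velocity and take the peak lag as a pursuit-latency estimate.
# All numerical parameters below are illustrative assumptions.
import numpy as np

fs = 1000                                   # samples per second
t = np.arange(0, 60, 1 / fs)                # 60 s of tracking
rng = np.random.default_rng(1)

stim_vel = rng.normal(0.0, 1.0, t.size)     # random target velocity

# Simulate the oculomotor response: convolve the stimulus velocity with a
# Gaussian-smoothed exponential kernel delayed by roughly 150 ms.
lag = np.arange(0, 0.5, 1 / fs)
kernel = np.exp(-(lag - 0.15) ** 2 / (2 * 0.02 ** 2))      # Gaussian component
kernel = np.convolve(kernel, np.exp(-lag / 0.05))[:lag.size]
kernel /= kernel.sum()
eye_vel = np.convolve(stim_vel, kernel)[:t.size]
eye_vel += rng.normal(0.0, 0.5, t.size)     # additive oculomotor noise

# Cross-correlate at positive lags only (the eye lags the stimulus).
max_lag = int(0.4 * fs)
lags = np.arange(max_lag)
xcorr = np.array([np.dot(eye_vel[k:], stim_vel[:t.size - k]) for k in lags])
latency_ms = 1000 * lags[np.argmax(xcorr)] / fs
print(f"estimated pursuit latency: about {latency_ms:.0f} ms")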