

Sonority effects on English word segmentation:

Evidence for syllable-based strategies in a stress-timed language

Jason B. Bishop

College of Staten Island and The Graduate Center,

City University of New York

Suggested Running title: Sonority-based segmentation

Word Count: 2,991 words

Correspondence address:
Jason B. Bishop
Department of English
College of Staten Island, City University of New York
2800 Victory Blvd.
Staten Island, NY 10314
Telephone: (718) 982-3665
Fax: (718) 982-3643
Email: [email protected]


Abstract

The present study explored the use of linguistic knowledge in on-line word segmentation using

the word-spotting task. Native English-speaking listeners heard words such as absent embedded

in nonsense strings such as jeemabsent and jeebabsent in phonetically controlled materials.

Results showed that listeners more readily spotted targets following intervocalic sonorants than

intervocalic obstruents, suggesting a sonority-based preference in their parsing of the signal. It is

argued that this syllabification-like behavior is inconsistent with the notion that syllable-based

segmentation is determined by rhythm class. Word-level prosody, but not gradient phonotactics,

was also found to influence segmentation performance.


1. Introduction

1.1. The problem of word segmentation

The present study concerns itself with the kinds of linguistic information listeners bring to bear

on the problem of word segmentation—i.e., the identification of word boundaries in the

continuous speech stream. Much of what we have learned about how (adult) human listeners

accomplish this task has come from the word-spotting paradigm (Cutler & Norris, 1988;

McQueen, 1996). In a word-spotting experiment, listeners must identify a word such as apple

embedded in a nonsense string such as vuffapple, and do so as quickly as possible. This task is

informative as to listeners’ segmentation of the string because performance depends crucially on

a unique parsing of that string (in the present case, vuff.apple rather than vu.fapple or vufa.pple).

When speakers provide useful phonetic cues to boundaries, these parsings can be made based on

the signal (Shatzman & McQueen, 2006; Newman, Sawusch, & Wunnenberg, 2011). However,

even without such cues, listeners segment the signal non-randomly, using knowledge about

statistically-dominant word-level prosodic patterns in their lexicon (the “Metrical Segmentation

Strategy”; Cutler & Norris, 1988) and knowledge of what constitutes a phonotactic violation

(McQueen, 1998; Norris, McQueen, Cutler, & Butterfield, 1997; see also Hanulikova, McQueen,

& Mitterer, 2010). Thus an important part of the solution to the segmentation problem lies in

non-signal-based knowledge and experience.

While these basic lexical-based biases are likely language-universal, another knowledge-

based strategy is claimed to be language-specific: the use of syllable structure. One of the earliest

findings in the study of spoken word segmentation was that French-speaking, but not English-speaking, listeners show segmentation biases that look very much like syllabifications. In a classic study, Cutler, Mehler, Norris, and Segui (1986) demonstrated this asymmetry using the


sequence monitoring task, finding that French listeners detected “BA” more quickly in ba.lance

than in bal.con, and “BAL” more quickly in bal.con than in ba.lance. Native English speakers,

however, showed no such “cross-over” pattern. It has since been argued (e.g., Cutler, McQueen, Norris, & Somejuan, 2001) that this difference in the use of the syllable reflects the language’s

rhythm class. French is syllable-timed, and so segmentation can exploit syllable structure;

English is stress-timed, and so the syllable-based segmentation routine is unavailable.

1.2. Present study: sonority effects on word segmentation

The present study reexamines this rhythm class hypothesis by probing English-speaking listeners

for sensitivity to syllable structure in word-spotting. Specifically, the following question was

asked: are listeners’ on-line segmentations influenced by sonority sequencing preferences? This

was of interest because sonority-based preferences are known to be active in the grammar of

English speakers (Berent, Steriade, Lennertz, & Vaknin, 2007) in a way that depends on

syllabification of the signal (Daland, Hayes, White, Garellek, Davis, & Norrmann, 2011).

Importantly, sonority strongly influences explicit syllabifications for both French-speaking (e.g., Content, Kearns, & Frauenfelder, 2001; Goslin & Frauenfelder, 2001) and English-speaking subjects

(Treiman & Danis, 1988; Moreton, Feng, & Smith, 2008), in line with the cross-linguistic

generalization that well-formed syllables tend to have a sonority profile that rises maximally

from onset to nucleus and falls maximally from nucleus to coda (Sievers, 1881; Jespersen, 1904;

Hooper, 1976; Steriade, 1982; Selkirk, 1984; see also Murray & Vennemann, 1983; Vennemann,

1988; Clements, 1990).


To test whether such a preference is active in the on-line segmentation routine used in English,

the experiment presented below tested listeners’ ability to spot a word like absent in (1a) versus

(1b):

(1) a. jeemabsent (/ʤimæbsәnt/)

b. jeebabsent (/ʤibæbsәnt/)

If listeners are sensitive to sonority in on-line segmentation as they are in off-line syllabification

tasks, they should spot absent more quickly in (1a) relative to (1b), since they will prefer to parse

the /m/ as a coda, rather than as an onset to the next nucleus. Conversely, in (1b), listeners’ bias

should be in the direction of parsing the obstruent /b/ as an onset rather than a coda, requiring a

re-parse in order to identify the vowel-initial word absent. Importantly, it is not clear how the

lexical strategies discussed above could be employed by listeners, as neither a word boundary

before nor after the crucial consonant would produce a phonotactic violation, and stress patterns

(i.e., primary stress on the second syllable’s nucleus) do not differ. Recently, an off-line study of

word segmentation has demonstrated sensitivity to sonority patterns (in consonant clusters) using an artificial language task (Ettlinger, Finn, & Hudson Kam, 2012). The contribution of the

present study is to test whether similar sonority sequencing biases influence listeners on-line, as

part of the pre-lexical segmentation routine.
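To make the prediction concrete, a toy sketch follows (in Python; not from the paper, and the numeric sonority indices and the binary coda/onset split are illustrative assumptions only):

    # Hypothetical sonority indices; actual proposals differ in detail.
    SONORITY = {'p': 1, 't': 1, 'k': 1,          # voiceless stops
                'b': 2, 'd': 2, 'g': 2,          # voiced stops
                'f': 3, 's': 3, 'v': 4, 'z': 4,  # fricatives
                'm': 7, 'n': 7, 'l': 8}          # sonorants

    def preferred_parse(c):
        """Parse an intervocalic consonant as a coda if high-sonority, else as an onset."""
        return 'coda' if SONORITY[c] >= 5 else 'onset'

    print(preferred_parse('m'))  # 'coda'  -> jeem.absent: absent is easy to spot
    print(preferred_parse('b'))  # 'onset' -> jee.babsent: a re-parse is required

On this sketch, (1a) yields the target-initial boundary for free, while (1b) requires undoing the onset parse, which is exactly the asymmetry the experiment tests.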

2. Methods

2.1. Stimuli

2.1.1. Design

Stimuli for a word-spotting experiment were based on a set of 50 disyllabic, vowel-initial

English target words: 25 with a strong-weak (SW) and 25 with a weak-strong (WS) stress


pattern, such as absent and eject, respectively. Target words were chosen that contained no additional words embedded within them. For each target, two types of CVC syllables were designed to precede it, one with a tense vowel (/i,u,ɑ,o,eɪ,aɪ,oɪ/) and one with a lax vowel (/ɪ,ɛ,æ,ʌ,ʊ/). The final C in the CVC (which, when appended onto the vowel-initial target words, would become an intervocalic consonant) fell into one of two sonority conditions: sonorants (/m,n,l/) and obstruents (/b,d,g,ʤ,θ,f,s,ʃ,v,ð,z/). Obstruents were selected for their lack of informative allophonic variation. Table 1 shows a subset of the 50 target words and their CVC contexts; for a full list of all 200 stimulus items used (50 words × 2 vowel tenseness conditions × 2 sonority conditions), see the Supplementary Materials.

Table 1. Example of SW and WS target words in each of the four preceding vowel and sonority conditions.

                 Obstruent                       Sonorant
Target stress    Tense           Lax             Tense           Lax
SW               /ʤib/absent     /nɪb/absent     /ʤim/absent     /nɪm/absent
                 /fuʃ/echo       /fʊʃ/echo       /fum/echo       /fʊm/echo
                 /zeɪf/oven      /zεf/oven       /zeɪn/oven      /sεn/oven
                 /vuf/expert     /væf/expert     /vɑm/expert     /væm/expert
                 /vus/ugly       /vʊs/ugly       /vun/ugly       /vʊn/ugly
                 /zɔɪg/ankle     /zʌg/ankle      /zɔɪn/ankle     /zʌn/ankle
                 /muʃ/average    /mɪʃ/average    /mul/average    /gɛl/average
WS               /ʃɑv/eject      /ʃɪv/eject      /zɑm/eject      /zεm/eject
                 /gig/enough     /nʌg/enough     /blim/enough    /nɪm/enough
                 /zeɪf/obsessed  /zεf/obsessed   /zeɪm/obsessed  /zεm/obsessed
                 /zɔɪg/exert     /zʊg/exert      /zɔɪn/exert     /zʊn/exert
                 /bus/against    /bʊs/against    /fun/against    /fʊn/against
                 /vif/agree      /vɪf/agree      /vin/agree      /vɪn/agree
                 /fuʃ/occur      /fʊʃ/occur      /fun/occur      /fʊn/occur


Additionally, 360 fillers were designed that contained no actual English words (e.g.,

/kʊˈvimɛp/), but resembled test items in other respects (i.e., equally divided among the same

sonority, vowel tenseness and stress pattern conditions).

2.1.2. Creation

Auditory versions of the CVC and target stimuli were then created, produced by a native

English speaker. Because the primary research question involved the effect of sonorant versus

obstruent consonants, it was necessarily the case that, across experimental conditions, the target

words would follow different consonants. It was therefore not possible to perform the cross-

splicing often used in word-spotting experiments to control for differences in the productions of

the targets themselves across conditions. That is, the absent from jeebabsent cannot be

segmented and spliced onto the absent in jeemabsent, due to coarticulatory effects, such as

carryover nasalization. Therefore, special care was taken to create stimuli that (a) would lack

speaker-encoded cues to intended syllabifications, and (b) would be unlikely to differ across

conditions in terms of duration. Any remaining differences, at least durational ones, could then be “controlled” for by adding them as predictors to the statistical model.

To this end, broad phonemic transcriptions of the stimulus materials were read and recorded

by a 23-year-old female speaker of Californian English (where /ɔ/ and /ɑ/ have merged, and the remaining vowel, /ɑ/, patterns as tense). Recordings were made in a sound-attenuating booth and digitized at 22.05 kHz. To ensure that the consonants that would straddle the CVC-target

digitized at 22.05 kHz. To ensure that the consonants that would straddle the CVC-target

boundary would have phonetically ambiguous syllabic affiliation, these consonants were

produced as both codas to the CVC syllables, as well as onsets to the following target words.

This was accomplished by recording CVCs (e.g., jeeb, jeem) that were produced separately


from target strings, and target strings like absent were produced with these same consonants as onsets (e.g., babsent, mabsent).

Figure 1. Example materials for stimulus creation. The highlighted portion of the CVC syllables was spliced onto the highlighted portion of the C+Target string to create jeebabsent and nibabsent. Stimuli for the sonorant consonant condition were created in the same way.

A large number of versions of CVCs like jeeb and jeem and of C+target strings like mabsent and babsent were then produced and recorded, and the most durationally similar among them were selected. Then, to create the final stimuli, the CVC contexts were spliced onto the C+target strings to produce, for example, jeebabsent [ʤiˈbæbsәnt] and jeemabsent [ʤiˈmæbsәnt]. Splicing

was done (at zero-crossings) so that the consonant recorded as an onset to the target would

become the intervocalic consonant. In other words, what remained in the final stimuli was the

CV portion of the recorded CVC context, followed by the C+target production (see Figure 1).

Thus, any coarticulatory information in the CV’s vowel should have been that of a tautosyllabic


coda consonant, but any (potential) phonetic cues to its syllabic affiliation during or after closure would have suggested a syllable-initial consonant. This also created stimuli that were durationally similar across the crucial sonority conditions, particularly with respect to targets themselves, which differed by only 4 ms on average (see Figure 2).

Figure 2. Mean durations (in milliseconds) for the initial CV contexts, intervocalic consonants, and target words in each of the two sonority conditions.
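For illustration, the splice itself might look as follows (a sketch only: the file names, the cut point, and the use of the numpy/soundfile libraries are all assumptions, as the paper does not state what software performed the splicing):

    import numpy as np
    import soundfile as sf

    def nearest_zero_crossing(x, i):
        """Return the sample index of the sign change in x closest to index i."""
        zc = np.where(np.diff(np.signbit(x).astype(np.int8)) != 0)[0]
        return int(zc[np.argmin(np.abs(zc - i))])

    cv, sr = sf.read('jeem_cvc.wav')        # hypothetical CVC recording (mono)
    c_target, _ = sf.read('mabsent.wav')    # hypothetical C+target recording
    # Cut the CVC at the zero-crossing nearest the (assumed) end of its CV
    # portion, then append the whole C+target token, so the onset-recorded
    # consonant becomes the intervocalic consonant.
    cut = nearest_zero_crossing(cv, int(0.18 * sr))
    sf.write('jeemabsent.wav', np.concatenate([cv[:cut], c_target]), sr)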

Vowel tenseness conditions were created by simply splicing the same production of the

target from the tense vowel condition onto the CVC context with a lax vowel, within each

sonority condition. That is, the same production of mabsent was used in both [ʤimæbsәnt] and

[nɪmæbsәnt], and the same production of babsent was used in both [ʤibæbsәnt] and [nɪbæbsәnt];

since the vowel tenseness manipulation was orthogonal to consonant manipulation, the same

consonant could be used. The 360 filler items were similarly recorded, although produced intact

(i.e., no splicing took place).

2.2. Participants

Participants were 33 native speakers of American English, mostly undergraduates from

California, who received course credit or monetary compensation.


2.3. Procedure

Participants served as listeners in a word-spotting experiment. Testing took place in a sound-

attenuating booth, over two sessions lasting approximately 20 minutes each. Participants were

told that they would hear a list of nonsense strings of speech, some of which would contain an

English word. Their task was to indicate when they heard a real English word by pressing a computer key as quickly as possible; immediately after pressing the key, they were to

then say that word aloud. If no word was heard in a string, subjects were to give no response, and

wait for the next trial. A brief practice session familiarized participants with the task and nature

of the stimuli. A within-subject design was used, with all participants hearing all targets in all

conditions. Stimulus items were divided into blocks such that a target word occurred once in

each block, in a different condition; blocks were counterbalanced with respect to tenseness and

sonority conditions, and fillers were distributed among the blocks. Stimuli were presented with

an inter-stimulus interval (ISI) of 3 seconds by a MATLAB script that collected RTs. Participants’ verbal responses were

also recorded for each trial to assess accuracy.
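A minimal sketch of this blocking scheme follows (the Latin-square rotation is an assumption; the paper states only that each target occurred once per block in a different condition and that blocks were counterbalanced):

    import random

    CONDITIONS = [('tense', 'obstruent'), ('lax', 'obstruent'),
                  ('tense', 'sonorant'), ('lax', 'sonorant')]
    targets = ['absent', 'eject', 'echo', 'occur']  # stand-ins for the 50 targets

    blocks = [[] for _ in CONDITIONS]
    for i, target in enumerate(targets):
        for b in range(len(CONDITIONS)):
            # Rotate the condition assignment so every block is balanced.
            blocks[b].append((target, CONDITIONS[(i + b) % len(CONDITIONS)]))

    for block in blocks:
        random.shuffle(block)  # fillers would be interleaved among these trials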

3. Results

Data were dropped from four participants who either did not follow instructions or did not return

for their second session, leaving twenty-nine participants whose accuracy and RT were analyzed.

Accurate responses (defined as in Norris et al., 1997, p. 204) with RTs within 2 standard deviations of the mean RT were used for the analyses.
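In pseudo-analysis terms, the trimming step amounts to the following (column and file names are hypothetical, and whether the cutoff was computed per participant or overall is not stated):

    import pandas as pd

    df = pd.read_csv('wordspotting_responses.csv')   # hypothetical raw data
    correct = df[df['accurate'] == 1]                # keep accurate responses
    m, sd = correct['rt'].mean(), correct['rt'].std()
    trimmed = correct[(correct['rt'] - m).abs() <= 2 * sd]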

Results for all conditions are shown in Table 2. To better understand these patterns, RTs and

accuracy were modeled using mixed-effects linear and logistic regression, respectively. In

addition to random intercepts for participant and item, initial models contained the following fixed-effects factors: preceding vowel tenseness, sonority of the intervocalic consonant, and stress; stimulus-level predictors were target duration, duration of the intervocalic consonant, duration of the preceding CV, and presentation of the target (i.e., the 1st, 2nd, 3rd, or 4th time the participant had encountered that particular target word during the experiment). In initial models the three linguistic variables were permitted to interact with one another, and with presentation. Non-contributing factors in the models were identified (using log-likelihood ratio tests) and removed from the final refitted models.

Table 2. Average RT (in ms, measured from target offset) and accuracy for strong-weak and weak-strong target words in the four experimental conditions.

Strong-Weak Targets
                      Tense Vowel Context    Lax Vowel Context
Obstruent C Context
  RT (ms)             403                    414
  Accuracy            87.60%                 89.60%
  Example             /ʤib/absent            /nɪb/absent
Sonorant C Context
  RT (ms)             383                    392
  Accuracy            93.30%                 91.90%
  Example             /ʤim/absent            /nɪm/absent

Weak-Strong Targets
                      Tense Vowel Context    Lax Vowel Context
Obstruent C Context
  RT (ms)             363                    347
  Accuracy            90.30%                 92.00%
  Example             /ʃɑv/eject             /ʃɪv/eject
Sonorant C Context
  RT (ms)             345                    350
  Accuracy            94.30%                 94.70%
  Example             /zɑm/eject             /zɛm/eject
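For concreteness, here is a sketch of the fit-and-compare procedure just described, in Python with statsmodels (an assumption: the paper does not name its statistical software, and its models also included item intercepts, which R's lme4 expresses more directly than statsmodels):

    import pandas as pd
    import statsmodels.formula.api as smf
    from scipy import stats

    trimmed = pd.read_csv('trimmed_responses.csv')  # output of the trimming step

    # Random intercepts for participant; fit by ML so log-likelihoods compare.
    full = smf.mixedlm('rt ~ presentation + target_dur + sonority + stress + tenseness',
                       data=trimmed, groups=trimmed['participant']).fit(reml=False)
    reduced = smf.mixedlm('rt ~ presentation + target_dur + sonority + stress',
                          data=trimmed, groups=trimmed['participant']).fit(reml=False)

    # Log-likelihood ratio test: does dropping tenseness hurt model fit?
    lr = 2 * (full.llf - reduced.llf)
    p = stats.chi2.sf(lr, df=1)
    print(reduced.summary() if p > .05 else full.summary())

The accuracy analysis would be the mixed-logistic analogue of the same procedure (e.g., lme4::glmer with a binomial link in R).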

The results of the best fit models of RTs and accuracy are shown in Tables 3 and 4,

respectively. First, presentation was significantly associated with lower RTs and higher

accuracy, indicating that, unsurprisingly, listeners were better at spotting a target word with each


successive time they encountered it during the experiment; the significant effect of target duration indicated that longer target words were also spotted more successfully.

Table 3. Estimates, standard errors, t-values, and p-values from the model of reaction times.

                  β         SE      t       p
(Intercept)       1012.15   30.50   33.18   < .0001
Presentation      -46.85    1.89    -24.84  < .0001
Target duration   -765.52   33.74   -22.69  < .0001
Sonority (+son)   -18.81    7.25    -2.59   < .01
Stress (SW)       -52.27    8.59    -6.09   < .0001

Table 4. Estimates, standard errors, z-values, and p-values from the logistic regression model of accuracy.

                  β       SE     z       p
(Intercept)       -1.96   0.70   -2.79   < .01
Presentation      0.83    0.06   14.71   < .0001
Target duration   4.89    1.10   4.45    < .0001
Sonority (+son)   0.61    0.11   5.55    < .0001

More interesting were the results for linguistic variables. Notably absent from models of both

speed and accuracy was the tenseness of the vowel in the CVC syllables—indicating this did not

influence segmentations. There was also a significant effect for stress, such that SW targets were

spotted more quickly (though not more accurately) than WS targets. Note that this is not easily

seen from looking at averages in Table 2, where WS words appear to be associated with shorter

RTs. This discrepancy results from the fact that WS targets were on average longer than SW

words, which is factored out by the model.

Crucially for present purposes, there was also a significant effect for sonority, indicating that

targets were spotted more quickly and more accurately when they followed sonorants. On

average (collapsing across vowel tenseness conditions), targets were identified 15 ms faster when

they followed a sonorant. Although there was no significant interaction with presentation



(indicating that sonority’s effect was robust even on later trials), the advantage was numerically

largest for the first (22 ms) and second (24 ms) presentations.

The results were further probed for a gradient effect of sonority on the parsing of intervocalic

consonants. If such an effect were present, we might expect that, for example, spotting absent

would be easier in a string like jeevabsent than in jeebabsent, since /v/ is a high-sonority

obstruent relative to /b/, given commonly assumed sonority hierarchies, such as the one in (2):

(2) Sonority scale for obstruents (e.g., Parker, 2002):

/ p t k ʧ / < / b d g ʤ / < / f θ s ʃ / < / v ð z ʒ /

The stimuli in the present experiment were not designed to address this question, since different

targets were paired with different obstruent consonants (i.e., we would be comparing

jee[b]absent versus vu[ʃ]echo versus chi[v]onion, etc.). However, since the stimuli used in the obstruent condition made use of an array of consonants (and multiple targets were paired with each), it was possible to exploit the data to address this question in post-hoc fashion.

RTs for target words were therefore binned according to their preceding intervocalic obstruent,

and mean RT was calculated for each bin of targets (this analysis was limited to RTs, and to first

presentations only). Figure 3 shows the results of this binning, ordered according to average RT

for each bin. A second round of modeling indicated that, when target duration was taken into

account, the only noteworthy difference was a marginally significant one between targets

following /g/ and those following /v/ (β = 144.9, p = .0996). However, given the lack of control over the target words in each group, we might have expected a far more random ordering than was observed; as Figure 3 shows, the rank ordering of obstruents closely tracked the sonority scale, with higher sonority tending to be associated with coda parsings. A larger study would be needed to explore this trend further.


Figure 3. Mean reaction times to targets, grouped by the obstruent consonant they followed and rank-ordered from slowest to fastest: /g/, /θ/, /d/, /ʤ/, /b/, /f/, /ʃ/, /s/, /v/, /ð/, /z/ (with 6, 6, 4, 2, 4, 9, 6, 4, 4, 2, and 2 targets per consonant, respectively). Error bars show standard error.
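The binning itself reduces to a group-by; the sketch below (column names hypothetical) adds a rough rank-correlation check that is an illustration, not an analysis from the paper:

    import pandas as pd
    from scipy.stats import spearmanr

    trimmed = pd.read_csv('trimmed_responses.csv')
    first = trimmed[(trimmed['presentation'] == 1) &
                    (trimmed['sonority'] == 'obstruent')]
    by_c = first.groupby('intervocalic_c')['rt'].agg(['mean', 'sem', 'count'])
    print(by_c.sort_values('mean', ascending=False))  # cf. the order in Figure 3

    # Hypothetical sonority indices for the obstruents used in the stimuli.
    son_index = {'b': 2, 'd': 2, 'g': 2, 'ʤ': 2, 'θ': 3, 'f': 3, 's': 3,
                 'ʃ': 3, 'v': 4, 'ð': 4, 'z': 4}
    cs = [c for c in by_c.index if c in son_index]
    rho, pval = spearmanr([son_index[c] for c in cs], by_c.loc[cs, 'mean'])
    # Under the gradient hypothesis rho should be negative: higher sonority,
    # faster target spotting.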

4. Discussion and Conclusion

The experiment presented here tested whether English-speaking listeners showed a bias that

could be understood in terms of syllable structure. The results showed very clearly that such a

bias exists; when positing word boundaries, listeners do not treat all intervocalic consonants

equivalently. Rather, they prefer to parse sonorants with the preceding vowel (jeem.absent), and

obstruents with the following vowel (jee.babsent), making vowel-initial words more difficult to

spot in the latter case. Thus, on-line segmentations by English-speakers show the same

preference that off-line syllabifications show, which also reflect cross-linguistic preferences for

sonority profiles for syllables. This is inconsistent with the claim that native language rhythm

class dictates the use of such a strategy, since listeners of English—the prototypically stress-timed language—seem to be guided by syllable structure as well. There was also some


suggestive evidence that sonority’s effect on parsing was gradient, as treatment of obstruents was

broadly consistent with the sonority scale. This finding, though interesting, requires further

testing (with more statistical power) to be verified.

One question is whether these sonority-based preferences could actually be explained by the

same mechanisms as other findings from word-spotting studies after all, i.e., by lexical statistics

(Cairns, Shillcock, Chater, & Levy, 1997; Daland & Pierrehumbert, 2011). The models in the

present study did not include estimates of the probabilities that various consonants appear in

word-initial or word-final positions. Therefore, it may be that the sonority scale and gradient

lexical statistics make similar predictions, in which case the segmentation mechanism could

possibly have only an indirect relation to sonority. Of course, this would not explain the relation

between sonority (as a phonetic concept) and such a structuring of the lexicon—or the cross-

linguistic generalizations that require reference to sonority sequencing.
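That alternative could be operationalized roughly as follows (a sketch under the assumption of a phonemically transcribed word list; it does not reproduce Cairns et al.'s or Daland and Pierrehumbert's actual models):

    from collections import Counter

    def final_initial_odds(lexicon):
        """For each consonant, how often it ends a word relative to beginning one.
        lexicon: iterable of phoneme tuples, e.g., ('ae', 'b', 's', 'ax', 'n', 't')."""
        final = Counter(w[-1] for w in lexicon)
        initial = Counter(w[0] for w in lexicon)
        return {c: final[c] / max(initial[c], 1)
                for c in set(final) | set(initial)}

    # High final-to-initial odds would bias a purely statistical parser toward
    # a coda parse, mimicking a sonority effect without referring to sonority.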

While this matter is left open for further study, it should be pointed out that such gradient

phonotactic knowledge has generally not been shown to influence segmentation in word-

spotting, possibly distinguishing this on-line task from off-line acceptability judgment (see

Albright, 2009). Notably, listeners in the present study were not found to avoid even

segmentations that would leave a preceding CV sequence with a lax vowel unparsed, a

categorically unattested form in English. This same finding has been reported by Norris et al.

(2001), and by Newman et al. (2011), who note that the apparent non-penalty for ill-formed (but

syllabic) residues may indicate that phonotactic information only becomes available post-

perceptually. They also note the possibility that violations on residues may be less crucial than

those on target onsets, consistent with previous findings (McQueen, 1998), and echoing claims about the primacy of onsets over codas in segmentation (van der Lugt, 2001; Dumay,


Frauenfelder, & Content, 2002). Therefore, while it is clear from the present study that rhythm

classes are not necessary to predict basic syllabic parsing, such mechanisms may be qualified by

independent constraints inherent to the parser.

Acknowledgements

[Acknowledgements to be added post-review].

References

Albright, A. (2009). Feature-based generalisation as a source of gradient acceptability. Phonology, 26, 9–41.

Berent, I., Steriade, D., Lennertz, T., & Vaknin, V. (2007). What we know about what we have never heard: Evidence from perceptual illusions. Cognition, 104, 591–630.

Cairns, P., Shillcock, R., Chater, N., & Levy, J. (1997). Bootstrapping word boundaries: A bottom-up corpus-based approach to speech segmentation. Cognitive Psychology, 33, 111–153.

Clements, G. (1990). The role of the sonority cycle in core syllabification. In J. Kingston & M. Beckman (Eds.), Papers in laboratory phonology 1: Between the grammar and physics of speech, 283–333. New York: Cambridge University Press.

Content, A., Kearns, R., & Frauenfelder, U. (2001). Boundaries versus onsets in syllabic segmentation. Journal of Memory and Language, 45, 177–199.

Cutler, A., Mehler, J., Norris, D., & Segui, J. (1986). The syllable's differing role in the segmentation of French and English. Journal of Memory and Language, 25, 385–400.

Cutler, A., & Norris, D. (1988). The role of strong syllables in segmentation for lexical access. Journal of Experimental Psychology: Human Perception and Performance, 14, 113–121.

Cutler, A., McQueen, J., Norris, D., & Somejuan, A. (2001). The roll of the silly ball. In E. Dupoux (Ed.), Language, brain and cognitive development: Essays in honor of Jacques Mehler, 181–194. Cambridge, MA: MIT Press.

Daland, R., Hayes, B., White, J., Garellek, M., Davis, A., & Norrmann, I. (2011). Explaining sonority projection effects. Phonology, 28, 197–234.

Daland, R., & Pierrehumbert, J. (2011). Learning diphone-based segmentation. Cognitive Science, 35, 119–155.

Dumay, N., Frauenfelder, U., & Content, A. (2002). The role of the syllable in lexical segmentation in French: Word-spotting data. Brain and Language, 81, 144–161.

Ettlinger, M., Finn, A., & Hudson Kam, C. (2012). The effect of sonority on word segmentation: Evidence for the use of a phonological universal. Cognitive Science, 36, 655–673.

Goslin, J., & Frauenfelder, U. (2001). A comparison of theoretical and human syllabification. Language and Speech, 44, 409–436.

Hanulikova, A., McQueen, J., & Mitterer, H. (2010). Possible words and fixed stress in the segmentation of Slovak speech. Quarterly Journal of Experimental Psychology, 63, 555–579.

Hooper, J. (1976). An introduction to natural generative phonology. New York: Academic Press.

Jespersen, O. (1904). Lehrbuch der Phonetik. Leipzig & Berlin: Teubner.

van der Lugt, A. (2001). The use of sequential probabilities in the segmentation of speech. Perception & Psychophysics, 63, 811–823.

McQueen, J. (1996). Word spotting. Language and Cognitive Processes, 11, 695–699.

McQueen, J. (1998). Segmentation of continuous speech using phonotactics. Journal of Memory and Language, 39, 21–46.

Moreton, E., Feng, G., & Smith, J. (2008). Syllabification, sonority, and perception: New evidence from a language game. In R. L. Edwards, P. J. Midtlyng, C. L. Sprague, & K. G. Stensrud (Eds.), Proceedings of the Chicago Linguistic Society, 41, 341–355.

Murray, R., & Vennemann, T. (1983). Sound change and syllable structure in Germanic phonology. Language, 59, 514–528.

Newman, R., Sawusch, J., & Wunnenberg, T. (2011). Cues and cue interactions in segmenting words in fluent speech. Journal of Memory and Language, 64(4), 460–476.

Norris, D., McQueen, J., Cutler, A., & Butterfield, S. (1997). The Possible Word Constraint in the segmentation of continuous speech. Cognitive Psychology, 34, 191–243.

Norris, D., McQueen, J., Cutler, A., Butterfield, S., & Kearns, R. (2001). Language-universal constraints on speech segmentation. Language and Cognitive Processes, 16, 637–660.

Parker, S. (2002). Quantifying the sonority hierarchy. Ph.D. dissertation, University of Massachusetts, Amherst.

Selkirk, E. (1984). On the major class features and syllable theory. In M. Aronoff & R. Oehrle (Eds.), Language sound structure, 107–136. Cambridge, MA: MIT Press.

Shatzman, K., & McQueen, J. (2006). Segment duration as a cue to word boundaries in spoken-word recognition. Perception and Psychophysics, 68, 1–16.

Sievers, E. (1881). Grundzüge der Phonetik, zur Einführung in das Studium der Lautlehre der indogermanischen Sprachen. Leipzig: Breitkopf & Härtel.

Steriade, D. (1982). Greek prosodies and the nature of syllabification. Ph.D. dissertation, MIT.

Treiman, R., & Danis, C. (1988). Syllabification of intervocalic consonants. Journal of Memory and Language, 27, 87–104.

Vennemann, T. (1988). Preference laws for syllable structure and the explanation of sound change. Berlin: Mouton de Gruyter.


Supplementary Materials

Complete list of target words and CVC conditions used in the word-spotting experiment. Targets are shown in English orthography, initial CVC syllables in broad IPA transcription.

(SW Targets)

Obstruent (Tense)    Obstruent (Lax)    Sonorant (Tense)    Sonorant (Lax)
/dɑv/over            /mɪv/over          /dɑm/over           /mɪm/over
/dɔɪg/image          /dεg/image         /dɔɪm/image         /dεm/image
/fɔɪz/under          /fεz/under         /fɔɪm/under         /fɪm/under
/fuð/album           /vʊð/album         /fun/album          /vʊn/album
/fuʃ/echo            /fʊʃ/echo          /fum/echo           /fʊm/echo
/keɪθ/angle          /kεθ/angle         /keɪl/angle         /kεl/angle
/kus/empty           /kʊs/empty         /kum/empty          /kʊm/empty
/lɔɪb/expect         /lʌb/expect        /lɔɪm/expect        /lʌm/expect
/muʃ/average         /mɪʃ/average       /mul/average        /gɛl/average
/neɪs/effort         /nεs/effort        /neɪn/effort        /nεn/effort
/θeɪf/enter          /vɛf/enter         /zin/enter          /næn/enter
/θeɪf/extra          /θεf/extra         /θeɪm/extra         /θεm/extra
/pɔɪd/uncle          /dʊd/uncle         /pɔɪn/uncle         /dʊn/uncle
/ʧɑθ/ancient         /ʧʌθ/ancient       /teɪn/ancient       /tʊn/ancient
/ʧiv/onion           /ʃɪv/onion         /plul/onion         /plεl/onion
/vif/anxious         /vʌf/anxious       /vin/anxious        /vεn/anxious
/vub/other           /vʊb/other         /vul/other          /vʌl/other
/vuʤ/ever            /vεʤ/ever          /vum/ever           /vεm/ever
/vuf/expert          /væf/expert        /vɑm/expert         /væm/expert
/vuθ/either          /gʊθ/either        /vul/either         /gʊl/either
/vus/ugly            /vʊs/ugly          /vun/ugly           /vʊn/ugly
/zeɪf/oven           /zεf/oven          /zeɪn/oven          /sεn/oven
/zeɪv/action         /zæv/action        /zɑm/action         /zæm/action
/zɔɪg/ankle          /zʌg/ankle         /zɔɪn/ankle         /zʌn/ankle
/ʤib/absent          /nɪb/absent        /ʤim/absent         /nɪm/absent

(WS Targets)

Obstruent (Tense)    Obstruent (Lax)    Sonorant (Tense)    Sonorant (Lax)
/bus/against         /bʊs/against       /fun/against        /fʊn/against
/ʤaɪf/accept         /ʤæf/accept        /ʤaɪm/accept        /gæm/accept
/dɔɪg/allow          /dʊg/allow         /dɔɪm/allow         /dεm/allow
/faɪʤ/attach         /bʊʤ/attach        /faɪm/attach        /fæm/attach
/fug/astound         /fεg/astound       /fum/astound        /fεm/astound
/fuʃ/occur           /fʊʃ/occur         /fun/occur          /fʊn/occur
/gib/immerse         /gɪb/immerse       /θul/immerse        /dæl/immerse
/giʤ/annoy           /gʌʤ/annoy         /zin/annoy          /zʊn/annoy
/gig/enough          /nʌg/enough        /blim/enough        /nɪm/enough
/giz/erect           /gεz/erect         /gin/erect          /gɪn/erect
/keɪθ/among          /kεθ/among         /bul/among          /ʧʊl/among
/θeɪʃ/adopt          /θεʃ/adopt         /θeɪn/adopt         /θεn/adopt
/θeɪʃ/exist          /θʌʃ/exist         /θeɪn/exist         /θʌn/exist
/ʃɑv/eject           /ʃɪv/eject         /zɑm/eject          /zεm/eject
/ʧeɪð/exempt         /ʧɪð/exempt        /ʧeɪm/exempt        /kεm/exempt
/teɪθ/applaud        /tæθ/applaud       /teɪn/applaud       /gεn/applaud
/vif/agree           /vɪf/agree         /vin/agree          /vɪn/agree
/voʊd/avenge         /vɪd/avenge        /voʊn/avenge        /vɪn/avenge
/vuʤ/effect          /vʊʤ/effect        /vul/effect         /vʊl/effect
/vuθ/achieve         /væθ/achieve       /vul/achieve        /fæl/achieve
/vuʃ/object          /vʊʃ/object        /vul/object         /vʊl/object
/zeɪf/obsessed       /zεf/obsessed      /zeɪm/obsessed      /zεm/obsessed
/zɔɪg/exert          /zʊg/exert         /zɔɪn/exert         /zʊn/exert
/zif/elect           /zɪf/elect         /zin/elect          /zɪn/elect
/vuʃ/evict           /vʊʃ/evict         /vum/evict          /vʊm/evict