This study investigated the effects of repeated evaluation and repeated exposure on grammatical acceptability ratings for both acceptable and unacceptable sentence types. In Experiment 1, subjects in the experimental group rated multiple examples of two ungrammatical sentence types (ungrammatical binding and double object with dative-only verb) and two difficult-to-process sentence types [center-embedded (2) and garden-path ambiguous relative], along with matched grammatical/non-difficult sentences, before rating a final set of experimental sentences. Subjects in the control group rated unrelated sentences during the exposure period before rating the experimental sentences. Subjects in the experimental group rated both grammatical and ungrammatical sentences as more acceptable after repeated evaluation than subjects in the control group. In Experiment 2, subjects answered a comprehension question after reading each sentence during the exposure period. Subjects in the experimental group rated garden-path and center-embedded (1) sentences as more acceptable after comprehension exposure than subjects in the control group. The results are consistent with increased fluency of comprehension being misattributed as a change in acceptability.
In Japanese, vowel duration can distinguish word meanings. For infants to learn this phonemic contrast through simple distributional analyses, there must be reliable differences in the duration of short and long vowels, and the frequency distribution of vowels must make these differences salient in the input. In this study, we evaluate these requirements of phonemic learning by analyzing the duration of vowels in over 11 hours of Japanese infant-directed speech. We found that long vowels are substantially longer than short vowels in the input directed to infants, for each of the five oral vowels. However, we also found that learning phonemic length from the overall distribution of vowel durations would be difficult for a simple distributional learner, because of the large base-rate effect (i.e., 94% of vowels are short) and because of the many factors that influence vowel duration (e.g., intonational phrase boundaries, word boundaries, and vowel height). A successful learner would therefore need to take additional factors, such as prosodic and lexical cues, into account in order to discover that duration can contrast word meanings in Japanese. These findings highlight the importance of considering the naturalistic distributions of lexicons and acoustic cues when modeling early phonemic learning.
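To make the base-rate problem concrete, here is a minimal sketch of one possible "simple distributional learner": a two-component Gaussian mixture fit to vowel durations. The duration means, standard deviations, and the 94% short-vowel base rate below are illustrative assumptions, not the values measured in the corpus, and the mixture model is only one way such a learner could be implemented.

```python
# Minimal sketch of a "simple distributional learner" for vowel length.
# All numbers (means, SDs, 94% short-vowel base rate) are illustrative
# assumptions, not values measured in the infant-directed speech corpus.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
n = 10_000
n_short = int(0.94 * n)                    # base-rate effect: 94% of vowels are short
short = rng.normal(70, 25, n_short)        # hypothetical short-vowel durations (ms)
long_ = rng.normal(120, 35, n - n_short)   # hypothetical long-vowel durations (ms)
durations = np.concatenate([short, long_]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(durations)
print("component means (ms):", np.sort(gmm.means_.ravel()))
print("component weights:   ", np.sort(gmm.weights_))
# With heavy overlap and a skewed base rate, the recovered components need not
# align with the short/long phonemic categories, which is the difficulty the
# corpus analysis points to for a purely distributional learner.
```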
The labial-coronal effect was originally described as a bias toward initiating words with a labial consonant-vowel-coronal consonant (LC) sequence. This bias has been explained by constraints on the human speech production system, and its perceptual correlates have motivated the suggestion of a perception-production link. However, previous studies exclusively considered languages in which LC sequences are globally more frequent than their counterpart. The current study examined the LC bias in speakers of Japanese, a language that has been claimed to possess more CL than LC sequences. We first conducted an analysis of Japanese corpora that qualified this claim and identified a subgroup of consonants (plosives) exhibiting a CL bias. Second, focusing on this subgroup of consonants, we found diverging results for production and perception: Japanese speakers exhibited an articulatory LC bias but a perceptual CL bias. The perceptual CL bias, however, was modulated by language of presentation and was present only for stimuli recorded by a Japanese, but not a French, speaker. A further experiment with native speakers of French showed the opposite effect, with an LC bias for French stimuli only. Overall, we find support for a universal, articulatorily motivated LC bias in production, supporting a motor explanation of the LC effect, whereas perceptual biases are influenced by the distributional frequencies of the native language.
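As a rough illustration of what such a corpus count involves, the sketch below tallies LC versus CL consonant orderings over a toy romanized word list. The word list, consonant classes, and parsing heuristic are hypothetical simplifications for exposition, not the corpora or coding scheme used in the study.

```python
# Toy LC vs. CL count over a romanized word list (illustrative only).
from collections import Counter

LABIALS = set("pbmfw")     # assumed labial consonant letters (romaji)
CORONALS = set("tdnszr")   # assumed coronal consonant letters (romaji)

def first_two_consonants(word):
    """Naive romaji parse: keep the first two non-vowel letters."""
    return [ch for ch in word.lower() if ch not in "aeiou"][:2]

def classify(word):
    cons = first_two_consonants(word)
    if len(cons) < 2:
        return None
    c1, c2 = cons
    if c1 in LABIALS and c2 in CORONALS:
        return "LC"
    if c1 in CORONALS and c2 in LABIALS:
        return "CL"
    return None

# Hypothetical toy word list, not the Japanese corpora analyzed in the study.
words = ["pato", "tabi", "mato", "nami", "buta", "tama"]
print(Counter(c for c in map(classify, words) if c))
# -> Counter({'LC': 3, 'CL': 3}) for this toy list
```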
Numerous studies have reported an effect of prosodic information on parsing, but whether prosody can influence even the initial parsing decision remains unclear. In a visual-world eye-tracking experiment, we investigated the influence of contrastive intonation and visual context on the processing of temporarily ambiguous relative clause sentences in Japanese. Our results showed that listeners used the prosodic cue to make a structural prediction before hearing disambiguating information. Importantly, the effect was limited to cases where the visual scene provided an appropriate context for the prosodic cue, ruling out the explanation that listeners simply associate marked prosodic information with a less frequent structure. Furthermore, the influence of the prosodic information was also evident following the disambiguating information, in a way that reflected the initial analysis. The current study demonstrates that prosody, when provided with an appropriate context, influences both the initial syntactic analysis and the subsequent cost at the disambiguating information. The results also provide the first evidence for pre-head structural prediction driven by prosodic and contextual information in a head-final construction.
Two eye-tracking experiments tested how pitch prominence on a prenominal adjective affects contrast resolution in Japanese adult and 6-year-old listeners. Participants located two animals in succession on displays with multiple colored animals. In Experiment 1, adults' fixations to the contrastive target (pink cat → GREEN cat) were facilitated by a pitch expansion on the adjective, while an infelicitous pitch expansion (purple rabbit → ORANGE monkey) led to a garden-path effect, i.e., frequent fixations to the incorrect target (orange rabbit). In 6-year-olds, only the facilitation effect surfaced. Hypothesizing that the interval between the two questions may not have given children enough time to overcome their tendency to perseverate on the first target, Experiment 2 used longer intervals and confirmed a garden-path effect in 6-year-olds. These results demonstrate that Japanese 6-year-olds can make use of contrast-marking pitch prominence when time allows the establishment of a proper discourse representation.
The Japanese language has single/geminate obstruents characterized by a durational difference in closure/frication as part of the phonemic repertoire used to distinguish word meanings. We first evaluated infants' abilities to discriminate naturally uttered single/geminate obstruents (/pata/ and /patta/) using the visual habituation-dishabituation method. The results revealed that 9.5-month-old Japanese infants were able to make this discrimination, t(21) = 2.119, p = .046, paired t test, whereas 4-month-olds were not, t(25) = 0.395, p = .696, paired t test. To examine how acoustic correlates (covarying cues) are associated with the contrast discrimination, we tested Japanese infants at 9.5 and 11.5 months of age with three combinations of natural and manipulated stimuli. The 11.5-month-olds were able to discriminate the naturally uttered pair (/pata/ vs. /patta/), t(20) = 4.680, p < .001, paired t test.
Recent studies on the acquisition of semantics have argued that knowledge of the universal quantifier is adult-like throughout development. However, there are domains where children still exhibit non-adult-like universal quantification, and arguments for the early mastery of the relevant semantic knowledge do not explain what causes such non-adult-like interpretations. The present study investigates Japanese four- and five-year-old children's atypical universal quantification in light of the development of cognitive control. We hypothesized that children's still-developing cognitive control contributes to their atypical universal quantification. Using a combined eye-tracking and interpretation task together with a non-linguistic measure of cognitive control, we revealed a link between the achievement of adult-like universal quantification and the development of flexible perspective switching. We argue that the development of cognitive control is one of the factors that contribute to children's processing of semantics.
In adults, native language phonology has strong perceptual effects. Previous work has shown that Japanese speakers, unlike French speakers, break up illegal sequences of consonants with illusory vowels: they report hearing abna as abuna. To study the development of phonological grammar, we compared Japanese and French infants in a discrimination task. In Experiment 1, we observed that 14-month-old Japanese infants, in contrast to French infants, failed to discriminate phonetically varied sets of abna-type and abuna-type stimuli. In Experiment 2, 8-month-old French and Japanese infants did not differ significantly from each other. In Experiment 3, we found that, like adults, Japanese infants can discriminate abna from abuna when phonetic variability is reduced (single item). These results show that the phonologically induced /u/ illusion is already experienced by Japanese infants at the age of 14 months. Hence, before having acquired many words of their language, they have grasped enough of their native phonological grammar to constrain their perception of speech sound sequences.
This study uses near-infrared spectroscopy in young infants to elucidate the nature of functional cerebral processing for speech. Previous imaging studies of infants' speech perception revealed left-lateralized responses to the native language. However, it is unclear whether these activations were due to language per se rather than to some low-level acoustic correlate of spoken language. Here we compare native (L1) and non-native (L2) languages with three different nonspeech conditions, including emotional voices, monkey calls, and phase-scrambled sounds, which provide more stringent controls. Hemodynamic responses to these stimuli were measured in the temporal areas of Japanese 4-month-olds. The results show clear left-lateralized responses to speech, most prominently to L1, as opposed to various activation patterns in the nonspeech conditions. Furthermore, implementing a new analysis method designed for infants, we discovered a slower hemodynamic time course in awake infants. Our results are largely explained by signal-driven auditory processing. However, stronger activations to L1 than to L2 indicate a language-specific neural factor that modulates these responses. This study is the first to discover a significantly higher sensitivity to L1 in 4-month-olds and reveals a neural precursor of the functional specialization for the higher cognitive network.
Developmental stuttering is a speech fluency disorder characterized by repetitions, prolongations, and silent blocks, especially in the initial parts of utterances. Although the symptoms are motor related, people who stutter show abnormal patterns of cerebral hemispheric dominance in both anterior and posterior language areas. It is unknown whether the abnormal functional lateralization in the posterior language area starts during childhood or emerges as a consequence of many years of stuttering. To address this issue, we used near-infrared spectroscopy to measure the lateralization of hemodynamic responses in the auditory cortex during auditory speech processing in adults and children who stutter, including preschoolers. We used the analysis-resynthesis technique to prepare two types of stimuli: (i) a phonemic contrast embedded in Japanese spoken words (/itta/ vs. /itte/) and (ii) a prosodic contrast (/itta/ vs. /itta?/). In the baseline blocks, only /itta/ tokens were presented. In phonemic contrast blocks, /itta/ and /itte/ tokens were presented pseudo-randomly; in prosodic contrast blocks, /itta/ and /itta?/ tokens were presented. In adults and children who do not stutter, there was a clear left-hemispheric advantage for the phonemic contrast compared to the prosodic contrast. Adults and children who stutter, however, showed no significant difference between the two stimulus conditions. A subject-by-subject analysis revealed that not a single subject who stutters showed a left advantage for the phonemic contrast over the prosodic contrast condition. These results indicate that functional lateralization for auditory speech processing is in disarray among those who stutter, even at preschool age, and shed light on the neural pathophysiology of developmental stuttering.
Perceptual grouping has traditionally been thought to be governed by innate, universal principles. However, recent work has found differences in Japanese and English speakers' non-linguistic perceptual grouping, implicating language in non-linguistic perceptual processes (Iversen, Patel, & Ohgushi, 2008). Two experiments tested Japanese- and English-learning infants at 5-6 and 7-8 months of age to explore the development of grouping preferences. At 5-6 months, neither the Japanese nor the English infants revealed any systematic perceptual biases. However, by 7-8 months, the same age at which linguistic phrasal grouping develops, infants had developed non-linguistic grouping preferences consistent with their language's structure (and with the grouping biases found in adulthood). These results reveal an early difference in non-linguistic perception between infants growing up in different language environments. The possibility that infants' linguistic phrasal grouping is bootstrapped by abstract perceptual principles is discussed.
Adults typically address infants in a special speech mode called infant-directed speech (IDS). IDS is characterized by a special prosody (i.e., higher pitch, slower rate, and hyperarticulation) and a special lexicon ("baby talk"). Here we investigated which areas of the adult brain are involved in processing IDS, which aspects of IDS (prosodic or lexical) are processed, to what extent the experience of being a parent affects the way adults process IDS, and how gender and personality influence IDS processing. Using functional magnetic resonance imaging, we found that mothers with preverbal infants showed enhanced activation in the auditory dorsal pathway of the language areas, regardless of whether they listened to the prosodic or the lexical component of IDS. We also found that mothers with higher extroversion scores showed greater cortical activation in speech-related motor areas than mothers with lower extroversion scores. These increased cortical activation levels were not found for fathers, non-parents, or mothers with older children.
Infants' speech perception abilities change through the first year of life, from broad sensitivity to a wide range of speech contrasts to finer attunement to the native language. What remains unclear, however, is how this perceptual change relates to brain responses to native language contrasts in terms of the functional specialization of the left and right hemispheres. Here, to elucidate the developmental changes in functional lateralization accompanying this perceptual change, we conducted two experiments on Japanese infants using Japanese lexical pitch-accent, in which the pitch pattern within a word distinguishes word meanings. In the first, behavioral experiment, using visual habituation, we confirmed that infants at both 4 and 10 months are sensitive to a lexical pitch-accent pattern change embedded in disyllabic words. In the second experiment, near-infrared spectroscopy was used to measure cortical hemodynamic responses in the left and right hemispheres to the same lexical pitch-accent pattern changes and their pure tone counterparts. We found that brain responses to the pitch change within words differed between 4- and 10-month-old infants in terms of functional lateralization: left-hemisphere dominance for the perception of the pitch change embedded in words was seen only in the 10-month-olds. These results suggest that the perceptual change for Japanese lexical pitch-accent may be related to a shift in functional lateralization from bilateral to left-hemisphere dominance.
Japanese has a vowel duration contrast as one component of its language-specific phonemic repertoire used to distinguish word meanings. It is not clear, however, how sensitivity to vowel duration develops in a linguistic context. In the present study, using the visual habituation-dishabituation method, the authors evaluated infants' abilities to discriminate Japanese long and short vowels embedded in two-syllable words (/mana/ vs. /ma:na/). The results revealed that 4-month-old Japanese infants (n = 32) failed to discriminate the contrast (p = .676), whereas 9.5-month-olds (n = 33) showed the ability to discriminate it (p = .014). The 7.5-month-olds did not show positive evidence of discriminating the contrast either when edited stimuli were used (n = 33; p = .275) or when naturally uttered stimuli were used (n = 33; p = .189). By contrast, the 4-month-olds (n = 24) showed sensitivity to a vowel quality change (/mana/ vs. /mina/; p = .034). These results indicate that Japanese infants acquire sensitivity to the long-short vowel contrast between 7.5 and 9.5 months of age and that the developmental course of phonemic categories based on durational changes differs from that based on quality changes.