Linguistics Colloquium, Tessa Bent, Indiana University
Abstract: Substantial variability in the speech signal arises from within- and across-talker factors (e.g., speaking style, health status, gender, dialect, native language). Rather than viewing this variability as noise, there is now ample evidence that listeners encode both linguistic and social-indexical information in highly detailed cognitive representations and that these information sources interact in ways that typically give rise to robust speech comprehension. However, some sources of variability can cause substantial decrements in speech comprehension, particularly for listeners with developing linguistic systems or for listeners of any age in adverse listening environments. In this talk, I will describe a series of experiments that tested children's and adults' abilities to extract linguistic information from unfamiliar regional dialects and nonnative accents using sentence and word recognition tasks (i.e., hear a word/sentence in quiet or in noise and repeat it back). Results showed that school-aged children do not have fully adult-like abilities to perceive nonnative-accented speech. In fact, fully mature word identification abilities may not emerge until adolescence. Further, the combination of an unfamiliar accent or dialect and noise causes children substantial difficulty and suggests that their representations of unfamiliar accents and dialects are fragile. Although children's abilities to map these unfamiliar pronunciations onto words in their lexicons are still developing, young children can capitalize on contextual cues when presented with nonnative-accented speech. However, early school-aged children's abilities to benefit from this top-down information during the perception of unfamiliar accents are not as robust as adults'.
These studies suggest that children may not have developed the necessary cognitive-linguistic skills or accrued sufficient linguistic experience to promote fully mature bottom-up or top-down processing of speech that deviates from home dialect norms. Continued investigations are needed to build speech perception models that can fully explain the interplay between linguistic processing and socio-indexical variables during speech comprehension, including cases in which there is a dialect or native language mismatch between the talker and listener. [Work supported by the National Science Foundation (grant number 1461039).]