Benjamin Schmeiser is an associate professor of Spanish Linguistics at Illinois State University. He earned his PhD in Spanish Linguistics, with a specialization in Phonetics and Phonology, from the University of California, Davis in 2006.

Since entering the teaching field in 1994, he has taught Spanish at many levels, from elementary, junior high, and high school to junior college and university. He has also taught English and Portuguese at the university level. As an assistant professor, he received Honorable Mention for excellence in teaching in 2011, and he was awarded tenure in 2012. As an associate professor, he was co-recipient of the Kenneth A. and Mary Ann Shaw Teaching Fellowship for the 2019-2020 academic year. He has twice been recognized as an 'MVP professor' by student-athletes. He teaches courses in Spanish (linguistics, grammar, writing, conversation), Portuguese, and English (European film). Courses:

215.001 Introduction to Spanish Linguistics
214.004 Oral Communication in Spanish

His research concentrates on Spanish, Portuguese, and Pali, as well as on phonetics and phonology in cross-linguistic terms. He has taught, conducted fieldwork, presented, or published his research in the United States, Brazil, Canada, England, Finland, Greece, Guatemala, Japan, Poland, Portugal, and Spain. Within the United States, he has taught in five states: California, Illinois, Indiana, Iowa, and Kentucky.

In his free time, he enjoys spending time with his family and friends, reading, listening to music, watching sports and films, and exercising. He is a two-time marathon finisher.
1.1 Experimental approach
Over the last ten years, my research has been defined by two principal qualities. First, it sits at the phonetics-phonology interface, that is, phonetically guided research in phonology. In general terms, this means that the abstract frameworks used to describe human speech must be grounded in fine-grained phonetic detail (e.g., my research on svarabhakti vowels). Until the late 1990s, there was often a separation between those who did phonetics (more concrete) and those who did phonology (more abstract). My work is part of a movement that synthesizes the two approaches.
Second, with the exception of my historically oriented work, my research is defined by its use of laboratory phonology. This approach differs from past research, which often cited data from previous studies; instead, the researcher analyzes data from participants whose speech was recorded in a controlled environment. The data are then painstakingly transcribed in phonetic symbols and analyzed. In my research, I collect data in the field and analyze them using current methods and software: I conduct all spectrographic and waveform analysis in Speech Analyzer 2.6, all audio file editing in Sound Forge, and all statistical analysis in ‘R’, the program most commonly used in my field.
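The spectrographic and waveform analysis described above can be sketched in code. The snippet below is a hypothetical Python illustration (the author's actual pipeline uses Speech Analyzer, Sound Forge, and R): it takes a digitized signal and finds its dominant frequency with a Fourier transform, the operation that underlies a spectrogram.

```python
# Hypothetical illustration of basic acoustic measurement; not the
# author's pipeline, which uses Speech Analyzer, Sound Forge, and R.
import numpy as np

def dominant_frequency(samples, sample_rate):
    """Return the frequency (Hz) carrying the most spectral energy."""
    spectrum = np.abs(np.fft.rfft(samples))               # magnitude spectrum
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

# Stand-in for field-recorded speech: one second of a 440 Hz tone
# sampled at 16 kHz.
rate = 16000
t = np.arange(rate) / rate
tone = np.sin(2 * np.pi * 440 * t)
print(dominant_frequency(tone, rate))  # → 440.0
```

In practice, the same measurement would be applied over short windows of recorded speech to track spectral structure through time, for example across a svarabhakti vowel.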
1.2 My approach
My research interests center on the properties of sounds within a dynamic language system. Languages generally contain around forty sounds used in human speech. Studies over the last fifteen years have fundamentally changed our view of how we process these sounds in oral communication. They have shown with empirical data that the sounds in a word are not a collection of independent, static units, like pearls on a necklace. Rather, they are interdependent, dynamic units, called ‘gestures’ in Articulatory Phonology (henceforth, AP) (Browman & Goldstein 1992; Gafos et al. in press). In this approach, a gesture is a dynamically defined articulatory movement that produces a constriction in the vocal tract; for this discussion, a gesture can be understood roughly as the set of movements required to produce a given sound in human language. In AP, the vowel is the underlying gesture in a syllable, and consonants (i.e., constrictions) are ‘placed’ onto the vocalic gesture.
1.3 Research motivations
My work is motivated by the premise that human speech involves intricate timing relationships between adjacent consonants, called ‘consonant clusters’. Consonant clusters pose an intriguing challenge to the linguist in that each language allows different cluster combinations, both within and across syllables. My research seeks to answer three fundamental questions. First, what is the governing force behind changes in these timing relationships? Second, how do changes in the timing relationships between sounds (i.e., gestures), particularly adjacent sounds, alter a particular language? Third, at a theoretical level, what implications do these changes have for a gesture-based approach? In what follows, I discuss how I have addressed these questions.
Accepted; to be presented 6-8 Sep: “On Spanish Trill Production Improvement for L1 English Learners.” Poster presentation, Pronunciation in Second Language Learning and Teaching 2018, Iowa State University, Ames, Iowa.