Speech Perception
Speech perception is the ability to comprehend speech through listening. Listeners are constantly bombarded by acoustic energy, and the challenge is to translate that energy into meaningful linguistic information. Speech perception does not depend on extracting simple, invariant acoustic patterns from the speech waveform: a sound's acoustic pattern is complex and highly variable, depending on the sounds that precede and follow it (Moore, 1997). According to Fant (1973), speech perception is a process of both successive and concurrent identification at a series of progressively more abstract levels of linguistic structure.
Nature of Speech Sounds
Phonemes are the smallest units of sound in a language; words are formed by combining them. English has approximately 40 phonemes, which are defined in terms of what is perceived rather than in terms of fixed acoustic patterns. Phonemes are abstract, subjective entities, often specified by how they are produced. In isolation they carry no meaning, but in combination they form words (Moore, 1997).
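To make the combinatorial point concrete, here is a minimal Python sketch. It is purely illustrative and not drawn from the sources cited here; the ARPAbet-style symbols and the three-word lexicon are assumptions chosen for the example. The same three phonemes, meaningless alone, form different words depending on their order:

```python
# Toy lexicon keyed by phoneme sequences (ARPAbet-style symbols are
# an assumption for readability; any consistent symbol set would do).
LEXICON = {
    ("K", "AE", "T"): "cat",
    ("T", "AE", "K"): "tack",
    ("AE", "K", "T"): "act",
}

def decode(phonemes):
    """Return the word a phoneme sequence spells, or None if the
    sequence forms no word in this tiny lexicon."""
    return LEXICON.get(tuple(phonemes))

print(decode(["K", "AE", "T"]))  # cat
print(decode(["T", "AE", "K"]))  # tack
print(decode(["T", "K", "AE"]))  # None: same phonemes, no word
```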
Speech sounds divide into vowels and consonants. Consonants are produced by constricting the vocal tract at some point along its length, and they are classified by the degree and nature of that constriction into stops, affricates, fricatives, nasals, and approximants. Vowels are usually voiced and are relatively stable over time (Moore, 1997).
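The manner classes can be written down as a simple lookup. This is an illustrative sketch only; the symbols shown are a partial inventory chosen for the example, not a complete chart from Moore (1997):

```python
# Partial classification of English consonants by manner of
# articulation, i.e. the degree and nature of the constriction.
MANNER = {
    "stop":        ["p", "b", "t", "d", "k", "g"],
    "affricate":   ["tʃ", "dʒ"],
    "fricative":   ["f", "v", "s", "z", "ʃ", "ʒ", "θ", "ð", "h"],
    "nasal":       ["m", "n", "ŋ"],
    "approximant": ["w", "j", "r", "l"],
}

def manner_of(consonant):
    """Return the manner class for a consonant symbol, if listed."""
    for manner, members in MANNER.items():
        if consonant in members:
            return manner
    return None

print(manner_of("s"))  # fricative
print(manner_of("m"))  # nasal
```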
Categorical Perception
Categorical perception implies definite identification of the stimuli. The central claim in this area is that a listener can correctly discriminate speech sounds only to the extent that they are identified as belonging to different phonemic categories.
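A standard way to make this concrete is an identification function over an acoustic continuum, such as voice onset time (VOT) between /b/ and /p/. The Python sketch below is a simplified illustration, not a model from the works cited here: the logistic boundary at 25 ms and its slope are assumed values, and the discrimination rule is a simplified version of the classic idea that listeners discriminate two stimuli only insofar as they label them differently.

```python
import math

def p_identify_as_p(vot_ms, boundary=25.0, slope=0.4):
    """Probability of labeling a stimulus /p/ given its voice onset
    time in ms. Logistic curve; the boundary and slope are assumed,
    illustrative values, not fitted to data."""
    return 1.0 / (1.0 + math.exp(-slope * (vot_ms - boundary)))

def predicted_discrimination(vot_a, vot_b):
    """Simplified labeling-based prediction: accuracy in a
    two-alternative task is chance (0.5) plus half the difference
    between the two stimuli's labeling probabilities."""
    diff = abs(p_identify_as_p(vot_a) - p_identify_as_p(vot_b))
    return 0.5 + 0.5 * diff

# Equal 10 ms steps along the continuum: only the pair that straddles
# the category boundary is predicted to be easy to discriminate.
for a, b in [(0, 10), (10, 20), (20, 30), (30, 40), (40, 50)]:
    print(f"{a:2d} ms vs {b:2d} ms -> "
          f"predicted accuracy: {predicted_discrimination(a, b):.2f}")
```

Each pair is physically separated by the same 10 ms, yet the predicted accuracy stays near chance everywhere except across the category boundary, which is the signature of categorical perception.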
References
...IT Press.
Liberman, A. M., & Mattingly, I. G. (1985). The motor theory of speech perception revised. Cognition, 21, 1-36.
Lobacz, P. (1984). Processing and Decoding the Signal in Speech Perception. Hamburg: Helmut Buske Verlag.
Luce, P. A., & Pisoni, D. B. (1986). Trading relations, acoustic cue integration, and context effects in speech perception. In M. E. H. Schouten (Ed.), The Psychophysics of Speech Perception.
Moore, B. C. J. (1997). An Introduction to the Psychology of Hearing (4th ed.). San Diego, CA: Academic Press.
Stevens, K. N. (1986). Models of phonetic recognition II: A feature-based model of speech recognition. In P. Mermelstein (Ed.), Montreal Satellite Symposium on Speech Recognition.
Studdert-Kennedy, M., & Shankweiler, D. (1970). Hemispheric specialization for speech perception. Journal of the Acoustical Society of America, 48, 579-592.