hearing test one

Chapters one through five
spondees
are two-syllable words spoken with equal stress on each syllable
parameters of hearing loss (4)
1. what is the severity
2. when did the loss begin
3. what is the cause
4. how quickly has the loss progressed
types of configurations
1. flat
2. high frequency
3. low frequency
4. saucer shaped
flat configuration
thresholds are within a 20 dB range of each other across the span of frequencies. Looks like a flat line.
precipitous
a high-frequency hearing loss, often called sloping; hearing is reduced at 4000-8000 Hz
low frequency hearing loss
cannot hear sounds in frequencies 2000 Hz and below. Also known as a "reverse slope".
saucer shaped configuration
cannot hear frequencies in the midrange.
perilingual
Refers to a hearing loss acquired during the stage of language acquisition
microtia
small external ear
atresia
closure of the external ear canal
aural rehab goal is to
minimize and alleviate the communication difficulties associated with hearing loss
hearing related disability
loss of function because of an impairment (hearing loss), for example the inability to understand conversation at the office
impairment
structural or functional damage to the auditory system
handicap
social or vocational consequences of the disability
audiologic rehabilitation
Is a term most often used synonymously with aural rehabilitation; it entails greater emphasis on the provision and follow-up of listening devices and less emphasis on communication strategies and auditory and speechreading training.
pure tone average
is the average of the thresholds at 500, 1,000 and 2,000 Hz
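A minimal worked example of the arithmetic, as a sketch in Python (the function name and threshold values are invented for illustration):

    # Hypothetical sketch: pure tone average (PTA) from thresholds at 500, 1,000, and 2,000 Hz
    def pure_tone_average(t500, t1000, t2000):
        # PTA is the simple mean of the three thresholds, in dB HL
        return (t500 + t1000 + t2000) / 3

    # Example: thresholds of 30, 40, and 50 dB HL give a PTA of 40 dB HL
    print(pure_tone_average(30, 40, 50))  # 40.0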
mediCARE
health insurance for older people, age 65 and over
mediCAID
health insurance for people with low incomes
level of evidence 1a
systematic review or meta-analysis of randomized controlled trials
level of evidence 1b
randomized controlled trial
level of evidence 2a
controlled without randomization
level of evidence 2b
quasi-experimental, cohort, pre/post measures
level of evidence 3
non-experimental, correlational, and case studies
level of evidence 4
expert, consensus, authority
5 steps to EBP decision making
1. generate a question
2. find the current best evidence
3. evaluate the evidence
4. make a recommendation
5. follow up
11 knowledge and skill areas for SLPs who provide AR services
1. general knowledge
2. basic communication processes
3. auditory system function/disorders
4. developmental status, cognition, and sensory perception
5. audiologic assessment procedures
6. assessment of communication performance
7. devices and tech for those with hearing loss
8. effects of HL on psychosocial, edu, and vocational functioning
9. intervention/case management
10. interdisciplinary/advocacy
11. acoustic environments
threshold
the level at which sound can be detected 50% of the time
Normal hearing dB
0-15 dB, especially in children
mild hearing loss dB
26-40 dB. In the presence of noise, speech recognition may decrease by 50%
moderate hearing loss dB
41-55 dB. Does well only face-to-face and/or in quiet environments
borderline normal hearing dB
15-25 dB.
moderate-to-severe hearing loss dB
56-70 dB. difficulty conversing face to face and in groups. may miss all or most of the message
severe hearing loss dB
71-90 dB. cannot hear voices, unless speech is loud
profound hearing loss dB
90 dB or greater. Perceives sounds as vibrations
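The dB ranges in the cards above can be tied together in one small sketch (the cutoffs are taken from the cards; the function name is made up for illustration):

    # Hypothetical sketch: map a dB HL level to the severity categories listed above
    def classify_hearing(db_hl):
        if db_hl <= 15:
            return "normal"
        elif db_hl <= 25:
            return "borderline normal"
        elif db_hl <= 40:
            return "mild"
        elif db_hl <= 55:
            return "moderate"
        elif db_hl <= 70:
            return "moderate-to-severe"
        elif db_hl <= 90:
            return "severe"
        else:
            return "profound"

    print(classify_hearing(45))  # "moderate"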
speech recognition threshold
The lowest level at which spondee words can be recognized accurately 50% of the time
speech discrimination score
a term that is not used very often anymore; it refers to the percentage of monosyllabic words presented at a comfortable listening level that can be correctly repeated.
most comfortable loudness
(MCL) the level at which sound is comfortable
uncomfortable loudness level
(UCL) level at which sound becomes uncomfortable
dynamic range
the difference in dB between the SRT and UCL. This often influences selection and programming of a hearing aid.
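As a worked example of the subtraction (the SRT and UCL values are invented for illustration):

    # Hypothetical sketch: dynamic range = UCL - SRT, both in dB HL
    srt = 35   # speech recognition threshold (example value)
    ucl = 95   # uncomfortable loudness level (example value)
    print(ucl - srt)  # 60 dB of usable dynamic range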
loudspeaker azimuth
the position of the loudspeaker relative to the listener. If it's directly in front of the patient, it is 0 degrees; if it is behind the patient, it's 180 degrees.
sound field testing
Often, speech recognition tests are presented in a sound field as opposed to under headphones. If the patient wears a hearing aid, they should wear it during testing. The clinician will indicate the loudspeaker azimuth. This helps us know how the patient hears speech in a normal everyday environment.
dead air space
unventilated air space. This is where audiological testing should be done to obtain the patient's best performance. The room should be soundproof, insulated, and have a tight-sealing door. If this is not available, you should use the quietest room you can.
purpose of speech recognition testing (9)
1. determine the need for amplification.
2. compare performance with and without hearing aids.
3. compare different listening devices.
4. demonstrate that their ability to recognize speech is diminished. They may be unaware because they use compensatory visual cues
5. demonstrate the need for or benefits of visual speech perception and speechreading training.
6. obtain info that might clarify environment related listening issues. e.g., do testing w/ and w/o background noise
7. determine placement within a training curriculum.
8. evaluate the appropriateness of an educational placement setting.
9. determine if expected benefit has been achieved. To see if the hearing aid is working.
communication mode
how a sender shares info with a receiver. e.g., speech, writing, ASL
Information transmission analysis
a statistical procedure that analyzes speech features by scoring confusions between test stimuli that are grouped based on features. For example, if the patient confuses "eepee" for "eetee," they will receive credit for correctly utilizing the voicing feature but not the place feature.
Multidimensional scaling
a statistical procedure whereby data points are represented in a geometric space, for example, two phonemes that sound similar to a patient will be plotted near each other and two that don't will be plotted far from each other.
Cluster analysis
a statistical approach to information in a database that aims to determine which data points fall into groups or clusters; for example, it is common for /b, d, g/ to be clustered together because they often sound similar to people with hearing loss
Six simple questions that may indicate a hearing loss.
1. Can you hear on the phone?
2. Do people tell you that you set the TV volume too high?
3. Do you often ask people to repeat?
4. Do you have problems in noisy rooms?
5. Do people seem to mumble?
6. Are women and children especially difficult to hear?
phonetically balanced word list
presents a set of words that contain speech sounds at the same frequency at which they occur in everyday conversation
acoustic lexical neighborhood
comprised of a set of words that are acoustically similar and have approximately the same frequency of occurrence.
Frequency of occurrence
refers to the frequency in which a word is likely to occur in everyday conversation.
A dense neighborhood
refers to words that share acoustic characteristics, or rhyme, with many other words. For example: fat, cat, mat, and many more.
A sparse neighborhood
refers to words that don't rhyme with many other words. For example, the words lost and cost.
Advantages and Disadvantages of nonsense syllables stimuli
Advantages: performance is unaffected by vocabulary, and a feature analysis can be performed.
Disadvantages: not appropriate for some children, and they have poor face validity.
Face validity means it "looks like" it is going to measure what it is supposed to measure. When you ask the client to repeat nonsense, they may look at you like you're crazy and think, how is this going to help me hear speech better?
Advantages and Disadvantages of word stimuli
Advantages: they have high face validity, are easy to score, and permit fine-grained scoring, which means comparing word scores to phoneme scores.
Disadvantages: they may not index everyday listening, because we typically listen to connected discourse, and they may not be appropriate for those who have limited vocabularies.
Advantages and Disadvantages of sentence stimuli
Advantages; it has high face validity and it's likely to reflect the real world.
Disadvantages; performance may be influenced by linguistic knowledge and by familiarity with the topic.
Tests of speech recognition can be administered in one of three conditions.
Audition only;
Vision only;
Audition plus vision;
Audition only condition
only the auditory signal is presented, usually at a normal or moderately loud conversational level, 60 to 70 decibels. Alternatively, the level might be set at 30 to 40 decibels above the speech recognition threshold.
vision only condition
only the visual signal is presented, usually showing the head and neck of the test talker; this is the lipreading condition.
Audition plus vision condition
both auditory and visual signals are presented; this is the speechreading condition.
sensation level (SL)
is the level of a sound in dB above a person's threshold. For example, setting the audiometer 30 to 40 dB above the speech recognition threshold.
white noise
is broadband noise that has equal energy at all frequencies.
Signal to noise ratio
is the level of a signal relative to a background of noise. For example, if the signal is presented at 40 dB and the noise at 30 dB, the SNR is 10 dB.
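The card's example, written out as a sketch (the variable names are invented; the dB values come from the card itself):

    # SNR in dB is the signal level minus the noise level
    signal_db = 40
    noise_db = 30
    print(signal_db - noise_db)  # 10 dB; a positive SNR means the signal is louder than the noise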
speechreading enhancement
the advantage afforded by adding hearing to a vision only condition.
open set task
no choices
closed set task
multiple choice, easier than open.
Disadvantages of live testing
variability from one test to another: different clinicians have different speaking styles, e.g., they may differ in fundamental frequency (male or female), intonation, rate, and clarity. Also, during vision conditions, clinicians may have different physical characteristics, e.g., bigger lips, a pronounced jaw, facial hair or no facial hair, expressive facial movements or Botox.
Time-compressed speech
is one way speech can be altered: speech is accelerated by removing segments and then compressing the remaining segments together without changing its frequency.
Expanded speech
is altered by duplicating small segments so that it sounds like a slow speaking rate.
Filtered speech
removes or amplifies frequency bands in the signal. This may help determine how a hearing aid should be programmed.
Low pass filtered speech
removes all high frequencies.
High pass filtered speech
removes all low frequencies.
Learning effect
occurs when performance on a test improves as a function of familiarity with the test, not a change in ability.
Equivalent list
contains items that are presumed to be equally difficult to recognize. This is one way to address the learning-effect problem.
Test retest variability
is a measure of the consistency of a test from one presentation to the next. Consistency may be low because people, especially children, have different moods, motivation levels, energy, and interest from day to day.
Reliability
is the degree to which a group of test takers will achieve the same score with repeated administrations of a test. Changes in administration can also shift test scores: live versus recorded presentation, a change of location or clinician, or repeating test items more times than in the previous administration.
validity
is the extent to which a test measures what it is assumed to measure.
Synthetic sentences
are syntactically correct but meaningless, and usually include a noun, verb, and object.
Two major trends are evident in modern hearing aid design;
miniaturization and enhanced signal processing.
Signal processing
involves manipulation of various parameters of a signal.
multiple memories
are settings that allow the speech signal to be processed in more than one way.
Noise reduction
is the difference in the sound pressure level (SPL) of a noise measured at two different locations.
Acoustic feedback cancellation
is a feature that avoids the annoying squeal produced by a hearing aid when the microphone picks up the amplified sound from the hearing aid and re-amplifies it.
Programmability in a hearing aid means
that several parameters of the instrument, such as gain, are controlled by a computer.
A hearing aid that uses digital processing
converts the signal from analog to digital form, processes the signal to achieve targets, and then converts the signal back to analog form.
Analog means
the continuous electrical form the acoustic signal takes after going into a microphone.
A hearing aid with multiple channels
filters the signal into frequency bands so that some bands, usually the high frequency ones, can receive more gain than others.
Directional microphones
are more sensitive to sound originating from in front of the user than to sound coming from behind them.
Omnidirectional microphones
are sensitive to sound coming from all directions.
Automatic directional microphones (ADM)
automatically switch between omnidirectional and directional modes according to environmental conditions.
gain
the difference in decibels between the input and output level.
In the preamplifier stage
the signal from the microphone is amplified.
In the signal processing stage
the signal is manipulated to enhance or extract component information.
In the output stage
the processed signal is boosted.
Maximum power output (MPO)
is the maximum intensity level that a hearing aid can produce.
Peak clipping
is a method of limiting hearing aid output in which a constant or linear amount of gain is provided across a range of input levels until the output reaches a saturation level, at which time the amplifier begins to "clip" off the peaks of the signal.
Saturation level
is the point at which an amplifier can no longer increase output relative to input. The hearing aid can't make the sound any louder because it has reached its maximum power output (MPO).
Compression
is a nonlinear form of amplifier gain used to determine and limit output gain as a function of input level. This helps limit the maximum output so it's not uncomfortable for the client. It also amplifies soft sounds more than loud sounds.
Kneepoint
is the point on an input-output function when compression is activated.
Compression ratio
is the dB ratio of acoustic input to amplifier output. For example, if the input to the amplifier increases by 20 dB but the output increases by only 10 dB (because more would be uncomfortable for the client), the compression ratio is 2 to 1.
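A sketch of the same ratio in code (the function name is invented; the 20 dB and 10 dB numbers come from the card's example):

    # Hypothetical sketch: compression ratio = change in input level / change in output level, in dB
    def compression_ratio(input_change_db, output_change_db):
        return input_change_db / output_change_db

    print(compression_ratio(20, 10))  # 2.0, i.e., a 2:1 compression ratio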
attack time
The amount of time that it takes a compression amplifier to react to a loud sound and compress it
The release time
is the time it takes for the compression amplifier to increase its gain again once the really loud sound has stopped.
Multiband compression
is a method to maximize speech recognition. It permits different degrees of compression for different frequencies.
An audio boot
also called a shoe, is a device used with a behind-the-ear hearing aid for coupling to a direct audio input cord from a source such as a TV or radio. The user can block out surrounding environmental noise and listen strictly to the TV.
A telecoil
sometimes called a t-coil or an audiocoil, is an inductive coil (a coil of wire wrapped around a magnetized metal rod) within a hearing aid. It enhances telephone communication: the phone emits electromagnetic signals, which the telecoil picks up while the hearing aid's microphone is bypassed.
T and M stand for...
The hearing aid may include a T for telecoil and M for microphone by the on off switch.
Body hearing aids
include a box worn on the torso and a cord connecting it to an ear-level receiver. The box is the size of a deck of cards. It provides much more powerful amplification and is used for those who have severe hearing loss. Despite the amplification advantage, body aids are not used often today, unless the client has a pinna that cannot support a behind-the-ear hearing aid, atresia, microtia, or chronic otitis media. The body aid may be attached to a bone conductor.
A bone conductor
is a vibrator or oscillator used to transmit sound to the bone of the skull. This bypasses the middle ear and is used for those with obstruction of the middle ear. Clients with atresia, microtia, or chronic otitis media can also benefit from this type of hearing aid.
A behind the ear (BTE) hearing aid
is worn over the pinna and coupled to the ear by means of an earmold, which must be custom made, or a tube, which fits anyone. The tube option reduces the annoying occlusion effect.
In the occlusion effect
low-frequency sounds in bone-conducted signals are enhanced as a result of closing off the ear canal. For example, you can hear your heartbeat and hear yourself chew.
In the ear (ITE) hearing aid
fits into the concha of the ear.
In the canal (ITC) hearing aid
fits in the external ear canal.
most popular hearing aids in todays market
ITE and ITC hearing aids are the most popular in today's market. They must be custom fit, are less susceptible to wind noise, and are somewhat hidden and more cosmetically appealing.
Completely in the canal (CIC) hearing aid
are completely hidden. They are so small that they do not have on/off switches and require the use of a remote control.
Disadvantages to CIC
they are high maintenance: earwax builds up, they need frequent cleaning, and they do not have directional microphones.
Middle ear implants
convert sound into a micro-mechanical vibration and transmit it directly to the ossicular chain. This device has not been widely used.
Binaural versus monaural fitting
Often the clinician will recommend that the client get two hearing aids instead of just one, even though it's more expensive.
Why you should have 2 hearing aids (4)
1. It will eliminate the head shadow effect. If you only have one hearing aid and the sound is coming from the unaided side, you still can't hear well enough.
2. Loudness summation; when sound is received by both ears a summing of the two signals results. Thresholds for sound may improve by 3 dB or more.
3. Binaural squelch is an improvement in listening in noisy environments. This improvement in signal to noise ratio may be 2 or 3 dB.
4. Localization; the ability to perceive the direction and location of sound when using both ears.
When selecting a hearing aid you want to consider (5)
1. degree of loss; e.g., those with profound hearing loss may benefit from a body aid
2. user preference; e.g., some are too self-conscious to wear a BTE
3. cost; e.g., a CIC may be too expensive
4. lifestyle; e.g., a nurse who uses a stethoscope every day may need a CIC
5. physical status; e.g., gross and fine motor skills may determine how well the client can manipulate the controls and change the batteries
The output sound pressure level (OSPL)
once called the saturation sound pressure level, refers to the maximum level of sound that can be delivered to the ear when the volume is turned full on and the input signal is 90 dB. This value is determined to make sure that the hearing aid's maximum power does not exceed the user's loudness discomfort level.
Loudness discomfort level (LDL)
is the level at which sound is perceived to be uncomfortably loud. AKA uncomfortable loudness levels (UCLs), though others prefer thresholds of discomfort (TDs). Regardless of what you call it, LDLs, UCLs, and TDs all refer to the same thing. Something a little different, however, are ULCs, which stand for upper levels of comfort; this is the level right before uncomfortable.
OSPL 90 curve
is an assessment of a hearing aid's maximum level of output signal with volume set to full on. This assessment is done in a hearing aid test box.
A hearing aid test box
is a chamber that simulates the human external ear canal volume.
Total harmonic distortion (THD)
consists of unwanted signals created by the hearing aid.
Prescription procedures
are strategies for fitting hearing aids by using a formula to calculate the desired gain and frequency response. The audiogram will indicate the degree of hearing loss and how the hearing aid needs to be configured.
Verification
means to determine that the hearing aid meets a set of standards, including standards of basic electroacoustics, real ear performance, and comfortable fit. You can administer a speech recognition test with and without the new hearing aid. You can also use a probe microphone.
A probe microphone
is a microphone transducer that is connected to a flexible tube and inserted in the external ear canal for the purpose of measuring sound near the tympanic membrane. The test is done with and without the hearing aid.
Real ear measures
uses a probe microphone to measure the hearing aid gain and frequency response delivered. If this test reveals we have reached our prescribed gain, we have reached our target gain.
Target gain
is the prescribed gain we want to achieve with the hearing aid.
A clinician can also subjectively assess the hearing aid by using a questionnaire or inventory.
Hearing aid orientation (HAO)
is the process of instructing a patient and/or family members to handle, use, and maintain a new hearing aid.
Troubleshoot
refers to a series of steps to follow to correct a malfunction when the hearing aid will not turn on, the sound is faint or distorted, or feedback occurs. Steps to try first: change the battery and clean off wax.
A cochlear implant is effective because
it replaces the hair cell transducer system by stimulating the auditory nerve directly, bypassing the damaged or missing hair cells.
Tonotopic organization
high frequencies are delivered to the basal end of the cochlea and low frequencies are delivered to the apical end.
basal end
the base
apical end
the tip
parts of the cochlear implant
comprised of internal and external parts, although manufacturers will soon offer completely implantable devices. The internal components sit in the skull near the inner ear: a receiver on the mastoid bone and an electrode array inserted into the cochlea through the round window. The external parts include a microphone, connecting cables, a speech processor, and a transmitter. The microphone and speech processor are typically worn behind the ear, and the transmitter attaches magnetically over the internal receiver.
The term multichannel is used to describe a cochlear implant because
it presents different channels of information to different parts of the cochlea. These channels help the implant act like a normal cochlea would, with tonotopic organization.
Interleaved pulsatile stimulation
is a processing strategy (algorithm) that many current cochlear implants use. It is the newest strategy and represents different frequency bands tonotopically, like a normal cochlea would.
Candidacy for cochlear implant
the presence of bilateral, irreversible, severe or profound hearing loss, and good general health.
is there an age limit for cochlear implants
There is no upper age limit; however, the younger the client is, the better the success. The lower age limit is 12 months, although children as young as 6 months have been implanted. The cochlea is adult size at birth, so implantation in babies is feasible in the future.
what must children do before getting a cochlear implant
Children must demonstrate that they receive no benefit from hearing aids, which they must try for 6 months.
Mapping
is a term used to describe the process of programming the speech processor of a cochlear implant. Mapping includes programming dynamic range, loudness balancing, and pitch ranking.
Dynamic range for a cochlear implant
is determined by finding the client's electrical threshold (T-level) and maximum comfort level (C-level). The difference between these two levels is the dynamic range. These levels vary depending on the client's neuronal survival in the auditory nerve.
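The same subtraction as the hearing aid's dynamic range, now in electrical current levels (the values are invented for illustration; real devices use manufacturer-specific clinical units):

    # Hypothetical sketch: electrical dynamic range of one cochlear implant electrode
    t_level = 100  # T-level: current at which the client is just aware of sound (example value)
    c_level = 180  # C-level: maximum comfortable current (example value)
    print(c_level - t_level)  # 80 units between "just audible" and "uncomfortable"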
cochlear implants; the T level
is the amount of current that must be passed through an electrode so the client is just aware of a sound sensation.
The C level
is the maximum current level that can be introduced before the client experiences discomfort.
If the electrodes are not balanced in the cochlear implant
the client may hear popping sounds or may not hear some speech information.
Pitch ranking for cochlear implants
determines the ability to discriminate pitch from the basal to the apical electrodes. During pitch ranking, two electrodes are stimulated, one right after the other. The client's task is to determine which has a higher or lower pitch.
Assistive listening devices, (ALD)
are usually used only in specific situations, as opposed to hearing aids, which are used during all waking hours. They basically collect sound at the talker's mouth and deliver it to the user's ear, no matter how big the room is.
Assistive listening devices can be more useful than hearing aids
when you need to hear a distant speaker, TV, church services, or in a classroom. This is because in hearing aids the microphone is in the ear and can only pick up sounds close by.
Assistive listening devices also come in handy when you experience ambient noise, reverberation, and background noise.
Ambient noise
is present in a room when it is unoccupied. This may emanate from open windows, vents, computers, or lights buzzing.
Reverberation
happens when echoes rebound off surfaces such as walls, floors, and ceilings. Rooms that have high ceilings, hardwood floors, and plaster walls tend to be the worst.
Frequency modulation (FM)
utilizes radio waves to transmit sound from the source to the user, in assistive listening devices.
A personal FM trainer
is a wireless microphone that the speaker wears. When they speak, the child, who is wearing the receiver, will hear it.
A direct audio input (DAI)
is a hardwired connection that leads directly from the sound source to the hearing aid. Not wireless.
An FM boot
is a device that houses an FM receiver. It attaches to the base of a behind-the-ear hearing aid.
A neckloop
is a transducer worn around the neck, often as part of an FM assistive listening device. It consists of a cord from a receiver and transmits signals via magnetic induction to the telecoil of the user's hearing aid.
A sound field FM system
is a listening system, similar to the FM trainer, in which sounds from a microphone are transmitted to loudspeakers that are positioned throughout the room.
An infrared system
is an assistive listening device that broadcasts from the sound source to a receiver or amplifier by means of light waves.
An induction loop system
is a system that works by running a wire around the circumference of a room that conducts electrical energy from an amplifier and thus creates a magnetic field, which induces the telecoil in a hearing aid to provide amplified sound to the user.
Simple amplification systems
simply make things louder; the most common are telephone amplifiers.
Hearing assistive technology (H.A.T.)
uses visual or tactile stimulation to indicate that something is making a noise. For example: phones that vibrate, and flashing lights that indicate doorbells, smoke detectors, a crying baby, or a ringing phone.
Tactile aids
use vibration to indicate sound. These can be used to maximize lipreading skills. Now that we have cochlear implants, they are not as popular.
A relay system
is for phone access; an individual contacts a relay operator who transmits messages between the caller and the person called by means of teletype and voice.
Text telephones (TT), or telecommunication devices for the deaf (TDD),
is a telephone terminal comprised of a phone and keyboard. Nowadays every cell phone has texting capabilities, so these are outdated.
Hearing aids have 3 major components
1. microphone
2. amplifier
3. receiver
Hearing aid benefit can be assessed 3 ways
1. behavioral measures; include doing a speech recognition test with and without the hearing aid.
2. probe microphones
3. self assessment scales
Candidacy for auditory training; children, adults, others?
1. Children who incurred a hearing loss prelingually or postlingually
2. Adults rarely receive auditory training
3. Those who do typically have experienced a recent change in hearing status, e.g., a cochlear implant, a new hearing aid, or sudden hearing loss from ototoxic drugs.
during AR, prelingual child must learn what first
must first learn to attend to the auditory signal, and eventually they must learn to relate it to their vocabulary, because they cannot draw on memories of how speech should sound nor utilize knowledge about how to decode the auditory signal.
During AR, postlingual child will learn first
starts with more difficult tasks than prelingual children, because these children have a larger vocabulary and greater familiarity with grammar.
Four design principles
1. Auditory Skill
2. Stimuli
3. Activity Type
4. Difficulty Level
1. Auditory Skill, which includes,
- Sound Awareness or detection.
- Sound Discrimination.
- Identification.
- Comprehension.
2. Stimuli, which includes,
- Phonetic level, which can be done with analytic training.
- Sentence level, which can be done with synthetic training.
3. Activity Type, which includes,
- Formal.
- Informal.
4. Difficulty Level, which includes (6)
1. Response type, this can be either,
•Closed, limited, or open.
2. Stimulus unit, this can be either,
•Words, phrases, or sentences.
3. Stimulus similarity.
4. Contextual support.
5. Task structure, this can be either,
•Highly structured or spontaneous.
6. Listening conditions.
Sound awareness
the most basic auditory skill level, awareness of when a sound is present and when it's not. eg, have the child march to the beat of a drum.
Sound discrimination
is a basic auditory skill level in which the listener is able to tell whether two sounds are different or the same. eg, play a game with toy animals and say, the cow says moo!
Pattern perception
is a kind of discrimination that requires a listener to distinguish between words or phrases that differ in the number of syllables.
Identification
is a basic auditory skill level in which the listener is able to label some auditory stimuli. For example, say to the child, show me the cat.
Comprehension
is a higher auditory skill level in which the listener is able to understand the meaning of spoken messages. For example, can the child play the game, I spy.
Analytic training
emphasizes the recognition of individual speech sounds or syllables.
Synthetic training
emphasizes the understanding of meaning and not necessarily the identification and comprehension of every word spoken in an utterance.
Formal training
presents highly structured activities that may involve drill; it usually is scheduled to occur during designated times of the day, either in a one to one lesson format or in a small group.
Informal training
activities occur during the daily routine and are often incorporated into other activities, such as conversation or academic learning.
A limited set of stimuli
is defined by situational or contextual cues.
An auditory training objective
leads to a measurable result, expected within a particular time period and or after a particular lesson, the accomplishment of which represents a milestone toward achieving the corresponding goal.
As a general rule of thumb, a clinician will want to alter the level of difficulty if a student ...
responds correctly to training stimuli 80 percent of the time or more, or responds correctly less than 50 percent of the time.
A hierarchy of specific training objectives
typically is developed at the onset of a student's auditory training program.
Analytic vowel objectives
are often designed to contrast vowels with different vowel formants.
Consonant objectives
are designed to contrast features of articulation.
Alternative strategies for designing analytic training objectives
focus on the frequency of occurrence of sounds and words.
Manner of articulation (6)
is a classification of speech sounds as a function of how they are produced in the oral cavity. The manners are: stops, fricatives, affricates, nasals, glides, and liquids.
Synthetic training objectives begin with
simple discrimination activities that involve suprasegmental aspects of speech.
Suprasegmentals (4)
are prosodic aspects of speech, including variations in pitch, rate, intensity, and duration, which are superimposed on phonemes and words.
Synthetic training objectives (using the design principles)
1. The student will discriminate multiword utterances from single word utterances, using a closed response set. Later, they can discriminate long words from short words.
2. the student will discriminate a spondee from a one syllable word. Later, they can discriminate spondee words from two-syllable words.
3. the student will discriminate between two words with the same number of syllables.
4. the student will identify simple words from a four or six item response set.
5. the student will identify picture illustrations from a closed set, after hearing one sentence description.
6. the student will follow simple directions and answer simple questions, using a closed response set.
7. the student will listen to two related sentences, and then draw a picture about them.
General guidelines for conducting formal auditory training (9)
1. stimuli should become more challenging over time.
2. a variety of talkers should speak training items.
3. a lot of stimuli should be presented in a short amount of time, in order to keep their attention.
4. non-speech training should be used only with young students who are prelingually deaf, and only for a short amount of time.
5. training exercises can include both analytic and synthetic level stimuli.
6. training should progress from closed set to open set responses.
7. 15 minutes per day, at the same time every day, should be devoted to formal (drill) training.
8. formal training objectives should be done informally throughout the day.
9. training must be engaging and interesting.
Materials must be appropriate for the student's age, gender, language level, and interests.
To make formal training stimulating, reinforcers are used. Principles to follow when using reinforcers (6)
1. It should be quick; the student should not spend more time on reinforcement activity than training.
2. reinforcement activities should not be too challenging or too absorbing.
3. activities should vary, so students do not get bored with them.
4. activities should interest the child; if the child loves the Toy Story movies, then use Toy Story stickers.
5. the reinforcement activity should be done immediately after the correct response is made.
6. reinforcements should be age and gender appropriate.
informal training enhances...
the student's confidence in their ability to engage in conversation and increases their motivation to rely on hearing for communication.
Informal training is especially important for babies and young children because
listening skills will become second nature. Parents and caregivers play an important role in listening practice. They may be encouraged to reduce background noise at home, to speak close to the child's hearing aid, and sometimes to challenge the child by speaking without their mouth in view.
Two reasons to incorporate speech production practice into auditory training.
1. as a child's ability to produce sounds improve, the phonological representations of words become more developed and refined, which in turn, may affect speech perception.
2. it helps children develop the habit of monitoring their own speech as they talk. They can learn to attend to the suprasegmental qualities of their speech and to the clarity and accuracy of their sound production.
2 steps in starting the auditory training program
1. determine placement within the program. 2. determine goals. In most programs, the goals represent global speech perception skills and are ordered hierarchically in terms of difficulty. They often refer to the four levels of auditory skill:
- Sound Awareness or detection.
- Sound Discrimination
- Identification
- Comprehension.
Cycling
involves presenting a skill within a specified time period and then moving on to another, then returning to the original goal, so they don't forget about it.
computerized auditory training
the LACE program is for adults. Training presents words and sentences in the presence of speech babble, and requires patients to identify missing words in sentences, recognize time-compressed speech, perform a short-term memory task, and attend to competing talkers. It also includes communication strategies training in the form of "tips."
Tips for cochlear implant users about listening to music (4)
1. Listen to music that was familiar prior to hearing loss.
2. Begin with music played with fewer instruments or solos, rather than orchestras.
3. Try songs that have many repetitions or musical patterns or words.
4. Watch the singer's lips or the rhythm of the piano player's fingers to help make sense of it.
Lipreading
is only using the visual cues.
Some visual cues include facial expressions and gestures.
Speechreading
is using both the visual cues and auditory cues.
CHARACTERISTICS OF A GOOD LIPREADER
Research reveals that it is difficult to predict lipreading performance. Performance cannot be predicted by intelligence, education, hearing loss, age of onset, gender, or socioeconomic status. Some research suggests that cognitive skills correlate with lipreading. These skills involve visual decoding, working memory, lexical identification speed (identifying whether letters form a word), phonological processing (rhyming), and verbal inference making (finishing the sentence).
other factors that affect lipreading ability
- age: young adults are better than older adults, and older children are better than younger children
- those with congenital hearing loss are better than those born with normal hearing
- ability to use contextual cues, willingness to guess, and mental agility
- limited vocabulary and world knowledge
does the amount of practice affect lipreading
no
Lipreaders look for the presence of
bilabials, lip rounding, and eyebrow raises, as when asking a question. They also look for head nods, body posture, and hand gestures.
Factors that influence the difficulty of lipreading
• visibility of sounds,
• rapidity of speech,
• coarticulation and stress effects,
• visemes and homophones,
• talker effects.
VISIBILITY OF SOUNDS
60% of speech sounds are not visible on the mouth or cannot be seen readily.
Most visible sounds include
bilabials, labiodentals, and linguadentals.
Most limited visible sounds include
/k, g, t, n/.
Sounds not visible at all
The voiced versus unvoiced distinction is not visible. Vowels are not highly visible either, but lipreaders can look for other features, such as lip rounding.
Fortunately, vowels tend to be acoustically salient for those who have hearing loss.
RAPIDITY OF SPEECH; what is the typical WPM; affects lipreading because...
When speaking conversationally, a talker may speak 150-250 words per minute, or roughly 4-7 syllables per second.
A typical talker may produce 15 phonemes per second. The human eye cannot keep up with this rate, so lipreaders have little time to ponder what a word is, and they may have difficulty determining where one word ends and the next begins, because word boundaries are not visible.
COARTICULATION; affects lipreading because...
can result in the same sound looking different. eg, the sound /b/ looks different in "boot" versus "beet"
• The word, boot, the lips begin to round in anticipation of the /u/ sound.
• The word, beet, the lips begin to spread in anticipation of the /i/ sound.
Stress; affects lipreading because
can result in the same sound looking different. eg, "what did ya do?" compared to "what did you do?"
VISEMES; affect lipreading
are groups of speech sounds that appear identical on the lips.
• For example, /p, m, b/.
Homophenes; affect lipreading
are words that look identical on the mouth.
• For example, the words "grade" and "yes"
• Because of the visemes these words are homophenes, "bat" and "mat"
Around 56% of the words in English are homophenous; however, grammatical sentence cues and other situational cues will decrease the confusion.
TALKER EFFECT; affects lipreading
-Two people may differ in the degree of mouth opening used for vowels.
-A person with an accent may appear different.
can babies speechread
yes; infants attend to the visual speech signal as they learn their native language. They lose this ability at around 8 months because they have no reason to hold on to this skill if their hearing is okay.
How does this integration happen? Do we process the two signals independently and then combine them, or do we process them interactively?
2 Models that have been proposed to explain these questions
1. Audiovisual integration
2. Neighborhood activation model
IMPORTANCE OF RESIDUAL HEARING
The ability to speechread is enhanced by even the most minimal auditory information. Residual hearing can help extract suprasegmental patterns, which can convey information about syllable structure and word boundaries as well as information about syntax (question or statement) and semantics (words spoken with emphasis may have a different meaning). Residual hearing may also help determine whether a sound is voiced or not.
FACTORS THAT AFFECT THE SPEECHREADING PROCESS (4)
1. The talker
2. The message
3. The environment
4. The speechreader
TALKER Behaviors that make speechreading difficult (7)
1. Shouting
2. Mumbling
3. Turning away
4. Speaking rapidly
5. Covering the mouth with a hand
6. Smiling while talking
7. Facial expressions that don't match the context of the speech; if a talker discusses something happy with a sad face, this can confuse the speechreader. Matching facial expressions can also help the speechreader determine if the talker is asking a question.
Clear speech
is a way of speaking to enhance one's intelligibility; it entails speaking with a slowed rate and good but not exaggerated enunciation of words.
It is easier to recognize the speech of someone familiar because
listeners are accustomed to that person's mouth and speech patterns.
gender differences in speechreading
Females tend to be easier to lipread than males in the vision-only condition; however, in audition-plus-vision conditions, a female's higher-frequency voice is harder for most persons with hearing loss to hear. For males, the presence of facial hair can make lipreading harder.
The average fundamental frequency for males and females
about 117 Hz for males and 217 Hz for females
Structure of the message can affect speechreading difficulty because...
• Length, the longer the more difficult.
• Syntactic complexity, the more complex the more difficult.
o However, in isolation, two-syllable words are easier than one-syllable words.
• Frequency of use
• Similarity to other words
• Linguistic context
FREQUENCY OF USAGE & NEIGHBORHOODS make speechreading difficult because...
Words with high frequency of use tend to have dense neighborhoods, and words with low frequency of use tend to have sparse neighborhoods; therefore the speechreader is more likely to recognize the word with low frequency of usage, because there are fewer choices.
LINGUISTIC CONTEXT affects speechreading because...
For example, if a talker said "the elephant is big," the speechreader would understand them, especially if they were at a zoo next to the elephants.
Simply knowing the topic can help. Then you can use your knowledge about the topic as contextual cues.
environmental factors that affect speechreading
•Viewing angle
•Distance from the talker
•Room conditions
o Lighting
•Background noise
the best viewing angle for speechreading
The best is a frontal viewing angle (0 degrees), and a 30-degree angle is better than a 90-degree angle. The speechreader may miss the beginning of a conversation when sitting in a meeting at a rectangular table, because every time someone new interjects, the speechreader has to locate the talker. Also, if the talker turns their head while talking, the angle can go from good to bad.
DISTANCE FROM THE TALKER; Favorable seating for speechreading
includes being close enough to see the talker's lip movements, being able to see their full face rather than their profile, and having their face well lit.
ROOM CONDITIONS; how lighting affects speechreading
A poorly lit talker, or a talker who has shadows on their face, will be difficult to speechread. Also, if the talker is standing in front of a bright window and light is shining in the speechreader's eyes, this can make it difficult too.
Luminance is the intensity of light per unit of area of the source.
ROOM CONDITIONS; common background noises that affect difficulty
• Running water
• Washer/dryer
• Refrigerator
• AC
• TV
• Furnace
• Vacuum
• Radio
• Others talking
• Open window, allow other noise in;
-Lawn mower
-Leaf blower
-Traffic
Other room conditions that affect speechreading
• room reverberation, when echoes rebound off surfaces
-high ceilings,
-hardwood floors,
-plaster walls are the worst
• availability of assistive devices
• the presence of others moving in the room can be distracting
How THE SPEECHREADER affects speechreading (2)
1. INNATE SKILL AND HEARING ABILITY
Generally the better the lipreading skill and the greater the amount of residual hearing, the better the speechreading.
-Those with sensorineural loss may perform more poorly than those with conductive loss.
-Those who use appropriate eyeglasses and hearing aids will perform better.
2. EMOTIONAL AND PHYSICAL STATE
An individual's level of stress, fatigue, and attentiveness can affect performance. E.g., during a job interview, anxiety may impair performance. A businessman may not perform well at home because he is fatigued from working all day.
ORAL INTERPRETERS
sit in clear view of a person who has hearing loss and silently repeat a talker's message as it is spoken.
Oral transliteration
is the act of lagging a talker by a few words, mouthing or speaking the words with a normal speaking rate and good enunciation. Although oral transliteration usually does not entail the use of sign language, natural body language, expressions, and gestures are typically presented that support the content of the words.
How to become an oral interpreter
Oral interpreters must be certified through the Registry of Interpreters for the Deaf; they must have a certificate of transliterating (C.T.) and the oral interpreter certificate (O.I.C.).
code of ethics oral interpreters must follow
• They can't share information they learn during interpreting
• They can't change the meaning
• They can't add their opinion or personal commentary to the message
when we lipread, our eyes...
fixate and perform quick shifts. They often focus on the talker's eyes, nose, and mouth.
clear speech
Produce sounds more completely, without omitting certain elements or dropping word endings. Naturally, speech becomes slower and louder, and the stress on certain key words or syllables becomes more obvious. The speaker attempts to express every word and sentence in a precise, accurate, and fully formed manner.
• Lively, with a full range of voice intonation (tone)
• Pauses between all phrases and sentences.
It's not a substitute for other well-known communication habits. You still need to reduce background noise and avoid trying to communicate from a different room or with your back turned. You should also make sure that your face is well lit.
Type A
Normal Tympanogram
The peak of the pressure curve is at 0 or falls between +50 and -150 mm H2O of pressure. Peak compliance falls between 0.2 and 1.8 ml.
Type As
(shallow): Abnormal Tympanogram
The peak of the pressure curve falls within normal limits (WNL)
Peak compliance is very low (well below 0.2 ml)
Often associated with ossicular fixation or TM scarring.
May result in a fairly flat, non-fluctuating hearing loss.
Eustachian tube function is normal
Type Ad
(disarticulation): Abnormal Tympanogram
The peak of the pressure curve falls within normal limits (WNL)
Peak compliance is very high or off the chart
Associated with ossicular disarticulation
May result in a fairly flat, non-fluctuating hearing loss.
Eustachian tube function is normal
Type B
tympanograms are a flat line, indicating fluid or infection behind the eardrum. In some cases, these tympanograms are seen when there is a hole in the eardrum; the difference lies in the ear canal volume: a larger ear canal volume indicates a perforation in the eardrum.
Type C
There is a clearly defined peak, but it falls on the negative side of the chart, indicating negative middle ear pressure.
Peak pressure is seen at greater than -150 mm H2O (shifted to the left).
Peak compliance may be normal
Diagnosis: Eustachian tube dysfunction, may cause a very mild conductive loss, or hearing can be WNL
conductive hearing loss audiogram
there is an air bone gap, with the air conduction within the hearing loss range
SNHL audiogram
both air and bone conduction are within the hearing loss range (along the SAME points)
mixed hearing loss audiogram
both air and bone conduction are within the hearing loss range (along DIFFERENT points)
6 Ling sounds
/m/ 1,000-1,500 Hz
/oo/ - [u] 1,170 Hz
/ee/ - [i] 3,200 Hz
/ah/ - [a] 1,750 Hz
/sh/ 4,500-5,500 Hz
/s/ 5,000-6,000 Hz