Here's a little info.
Hearing (neuroscience)
The general perceptual behavior and the specific responses that are made in relation to sound stimuli. The auditory system consists of the ear and the auditory nervous system. The ear comprises outer, middle, and inner ear. The outer ear, visible on the surface of the body, directs sounds to the middle ear, which converts sounds into vibrations of the fluid that fills the inner ear. The inner ear contains the vestibular and the auditory sensory organs. See also Ear (vertebrate).
The auditory part of the inner ear, known as the cochlea because of its snaillike shape, analyzes sound in a way that resembles spectral analysis. It contains the sensory cells that convert sounds into nerve signals to be conducted through the auditory portion of the eighth cranial nerve to higher brain centers. The neural code in the auditory nerve is transformed as the information travels through a complex system of nuclei connected by fiber tracts, known as the ascending auditory pathways. They carry auditory information to the auditory cortex, which is the part of the sensory cortex where perception and interpretation of sounds are believed to take place. Interaction between the neural pathways of the two ears makes it possible for a person to determine the direction of a sound's source. See also Binaural sound system; Brain.
Role of the ear
The pinna, the projecting part of the outer ear, collects sound, but because it is small in relation to the wavelengths of sound that are important for human hearing, the pinna plays only a minor role in hearing. The ear canal acts as a resonator: it increases the sound pressure at the tympanic membrane in the frequency range between 1500 and 5000 Hz. The difference between the arrival time of a sound at each of the two ears and the difference in the intensity of the sound that reaches each ear are used by the auditory nervous system to determine the location of the sound source.
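As a rough illustration (not from the article itself), the ear canal's resonance can be approximated by treating the canal as a quarter-wave resonator closed at the eardrum end; the canal length below is an assumed textbook-style figure, not a value given here:

```python
# Sketch: the ear canal modeled as a quarter-wave resonator
# (a tube closed at the eardrum end). The canal length is an
# assumed average adult value, not a figure from this article.
SPEED_OF_SOUND = 343.0   # m/s, in air at about 20 degrees C
CANAL_LENGTH = 0.025     # m, assumed average adult ear canal length

resonant_freq = SPEED_OF_SOUND / (4 * CANAL_LENGTH)
print(f"Approximate ear-canal resonance: {resonant_freq:.0f} Hz")
```

With the assumed 2.5 cm canal this comes out near 3400 Hz, which sits inside the 1500–5000 Hz range quoted above.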
Sound that reaches the tympanic membrane causes the membrane to vibrate, and these vibrations set in motion the three small bones of the middle ear: the malleus, the incus, and the stapes. The footplate of the stapes is located in an opening of the cochlear bone—the oval window. Moving in a pistonlike fashion, the stapes sets the cochlear fluid into motion and thereby converts sound (pressure fluctuations in the air) into motion of the cochlear fluid. Motion of the fluid in the cochlea begins the neural process known as hearing.
There are two small muscles in the middle ear: the tensor tympani and the stapedius muscles. The former pulls the manubrium of the malleus inward, while the latter is attached to the stapes and pulls the stapes in a direction that is perpendicular to its pistonlike motion. The stapedius muscle is the smallest striated muscle in the body, and it contracts in response to an intense sound. This is known as the acoustic middle-ear reflex. The muscle's contraction reduces sound transmission through the middle ear and thus acts as a regulator of input to the cochlea. Perhaps a more important function of the stapedius muscle is that it contracts immediately before and during a person's own vocalization, reducing the sensitivity of the speaker's ears to his or her own voice and possibly reducing the masking effect of an individual's own voice. The role of the tensor tympani muscle is less well understood, but it is thought that contraction of the tensor tympani muscle facilitates proper ventilation of the middle-ear cavity. These two muscles are innervated by the facial (VIIth) nerve for the stapedius and the trigeminal (Vth) nerve for the tensor tympani. The acoustic stapedius reflex plays an important role in the clinical diagnosis of disorders affecting the middle ear, the cochlea, and the auditory nerve.
Vibrations in the cochlear fluid set up a traveling wave on the basilar membrane of the cochlea. When tones are used to set the cochlear fluid into vibration, one specific point on the basilar membrane will vibrate with a higher amplitude than any other. Therefore, a frequency scale can be laid out along the basilar membrane, with low frequencies near the apex and high frequencies near the base of the cochlea.
The sensory cells that convert the motion of the basilar membrane into a neural code in individual auditory nerve fibers are located along the basilar membrane. They are also known as hair cells, because they have hairlike structures on their surfaces. The hair cells in the mammalian cochlea function as mechanoreceptors: motion of the basilar membrane causes deflection of the hairs, starting a process that eventually results in a change in the discharge rate of the nerve fiber connected to each hair cell. This process includes the release of a chemical transmitter substance at the base of the hair cells that controls the discharge rate of the nerve fiber.
The frequency selectivity of the basilar membrane provides the central nervous system with information about the frequency or spectrum of a sound, because each auditory nerve fiber is “tuned” to a specific frequency. The frequency of a sound is also represented in the time pattern of the neural code, at least for frequencies up to 5 kHz. Thus, the frequency or spectrum of a sound can be coded for place and time in the neural activity in the auditory nervous system. See also Audiometry; Pitch.
Auditory nervous system
The ascending auditory nervous system consists of a complex chain of clusters of nerve cells (nuclei), connected by nerve fibers (nerve tracts). The chain of nuclei relays and transforms auditory information from the periphery of the auditory system, the ear, to the central structures, or auditory cortex, which is believed to be associated with the ability to interpret different sounds. Neurons in the entire auditory nervous system are, in general, organized anatomically according to the frequency of a tone to which they respond best, which suggests a tonotopical organization in the auditory nervous system and underscores the importance of representations of frequency in that system. However, when more complex sounds were used to study the auditory system, qualities of sounds other than frequency or spectrum were found to be represented differently in different neurons in the ascending auditory pathway, with more complex representation in the more centrally located nuclei. Thus, the response patterns of the cells in each division of the cochlear nucleus are different, which indicates that extensive signal processing is taking place. Although the details of that processing remain to be determined, the cells appear to sort the information and then relay different aspects of it through different channels to more centrally located parts of the ascending auditory pathway. As a result, some neurons seem to respond only if more than one sound is presented at the same time, others respond best if the frequency or intensity of a sound changes rapidly, and so on.
Another important feature of the ascending auditory pathway is the ability of particular neurons to signal the direction of sound origination, which is based on the physical differences in the sound reaching the two ears. Certain centers in the ascending auditory pathway seem to have the ability to compute the direction to the sound source on the basis of such differences in the sounds that reach the ears.
Knowledge of the descending auditory pathway is limited to the fact that the most peripheral portion can control the sensitivity of the hair cells. See also Hearing impairment; Loudness; Masking of sound; Phonoreception; Signal detection theory; Sound.
Hearing (vertebrate)
The ability to perceive sound arriving from distant vibrating sources through the environmental medium (such as air, water, or ground). The primary function of hearing is to detect the presence, identity, location, and activity of distant sound sources. Sound detection is accomplished using structures that collect sound from the environment (outer ears), transmit sound efficiently to the inner ears (via middle ears), transform mechanical motion to electrical and chemical processes in the inner ears (hair cells), and then transmit the coded information to various specialized areas within the brain. These processes lead to perception and other behaviors appropriate to sound sources, and probably arose early in vertebrate evolution.
Sound is gathered from the environment by structures that are variable among species. In many fishes, sound pressure reaching the swim bladder or another gas-filled chamber in the abdomen or head causes fluctuations in volume that reach the inner ears as movements. In addition, the vibration of water particles that normally accompany underwater sound reaches the inner ears to cause direct, inertial stimulation. In land animals, sound causes motion of the tympanic membrane (eardrum). In amphibians, reptiles, and birds, a single bone (the columella) transmits tympanic membrane motion to the inner ears. In mammals, there are three interlinked bones (malleus, incus, and stapes). Mammals that live underground may detect ground-borne sound via bone conduction. In whales and other sea mammals, sound reaches the inner ears via tissue and bone conduction.
The inner ears of all vertebrates contain hair-cell mechanoreceptors that transform motion of their cilia to electrochemical events resulting in action potentials in cells of the eighth cranial nerve. Patterns of action potentials reaching the brain represent sound wave features in all vertebrates. All vertebrates have an analogous set of auditory brain centers. See also Ear (vertebrate); Physiological acoustics.
Experiments show that vertebrates have more commonalities than differences in their sense of hearing. The major difference between species is in the frequency range of hearing, from below 1 Hz to over 100,000 Hz. In other fundamental hearing functions (such as best sensitivity, sound intensity and frequency discrimination acuity, time and frequency analysis, and source localization), vertebrates have much in common. All detect sound within a restricted frequency range. All species are able to detect sounds in the presence of interfering sounds (noise), discriminate between different sound features, and locate the sources of sound with varying degrees of accuracy.
The sensitivity range is similar among all groups, with some species in all groups having a best sensitivity in the region of −20 to 0 dB. Fishes, amphibians, reptiles, and birds hear best between 100 and 5000 Hz. Only mammals hear at frequencies above 10,000 Hz. Humans and elephants have the poorest high-frequency hearing.
Thesaurus: hearing
noun
The sense by which sound is perceived: audition, ear. See sounds/pleasant sounds/unpleasant sounds/neutral sounds or silence.
Range of audibility: earshot, sound1. See sounds/pleasant sounds/unpleasant sounds/neutral sounds or silence.
A chance to be heard: audience, audition. See sounds/pleasant sounds/unpleasant sounds/neutral sounds or silence.
The examination and deciding upon evidence, charges, and claims in court: trial. See law.
Britannica Concise: hearing
Physiological process of perceiving sound. Hearing entails the transformation of sound vibrations into nerve impulses, which travel to the brain and are interpreted as sounds. Members of two animal groups, arthropods and vertebrates, are capable of sound reception. Hearing enables an animal to sense danger, locate food, find mates, and, in more complex creatures, engage in communication (see animal communication). All vertebrates have two ears, often with an inner chamber housing auditory hair cells (papillae) and an outer eardrum that receives and transmits sound vibrations. Localization of sound depends on the recognition of minute differences in intensity and in the time of arrival of the sound at the two ears. Sound reception in mammals is generally well developed and often highly specialized, as in bats and dolphins, which use echolocation, and whales and elephants, which can hear mating calls from tens or even hundreds of miles away. Dogs and other canines can similarly detect faraway sounds. The human ear can detect frequencies of 20–20,000 hertz (Hz); it is most sensitive to those between 1,000 and 3,000 Hz. Impulses travel along the central auditory pathway from the cochlear nerve to the medulla to the cerebral cortex. Hearing may be impaired by disease, injury, or old age; some disorders, including deafness, may be congenital. See also hearing aid.
Medical Dictionary: hearing
n.
The sense by which sound is perceived; the capacity to hear.
Word Tutor: hearing
IN BRIEF: n. - The act of perceiving sound; (law) a proceeding (usually by a court) where evidence is taken for the purpose of determining an issue of fact and reaching a decision based on that evidence.
Praise does wonders for our sense of hearing.
WordNet: hearing
The noun hearing has 6 meanings:
Meaning #1: a proceeding (usually by a court of law) where evidence is taken for the purpose of determining an issue of fact and reaching a decision based on that evidence
Meaning #2: an opportunity to state your case and be heard
Synonym: audience
Meaning #3: the range within which a voice can be heard
Synonyms: earshot, earreach
Meaning #4: the act of hearing attentively
Synonym: listening
Meaning #5: a session (of a committee or grand jury) in which witnesses are called and testimony is taken
Meaning #6: the ability to hear; the auditory faculty
Synonyms: audition, auditory sense, sense of hearing, auditory modality
--------------------------------------------------------------------------------
The adjective hearing has one meaning:
Meaning #1: able to perceive sound
Antonym: deaf (meaning #1)
Wikipedia: hearing (sense)
Hearing, one of the traditional five senses, is the ability to detect sound. In humans and other vertebrates, hearing is performed primarily by the auditory system: sound is detected by the ear and transduced into nerve impulses that are perceived by the brain.
Not all sounds are normally audible to all animals. Each species has a range of normal hearing for both loudness (amplitude) and pitch (frequency). Many animals use sound to communicate with each other, and in these species hearing is particularly important for survival and reproduction. In species that use sound as a primary means of communication, hearing is typically most acute for the range of pitches produced in calls and speech.
Frequencies that humans can hear are called audio, or sonic. Frequencies higher than audio are referred to as ultrasonic, while frequencies below audio are referred to as infrasonic. Some bats use ultrasound for echolocation while in flight. Dogs are able to hear ultrasound, which is the principle behind 'silent' dog whistles. Snakes sense infrasound through their bellies, and whales, giraffes, and elephants use it for communication.
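The three bands named above can be sketched as a small classifier; the 20 Hz–20 kHz cutoffs are the usual textbook figures for human hearing, quoted later in this entry, not exact physiological constants:

```python
def classify_frequency(freq_hz, low=20.0, high=20_000.0):
    """Label a frequency relative to an assumed human audible band
    (20 Hz to 20 kHz, the usual textbook figures)."""
    if freq_hz < low:
        return "infrasonic"
    if freq_hz > high:
        return "ultrasonic"
    return "sonic"

# A whale call component, a concert A, and a bat echolocation chirp:
for f in (5, 440, 40_000):
    print(f, "Hz ->", classify_frequency(f))
```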
Humans can generally hear sounds with frequencies between 20 Hz and 20 kHz, and human hearing can discriminate tiny differences in loudness (intensity) and pitch (frequency) across that large audible range. The healthy range of frequency detection varies significantly with age, occupational hearing damage, and gender; some individuals can hear pitches up to 22 kHz and perhaps beyond, while others are limited to about 16 kHz. The ability to hear frequencies above the roughly 200–8,000 Hz range, where most human communication takes place, begins to deteriorate in early middle age.[1]
Human beings develop spoken language within the first few years of life, and hearing impairment can impede not only the ability to talk but also the ability to understand the spoken word. By the time it becomes apparent that a severely hearing-impaired (deaf) child has a hearing deficit, problems with communication may already have caused difficulties within the family and hindered social skills, unless the child is part of a Deaf community where sign language is used instead of spoken language (see Deaf culture). In many developed countries, hearing is evaluated during the newborn period in an effort to prevent the inadvertent isolation of a deaf child in a hearing family. Although sign language is a full means of communication, literacy depends on understanding spoken language: in the great majority of written languages, the sound of the word is coded in symbols. An individual who hears and learns to speak and read will retain the ability to read even if hearing later becomes too impaired to hear voices, but a person who never heard well enough to learn to speak is rarely able to read proficiently.[2] Most evidence points to early identification of hearing impairment as the key if a child with very insensitive hearing is to learn spoken language.
Hearing can be measured by behavioral tests using an audiometer. Electrophysiological tests can provide accurate measurements of hearing thresholds even in unconscious subjects. Such tests include the auditory brainstem response (ABR), otoacoustic emissions, and electrocochleography (ECochG). Technical advances in these tests have allowed hearing screening of infants to become widespread.
The physiology of hearing in vertebrates is not fully understood. The molecular mechanism of sound transduction within the cochlea and the processing of sound by the brain (in the auditory cortex) are two areas that remain largely unknown.
The rest of this article describes the functioning of human hearing, from the ear to the primary auditory cortex. Like the sense of touch, audition requires sensitivity to the movement of molecules in the world outside the organism. Both hearing and touch are types of mechanosensation.[3]
The Outer Ear captures sound
The visible portion of the outer ear in humans is called the auricle, a convoluted cup that arises from the opening of the ear canal on either side of the head. The auricle helps direct sound to the ear canal, and these two components of the outer ear (auricle and ear canal) both amplify and guide sound waves to the tympanic membrane (eardrum). In humans, this amplification ranges from 5 to 20 dB for frequencies within the speech range (about 1.5–7 kHz). Because the shape and length of the human external ear preferentially amplify sound in the speech frequencies, the external ear also improves the signal-to-noise ratio for speech sounds.[4]
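To put the 5–20 dB figure above in concrete terms, it can be converted to sound-pressure ratios with the standard decibel formula (the formula is general acoustics practice, not something specific to this article):

```python
import math

def db_to_pressure_ratio(db):
    """Convert a gain in decibels to the equivalent
    sound-pressure amplitude ratio (20*log10 convention)."""
    return 10 ** (db / 20)

# The outer-ear amplification range quoted above:
for gain_db in (5, 20):
    print(f"{gain_db} dB -> pressure x{db_to_pressure_ratio(gain_db):.2f}")
```

So the outer ear's 5–20 dB boost corresponds to roughly 1.8x to 10x the sound pressure at the eardrum.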
The Middle Ear formats sound for the cochlea (impedance matching)
The ear drum is stretched across the front of a bony, air-filled cavity called the middle ear. Just as the tympanic membrane is rather like a drum head, the middle-ear cavity is something like the drum body. The cavity is also rather like a specialized sinus, connected to the back of the nose (nasopharynx) by the Eustachian tube. Pressure equilibrium with the atmosphere is maintained through intermittent opening of the Eustachian tube in response to swallowing and other maneuvers.
The deepest aspect of the middle ear is at the oval window, where the footplate of the stapes divides the middle ear from the inner ear. The ossicular chain is an articulated bridge of three ossicles (ear bones) that spans the depth of the middle-ear cavity. The most superficial portion of the chain is the malleus, an angled ossicle whose long process is embedded in the center of the taut portion (pars tensa) of the tympanic membrane. The head of the malleus sits above, at the top of the middle-ear space in the epitympanum (or attic), where it articulates with the body of the second ossicle, the incus. The incus tapers to a thin long process that descends to the third ossicle, the stapes. This stirrup-shaped bone's footplate sits in the oval window, an opening into the fluid-filled inner ear.
Much of the middle ear's function in hearing is to convert sound waves in the air surrounding the body into vibrations of the fluid within the cochlea of the inner ear. Sound waves move the tympanic membrane, which moves the long process of the malleus, which moves the incus, which depresses the stapes footplate into the oval window, which moves the fluid of the cochlea. Ordinarily, when sound waves in air strike fluid, more than 99% of the energy is reflected off the surface of the fluid. Most people have an intuitive sense of this from hearing underwater: the sounds of the shore or poolside are almost inaudible, remaining in the air.
When submerged in fluid, hearing occurs by bone conduction through the skull (including the mastoid), and the air conduction of sound performed by the middle ear is severely reduced. The middle ear provides impedance matching between sound traveling in air and sound traveling in fluid, overcoming the mismatch at the interface between them.
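The "more than 99% reflected" figure mentioned above can be checked with the standard formula for energy reflection at an impedance discontinuity; the impedance values below are common reference values for air and water (as a stand-in for cochlear fluid), not numbers taken from this article:

```python
# Sketch of the air-to-fluid impedance mismatch the middle ear overcomes.
# Characteristic acoustic impedances are assumed standard values.
Z_AIR = 415.0      # rayl (Pa*s/m), air at room temperature
Z_WATER = 1.48e6   # rayl, water, standing in for cochlear fluid

# Fraction of incident energy reflected at a plane interface:
reflected = ((Z_WATER - Z_AIR) / (Z_WATER + Z_AIR)) ** 2
print(f"Fraction of energy reflected: {reflected:.4f}")
print(f"Fraction transmitted: {1 - reflected:.4%}")
```

Under these assumptions only about 0.1% of the energy crosses the interface, consistent with the text's claim that without impedance matching more than 99% of the sound energy would be lost.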
The middle ear amplifies sound from the air to the fluid at the oval window in several specific ways. The first is the "hydraulic principle": the vibratory portion of the tympanic membrane has many times the surface area of the stapes footplate, so the sound pressure collected over the tympanic membrane is concentrated onto the much smaller footplate, increasing the pressure and thereby amplifying the sound.
The second way in which the middle ear amplifies air-conducted sound into the fluid of the cochlea is the "lever principle". The articulated ossicular chain acts as a lever: the long arm is the long process of the malleus, the fulcrum is the body of the incus, and the short arm is the lenticular process of the incus.
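A back-of-envelope estimate can combine the hydraulic and lever principles just described. The two ratios below are commonly cited textbook approximations, not values given in this article:

```python
import math

# Rough middle-ear pressure gain from the "hydraulic" and "lever"
# principles. Both ratios are assumed textbook-style approximations.
AREA_RATIO = 17.0   # effective tympanic membrane area / footplate area
LEVER_RATIO = 1.3   # malleus lever arm / incus lever arm

pressure_gain = AREA_RATIO * LEVER_RATIO
gain_db = 20 * math.log10(pressure_gain)
print(f"Pressure gain: x{pressure_gain:.1f} ({gain_db:.1f} dB)")
```

With these assumed ratios the estimate lands near 27 dB, in the neighborhood of the middle-ear gain usually quoted in physiology texts.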
The third way in which the middle ear amplifies sound is by "round window protection". The cochlea is a fluid-filled tube, wound about on itself. One end of the tube is at the oval window, the other at the round window. Both windows, openings in the bone, lie on the deep wall of the middle-ear space. Whereas the tympanic membrane and ossicular chain preferentially direct sound to the oval window, middle-ear structures shield the round window from having sound waves impinge directly on its surface. These protective structures are the intact tympanic membrane and the round window niche. If there were no tympanic membrane or ossicular chain, sound waves would strike the round window and the oval window at the same time, and much of the energy would be lost. Fluid movement inside the cochlea, with respect to both windows and the stimulation of hair cells, is explained more fully in the next section on the cochlea.
The middle ear can also dampen sound conduction somewhat when faced with very loud sound, through noise-induced reflex contraction of the middle-ear muscles.
The Cochlea of the Inner Ear Transforms Sound Energy into Nerve Impulses
The cochlea is a snail-shaped, fluid-filled chamber, divided along almost its entire length by a membranous partition. The walls of the hollow cochlea are made of bone, with a thin, delicate lining of epithelial tissue. Two fluid-filled spaces (scalae) are formed by this dividing membrane. The fluid in both is called perilymph: a clear solution of electrolytes and proteins. The two scalae communicate with each other through an opening at the top (apex) of the cochlea called the helicotrema, a common space that is the one part of the cochlea lacking the lengthwise dividing membrane.
At the base of the cochlea, each scala ends in a membrane that faces the middle-ear cavity. The scala vestibuli ends at the oval window, where the footplate of the stapes sits. The footplate rocks when the ear drum moves the ossicular chain, sending ripples through the perilymph that travel away from the footplate and toward the helicotrema. Those fluid waves then continue in the perilymph of the scala tympani, which ends at the round window; the round window membrane bulges out when the waves reach it, providing pressure relief. This one-way movement of waves from oval window to round window occurs because the middle ear directs sound to the oval window but shields the round window from being struck by sound waves from the external ear. This matters because waves arriving from both directions, through the round and oval windows, would cancel each other out. In fact, when the middle ear is damaged such that there is no tympanic membrane or ossicular chain, and the round window is exposed rather than set under the ledge of the round window niche, there is a maximal conductive hearing loss of about 60 dB.
The lengthwise partition that divides most of the cochlea is itself a fluid-filled tube, the third scalae. This central column is called the scala media or cochlear duct. Its fluid, endolymph, also contains electrolytes and proteins, but is chemically quite different from perilymph. Whereas the perilymph is rich in sodium salts, the endolymph is rich in potassium salts.
The cochlear duct is supported on three sides by a rich bed of capillaries and secretory cells (the stria vascularis), a layer of simple squamous epithelial cells (Reissner's membrane), and the basilar membrane, on which rests the receptor organ for hearing - the organ of Corti. The cochlear duct is almost as complex on its own as the ear itself.
The ear is a very active organ. Not only does the cochlea "receive" sound, it generates sound as well. Some of the hair cells of the cochlear duct can change their shape enough to move the basilar membrane and produce sound. This process is important in fine-tuning the cochlea's ability to detect small differences in incoming acoustic information, and it is described more fully in the following section. Sound produced by the inner ear is called an otoacoustic emission (OAE), and it can be recorded by a microphone in the ear canal. Otoacoustic emissions are important in some types of tests for hearing impairment.
The Basilar Membrane and the Tectorial Membrane
The basilar membrane is a resonant structure that, like the strings of an instrument, varies in width and stiffness. Unlike the parallel strings of a guitar, however, it is a single long structure with different resonant properties at different points along its length. The motion of the basilar membrane has been described as a traveling wave, and it is greatest at the point where the resonant frequency of the membrane matches the frequency of the incoming sound. As with an instrument string, how stiff and how wide the membrane is at a given point along its length determines its resonant frequency (pitch). The basilar membrane is widest (0.42–0.65 mm) and least taut at the apex of the cochlea, and narrowest (0.08–0.16 mm) and most taut at the base.[5]
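The mapping from position along the basilar membrane to best frequency is often approximated by Greenwood's empirical place-frequency function; this function and its constants come from the wider literature, not from this article:

```python
def greenwood_frequency(x):
    """Greenwood's place-frequency function for the human cochlea
    (an empirical fit from the literature, used here as an assumed
    illustration). x is the fractional distance along the basilar
    membrane from the apex (0.0) to the base (1.0)."""
    return 165.4 * (10 ** (2.1 * x) - 0.88)

# Apex, midpoint, and base of the membrane:
for x in (0.0, 0.5, 1.0):
    print(f"x = {x:.1f}: best frequency about {greenwood_frequency(x):.0f} Hz")
```

The fit spans roughly 20 Hz at the apex to about 20 kHz at the base, matching the low-frequency-at-apex, high-frequency-at-base layout described earlier in this entry.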
The Inner Hair Cell and the Organ of Corti
Sound waves set the basilar membrane into motion. When the basilar membrane moves, the tectorial membrane shears the tops of the hair cells, displacing the stereocilia of those cells. Much like a switch, the kinocilium of the inner hair cell "turns on" when there is enough movement of the surrounding fluid to fully displace it. That amount of movement is very small, as little as the diameter of a hydrogen atom. When the kinocilium is deflected, the hair cell depolarizes and releases neurotransmitter, which crosses the synapse and stimulates the dendrites of the neurons of the spiral ganglion.
The Spiral Ganglion and the Cochlear Nucleus
Spiral ganglion cells are strung along the bony core of the cochlea. They are the bipolar first-order neurons of the auditory pathway. Their dendrites make synaptic contact with the bases of the hair cells, and their axons are bundled together to form the auditory portion of the eighth cranial nerve. In humans, these central axons number about 35,000 on each side. The acoustic information carried by each fiber of this cranial nerve is very narrowly focused.
Central Auditory Pathways- From Ear to Brain (and back!)
Afferent conduction of sound-generated nerve impulses follows the pathways from the receiving ends of the eighth cranial nerve to the auditory areas of the cerebral cortex. Surprisingly, this is not the only direction in which impulses travel in the auditory pathways. Properties of the inner ear are actually set by higher centers of the brain, and the pathways that carry impulses from these higher centers down to the cochlea are efferent pathways. This section first summarizes the afferent pathways, describes a special circuit in the human brain dedicated to sound localization, and then describes the efferent pathways.
The Cochlear Nucleus
The first station on the central auditory route to the cerebral cortex is the cochlear nucleus (CN). Like nearly all structures involved in audition, there are two of these, left and right. At the level of the cochlear nuclei, the input from the two ears remains, for the most part, separated. Just as the inner hair cells are arranged according to the pitch they "hear" best, so is the cochlear nucleus. This so-called tonotopic organization is preserved because only a few inner hair cells synapse on the dendrites of a given spiral ganglion cell, and the axon of that cell synapses on only a very few dendrites in the cochlear nucleus.
Each cochlear nucleus has two parts, dorsal (DCN) and ventral (VCN). The Cochlear Nucleus receives input from each spiral ganglion, and also receives input from other parts of the brain. How the inputs from other areas of the brain affect hearing is unknown.
The Stria, and The Trapezoid Body
Axons leaving the VCN form a broad pathway that crosses under the brain stem in a tract known as the ventral stria or trapezoid body. A thin pathway, the intermediate acoustic stria, also leaves the VCN, merging with the trapezoid body close to the superior olivary complex, where many of its axons synapse. Axons leaving the DCN form the dorsal acoustic stria, which reaches primarily the contralateral dorsal nucleus of the lateral lemniscus and the central nucleus of the inferior colliculus.
Superior Olivary Complex: Integration of sound information from the Right and Left
The next level of processing takes place in the superior olivary complex, where the inputs from the two ears converge and interact to encode sound direction.
Localization of sound by humans: a brain circuit
Main article: sound localization
Humans are normally able to hear a range of sound frequencies, from about 20 Hz to 20 kHz. Our ability to estimate where a sound is coming from, sound localization, depends on the hearing ability of each of the two ears and on the exact quality of the sound. Since the ears lie on opposite sides of the head, a sound reaches the closer ear first, and its amplitude is greater in that ear. Much of the brain's ability to localize sound depends on these interaural (between-ear) intensity differences and interaural temporal or phase differences.
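The interaural time difference cue can be sketched with the classic Woodworth spherical-head model; both the model and the head radius below are assumptions drawn from the wider literature, not details given in this article:

```python
import math

# Interaural time difference (ITD) for a distant source, using the
# Woodworth spherical-head approximation. Head radius is an assumed
# average adult value.
HEAD_RADIUS = 0.0875    # m
SPEED_OF_SOUND = 343.0  # m/s

def itd_seconds(azimuth_deg):
    """ITD for a far-field source at the given azimuth
    (0 degrees = straight ahead, 90 = directly to one side)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

for az in (0, 45, 90):
    print(f"{az:3d} deg: ITD about {itd_seconds(az) * 1e6:.0f} microseconds")
```

Under these assumptions the maximum ITD, for a source directly to one side, is on the order of a few hundred microseconds, which is the scale of timing difference the superior olivary complex is thought to resolve.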
Human echolocation is a technique used by some blind people to navigate within their environment using reflected sound.
answered by golden rider 6 · answer #5 · 2007-01-11 14:15:18