When it comes to hearing, precision is important, researchers say. Because vertebrates such as birds and humans have two ears, and sound waves from either side travel different distances to reach each one, localizing sound involves discerning subtle differences. The brain has to keep time better than a Swiss watch to locate where a sound is coming from. Indeed, the precision of this sound processing is a limiting factor in how well a person detects the location of sound and perceives speech. Now, a study from researchers at Lehigh University identifies the specific synaptic and post-synaptic characteristics that allow auditory neurons to compute with heightened temporal precision. The team state that their findings ultimately reveal the optimal arrangement of inputs and electrical properties needed for neurons to process their ‘preferred’ frequency with maximum precision. The study is published in The Journal of Neuroscience.
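To get a feel for the timescales involved, the travel-time difference between the two ears can be estimated with a simple plane-wave model. This is a minimal illustrative sketch, not a calculation from the study: the head width, the speed of sound, and the formula are textbook approximations chosen for clarity.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air at roughly 20 °C
EAR_SEPARATION = 0.20    # m; illustrative adult head width (assumption)

def interaural_time_difference(azimuth_deg: float) -> float:
    """Plane-wave approximation of the extra travel time (in seconds)
    to the far ear for a source at the given azimuth
    (0 deg = straight ahead, 90 deg = directly to one side)."""
    return EAR_SEPARATION * math.sin(math.radians(azimuth_deg)) / SPEED_OF_SOUND

# Even a source directly off to one side produces a delay well under a millisecond:
print(f"{interaural_time_difference(90) * 1e6:.0f} microseconds")  # 583 microseconds
```

That sub-millisecond window is why the text compares the brain's timekeeping to a Swiss watch: the difference the auditory system must resolve is on the order of hundreds of microseconds at most.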
Previous studies show that hair cells in the cochlea, the auditory portion of the inner ear, vibrate in response to sound and thereby convert it into electrical activity. Each hair cell in the cochlea is partnered with several neurons that convey information from the ear to the brain in an orderly way. Timing precision matters to cochlear neurons because their firing pattern is specific to each sound frequency. Earlier studies from the lab demonstrated for the first time that synaptic inputs, the messages sent between neurons, are distinct across frequencies and that these different impulse patterns are ‘mapped’ onto the neurons of the cochlear nucleus. However, the mechanisms that allow neurons to respond properly to these frequency-specific incoming messages remained poorly understood. The current study investigates auditory brain cell membrane selectivity and shows that neurons tuned to receive high-frequency sound preferentially select faster inputs than their low-frequency-processing counterparts, and that this preference is tolerant of changes to the inputs being received.
The current study develops computer simulations of low-frequency and high-frequency neurons, based on observations of physiological activity. These computational models tested which combinations of properties are crucial to phase-locking, where neurons fire in synchrony with the phase of a stimulus. The model predicted the optimal arrangement of synaptic properties for phase-locking, which is specific to the stimulus frequency. These computational predictions were then tested physiologically in the neurons.
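Phase-locking is commonly quantified with the vector strength statistic: each spike time is converted to a phase within the stimulus cycle, and the length of the mean resultant vector measures how tightly spikes cluster at one phase. The sketch below is a generic illustration of that standard measure, not code from the study's model.

```python
import math

def vector_strength(spike_times, stimulus_freq):
    """Vector strength of spike times relative to a periodic stimulus:
    1.0 = perfect phase-locking (every spike at the same cycle phase),
    0.0 = spikes spread uniformly across the cycle."""
    phases = [2 * math.pi * stimulus_freq * t for t in spike_times]
    n = len(phases)
    x = sum(math.cos(p) for p in phases) / n
    y = sum(math.sin(p) for p in phases) / n
    return math.hypot(x, y)

# Spikes landing at the same point of every 250 Hz cycle phase-lock perfectly:
locked = [i / 250.0 for i in range(50)]
print(round(vector_strength(locked, 250.0), 6))  # 1.0
```

A model neuron that fires in synchrony with the stimulus scores near 1.0; one whose spikes drift across the cycle scores near 0.0, which is how "firing in synchrony with the phase of a stimulus" can be made quantitative.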
The researchers state that they investigated the properties that contribute to temporal processing both physiologically and in a computational model. Results show that neurons processing low-frequency input benefit from integrating many weak inputs, whereas those processing higher frequencies progressively lose precision when integrating multiple inputs. They conclude that they have revealed general features of input-output optimization that apply to all neurons that process time-varying input.
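One intuition for why pooling inputs hurts at high frequencies: a fixed amount of timing jitter in the arriving inputs is a small fraction of a long low-frequency cycle but a large fraction of a short high-frequency cycle. The sketch below uses the standard closed-form expected vector strength for Gaussian jitter; the 200 µs jitter figure is an illustrative assumption, not a value from the study.

```python
import math

def phase_locking_with_jitter(freq_hz, jitter_sd_s):
    """Expected vector strength of spikes carrying Gaussian timing jitter:
    exp(-(2*pi*f*sigma)^2 / 2). Phase-locking collapses once the jitter
    becomes a sizeable fraction of the stimulus cycle."""
    return math.exp(-0.5 * (2 * math.pi * freq_hz * jitter_sd_s) ** 2)

# The same 200 microseconds of input jitter is mild at 250 Hz (4 ms cycle)
# but devastating at 2.5 kHz (0.4 ms cycle):
print(round(phase_locking_with_jitter(250, 200e-6), 3))   # 0.952
print(round(phase_locking_with_jitter(2500, 200e-6), 3))  # 0.007
```

This is consistent with the study's picture: low-frequency neurons can afford to average many somewhat-jittered weak inputs, while high-frequency neurons need fewer, faster, more precisely timed inputs to keep their place within a much shorter cycle.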
The team surmise that they have assembled what is known about the optimal electrical properties and synaptic inputs into a single cohesive model, laying the groundwork needed to investigate some of the big questions in the field of auditory neuroscience. For the future, the researchers state that resolving these questions may someday lead the global medical community to a better understanding of how to preserve the natural organization of the auditory structures in the brain for those who are born with profound hearing loss.
Michelle is a health industry veteran who taught and worked in the field before training as a science journalist.
Featured by numerous prestigious brands and publishers, she specializes in clinical trial innovation, expertise she gained while working in multiple positions within the private sector, the NHS, and Oxford University.