Some science behind the scenes
We rely less on hearing than on sight to build up an image of the world, but much the same observations can be made about our sense of hearing as about our sight – it is limited to only that which we need to know in order to operate as we were designed.
Filtering by the sensory organs
Just as with light, the range of sounds far exceeds that which the ear can detect. The diagram below, obtained from Wikipedia, should help to illustrate this.
The upper frequency limit in humans (approximately 20 kHz) is determined by the middle ear, which acts as a low-pass filter. Ultrasonic hearing can occur if ultrasound is fed directly into the skull bone and reaches the cochlea without passing through the middle ear.
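The middle ear's behaviour as a low-pass filter can be illustrated with a toy sketch. This is not a model of the actual ear – the first-order filter and the 20 kHz cutoff are simplifying assumptions, chosen only to echo the limit quoted above – but it shows the general idea: frequencies well below the cutoff pass through almost unchanged, while frequencies above it are increasingly attenuated.

```python
import math

def lowpass_gain(freq_hz, cutoff_hz):
    """Gain of an idealised first-order low-pass filter.

    Returns a value between 0 and 1: close to 1 well below the
    cutoff, falling towards 0 as the frequency rises above it.
    """
    return 1.0 / math.sqrt(1.0 + (freq_hz / cutoff_hz) ** 2)

# Hypothetical cutoff matching the quoted upper limit of human hearing
CUTOFF_HZ = 20_000

for f in (1_000, 20_000, 40_000, 100_000):
    print(f"{f:>7} Hz -> gain {lowpass_gain(f, CUTOFF_HZ):.3f}")
```

At 1 kHz the gain is nearly 1; at the 20 kHz cutoff it has fallen to about 0.707; at 100 kHz (well into the ultrasonic range) it is below 0.2 – the filter has effectively screened the sound out.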
Infrasound is sound that is lower in frequency than 20 cycles per second, the normal limit of human hearing. The ear filters out infrasound, but it is possible to feel infrasound vibrations in various parts of the body.
Some animals – such as dogs, cats, dolphins, bats, and mice – have an upper frequency limit that is greater than that of the human ear and thus can hear ultrasound. To illustrate this it may be interesting to compare our hearing with a dog’s.
The frequency range of dog hearing is approximately 40 Hz to 60,000 Hz, which means that dogs can detect sounds outside the human auditory spectrum. The average human auditory system is sensitive to frequencies from roughly 12–20 Hz (depending on age and other factors) up to a maximum of around 20,000 Hz (20 kHz), although these limits are not definite. A dog can also identify a sound's location much faster than a human can, and can hear sounds at four times the distance.
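The comparison above can be made concrete with a small sketch. The ranges below are the approximate figures quoted in the text, not precise physiological limits, and the crisp in/out test is of course a simplification of how hearing thresholds really behave.

```python
# Approximate audible ranges in Hz, taken from the figures quoted above
HUMAN_RANGE = (20, 20_000)
DOG_RANGE = (40, 60_000)

def audible(freq_hz, hearing_range):
    """True if the frequency falls within the given hearing range."""
    low, high = hearing_range
    return low <= freq_hz <= high

# A 40 kHz ultrasonic tone lies above the human limit but well
# within a dog's range
print(audible(40_000, HUMAN_RANGE))  # False
print(audible(40_000, DOG_RANGE))    # True
```

A dog whistle works on exactly this principle: it is pitched in the band that dogs can hear but humans cannot.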
The processing of sound
Just as with sight, the processing of sound cannot be explained by simple mechanical mechanisms. Although the ear is wonderfully designed to receive sound, it does not itself process it so that sounds can be 'recognised'; as with sight, there is a recognition that 'software' is involved in processing this sound.
The sounds are combined, and information is deduced about the location and distance of their source. Further processing then recognises the sound by pattern matching in the mind – sound recognition systems.
We can perhaps appreciate something of the complexity of this software of the mind by comparing the processing our mind performs to recognise sounds with the amount of processing needed in computer speech recognition systems. These extremely complex systems are at present capable only of recognising certain commands in certain contexts, whereas our mind is capable of recognising voices and numerous other sounds.
What can we conclude from this short summary of sound and how it works?
First, we hear only a fraction of the sound spectrum, but via the resonances produced by sound we may experience far more than we are actually aware of.
Secondly, the information gathered by the ear is processed, much as it was with sight, so that it is recognisable and can be acted on – so that we know the roar of a lion, the sound of an oncoming car, or the joy of a Bruch violin concerto. Given the results from vision (where more of Reality seeps through), it is possible that sound is also filtered considerably, down to only 'immediately useful' information.
Thirdly, we do not 'hear' what other creatures 'hear'; our 'reality' from hearing is not their 'reality'.
Overall, we do not hear Reality.