History

Humans have been trying to improve their hearing since the discovery of the 'hand cupped behind the ear' method. Ear trumpets appeared in the 17th century, and by 1790 they were quite common. The first commercial ear trumpets appeared in 1800, produced by the F. C. Rein Company. These were simply cones that collected sound and delivered it to the ear. As they evolved through the 1800s, the advances in 'technology' consisted of hiding the apparatus in hats, hairstyles, and even furniture. Thus began the drive toward cosmetic acceptance that still shapes the market to this day: disguising an individual's handicap often took precedence over engineering the instrument to cope with the disability.

The technology behind the telephone was soon applied to the manufacture of hearing instruments. One of the first hearing aid manufacturers was Siemens, a company still in business today; its first commercially available hearing instrument came out in 1913. It was not portable.


The invention of vacuum tubes started the march toward portable electronic hearing aids, and by 1930 they were down to about one cubic foot in size and weighed about eight pounds.


Enter the transistor. Invented at Bell Labs in 1947, the transistor was incorporated into hearing instruments by 1950 and allowed for the first truly 'wearable' hearing aids. Wearing the instruments caused condensation inside them, however, and the first transistors were very susceptible to moisture, so transistor hearing aids proved undependable. By the end of the decade, though, the integrated circuit had largely solved the condensation problem. At about this same time, computer scientists were discovering how to 'process' speech signals on mainframes. This was the start of digital hearing aids, but this 'shaping' of sound was so complex that it could only be done on mainframes, and it was still too slow to be of practical value.


By the end of the 1960s, another revolution had begun. By 1970 the microprocessor arose to change many aspects of our daily life, and the sound-processing techniques previously confined to mainframes could now be brought to wearable devices at real-time speeds. This very basic (by today's standards) processing is known as compression, and it allowed varying amounts of gain (added volume) to be applied to the incoming signal. Prior to compression, incoming sound was amplified by a fixed amount, which often made quiet sounds unnaturally loud and left loud sounds distorted. Compression applied differing levels of amplification depending on the loudness of the incoming sound, which led to more intelligible, natural-sounding amplification.

By the 1980s, microprocessors allowed compression to be applied separately to different 'pitches' of sound, called 'bands' or 'channels'. By the middle of the decade, six-channel hearing aids were available, meaning a hearing aid could now amplify incoming sound (add gain) discretely, depending on both the pitch (frequency) and volume (amplitude) of the input. Over the next 30 years, ever more sophisticated methods of processing sound were developed, and the innovation continues today. One of the latest innovations is the 'BrainHearing' technology introduced by Oticon: the speed of modern processors has finally increased to the point that these tiny machines are starting to mimic the listening processes of our auditory cortex.
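The compression idea described above can be sketched as a simple input/output gain rule: below some threshold the full gain is applied, and above it each extra decibel of input earns progressively less amplification. This is a minimal illustrative sketch; the threshold, ratio, and makeup-gain figures are assumptions for demonstration, not values from any actual hearing aid.

```python
def compress(level_db, threshold_db=-40.0, ratio=3.0, makeup_db=20.0):
    """Apply a simple compression gain rule to an input level in dB.

    Levels at or below the threshold receive the full makeup gain, so
    quiet sounds become audible; above the threshold, each extra dB of
    input yields only 1/ratio dB of extra output, so loud sounds are
    not pushed into distortion.
    """
    if level_db <= threshold_db:
        gain_db = makeup_db
    else:
        # The applied gain shrinks as the input grows louder.
        gain_db = makeup_db - (level_db - threshold_db) * (1.0 - 1.0 / ratio)
    return level_db + gain_db

# A quiet sound (-60 dB) gets the full 20 dB of gain...
print(compress(-60.0))   # -40.0
# ...while a loud sound (-10 dB) gets none, emerging unchanged.
print(compress(-10.0))   # -10.0
```

A multi-band (multi-channel) aid, as described above, would simply split the signal into frequency bands and run a rule like this independently in each band, with its own threshold and ratio per channel.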