The sound of the future is here: audio adapted to each ear


Ears are like fingerprints: no two are the same. That is why many headphone companies are developing systems that adjust their sound to each person. This is how they work.

If no two people have the same ears (not even the left one matches the right), why are headphones all identical?

To what extent does this uniformity of design and equalization affect the sound quality they deliver?

For experts, a lot.

Until now, manufacturers have focused on offering different fit options, with various eartip sizes and materials, in an effort to ensure the best possible listening experience: if the ear canal isn't sealed properly, ambient noise keeps leaking in and the nuances of the music can't be heard clearly.

But recently they have gone a step further: headphones can now also adapt how they sound to the shape of the wearer's ears or head.

It is customization at its best.

The Apple Case: Spatial Audio

There are already many examples, each one with different technologies and applications.

In Apple's case, for example, this feature is called spatial audio and is used to make listening more immersive, so that each user perceives sound according to the size and shape of their head and ears.

Hence, to use it, you first have to scan this area of the body with the iPhone camera.

What does this translate to?

When listening to music or watching movies and series on an iPhone, iPad, Mac or Apple TV, the sound seems to come from everywhere, and thanks to the gyroscope and accelerometer built into each earbud, the headphones detect how we move, so the sound is perceived differently depending on the position of the head.
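The head-tracking idea can be sketched in a few lines. This is only an illustration under stated assumptions, not Apple's actual renderer (which relies on personalized HRTFs and is not public): a virtual source is kept at a fixed angle in the room, and each time the gyroscope reports a new head yaw, the stereo balance is recomputed with a constant-power pan so the sound appears anchored in space. All function and parameter names are invented for the example.

```python
import math

def head_tracked_pan(source_azimuth_deg, head_yaw_deg):
    """Return (left_gain, right_gain) for a source fixed in the room.

    As the listener's head turns (head_yaw_deg, reported by the
    gyroscope), the source's angle relative to the head changes, so
    the stereo balance shifts and the sound stays put in space.
    Constant-power panning stands in for real binaural rendering.
    """
    # Angle of the source relative to where the head now points.
    relative = math.radians(source_azimuth_deg - head_yaw_deg)
    # Map roughly [-90 deg, +90 deg] to a pan position in [0, 1]
    # (0 = hard left, 1 = hard right).
    pan = (max(-1.0, min(1.0, math.sin(relative))) + 1.0) / 2.0
    # Constant-power law keeps perceived loudness stable while panning.
    left = math.cos(pan * math.pi / 2.0)
    right = math.sin(pan * math.pi / 2.0)
    return left, right
```

Facing the source head-on yields equal gains; turning the head 90° to the left moves a front-facing source entirely to the right ear, which is exactly the "anchored in the room" effect the article describes.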

This does require 1st or 2nd generation AirPods Pro, AirPods Max, 3rd generation AirPods or Beats Fit Pro. In addition, the app from which you play the sound (music, multimedia content…) must support this feature, which only works on Apple phones and tablets running operating system version 15.1 or later, and on Mac computers with an M1 processor or later.

Sony does something similar.

In this case, by applying its 360 Reality Audio technology to music and live music video recordings: it places voices, instruments and even the sound of the audience in a spherical sound field, at a personalized distance and angle.

Thus, the feeling is as if you were in the center of the music.

The difference with Apple is that this technology is integrated into multiple sound systems: not only Sony's, but also those of Amazon, Denon, Marantz, Audio Technica and Sennheiser.

And music with this surround sound is available from platforms like Amazon Music Unlimited or Tidal.

Again, an essential part of its success is that the devices are capable of adapting the sound to the physical characteristics of each person.

In the case of Sony headphones, for example, the Headphones Connect app (the one used to manage the device from the phone) analyzes the shape of the ear and cheeks, and the equalization is adjusted accordingly.

Bose: an ear scan with every use

The Bose example, finally, is somewhat different for three reasons: instead of scanning the ears, it focuses on the ear canal; the headphones repeat the analysis every time the user puts them on, with no action required; and the results are used primarily to improve the performance of the active noise cancellation system.

The goal is to achieve as much silence as possible during sound reproduction.

Bose has implemented this in its latest wireless headphones, the QuietComfort Earbuds II.

To do this, the firm's engineers took inspiration from the AdaptiQ technology in its soundbars and home theater systems, which takes into account the size and shape of the room, the furniture, the carpet and the floor material to tune the system's performance.

Their approach is that, just as the sound from a speaker is affected by the room it sits in and where it is placed, the sound from an earbud is modified differently by each person's ear canal.

And those differences limit the effectiveness of noise cancellation.

In practice, every time the headphones are put in, this auditory scanner emits a sound that travels out and back: it bounces inside the ear and is picked up by the built-in microphones.

It lasts just a second, but it is enough to characterize the user's ear and thus optimize both noise cancellation and sound performance for each person.
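The measurement the article describes — play a known probe sound, record it after it bounces around the ear canal, and compare the two — is essentially a transfer-function estimate. Below is a minimal sketch of that idea using frequency-domain deconvolution; Bose's actual algorithm is proprietary, so the function name, the regularization constant and the overall structure are assumptions for illustration only.

```python
import numpy as np

def estimate_canal_response(probe, recorded, eps=1e-12):
    """Estimate the ear canal's frequency response from a probe sound.

    The headphones play `probe` and the built-in microphones capture
    `recorded`, i.e. the probe after bouncing inside the ear canal.
    Dividing the two spectra (regularized deconvolution) yields the
    canal's transfer function, which a noise-cancellation or EQ stage
    could then compensate for. Illustrative sketch only.
    """
    P = np.fft.rfft(probe)
    R = np.fft.rfft(recorded)
    # Regularized spectral division: H = R / P, with eps guarding
    # against division by near-zero bins of the probe spectrum.
    H = R * np.conj(P) / (np.abs(P) ** 2 + eps)
    return H
```

With a broadband probe (e.g. white noise), the recovered response converges on the true one; a pure tone would only reveal the canal's behavior at a single frequency, which is why such scans sweep or spread energy across the band.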

You can follow EL PAÍS Tecnología or sign up here to receive our weekly newsletter.


Source: elpais

All tech articles on 2023-03-20

