
AI learns words through a child's experience - Frontiers

2024-02-02



A machine learning system was trained using video and audio recordings taken from the perspective of a little girl, thanks to a camera mounted on a helmet (ANSA)


Artificial intelligence can reveal how early word learning occurs through children's eyes and ears.

This is demonstrated by a curious experiment at New York University, in which researchers trained a machine learning system on video and audio recordings made from the perspective of a little girl: a helmet-mounted camera captured her usual daily activities between the ages of six months and two years.

The findings, published in the journal Science, will help develop artificial intelligence systems that can learn language more like humans.



For this type of research, children are the ideal model to study: from six months of age they begin to acquire their first words, connecting them to objects and concepts in the real world, and by two years of age they understand, on average, 300 of them. To understand how these words are learned and how they become associated with their visual counterparts, the researchers turned to an innovative approach based on artificial intelligence.



For their experiment they chose a relatively generic neural network and trained it on 61 hours of video and audio recordings filmed from the perspective of a little girl engaged in daily activities (such as playing on the slide, having tea with her soft toys, or leafing through a book in her mother's arms), so that it would learn to associate what the child saw in front of her with the words the adults spoke to her.
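The article does not specify the training objective, but a standard way to learn this kind of frame-word association is a contrastive objective in a shared embedding space. The following is a minimal, hypothetical sketch in that spirit, not the study's actual code: every layer size, class name, and the use of random tensors in place of real encoder outputs are illustrative assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class FrameUtteranceModel(nn.Module):
        """Projects frame features and utterance features into one shared space."""
        def __init__(self, frame_dim=512, utter_dim=300, shared_dim=128):
            super().__init__()
            self.frame_proj = nn.Linear(frame_dim, shared_dim)
            self.utter_proj = nn.Linear(utter_dim, shared_dim)

        def forward(self, frame_feats, utter_feats):
            f = F.normalize(self.frame_proj(frame_feats), dim=-1)
            u = F.normalize(self.utter_proj(utter_feats), dim=-1)
            return f, u

    def contrastive_loss(f, u, temperature=0.07):
        # Symmetric InfoNCE: the i-th frame should match the i-th utterance,
        # so matching pairs sit on the diagonal of the similarity matrix.
        logits = (f @ u.t()) / temperature
        targets = torch.arange(f.size(0))
        return (F.cross_entropy(logits, targets)
                + F.cross_entropy(logits.t(), targets)) / 2

    # Toy usage: random tensors stand in for the outputs of a vision encoder
    # and a speech encoder run over co-occurring frames and utterances.
    model = FrameUtteranceModel()
    frames = torch.randn(8, 512)
    utterances = torch.randn(8, 300)
    f, u = model(frames, utterances)
    loss = contrastive_loss(f, u)
    loss.backward()

Trained this way, a frame and the utterance heard at the same moment end up close together in the shared space, which is one plausible mechanism for the word-scene association the article describes.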



The results demonstrate that the AI model managed to learn the word-object mappings present in the child's daily experience; it was also able to generalize concepts beyond the specific objects seen during training and to align its visual and linguistic representations.
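One simple way to probe such a word-object mapping, again as a hedged sketch rather than the paper's actual evaluation protocol, is zero-shot matching: embed a new image into the shared space and pick the vocabulary word whose embedding is most similar. The vocabulary and random embeddings below are invented stand-ins.

    import torch
    import torch.nn.functional as F

    vocab = ["ball", "cat", "car", "book"]                        # illustrative toy vocabulary
    word_emb = F.normalize(torch.randn(len(vocab), 128), dim=-1)  # stand-in word embeddings
    img_emb = F.normalize(torch.randn(128), dim=-1)               # stand-in embedding of a new image

    scores = word_emb @ img_emb            # cosine similarity of the image to each word
    print("predicted word:", vocab[scores.argmax().item()])

If the model generalizes, images of objects never seen during training should still land nearest the correct word.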

According to the researchers, the model (with limited sensory input and relatively generic learning mechanisms) provides a computational basis for studying how infants acquire their first words and how those words can be associated with what they see.

Reproduction reserved © Copyright ANSA

Source: ANSA
