
Artificial intelligence at home: 4 basic steps for parents to guide their children

2024-03-15T09:45:51.808Z

Highlights: Artificial intelligence (AI) has opened a new panorama in the way we live. UNESCO has set the minimum age for children to use it at 13 years. Parents need to understand how it works and supervise its limited use. The kids should not have to “teach” us about technology; rather, they should find us prepared and learn from us, says psychologist Maritchu Seitún. The U.N. points out the importance of preparing people to live, study and work in a world affected by these emerging technologies.


An engineer specializing in AI and a psychologist reflect on how to accompany children and young people, and what minimum age UNESCO has set for children to use it.


Artificial intelligence has opened a new panorama in the way we live.

As an emerging technology, today we can only estimate part of its reach in the daily lives of children and adolescents.

What can parents do? Is it better to prohibit the use of chatbots at home? Can they be used ethically?

The first thing is to understand what we are talking about when we talk about artificial intelligence (AI).

MIT, the most important university in the fields of engineering and technological innovation, defines it as “the ability of computers to imitate human cognitive functions such as learning and problem solving.” Part of this AI has already been present for years in voice assistants and GPS. What distinguishes this new moment in its development (with chatbots like ChatGPT) is its generative capacity.

The AI being developed right now, generative AI, in addition to imitating these human faculties very well, can synthesize information and create content: images, text, music, videos and code, through “prompts” (simple instructions given by a user).

Engineer Mario Cwi, director of teacher training at ORT Argentina, explains to Clarín that while earlier assistants already used part of this technology (in facial or voice recognition, for example), “they had much more limited capabilities, performing preconfigured actions based on their specific programming.”

“Since 2019, international forums that debate public policies linked to education have been promoting the incorporation of AI into educational systems. In fact, the United Nations, in its 2030 Agenda, points out the importance of preparing people to live, study and work in a world strongly affected by these emerging technologies,” Cwi notes.

Step 1: Understand how they work before kids use them

“We parents have to lose our fear of ChatGPT (or other chatbots) and learn how to handle them so we can accompany the kids,” author and parenting psychologist Maritchu Seitún warns Clarín.

Generative AI offers great opportunities, as search engines did before it and, before them, encyclopedias.

“The idea of quickly having a bibliography and other resources that would take years to gather by ‘human’ means, and being able to use that time to create, imagine and draw conclusions, is fascinating,” the psychologist explains.

But, she adds, to evaluate its advantages and disadvantages you need to understand how it works.

The kids should not have to “teach” us about technology; rather, they should find us prepared and learn from us.

Photo: Shutterstock illustration

Cwi explains that, so far and in simple terms, the generative AI behind a chatbot like ChatGPT relies on an algorithm that has been trained on billions of pieces of language; it finds patterns and predicts the probability that one word will follow another in order to respond to the “prompts” a user types.
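To make that idea concrete, here is a minimal sketch in Python of a toy “next word” predictor: it counts which word tends to follow which in a small sample of text and picks the most frequent continuation. The sample sentences and names below are invented purely for illustration; a system like ChatGPT uses neural networks trained on billions of examples rather than simple counts.

from collections import Counter, defaultdict

# Illustrative toy example: a "next word" predictor built from word-pair counts.
sample_text = (
    "the cat sat on the mat "
    "the cat chased the mouse "
    "the dog sat on the rug"
)

# Count how often each word follows each other word in the sample.
followers = defaultdict(Counter)
words = sample_text.split()
for current_word, next_word in zip(words, words[1:]):
    followers[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequent word seen after `word` in the sample."""
    counts = followers.get(word)
    if not counts:
        return "(no prediction)"
    return counts.most_common(1)[0][0]

print(predict_next("the"))  # "cat": the most frequent follower of "the" in this sample
print(predict_next("sat"))  # "on"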

“Families have a key role: offering children and young people opportunities, when they interact with generative artificial intelligence, to develop critical perspectives that allow them to recognize that, although it expresses itself in language similar to that of humans, AI does not hold a ‘conversation’ (even though it seems to),” warns the engineer.

Step 2: Supervise its limited use, and only from the recommended age

Just as in cybersecurity it is always recommended that mothers and fathers ask a key question (What did you do on the Internet today?), it is essential to convey that intelligent chatbots collect data and often make suggestions based on our own browsing patterns.

Even more so if you take into account that in Argentina, on average, children access their first mobile device at the age of 9.

In September of last year, UNESCO published policy guidance on generative AI in education and research in which it urges governments to create regulatory frameworks and sets the minimum age for its use at 13 years.

Not giving out personal data is one of the security tips when using generative AI chatbots.

Photo: Shutterstock illustration

Kamila, a sixth grader who will turn 13 in three months, has already used generative chatbots on several occasions, but not all of her teachers have talked about them in class.

Her mother, Carolina Brazón, a digital marketing assistant, tells this outlet that she herself uses ChatGPT for some of her work tasks.

Brazón knows how it works and consulted it to explain a mathematical concept to her daughter: “I didn't know how to explain something to her, I'm not very good at math, so I was looking for a simpler way to explain things to her.”

It worked.

Cwi says that at the secondary level of the institution where he works, students begin to explore how “recommendation systems based on AI suggest what music to listen to, what series to watch or what products to buy.”

A teenager who understands the “behind the scenes” of their user experience and how these systems are trained will be able to question and cultivate a critical approach to what the screen shows them.
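For readers curious about that “behind the scenes,” here is a minimal sketch in Python of one common approach, content-based recommendation: each song is described by a few genre scores, a user profile is built from the songs already liked, and the unheard songs that best match that profile are suggested first. The songs, genres and numbers are invented for illustration; real services combine far richer signals.

# Illustrative toy example: a content-based music recommender.
songs = {
    "Song A": {"rock": 0.9, "pop": 0.1, "jazz": 0.0},
    "Song B": {"rock": 0.2, "pop": 0.8, "jazz": 0.0},
    "Song C": {"rock": 0.7, "pop": 0.2, "jazz": 0.1},
    "Song D": {"rock": 0.0, "pop": 0.3, "jazz": 0.7},
}
liked = ["Song A", "Song C"]  # songs the user already listened to and liked

# Build the user profile: the average score per genre across the liked songs.
genres = ["rock", "pop", "jazz"]
profile = {g: sum(songs[s][g] for s in liked) / len(liked) for g in genres}

def match(song: str) -> float:
    """How closely a song's genre scores line up with the user profile."""
    return sum(songs[song][g] * profile[g] for g in genres)

# Suggest the unheard songs, best match first.
suggestions = sorted((s for s in songs if s not in liked), key=match, reverse=True)
print(suggestions)  # ['Song B', 'Song D']: ranked by how well they fit the profile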

This more open approach is advisable from adolescence onwards.

The engineer clarifies: “The use of these technologies does not seem to be something that can be avoided by prohibiting or discouraging it in educational contexts.”

In fact, Cwi did exactly the opposite and spent 2023 studying how to leverage the tool for his new book AI Came to School. Exploring with chatbots in the classroom (Editorial Noveduc).

Instead of prohibiting its use by adolescents, it can be supervised. Photo: Shutterstock illustration

For the author, school can become an ideal environment for students to experiment with AI if they also have the guidance and supervision of adults trained in these technologies.

Step 3: Use this technology as an assistant, not a substitute

In any case, Cwi acknowledges that the big question, how to prevent students from delegating these activities to the technology, is the new challenge for educational systems.

“At ORT, driven by the school's Management, we are exploring the potential and limitations that ChatGPT offers as a tool to diversify and personalize students' learning paths,” he says.

“The challenge is to get our students to recognize the value of using these technologies as a support or scaffolding for carrying out their tasks, not as substitutes to which they can delegate their work.”

In the US, tools to detect AI-generated text were promoted, but their accuracy is low in many cases.

Photo: Shutterstock illustration

For her part, Seitún points out: “Teachers will have to ask questions in a way that forces children to draw conclusions, integrate concepts, imagine and create, so that they cannot answer only with the information the AI provided them.”

Oral defense of the content could be a good way to apply this principle. Last year, tools were promoted that promised to detect AI-generated text, but they turned out to be inaccurate and unreliable.

Alessandra Hernández, an editor and mother of a 15-year-old girl, tells Clarín that she would not like her daughter to depend too much on that tool.

“Last year she asked me if she could use it to suggest how to organize the topic of an essay. She told me the information was not of much use because it contained confusing data she had to verify, but it helped her realize what she should not do,” she says.

And, although her daughter dismissed the chatbot as a study tool at the time, the technology has continued to improve.

In January of this year, OpenAI, the company that created ChatGPT, presented improvements in an update and stated that “the new model is designed to be a more attentive listener” and that its knowledge base now extends to April 2023.

Step 4: Clarify that AI is not free from biases and prejudices

Although they are machines and systems, the patterns that AI incorporates are human.

Technology is not exempt from reproducing or amplifying discriminatory or prejudiced discourses, warns Mario Cwi.

“The algorithms are trained with data collected from the Internet, reflecting the social values, prejudices and stereotypes that can be found there. Furthermore, the people who design the algorithms and programs can also contribute to reproducing certain biases when they prioritize, consider or ignore certain attributes over others (gender, age or socioeconomic level, for example),” the engineer explains.
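A minimal sketch of that first point, reusing the toy word-pair predictor from Step 1: if the training text over-represents a stereotype, the counts the system learns simply mirror that imbalance. The sentences are invented solely to illustrate the idea.

from collections import Counter, defaultdict

# Illustrative toy example: training text that over-represents a stereotype.
biased_text = "the doctor said he would come the doctor said he agreed the nurse said she would come"

# Count which word follows which, exactly as before.
followers = defaultdict(Counter)
words = biased_text.split()
for current_word, next_word in zip(words, words[1:]):
    followers[current_word][next_word] += 1

# The counts after "said" mirror the imbalance in the data, so a predictor
# built on them reproduces that bias in what it generates.
print(followers["said"])  # Counter({'he': 2, 'she': 1})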

Talking to teenagers so that they question and investigate what they consume is essential.

Photo: Shutterstock illustration

Cultivating in children a reasonable skepticism toward what they see or hear will now be part of their digital upbringing.

Identifying biases together when interacting with generative artificial intelligence (text or images) will help them build their critical sense and prepare for what comes next.

Source: clarin
