
Nina da Hora: "Technology reinforces the problem of structural racism in Brazil"

2023-03-24T10:42:11.962Z


The anti-racist 'hacker' is one of the most active young voices in the movement that seeks to increase the participation of black women in technology


EL PAÍS offers the América Futura section openly for its daily, global coverage of sustainable development.


The scientist Nina da Hora (Rio de Janeiro, 27 years old) is one of the most active young voices in the Brazilian movement seeking to increase the participation of black women in technology and innovation.

Born on the outskirts of Rio and raised by five women teachers, her mother, aunts and grandmother among them, she says that at the age of six she discovered she would dedicate herself to computing, fascinated by the possibilities for play and creation offered by her first computer.

Along the way, however, she observed that very few black people had access to the thriving Brazilian technological universe.

And black women, even less.

According to various studies cited by the Preta Lab laboratory, black women represent 28% of the country's total population, yet they remain barely represented in the technology sector.

To start changing this reality, Nina da Hora, a computer scientist, researcher, teacher and anti-racist hacker, proposes democratizing access to technology and making how it works transparent, in accessible language, like the language she herself uses with her grandmother to explain why algorithms do what they do.

“We have to take some time to reflect on artificial intelligence and what it can generate,” she tells América Futura after having participated in the KHIPU Latin American meeting on artificial intelligence, which took place in Montevideo at the beginning of March.

Convinced of this, she defends the idea of a plural science, open to society, and in particular advocates banning facial recognition technology, as cities like San Francisco have done, because she considers it inefficient and a reinforcement of the structural racism that persists in Brazil.

Question:

What does an anti-racist hacker do?

Answer:

An anti-racist hacker is someone who uses their cybersecurity or programming skills to combat racism and promote equality.

For example, by exposing racist individuals, removing discriminatory content online, or protecting marginalized communities from cyberattacks.

Also, in my case, when I started to study computing, I found myself in a universe that marginalized people like me: a woman, black, from the outskirts of Rio de Janeiro.

So I set out to find ways to bring this group closer to technology.

I am a hacker to break those social patterns and mitigate the damage of racism in society.

Q.

How has that task gone?

A.

I have created some initiatives, such as the Ogunhê podcast, in which I present the history of black scientists and their contributions to the world.

In addition, I started a research institute with a team made up entirely of indigenous and black people.

With them we are opening ways to make technologies accessible to marginalized communities in Brazil.

Q.

What have you explained to your grandmother about algorithms or artificial intelligence?

A.

I taught her what an algorithm is using a cake recipe as an example.

The two are similar: both are sets of instructions that, if followed correctly, produce a specific result.

An algorithm is used to solve a problem or perform a specific task in computing or mathematics.

Artificial intelligence is a developing area that studies the possibilities of creating machines that use algorithms to perform repetitive tasks that can help society.
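
To make the cake-recipe analogy concrete, here is a minimal sketch in Python; the function, steps and ingredients are invented for illustration and are not part of the interview. The point is simply that an algorithm, like a recipe, is an ordered list of instructions that produces a predictable result when followed.

```python
# A minimal, illustrative sketch of the cake-recipe analogy: an algorithm is
# an ordered set of instructions that, followed correctly, produces a
# specific result. The steps and ingredients are invented for this example.

def bake_cake(ingredients):
    """Follow the 'recipe' (the algorithm) step by step."""
    steps = [
        "mix " + ", ".join(ingredients),       # step 1: combine the ingredients
        "pour the batter into a pan",          # step 2
        "bake for 40 minutes at 180 degrees",  # step 3
        "let the cake cool before serving",    # step 4
    ]
    # Executing each instruction, in order, yields the expected result.
    for number, step in enumerate(steps, start=1):
        print(f"{number}. {step}")
    return "cake"


if __name__ == "__main__":
    result = bake_cake(["flour", "eggs", "sugar", "milk"])
    print("result:", result)  # same inputs + same steps = same result
```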

Q.

Generally speaking, do you think they are helping us or making us lazier and less critical?

A.

We need more critical thinking.

We don't reason about what we are using; we do repetitive tasks, like machines.

That is why we have to talk with children, with young people, to make concepts related to technology more accessible and take time to reflect on artificial intelligence and what it can generate.

But the opening of science to society takes time.

Q.

What areas of artificial intelligence are most problematic?

A.

Computer vision is one of them, because it is invasive and offers no privacy to those who use it.

For example, when I unlock my phone with my face, it is invading my privacy.

The risk is that we don't know where that image is going, where it is going to be stored, or what it is capable of doing with the reconstruction of that face.

In Brazil, many black people have been wrongly detained, because in image databases people with dark skin are labeled as dangerous or more likely to commit a crime.

The Security Observatory Network monitored facial recognition technology in five states in 2019 and found that it exacerbates the incarceration of black people, as well as being inefficient.

Nina da Hora at the Latin American Artificial Intelligence Meeting, in Montevideo. Courtesy

Q.

But this racial bias in the machines is not magic; in any case, it comes from those who program them.

A.

Brazil imports technologies and uses them in our society, which has a problem of structural racism.

Technology reinforces that problem when a person is detained based on facial recognition, through cameras placed in public spaces.

Those cameras have an algorithm that recognizes faces and searches its photo bank for who that person might be.

We do not have access to that base, there is no transparency.

We do not know the stages of its development, and we do not understand the associations it makes.
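
As an illustration of the kind of pipeline she is describing, a hedged sketch follows: the systems actually deployed in Brazil and their photo banks are not public, so the embeddings, identities and threshold below are hypothetical. A face is reduced to a numeric vector and compared against every entry in a gallery; whichever entry scores highest above a threshold is reported as the person, which is exactly where a biased gallery or a poorly chosen threshold turns into a wrongful detention.

```python
import numpy as np

# Hypothetical sketch of how a face is matched against a photo bank. Real
# deployments use learned face embeddings and proprietary galleries; the
# vectors, identities and threshold below are invented for illustration.

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_face(probe, gallery, threshold=0.8):
    """Return the gallery identity most similar to the probe embedding,
    or None if no score clears the threshold."""
    best_id, best_score = None, -1.0
    for identity, embedding in gallery.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_id, best_score = identity, score
    # A fixed threshold turns a similarity score into a "match": if the
    # gallery or the threshold is biased, the closest match is simply wrong.
    return best_id if best_score >= threshold else None

if __name__ == "__main__":
    rng = np.random.default_rng(seed=0)
    # Fake 128-dimensional "embeddings" standing in for enrolled photos.
    gallery = {f"person_{i}": rng.normal(size=128) for i in range(3)}
    probe = gallery["person_1"] + rng.normal(scale=0.1, size=128)  # noisy capture
    print(match_face(probe, gallery))  # -> person_1
```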

The movement Tire meu rosto da sua mira (Get my face out of your sight), in which I take part, is trying to ban the use of that tool in Brazil.

Q.

Do you rule out that it can be improved?

A.

From my point of view, facial recognition technology has no chance of improving. It is extremely dangerous, and as a society we do not have the maturity needed for a technology like that without first discussing racism, violence against women and violence against the LGBTI community.

We're trying to combat these problems, and that technology only reinforces them.

Q.

You talk about decolonizing technology to improve the use of artificial intelligence.

What do you mean by that?

A.

The first step is to listen and observe the territory where we live, from a vision of Brazil and not of Silicon Valley, in the US.

I have looked for references in technology in Mexico, Chile, Uruguay or Argentina, which are closer to our culture and social movements.

For example, learning languages other than English is one way of putting that decolonization into practice.

If I only learn English, I will think of reference figures in English and do my research in that language, which means I am already being steered toward agreement rather than disagreement.

There is a lot of power concentrated in technology; a few companies dominate many countries.

The decentralization of that power would imply having more digital sovereignty and creating our own technologies instead of importing them.

But today we don't have a strategy to organize and govern our own data.

Q.

According to the UN, of the 15 most important digital platforms, 11 are from the US and four are from China.

A.

Of those 15 large companies, there are five, among them Amazon, Google, Apple and Microsoft, that share among themselves what we talk about, how we exchange ideas and how we do research.

My proposal is to develop more open programs that are transparent as to the way they were made.

In other words, making science more accessible in order to reduce concentration and control.

Of course, these companies do not want that, and they develop an ever more aggressive surveillance capitalism in which people do not matter; what matters is the data.

Q.

However, these companies raise the flag of diversity.

Don't you see it that way?

A.

They seek to adapt to what we demand; for example, that there be more black people in the technology sector.

But the average profile of those who research and develop these technologies is that of a white, middle- or upper-class male researcher who speaks several languages and does not know how to listen.

Q.

What possibilities does Brazil have to develop its own technology?

A.

Various representatives of civil society and research centers are in dialogue with this government (of Lula da Silva), which is more democratic, to develop an internet governance strategy in Brazil.

We have excellent researchers and professionals who are organizing a digital sovereignty strategy so that our data stays in the country, with an investment from the State and not from the private sector.

Q.

That sounds complex in these times of blurred digital borders.

A.

If we start little by little, in stages, it is possible.

And everything we do today, someone is going to continue.



Source: EL PAÍS
