
Kate Darling, robot expert: "We shouldn't laugh at people who fall in love with a machine. It will happen to all of us."

2023-06-07T10:48:32.195Z

Highlights: The MIT researcher has been working for years on the consequences of human-machine relationships. Kate Darling researches the legal, social and ethical effects of robots at the MIT Media Lab. She has been observing how humans and robots relate to each other for years. In mid-June she will visit Barcelona to participate in activities of the Sónar+D festival, invited by the consultancy Seidor. To understand what a robot is, she says, it is better to compare it with an animal than with a human.


The MIT researcher has been working for years on the consequences of human-machine relationships and now analyzes the explosion of artificial intelligence


Kate Darling (Rhode Island, USA, 1982) researches the legal, social and ethical effects of robots at the MIT Media Lab. She has been observing how humans and robots relate to each other for years, and she has several in her house. With the advent of the artificial intelligence (AI) revolution, she answers questions about the future evasively: "It's all so speculative," she says, "that it's hard to decipher." Even so, there is no better time for her work, because we have never been so close to living with robots: "It is an exciting time; I feel very lucky to be able to live it."

Darling, a 41-year-old American, is the author of the book The New Breed, in which she argues that the best way to understand what a robot is is to compare it with an animal, not a human. In mid-June she will visit Barcelona to participate in activities of the Sónar+D festival, invited by the consultancy Seidor. In this conversation with EL PAÍS, conducted by video call from her home, she tries to explain the enormous novelty represented by the language models led by ChatGPT.

Question. How has ChatGPT's success changed the way you see the future of robots?

Answer. It's a huge change. Many people did not anticipate it. If you had asked me a few years ago if we would have this kind of sophistication, I would have said no, never. This changes the game in many ways. What's going to happen now? No one knows. For me one of the big questions is: will the capabilities we are seeing in generative AI translate into being able to control and program physical robots? That kind of intelligence and learning would be really incredible. I'm not sure it's going to happen.


Q. There is no definition of robot. Why is it so difficult?

A. There is no universal definition. Depending on the field, you will get one definition or another. Throughout history, the word robot has been applied to something new, a technology that people don't understand and that has something magical about it. Then, once it becomes more common, people stop calling it a robot and start calling it a dishwasher or a vending machine.

Q. There is a lot of debate now about a possible extinction caused by an AI capable of deciding for itself.

A. I'm a very practical person and I don't know how something like this can develop. There's not much we can do to predict if it will happen and there's nothing that can protect us, other than stopping AI research, which won't happen. I'm more interested in the fact that people will believe that AI is conscious, regardless of whether it actually is or not. That is something we must face as a society.

Something new that people don't understand gets called a robot. Then they start calling it a dishwasher or a vending machine.

Q. You say that, to understand what a robot is, it is better to compare it with an animal than with a human. Do you maintain that idea after ChatGPT?

A. Yes. I know the comparison is harder now that AI uses human language. But even more important is the reason behind it: it is not so valuable or useful to create something that we already have, that we can already do. It is more valuable to have machines that can complement us or be partners in what we are trying to achieve. Many tasks that generative AI will take on are now done by humans, but I think the true potential of the technology is as a tool that combines with other human skills, not just a replacement.

Q. You foresee robots soon becoming members of our families. What will they be like?

A. In much of the research on human-robot interaction, people already treat robots as living beings, even though they know they are just machines. And people love doing it. We anthropomorphize robots and project ourselves onto them; we give them crazy human qualities, emotions. People also understand that they are interacting not with a person, but with something different. Robots will be a new kind of social relationship: it can be like a pet or it can be something totally different, which is why my book is called The New Breed. But I don't think it necessarily replaces human relationships. It will be something different, but it's definitely going to happen.

The true potential of technology is that it is a tool that combines with other human skills and not just a replacement.

Q. You have robots at home. What are they like? What do they do?

A. I have a couple of different types. We have a baby seal, a dinosaur robot, a robot dog, and then we have other robots that are more to help around the house, like an assistant or a vacuum cleaner. Each one does different things, and my kids interact with them differently depending on whether they see them as a tool or a companion.

Q. Can companion robots be turned off or are they always on?

A. We turn them off, although some are designed to always be on. The dog, for example, when its battery is low, looks for its charging station and lies down as if it were going to sleep while it charges.

Q. Are these pet robots ready to enter millions of homes yet?

A. We have already seen, with this primitive and very expensive technology, that people who have it develop meaningful connections. The technology isn't going to get worse. The barrier to home robots is not the complexity of the robot, but that people do not yet know the social value that having one would provide. Once enough households have a home robot and experience those positive effects, there will be a tipping point and more people will want them.

'Her' is about an application launched by a company. What is the company's business model? What are they trying to do?

Q. What do you mean by "positive effects"?

A. People didn't see the value of having a pet before. The animal had to fulfill a function: the dog guarded the house and the cat caught the mice. But then people realized that the relationship with the pet and the emotional connection were the true value, and now they have pets for that reason. The same thing will happen with robots. Right now they have a function: assistants, vacuuming the floor. But once there are enough robots that people interact with, they will see the value in the social connection and want them for that reason as well.

Q. You have said that the film Her, about a human who falls in love with a machine, worries and excites you in equal measure. What ethical problems do you see?

A. Her is about an app launched by a company. There are many questions: what is the company's business model? What are they trying to do? Probably to maximize their profits. These are people in a very vulnerable position, because they already have a very strong emotional connection with an application, a device, a robot. This is already happening. The Replika app, which already has millions of users, has people emotionally attached to it. I am also concerned that there are privacy and data collection issues. You could emotionally manipulate people into buying products and services or changing their behavior, not in their own interest, but in that of a company.

Still from the film 'Her', with Joaquin Phoenix.

Q. You have said that one can imagine a sexual app exploiting a user's weakness at the moment of climax.

A. Yes.

Q. Isn't that what marketing already does?

A. Maybe it's a little more subtle. But Replika has in-app purchases that people buy, so it's easy to manipulate them into spending money or to show them advertising. These are consumer protection problems, because it is not just persuasive but manipulative.

Q. Will there be a reasonable way to monetize these apps?

A. Yes, when consumers realize the value of an artificial companion and are willing to pay enough money for it. Then companies will be able to simply sell it and that's it. Do I think that will happen? No. But it would be the best way to protect privacy and not have to emotionally manipulate anyone.

Q. Many people will be surprised that someone humanizes these machines. But we're programmed for that.

A. Yes. And it won't go away. If something moves around us, we assume it is alive. That's how our brains work, and that subconscious projection occurs not only with moving objects but also with a chatbot or anything that mimics human behavior, anything with signals and sounds that we recognize. Scientific evidence shows that we do it from a young age. It is deeply rooted and will continue to be there.

[What worries me most is] business, the incentive structure, and political and economic issues. It's a matter of governance, not technology.

Q. Robots will die. Could it happen that we divorce a robot, or abandon it in a ditch, because of a software update?

A. Yes, probably. Relationships can end in any case, and we will have real relationships with robots, whether they resemble human relationships, human-pet relationships or something new. As such, they can end in different ways, whether through death or because someone decides they no longer want to continue. All sorts of things will happen. It is easy to foresee, because people develop emotional relationships with artificial entities. But there are still many people who don't understand it.

Q. They don't understand that someone can fall for a machine?

A. Yes. There are already stories about people falling in love with their chatbot. Most people think it won't happen to them, that those who fall in love are sad and lonely, but they are not. We are all susceptible to bonding with machines, especially when they are a little more interesting and available. We need to take this more seriously instead of laughing at someone falling in love with a robot, because it will happen to all of us.

Q. Isn't it surprising that the machine we fall in love with is just a screen?

A. Not too much. Even with the most primitive chatbots, people opened up. At MIT they created Eliza in the sixties and people confided their personal things to it. We are suckers for anything that gives us signals we recognize, even if it is only a screen. The reason I love physical robots is that they add a more visceral layer that makes them more attractive.

Q. But you don't like humanoid robots.

A. No, they are boring.

Q. You prefer R2-D2, a "garbage can on wheels."

A. I like robots that are designed to be cute and that people identify with, but they don't have to look human. It's much more interesting to create a different shape, and sometimes it works even better, because if a robot looks too humanoid, the expectations about how it should behave and what it should do end up disappointed. With something that looks like an animated garbage can, those expectations aren't there.

Q. Are you more excited or concerned about these developments?

A. Both.

Q. What are you most concerned about?

A. Business, the incentive structure, and political and economic problems. It is a matter of governance, not technology.
