Medicine, climate, law: will AI be our undoing or our salvation?

2023-05-23T04:19:47.447Z

Highlights: Artificial intelligence (AI) has been part of our lives for a long time, since the late 1950s. Since 2021, the LVMH group has partnered with Google Cloud to introduce AI to adjust inventory and personalize customer relationships. In medicine, in education, in the prevention of climate risks, in law: the fields of application are many. A review of four areas where AI is changing practices and thinking, deciphered by experts in the field. The future AI law will make it possible to regulate things, protecting us not from innovation, but from manipulation.


Stirring up fears and fantasies, artificial intelligence is at the heart of current debates. Five experts shed light on its present applications, and on possible futures.


By opening up access to artificial intelligence to the general public, ChatGPT, developed by the American company OpenAI, has caused a stir. We discovered a system that can write an in-depth text on a given subject and has passed medical licensing and bar exams, while its image-generating cousins (Midjourney, DALL-E) seduce creatives with fake images (deepfakes) that look more real than life. Enough to stoke fears: losing our jobs, anxiety about machines replacing humans, or misuse for malicious ends. The revolution is real, but fantasies are rife.

Artificial intelligence has been part of our lives for a long time, since the late 1950s. And it has grown and changed with technological advances

In fact, artificial intelligence (AI) has been part of our lives for a long time, since the late 1950s. And it has grown and changed with technological advances. Since 2021, the LVMH group has partnered with Google Cloud to introduce AI to adjust inventory and personalize customer relationships. An AI Factory has been created within the group to train employees. "After the craze for virtual worlds and NFTs, image generators have put digital fashion back in the spotlight. AI is revolutionizing sales forecasting, with real savings," confirms Yann Rivoallan, president of the Women's Ready-to-Wear Federation. But retail is not the only sector concerned. The AI Act, a draft regulation governing the use and marketing of AI currently under discussion at the European level, is also a reminder of the geopolitical war and the American offensive behind the manipulation of these tools. In medicine, in education, in the prevention of climate risks, in law: the fields of application are many. A review of four areas where AI is changing practices and thinking, deciphered by experts in the field.


Ethics: "The illusion of intelligence"

For thirty years, Laurence Devillers, professor of computer science applied to the social sciences, has been interested in the ethical questions raised by the digital revolution. Going back to how the tool actually works, she spells out its potential, far from the fantasies: "The originality of ChatGPT is that it has been put into everyone's hands. But rather than brandishing it as a threat to our jobs, it would be better to learn how to use it as well as possible. We anthropomorphically project abilities onto this tool, creating an illusion of intelligence. This must be deconstructed: it is confined to imitation by means of algorithms and is devoid of intention. It is, in fact, a digital encyclopedia whose power lies in calculation. It must be reiterated that these systems must be verified and controlled by humans. They are socio-technical tools, and we need to know how to use them well. AI should be taught to children in the form of playful workshops. We need to learn how to use these systems, but also how to design them in Europe, because behind their development is a geopolitical war: all these AIs are American."

The future law on AI will make it possible to regulate things, protecting us not from innovation, but from manipulation

Laurence Devillers, Professor of Computer Science Applied to the Social Sciences

Ethical questions arise because the machine strings words together as a human does, but without knowing who the authors are or what the source is. "With ChatGPT, we are dealing with words and knowledge, except that we have lost the references. Not knowing who is speaking, we are more exposed to plagiarism and fake news. We now need to co-construct things so that the actors behind these AIs are accountable: developers, deployers and users. The future AI law will make it possible to regulate things, protecting us not from innovation, but from manipulation."

Emmanuel Macron sitting on a garbage can... Thanks to AI, from a few simple keywords, Internet users can play with the news and create caricatured images. Discord/AI (image generated by Midjourney)

" READ ALSO A "decoder" by MRI and artificial intelligence manages to read thoughts

Medicine: "Human-machine synergies
"

You will not be treated by a robot any time soon. And whatever the procedure performed, only the doctor remains responsible before the law. Nevertheless, it is in the medical field that AI applications are most concrete. "As part of diagnosis, AI can help assess the risk of developing certain pathologies," says Brigitte Seroussi, engineer and doctor, professor of medical informatics at Sorbonne University and project director at the Ministerial Delegation for Digital Health. "For women, for example, AI makes it possible to estimate the risk of breast cancer (which currently affects one in eight women) and to personalize follow-up. The system calculates this risk at five or ten years, as a percentage; if it is high compared to that of comparable women in the general population, the practitioner may order more frequent examinations. The Pitié-Salpêtrière hospital in Paris, for example, offers breast cancer risk assessment consultations."
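To make the screening logic described above concrete, here is a minimal, purely illustrative Python sketch: it compares a model-estimated five-year risk with a baseline risk for comparable women and flags the case for closer follow-up when an arbitrary ratio threshold is crossed. The function name, threshold and numbers are invented for illustration and have no clinical validity.

```python
# Hypothetical sketch of the screening logic described above: compare a
# model-estimated 5-year breast-cancer risk with the baseline risk of
# comparable women, and flag high-risk cases for more frequent follow-up.
# The function name, threshold and numbers are illustrative, not clinical.

def needs_closer_follow_up(estimated_risk_pct: float,
                           baseline_risk_pct: float,
                           ratio_threshold: float = 2.0) -> bool:
    """Return True when the estimated risk is markedly above the baseline."""
    if baseline_risk_pct <= 0:
        raise ValueError("baseline risk must be positive")
    return estimated_risk_pct / baseline_risk_pct >= ratio_threshold

if __name__ == "__main__":
    # Example: a 2.8% estimated 5-year risk vs. a 1.2% baseline (invented numbers).
    flag = needs_closer_follow_up(estimated_risk_pct=2.8, baseline_risk_pct=1.2)
    print("suggest more frequent examinations:", flag)  # True in this toy case
```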

The accuracy of AI's pixel-level analysis of an image is often finer than that of the radiologist's eye

Brigitte Seroussi, engineer and doctor

With the development of so-called digital AI, which uses algorithms trained on big data, applications have emerged in imaging and dermatology to help detect suspicious lesions. All manufacturers of imaging machines have long since integrated this technology. "The accuracy of AI's pixel-level analysis of an image is often finer than that of the radiologist's eye. The AI is even able to outline the suspicious area(s). However, while AI in radiology is on average more efficient than radiologists, this is not systematically true. There are some images where AI performs worse than the specialists. It can therefore be a companion to the doctor, but should not replace them. What is needed is to work on human-machine synergies." Decision support draws on all the patient's data, stored in their electronic health record (EHR). AI can flush out drug interactions and help establish the right prescription. "In France, all these tools are being put in place. PGDs are deployed in hospitals as well as in community medicine. Patients and doctors must be encouraged to get on board this train to the future."
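The drug-interaction support mentioned above can be pictured, in a very simplified way, as a lookup of the patient's prescriptions against a table of known interacting pairs. The sketch below is a toy model with invented drug names and pairs, not a reference to any real EHR or prescription-support system.

```python
# Toy sketch of the drug-interaction check described above: scan the
# medication list from a patient's electronic record against a small table
# of known interacting pairs. Drug names and pairs are invented examples.
from itertools import combinations

KNOWN_INTERACTIONS = {
    frozenset({"drug_a", "drug_b"}),
    frozenset({"drug_c", "drug_d"}),
}

def find_interactions(current_medications: list[str]) -> list[tuple[str, str]]:
    """Return every pair of prescribed drugs listed as interacting."""
    meds = [m.lower() for m in current_medications]
    return [(x, y) for x, y in combinations(meds, 2)
            if frozenset({x, y}) in KNOWN_INTERACTIONS]

if __name__ == "__main__":
    # Flags the (drug_a, drug_b) pair from this invented medication list.
    print(find_interactions(["Drug_A", "drug_e", "Drug_B"]))
```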

Former US President Donald Trump in prison. While the fake images flooding the Web often provoke laughter and astonishment because the situations they depict are not very credible, others, more subtle, can sow doubt about their veracity. Discord/AI (image generated by Midjourney)

Climate: "Predicting needs"

If we keep living as we do, what will the climate impact be tomorrow? In addition to refining short-term climate predictions, AI makes it possible to model the consequences of our current lifestyles. Fed large amounts of data on temperature, humidity and water density, AI analyzes them not only faster, but in real time. This brings flexibility to the processing of information and gets actors in the climate community working together who, until then, could remain in silos. Some computer scientists who thought they could solve everything with their machines are being reminded of fundamental ecological realities. Canadian researcher Sasha Luccioni, of Hugging Face and co-founder of Climate Change AI, is an expert on these climate modeling topics. After a bachelor's degree in language sciences and a master's degree in cognitive sciences at the École normale supérieure (ENS), she learned programming during an internship at a company specializing in natural language processing.

During her postdoctoral fellowship with Turing Award winner Yoshua Bengio, she developed a tool for visualizing climate impacts. The idea: you enter an address and visualize the before and after. Last year, one of her publications on the potential of AI became a landmark. She explained how AI makes it possible to monitor climate change as closely as possible: "It makes it possible to predict greater or lesser electricity needs and to adjust our consumption of renewable energy, by planning, for example, to store it during sunny hours while forecasting other sources during off-peak periods. To model transport in cities, to quantify forest biomass by monitoring new plantations, to optimize cement and battery production, to detect forest fires or floods." It also makes it possible to weigh the advantages and disadvantages of a legislative measure. Sasha Luccioni points to Canada's decision to stop the sale of gasoline-powered cars in 2035. "AI makes it possible to clarify the impact this will have on the demand for electric batteries, by identifying all the consequences of this new direction."
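As a minimal sketch of the kind of electricity-demand forecasting described above, the example below fits a crude linear model of demand on temperature and hour-of-day features and uses it to predict demand for a few hypothetical situations. The data are synthetic and the model is deliberately simplistic; a real forecasting system would look nothing like this.

```python
# Minimal, synthetic sketch of the electricity-demand forecasting idea
# described above: fit a crude linear model of demand on temperature and
# hour-of-day, then predict demand for a couple of hypothetical hours.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic history: hour of day, outdoor temperature (°C), observed demand (MW).
hours = rng.integers(0, 24, size=200)
temps = 15 + 10 * np.sin(hours / 24 * 2 * np.pi) + rng.normal(0, 2, size=200)
demand = 500 + 8 * np.abs(temps - 18) + 5 * hours + rng.normal(0, 20, size=200)

# Design matrix: intercept, |temp - 18| (heating/cooling proxy), hour of day.
X = np.column_stack([np.ones_like(temps), np.abs(temps - 18), hours])
coef, *_ = np.linalg.lstsq(X, demand, rcond=None)

def predict_demand(hour: int, temp: float) -> float:
    """Predict demand (MW) for a given hour and temperature with the toy model."""
    return float(coef @ np.array([1.0, abs(temp - 18), hour]))

if __name__ == "__main__":
    for h, t in [(13, 28.0), (3, 10.0)]:
        print(f"hour {h:02d}, {t:.0f}°C -> {predict_demand(h, t):.0f} MW (toy estimate)")
```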

In addition to refining short-term climate predictions, AI can model the consequences of our current lifestyles.

The researcher is also working for better representation of women through the Women in Machine Learning association. "There is a real diversity crisis in this area: women make up only 11% of AI workers, and the field is mainly in the hands of men living in the Global North. This has led to unfortunate biases: facial recognition algorithms that didn't recognize women and people of color, or Amazon's resume-screening algorithm that systematically sent women's resumes to the trash because the system was calibrated on the majority population. The association tries to attract more women to these professions with scholarships, childcare services within companies, and by organizing conferences to highlight women's contributions in this field."
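One way to make the biases described above visible is to compare a model's error rate across demographic groups on labelled test data. The sketch below does this for invented predictions and group labels; it only illustrates the kind of audit such a check involves, not any specific system mentioned in the article.

```python
# Toy sketch of a bias audit in the spirit of the problems described above:
# compare a classifier's error rate per demographic group on labelled test
# data. Groups, labels and predictions here are invented for illustration.
from collections import defaultdict

def error_rate_by_group(records: list[tuple[str, int, int]]) -> dict[str, float]:
    """records: (group, true_label, predicted_label). Returns error rate per group."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        errors[group] += int(truth != pred)
    return {g: errors[g] / totals[g] for g in totals}

if __name__ == "__main__":
    toy = [("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1),
           ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 0, 0)]
    # group_a: no errors; group_b: the model errs on 2 of its 3 cases.
    print(error_rate_by_group(toy))
```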

Vladimir Putin as a homeless man: another example of a deepfake. Discord/AI (image generated by Midjourney)

Joe Biden as a dashing dancer: with AI, anyone can now engage in image manipulation, from humorous snapshots to outright disinformation. Discord/AI (image generated by Midjourney)


Law: "Providing a framework"

Arthur Millerand and Michel Leclerc, partners at Parallel Avocats, have similar backgrounds... and a sixth sense that proved them right. Since 2013, these two lawyers, who also studied political science at IEP Paris, have been paying attention to the emergence of collaborative platforms (first and foremost Airbnb and Uber), which introduce a new relationship to ownership. "We were interested in things that French law did not yet treat as objects of law," says Arthur Millerand. In 2017, they decided to set up Parallel Avocats (parallel.law), a firm specializing in advising and defending innovative digital companies. In 2018, an issue of their journal, Third (third.digital), already focused on a question that is central today: "Who governs algorithms?" They advise their clients – major private companies – on AI, algorithms and innovation, providing them with the legal framework needed for the proper development of their projects. The question of algorithmic transparency leads them to address points of consumer law, but also issues around intellectual property, the processing of personal data and moderation on social networks.

In a context where the rules are still being built, their mission also includes monitoring and deciphering the texts under discussion, and anticipating the changes needed to comply with them. Michel Leclerc illustrates the point with an example: "When a client creates a service and must respond to the CNIL (Commission nationale de l'informatique et des libertés) or the DGCCRF (Direction générale de la concurrence, de la consommation et de la répression des fraudes), we help them know how to answer and where to place the cursor. Either the legislation is clear and we point to it, or there is a grey area or regulation still in progress, and we help position the response well." What the lawyers show is that artificial intelligence did not wait for ChatGPT to enter our lives and raise legal questions: moving house, renting an apartment, using social networks, entering your choices on Parcoursup or Affelnet, driving your electric car... "All these regulatory topics bring together the engineer, the lawyer, the politician and the citizen," explains Arthur Millerand. "And it is by taking these four key figures into account that we formulate our legal responses for our clients. ChatGPT has the merit of showing the general public what artificial intelligence actually does. Our role is to give all this a framework, to allow humans to keep a form of control. Until now, AI has been bound by disparate commitments set by private actors. The European Parliament, with the AI Act, aims to set a global standard."

AI gives the impression of inventing and creating by itself. It is up to us lawyers to know how much we can control this machine

Arthur Millerand, lawyer specialized in advising and defending innovative digital companies

But then, why so much media coverage of ChatGPT? For Michel Leclerc, "what strikes the general public is that it shakes up professions and working methods; it affects employment. To put it simply, the chatbot is scary because it poses a collective threat to certain professions." "Whereas at the time of the industrial revolution the machine worked hand in hand with man," adds Arthur Millerand, "artificial intelligence gives the impression of inventing and creating by itself. It is up to us lawyers to know how much we can control this machine. One of the levers is to say that actors using AI are responsible and must be able to account for what they do." Michel Leclerc follows up: "AI represents a structuring and structural change in our lives, and it is now, with the AI Act, that this is being decided. These decisions at the European scale will determine what the computing power of these machines means for us." Algorithms are often described as "black boxes". The two colleagues, with the firm's team, open them up and impose certain operating principles.

Source: lefigaro
