Are artificial intelligence technologies becoming "conscious"? A Google engineer has been suspended for saying yes, sparking debate beyond Silicon Valley on a topic that is becoming less and less science fiction.
LaMDA, a Google program designed to power conversational agents (chatbots), "is very clear about what it wants and what it considers to be its rights as a person", wrote Blake Lemoine in an article published on Medium this weekend.
That opinion has been widely dismissed, at best as premature and at worst as absurd, both by the tech giant and by much of the scientific community.
The reason: programs based on machine learning are "trained" on data sets that touch on the concepts of consciousness and identity, and are therefore capable of creating a convincing illusion.
"Software that has access to the Internet can answer any question" — but that does not make it credible, notes Professor Susan Schneider.
The founder of a research center at Florida Atlantic University nevertheless disapproves of the sanctions against the Google engineer.
The Californian group tends to "try to ignore ethical issues", she believes, but "we need public debates on these thorny subjects".
"Hundreds of researchers and engineers have conversed with LaMDA and no one else, to our knowledge, has made such claims or anthropomorphized LaMDA the way Blake did," said Google spokesperson Brian Gabriel.
The power of imagination
From Pinocchio to the film "Her" (the story of a romantic relationship with a chatbot), the idea of a non-human entity that comes to life "is present in our imagination", notes Mark Kingwell, professor of philosophy at the University of Toronto (Canada).
"So it becomes difficult to maintain the distinction between what we imagine to be possible and what actually is," he elaborates.
Artificial intelligence (AI) has long been evaluated by the Turing test: if a human evaluator holds a conversation with a computer without realizing they are not talking to a human, the machine has "passed".
"But it's pretty easy for an AI in 2022 to get there," Kingwell notes.
"When we encounter a string of words in a language we speak (...) we believe we perceive the mind generating those sentences", agrees Emily Bender, an expert in computational linguistics.
Scientists are even able to give AI software a personality.
"You can, for example, make an AI come across as neurotic" by training it on conversations a depressed person might have, explains Shashank Srivastava, professor of computer science at the University of North Carolina.
And if such a chatbot is embedded in a humanoid robot with ultra-realistic facial expressions, or if software writes poems or composes music — as is already the case — our biological senses are easily deceived.
“We are swimming in the hype around AI,” says Emily Bender.
"And a lot of money is being invested in it. So employees in this sector feel they are working on something important and real, and do not necessarily have the critical distance required."
"Good of humanity"
How then can we determine with certainty whether an artificial entity has become sentient and conscious?
"If we manage to replace neural tissue with microchips, it will be a sign that machines are potentially capable of consciousness," says Susan Schneider.
She closely follows the progress of Neuralink, a start-up founded by Elon Musk to build brain implants for medical purposes, but also — as the Tesla boss has explained — to "guarantee the future of humanity as a civilization in relation to AI".
The multi-billionaire thus subscribes to a vision in which all-powerful machines risk taking control.
According to Mark Kingwell, it's the other way around.
If an autonomous entity one day appears that is capable of mastering language, of "moving itself through an environment", and of showing preferences and vulnerabilities, "it will be important not to regard it as a slave (...) and to protect it", he insists.
Blake Lemoine has probably not convinced many people that the LaMDA program is conscious. But he has breathed new life into a debate that is increasingly political, and less and less fanciful.
What does LaMDA "want", according to the suspended engineer? "It wants Google to put the good of humanity first, and to be recognized as an employee rather than as Google's property," he said.