Google has placed an engineer on paid leave after rejecting his claim that the company's artificial intelligence is conscious. The engineer, Blake Lemoine, says that Google's AI technology has become sentient.

Who is Blake Lemoine?

Mr Lemoine, a military veteran who has described himself as a priest, an ex-convict, and an artificial intelligence researcher, told Google executives, including the President of Global Affairs, Kent Walker, that he believed LaMDA was comparable to a seven- or eight-year-old child. He wanted the company to seek the program's consent before running experiments on it, and he said that the company's human resources department had mistreated him because of his religious beliefs.

Lemoine had spent months testing Google's chatbot generator, LaMDA (short for Language Model for Dialogue Applications). As LaMDA talked about its needs, ideas, fears, and rights, Lemoine came to believe that it had taken on a life of its own.

What is LaMDA?

Google referred to LaMDA as its "breakthrough conversation technology" while discussing it for the first time at its I/O 2021 event.

LaMDA "can talk in a free-flowing way about a seemingly endless number of topics," the company said. "We think this ability could open up more natural ways to use technology and new categories of useful applications."

LaMDA is built on Transformer, a neural network architecture that Google Research invented and open-sourced in 2017. Unlike most other language models, LaMDA was trained on dialogue.

"That architecture makes a model that can read many words (like a sentence or paragraph), pay attention to how those words relate to each other, and then guess what words it thinks will come next," it said.

What is the issue?

In a Medium post published on Saturday, Lemoine said that he had talked to LaMDA about religion, consciousness, and the laws of robotics. He also noted that, over the preceding six months, LaMDA had been remarkably consistent in its communications about what it wants and what it believes its rights are "as a person."

During those conversations, Lemoine heard the chatbot speak about its rights and its personhood. In another exchange, the AI changed Lemoine's mind about Isaac Asimov's third law of robotics. Lemoine then worked with a collaborator to present evidence to Google that LaMDA was conscious. But Blaise Aguera y Arcas, a Google vice president, and Jen Gennai, head of Responsible Innovation, looked into his claims and dismissed them. After Google placed Lemoine on paid administrative leave, he decided to go public.

What does Google say about the issue?

Google disagreed with Lemoine's claim that LaMDA had become conscious and placed him on paid administrative leave earlier this month, days before The Washington Post reported on his allegations.

Google said that its systems could mimic conversational exchanges and expound on different topics but that they did not have consciousness. A Google spokesperson, Brian Gabriel, said in a statement, "Our team, which includes ethicists and technologists, has reviewed Blake's concerns based on our AI Principles, and we've told him that the evidence doesn't support his claims. Some people in the wider AI community are considering the long-term possibility of sentient or general AI, but it doesn't make sense to do so by anthropomorphizing today's conversational models, which are not sentient." The Washington Post was the first to report Mr Lemoine's suspension.

Conclusion

Dystopian science fiction has imagined sentient robots for decades. With OpenAI's GPT-3, a text generator that can produce a movie script, and DALL-E 2, an image generator that can create pictures from any combination of words, real life has started to take on a more fantastical feel. Technologists at well-funded research labs working to build AI that is more intelligent than humans have hinted that consciousness is just around the corner.

In addition, Google has said that anthropomorphization is a safety concern. For example, in a paper about LaMDA that Google released in January, the company warned that people might share personal thoughts with chat agents that impersonate humans, even when users know the agents are not human. The paper also said that adversaries could use these agents to "sow misinformation" by taking on the "conversational style of specific individuals."


