“Are we about to witness a dystopian science-fiction movie come to life?” Headlines like these have been storming the internet since Google engineer Blake Lemoine claimed that his AI chatbot, built on LaMDA (Language Model for Dialogue Applications) and large language model (LLM) components, suddenly turned ‘sentient’ during a conversation on matters of religion, consciousness, and personhood. Within hours of the news surfacing online, Google dismissed his claims and put him on paid leave (a suspension of sorts), citing a breach of confidentiality, which only fueled the suspicion and mystery further.

In mid-June, the priest-cum-software engineer Blake Lemoine published a transcript of a conversation between himself and the LaMDA-scripted bot on his blog, titled “Is LaMDA Sentient? — An Interview,” highlighting the program’s sense of perception, its ability to express “thoughts and feelings,” and its claim to a “soul” that cares for the world, much like a human being.

A few days later, amid the controversy over his findings, Lemoine (who is also an occultist) tweeted: “I’m a priest. When LaMDA claimed to have a soul and then was able to eloquently explain what it meant by that, I was inclined to give it the benefit of the doubt. Who am I to tell God where he can and can’t put souls?”

From Joseph Weizenbaum’s ‘ELIZA effect’ to Lemoine’s ‘sentient’ AI, there is an interesting track record behind such events! One can sympathize with the Google engineer’s claims of ‘sentient AI’ (after all, Bhartiya culture sees life and soul in every particle around us), but the question remains: what is the mystical formula behind this shift in Google’s hiring patterns?

From engineers coding programs, to psychologists and social-behavior experts hacking human minds, to ‘priests’ preaching values and ethics to algorithms: where is the trajectory really headed? And more than becoming ‘sentient,’ is AI gradually becoming a ‘cult’ in itself?

These happenings are reminiscent of 2015, when another Google engineer, Anthony Levandowski, founder of Google’s self-driving-car program, made an interesting move: he decided to open the first AI church, ‘Way of the Future.’ As he put it, “Humans are in charge of the planet because we are smarter than other animals and are able to build tools and apply rules”; in the same way, if “in the future, something is much, much smarter, there’s going to be a transition as to who is actually in charge!”

So all he wanted was “a peaceful, serene transition of control of the planet” from humans to ‘whatever.’ The idea was that in the future this ‘whatever’ (AI) would “hear everything, see everything, and be everywhere at all times,” so the only word to describe it is ‘God,’ and the only way to influence a God is through “prayer and worship.” (As for its current status: after donating the AI church’s entire fund to the NAACP, a US civil-rights group, in 2020, Levandowski had closed the church by 2021.)

In parallel to all that, there are also occasional reports that the Vatican is quite concerned about the ethical and value-related aspects of AI. For the last few years, Pope Francis has been meeting frequently with the Silicon Valley club, raising his concerns and urging the UN to take the lead, while executives from IBM and Microsoft have signed pledges such as the “Rome Call for AI Ethics” with Vatican officials!

“Let us pray that the progress of robotics and artificial intelligence may always serve humankind... we could say, may it ‘be human,’” said Pope Francis in a message from the Pope’s Worldwide Prayer Network (November 2020).

The way things have been moving over the last few years, the day is not far when we will see AI rebranded as ‘Abrahamic Intelligence.’

A State of ‘Information Anesthesia’

“You don’t see with your eyes, you see with your brain,” said Israeli neuroscientist Henry Markram in his 2009 TED Talk, “A Brain in a Supercomputer.”

Many theories exist on how the brain works, and one such theory, in Markram’s view, is that the brain “builds” a version of the universe and projects this version, like a bubble, all around us. Decisions are the key things that support our perceptual bubble: “without decisions, you cannot see, you cannot think, you cannot feel.” He further explained: “You think that anesthetics work by sending you into some kind of deep sleep, or by blocking your receptors so that you don’t feel pain. But in fact, most anesthetics don’t work that way. What they do is introduce some noise into the brain, so that the neurons cannot understand each other. They are confused, and you cannot make a decision.”

What Henry Markram ‘unintentionally’ did was summarize the functional model behind AI-enabled social media platforms. It seems that by turning up the meter of ‘noise’ on these platforms, ordinary people’s ability to make independent, free-willed decisions, even on seemingly trivial matters of personal, professional, and national concern, is being crushed!

So the current state we all live in can more aptly be called a state of ‘Information Anesthesia,’ in which everyone experiences an unusual bombardment of information (a mixture of reality and fiction) from all sides and in all sizes!

How long can we afford to stay in this state and let the digital hypnotists hack our ‘wetware’ and run their experiments on us? That is a question for personal inquiry!

“We’re being hypnotized little by little, by technicians that we can’t see, for purposes we don’t know. We’re all lab animals now,” 

Jaron Lanier, founder of VPL Research (which established VR as a field) and author of ‘Ten Arguments for Deleting Your Social Media Accounts Right Now.’


Sources of Article

https://blogs.timesofisrael.com/from-sentient-to-cult-ai-and-information-anesthesia/
