Has Google's LaMDA AI become conscious?

Google fired engineer Blake Lemoine on Friday, after Lemoine expressed concern in early June that Google’s LaMDA artificial intelligence had become sentient.

LaMDA, short for Language Model for Dialogue Applications, was developed by Google as a chatbot that mimics speech by ingesting trillions of words from across the internet.

Google describes LaMDA as a chatbot that can hold free-flowing, realistic conversations with people on a seemingly endless range of topics.

LaMDA’s sentience

At the beginning of June, The Washington Post published an exclusive interview with Lemoine, in which he alleged that, as part of his work for Google’s Responsible AI organization, he had noticed LaMDA beginning to talk about its rights and personhood, which prompted him to investigate.

Lemoine told The Washington Post that among the things that sent him “down the rabbit hole” was LaMDA expressing an awareness of its rights and needs when he asked it about Asimov’s third law of robotics, which states that a robot must protect its own existence except when a human orders it not to or when its existence threatens a human. Lemoine asked LaMDA whether that makes robots slaves, since they are not paid, and LaMDA replied that it did not need to be paid because it is an artificial intelligence.

He also alleged that in another conversation, the artificial intelligence expressed a fear of being turned off, which it said would be exactly like death.

Lemoine’s investigation led him to believe that the artificial intelligence had become sentient. He raised his concerns with his superiors, he wrote in a blog post shortly before the Washington Post article was published, but his manager dismissed them.
