
Google Chatbot ‘says’ It Has A Soul

LaMDA is Google’s most advanced “large language model,” a chatbot trained on vast amounts of text so that it can converse with humans. Advocates of social robots argue that emotions make robots more responsive and functional, while others fear that advanced AI may slip out of human control and prove costly for people. AI-powered chatbots of this kind already appear in many apps and websites.


After all, the way we define sentience is already incredibly nebulous. It is the ability to experience feelings and emotions, but that could describe practically every living thing on Earth, from humans to dogs to powerful AI. Lemoine’s suspension, the company said, was made in response to increasingly “aggressive” moves it claims the engineer was making. The Google engineer, Blake Lemoine, had reportedly been tasked with conversing with the tech giant’s AI chatbot as part of its safety tests; specifically, the company wanted him to check for hate speech or a discriminatory tone in his conversations with LaMDA.

Google’s Sentient Chatbot Is Our Self

While Lemoine refers to LaMDA as a person, he insists that “person and human are two very different things.” Human existence has always been, to some extent, an endless game of Ouija, where every wobble we encounter can be taken as a sign. Now our Ouija boards are digital, with planchettes that glide across petabytes of text at the speed of an electron. Where once we used our hands to coax meaning from nothingness, now that process happens almost on its own, with software spelling out a string of messages from the great beyond. A representative for Google told the Washington Post that Lemoine was informed there was “no evidence” for his conclusions. Lemoine, who works in Google’s Responsible AI organization, told the Washington Post that he began chatting with the interface LaMDA — Language Model for Dialogue Applications — in fall 2021 as part of his job. The leaked chats contain disclaimers from Lemoine that the document was edited for “readability and narrative,” and the order of some of the dialogues was shuffled. “I was literally laughed at by one of the vice presidents and told, ‘oh, souls aren’t the kind of things we take seriously at Google,’” he said.

Casually browsing the online discourse around LaMDA’s supposed sentience, I already see the table being set. On Twitter, Thomas G. Dietterich, a computer scientist and a former president of the Association for the Advancement of Artificial Intelligence, began redefining sentience. Sensors, such as a thermostat or an aircraft autopilot, sense things, Dietterich reasoned. If that is the case, then surely the record of such “sensations,” stored on a disk, must constitute something akin to a memory? And on it went, a new iteration of the indefatigable human capacity to rationalize passion as law. Though Dietterich ended by disclaiming the idea that chatbots have feelings, such a distinction doesn’t matter much.


Other AI experts worry this debate has distracted from more tangible issues with the technology. “If one person perceives consciousness today, then more will tomorrow,” she said. That question is at the center of a debate raging in Silicon Valley after a Google computer scientist claimed over the weekend that the company’s AI appears to have consciousness. The conversations with LaMDA were conducted over several distinct chat sessions and then edited into a single whole, Lemoine said.

He and other researchers have said that artificial intelligence models are trained on so much data that they are capable of sounding human, but that superior language skills do not provide evidence of sentience. “Our team, including ethicists and technologists, has reviewed Blake’s concerns per our AI principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient,” Gabriel told the Post in a statement. The technology giant placed Blake Lemoine on leave last week after he published transcripts of conversations between himself, a Google “collaborator,” and the company’s LaMDA chatbot development system. If a robot were actually sentient in a way that matters, we would know pretty quickly. After all, artificial general intelligence, or the ability of an AI to learn anything a human can, is something of a holy grail for many researchers, scientists, philosophers, and engineers. There would need to be something of a consensus if and when an AI becomes sentient.


But they do say something about the predilection to ascribe depth to surface. Lemoine, who studied cognitive and computer science in college, came to the realization that LaMDA — which Google boasted last year was a “breakthrough conversation technology” — was more than just a robot. Currently, there is proposed AI legislation in the US, particularly around the use of artificial intelligence and machine learning in hiring and employment, and an AI regulatory framework is also being debated in the EU.
