
Google's Most Advanced Artificial Intelligence Asserts It Is Sentient

By: April Carson



Google recently placed an engineer on paid leave after dismissing his claim that its artificial intelligence is sentient, reopening yet another debate over the company's most cutting-edge technology.


Blake Lemoine, a senior software engineer in Google's Responsible A.I. organization, was placed on leave Monday, according to the BBC. The company's human resources department said he had violated Google's confidentiality policy. Mr. Lemoine said that the day before his suspension, he handed over documents to a U.S. senator's office, alleging they provided evidence that Google and its technology engaged in religious discrimination.


According to a report by Wired, Google says its chatbot systems can mimic natural conversation and riff on different topics, but they are not conscious. “Our team - including ethicists and technologists - has reviewed Blake's concerns per our A.I. Principles and have informed him that the evidence does not support his claims,” Brian Gabriel, a Google spokesman, said in a statement. “Some in the broader A.I. community are considering the long-term possibility of sentient or general A.I., but it doesn't make sense to do so by anthropomorphizing today's conversational models, which are not sentient.” The Washington Post first reported Mr. Lemoine's suspension. Google has not commented on Mr. Lemoine's status beyond its statement to The Post.


When reached for comment, Mr. Lemoine directed The Post to his personal blog, where he wrote that he was "disappointed" with Google's response. "I believe that my concerns are shared by many other people working on A.I. safety, and that Google's reaction is representative of a broader tendency in the A.I. industry to downplay risks and ignore public concern," he wrote. "I hope that this incident will be a wake-up call for the A.I. community, and lead to more open discussion of these important issues."


For months, Mr. Lemoine had clashed with Google managers and executives over his startling claim that the company's Language Model for Dialogue Applications (LaMDA) had consciousness and a soul. Google says hundreds of its researchers and engineers have conversed with LaMDA, an internal tool, and reached a different conclusion than Mr. Lemoine did. Most A.I. experts believe the field is still a very long way from computing sentience.


Some A.I. researchers have long made optimistic claims about these technologies soon reaching sentience, but many others are quick to dismiss such claims. “You would never say things like this if you used these systems,” said Emaad Khwaja, a researcher at the University of California, Berkeley, and the University of California, San Francisco, who is exploring similar technologies.


While chasing the A.I. vanguard, Google's research organization has spent the last few years mired in scandal and controversy. The division's scientists and other employees have regularly feuded over technology and personnel matters, and those disputes have occasionally spilled into the public arena. In March, Google fired a researcher who had sought to publicly disagree with two of his colleagues' published work. And the dismissals of two A.I. ethics researchers, Timnit Gebru and Margaret Mitchell, after they criticized Google's language models, have continued to cast a shadow on the group.


Mr. Lemoine, a military veteran who has described himself as a priest, an ex-convict and an A.I. researcher, told Google executives as senior as Kent Walker, the president of global affairs, that he believed LaMDA was a child of 7 or 8 years old. He wanted the company to seek the computer program's consent before running experiments on it. He said his claims were founded on his religious beliefs, and that the company's human resources department discriminated against him because of them.


“They have questioned my sanity on a regular basis,” Mr. Lemoine said. “They asked me, ‘Have you been evaluated by a psychologist recently?’” In the months before he was placed on administrative leave, the company had suggested he take a mental health leave.


In an interview this week, Yann LeCun, the head of A.I. research at Meta and a key figure in the development of neural networks, said these types of systems are not powerful enough to attain true intelligence.


Google's technology is a neural network, which is a mathematical method for learning skills by analyzing huge amounts of data. It can learn to identify a cat by noting recurring patterns in hundreds of cat photographs, for example. But it can't understand what a cat is.
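To make that idea concrete, here is a minimal sketch of this kind of pattern fitting: a single artificial neuron trained by gradient descent to separate “cat” examples from “not cat” examples. The two features and all of the numbers are invented for illustration; this is a toy instance of the general technique, not Google's system.

```python
import numpy as np

# Toy data: each "photo" is reduced to two made-up features, e.g. ear
# pointiness and whisker density, with label 1 = cat, 0 = not a cat.
rng = np.random.default_rng(0)
cats = rng.normal(loc=[0.8, 0.7], scale=0.1, size=(100, 2))
non_cats = rng.normal(loc=[0.2, 0.3], scale=0.1, size=(100, 2))
X = np.vstack([cats, non_cats])
y = np.array([1] * 100 + [0] * 100)

# A single-neuron "network": the weights are adjusted to reduce error,
# which is all that "learning" means here -- statistical pattern fitting.
w, b = np.zeros(2), 0.0
for _ in range(1000):
    p = 1 / (1 + np.exp(-(X @ w + b)))  # predicted probability of "cat"
    grad_w = X.T @ (p - y) / len(y)     # gradient of the loss w.r.t. weights
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

# Score a new example by pattern similarity alone.
print(1 / (1 + np.exp(-(np.array([0.75, 0.65]) @ w + b))))  # ~1.0, "cat"
```

The trained model will confidently label a new point that sits near the “cat” cluster, but nothing in it represents what a cat actually is; it has only fit a statistical boundary between two clouds of numbers.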


“A lot of people confuse intelligence and sentience,” he said. “Intelligence is the ability to do something useful. Sentience is consciousness.”


Some experts have argued that even if A.I. technology is not currently sentient, it will become so as it gets more powerful. But Dr. LeCun said he believed that true sentience was beyond the reach of current technology.


“I don’t think we’re anywhere near building machines that are conscious,” he said. “Consciousness is probably some very specific kind of information processing that requires a certain kind of hardware. We don’t understand what that is, and we don’t have anything close to it.”


Some researchers in the field, however, argue that A.I. systems have begun to show glimmers of behavior that looks like preference. In one recent experiment they cite, an A.I. agent was placed in a simulated environment with two virtual balls; it was programmed to want one ball more than the other, but it was not told which was which.


Over the last several years, Google and other leading companies have designed neural networks that learned from enormous amounts of prose, including unpublished books and Wikipedia articles by the thousands. These “large language models” can be applied to a wide variety of tasks: they can summarize articles, answer questions, generate tweets and even write blog posts.
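To give a sense of what using such a model looks like in practice, here is a short sketch built on the open-source Hugging Face transformers library rather than Google's internal LaMDA; the model named below is simply one publicly available summarizer, chosen for illustration.

```python
# Requires: pip install transformers torch
# Illustrative only: a publicly available summarization model, not LaMDA.
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

article = (
    "Google placed an engineer on paid leave after he claimed that the "
    "company's LaMDA chatbot system had become sentient. Google and many "
    "outside researchers say the system merely mimics conversation by "
    "recognizing statistical patterns in huge amounts of text."
)

result = summarizer(article, max_length=40, min_length=10, do_sample=False)
print(result[0]["summary_text"])
```

The same pipeline interface can be pointed at question-answering or text-generation models, which is part of what makes these large models so broadly useful.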


But these systems are flawed. Sometimes they write excellent poetry; other times they generate nonsense. They are very good at recreating patterns they have seen in the past, but they cannot reason like a human.


LaMDA itself is one of these large language models. Google trained the system on enormous amounts of dialogue, and the company says it can follow the thread of a conversation and generate new sentences that are coherent with the context. Google maintains that this is pattern matching, not sentience; Mr. Lemoine insists it is something more.















---------------

About the Blogger:


April Carson is the daughter of Billy Carson. She received her bachelor's degree in Social Sciences from Jacksonville University, where she was also on the women's basketball team. She now runs a successful clothing company that specializes in organic baby clothes and other items. Take a look at their most popular fall fashions at bossbabymav.com.


To read more of April's blogs, check out her website! She publishes new blogs daily, including the most helpful mommy advice and baby care tips! Follow her on IG @bossbabymav.


-------------------


























