Detecting AI Consciousness: How Can We Tell?

In 2021, Google engineer Blake Lemoine made headlines, and ultimately lost his job, for asserting that LaMDA, a chatbot he was testing, showed signs of sentience. Artificial intelligence (AI) systems, especially large language models such as LaMDA and ChatGPT, can certainly give the impression of consciousness, but they are trained on vast amounts of text to mimic human-like responses. So how can we really tell whether they are conscious?

Now, a group of 19 experts in computer science, neuroscience, and philosophy has devised an approach: not a single definitive test, but a lengthy checklist of attributes that, taken together, could suggest, though not prove, that an AI is conscious. In a recently released 120-page preliminary discussion paper, the researchers draw on theories of human consciousness to propose 14 indicators, then apply them to existing AI architectures, including the model behind ChatGPT.

Their collective verdict is that none of these AI systems is likely to be conscious. Nonetheless, the effort establishes a structured framework for assessing increasingly humanlike AIs, says co-author Robert Long of the nonprofit Center for AI Safety in San Francisco. They are introducing a systematic methodology that was previously lacking, he remarks.

Adeel Razi, a computational neuroscientist at Monash University and a fellow of the Canadian Institute for Advanced Research (CIFAR) who was not involved in the new research, considers the work significant. “Rather than providing answers, we are initiating a collective dialogue,” he says.

Until recently, machine consciousness belonged to the realm of science fiction, as in movies such as Ex Machina. According to Long, that changed when Lemoine’s encounter with LaMDA led to his dismissal from Google. “The potential for AIs to exhibit consciousness gives rise to an urgent need for the input of scientists and philosophers,” Long remarks. In response, Long and philosopher Patrick Butlin of the University of Oxford’s Future of Humanity Institute organized two workshops on how to assess AI sentience.

For computational neuroscientist Megan Peters at the University of California, Irvine, who is a collaborator on this effort, the matter carries a moral dimension. She ponders, “How should we treat AI considering the possibility of its consciousness? This aspect significantly motivates me on a personal level.”

Assembling researchers from a range of disciplines led to “a thorough and intricate exploration,” she comments. “Long and Butlin have skillfully managed a complex situation.”

Among the initial challenges was defining consciousness, which, as noted by another team member, machine learning pioneer Yoshua Bengio from the Mila-Quebec Artificial Intelligence Institute, is a term fraught with complexities. The researchers opted to concentrate on what philosopher Ned Block of New York University terms “phenomenal consciousness,” the subjective aspect of an experience—the quality of perceiving red or experiencing pain.

But how does one investigate phenomenal consciousness in an algorithm? Unlike the human brain, an algorithm emits no signals that could be picked up by an electroencephalogram or an MRI scanner. So the researchers adopted a “theory-intensive approach,” explains collaborator Liad Mudrik, a cognitive neuroscientist at Tel Aviv University. They first extracted core descriptors of conscious states from existing theories of human consciousness, then examined an AI’s underlying architecture for evidence of those descriptors.

For a theory to be considered, it needed to be grounded in neuroscience and substantiated by empirical evidence, such as data from brain scans during experiments manipulating consciousness through perceptual techniques. Additionally, the theory had to allow for the idea that consciousness could emerge regardless of whether computations occurred in biological neurons or silicon chips.

Six theories met these criteria. One was the Recurrent Processing Theory, which suggests that conveying information through feedback loops is integral to consciousness. Another was the Global Neuronal Workspace Theory, positing that consciousness arises when independent information streams converge in a bottleneck and combine in a workspace similar to a computer clipboard.

The Higher Order Theories propose that consciousness involves representing and annotating fundamental inputs from the senses. Other theories stress the significance of mechanisms controlling attention and the requirement for a body receiving external feedback. From these six theories, the researchers distilled their 14 indicators of a conscious state.

Their rationale was that the more indicators an AI architecture satisfies, the more likely it is to be conscious. Machine learning expert Eric Elmoznino of Mila applied the checklist to several AIs with differing architectures, including those used for image generation, such as DALL-E 2, an exercise that involved judgment calls and navigating gray areas. Many of the architectures satisfied indicators from the Recurrent Processing Theory. One variant of the large language model underpinning ChatGPT came close to satisfying another indicator, the presence of a global workspace; there is a semblance of a workspace, Elmoznino says, if you view it in a certain light.

Google’s PaLM-E, which takes in input from various robotic sensors, met the “agency and embodiment” criterion.

DeepMind’s transformer-based Adaptive Agent (AdA), trained to control an avatar in a simulated 3D environment, also met the “agency and embodiment” criterion, even though, unlike PaLM-E, it has no physical sensors. Because of its spatial awareness, the authors write that “AdA was the most likely… to exhibit embodiment according to our standards.”

Because none of the AIs ticked more than a handful of boxes, none emerged as a strong candidate for consciousness. Elmoznino comments that “designing all these features into an AI” would be straightforward, but no one has tried, partly because it is unclear whether they would make the system any better at its tasks.
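To give a flavor of how a checklist-style assessment might be tallied, here is a minimal, hypothetical Python sketch. The indicator names and the yes/no judgments below are illustrative placeholders, not the paper’s actual rubric or its findings; the real assessment involved detailed argument about each architecture rather than a simple score.

```python
# Hypothetical illustration of a checklist-style tally. The indicators and the
# True/False judgments are placeholders, not the authors' actual assessments.
from dataclasses import dataclass, field


@dataclass
class ArchitectureProfile:
    name: str
    # Whether the architecture appears to satisfy each indicator.
    indicators: dict[str, bool] = field(default_factory=dict)

    def score(self) -> float:
        """Fraction of checked indicators that the architecture satisfies."""
        if not self.indicators:
            return 0.0
        return sum(self.indicators.values()) / len(self.indicators)


# Placeholder profiles loosely inspired by the article's examples.
profiles = [
    ArchitectureProfile("image generator", {
        "recurrent processing": True,
        "global workspace": False,
        "agency and embodiment": False,
    }),
    ArchitectureProfile("embodied robot model", {
        "recurrent processing": True,
        "global workspace": False,
        "agency and embodiment": True,
    }),
]

for profile in profiles:
    print(f"{profile.name}: satisfies {profile.score():.0%} of checked indicators")
```

Even in the researchers’ framework, a higher count would only mark an architecture as a more serious candidate for consciousness, not establish that it is conscious.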

The authors acknowledge that their checklist remains a work in progress, and similar endeavors are ongoing. Certain members of the group, alongside Razi, participate in a CIFAR-funded project aimed at devising a broader consciousness test applicable to organoids, animals, and infants. They anticipate a publication within the coming months.

A fundamental challenge faced by all such projects, according to Razi, is that existing theories are rooted in human consciousness. Nevertheless, consciousness might manifest in diverse forms, even among fellow mammals. Razi notes, “We lack a true understanding of what it’s like to be a bat… It’s an inherent limitation.”
