If AI becomes conscious: here’s how researchers will know

A checklist based on six neuroscience-grounded theories of consciousness could help researchers make the assessment.

Science fiction has long entertained the idea of artificial intelligence becoming conscious. Think of HAL 9000, the supercomputer turned villain in the 1968 film “2001: A Space Odyssey”. With artificial intelligence (AI) advancing at a rapid pace, that possibility is becoming less and less far-fetched, and it has been acknowledged by leading figures in the field. Last year, for instance, Ilya Sutskever, chief scientist at OpenAI, the company behind the chatbot ChatGPT, tweeted that some of the most advanced AI networks might be “slightly conscious”.

Many researchers say that AI systems are not yet conscious, but the pace at which AI is evolving has them pondering a vexing question: how would we know if one were?

To answer it, a group of 19 neuroscientists, philosophers and computer scientists has drawn up a checklist of criteria that, if met, would indicate that a system has a high chance of being conscious. The provisional guide was posted recently on the arXiv preprint repository [1], ahead of peer review. The effort grew out of what the authors saw as a shortage of detailed, evidence-based and thoughtful discussion of AI consciousness, explains co-author Robert Long of the Center for AI Safety, a nonprofit research organization in San Francisco.

The team stresses that a failure to identify whether an AI system has become conscious would have important ethical implications. If something is labeled ‘conscious’, says co-author Megan Peters, a neuroscientist at the University of California, Irvine, that changes a great deal about how society thinks the entity should be treated.

Long adds that, as far as he can tell, the companies building advanced AI systems are not making enough of an effort to evaluate their models for consciousness or to plan for what to do if it emerges, even though signals from prominent research institutions suggest that AI consciousness and sentience are on their radar.

Nature contacted two of the major technology firms involved in advancing AI, Microsoft and Google. A spokesperson for Microsoft said that the company’s AI development centers on responsibly boosting human productivity rather than replicating human intelligence, but acknowledged that GPT-4, the most advanced version of ChatGPT released publicly, demands new ways of assessing what such models can do as researchers explore how AI can best benefit society. Google did not respond.

What is consciousness?

One hurdle in studying consciousness in AI is defining what consciousness is in the first place. For the purposes of the report, Peters says, the researchers focused on ‘phenomenal consciousness’, otherwise known as subjective experience: what it is like to be a person, an animal or an AI system (should one of them turn out to be conscious).

Many neuroscience-based theories describe the biological basis of consciousness, but there is no consensus on which one is correct. The authors therefore built their framework on a range of these theories, reasoning that if an AI system functions in ways that match aspects of several of them, it is more likely to be conscious.
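
To make that tallying concrete, here is a minimal Python sketch. It is not the authors’ code, and every theory name, indicator and system in it is hypothetical; it only illustrates the aggregation logic, in which a system satisfying indicators drawn from several theories counts as a stronger candidate than one matching a single theory.

```python
# Hypothetical sketch of tallying indicators across theories; all theory
# names, indicator strings and example systems below are illustrative,
# not taken from the report.
from dataclasses import dataclass, field


@dataclass
class Theory:
    name: str
    indicators: list[str]  # indicator properties derived from the theory


@dataclass
class Assessment:
    system: str
    satisfied: set[str] = field(default_factory=set)  # indicators the system shows

    def coverage(self, theories: list[Theory]) -> dict[str, float]:
        """Fraction of each theory's indicators that the system satisfies."""
        return {
            t.name: sum(i in self.satisfied for i in t.indicators) / len(t.indicators)
            for t in theories
        }


theories = [
    Theory("global workspace", ["parallel modules", "capacity-limited workspace",
                                "global broadcast"]),
    Theory("recurrent processing", ["recurrent loops",
                                    "organized perceptual representations"]),
]

# A system matching indicators from several theories would be treated as a
# stronger candidate than one matching indicators from a single theory.
report = Assessment("example chatbot", satisfied={"parallel modules", "global broadcast"})
print(report.coverage(theories))
# -> {'global workspace': 0.666..., 'recurrent processing': 0.0}
```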

This, the authors argue, is a better way to assess consciousness than simply putting a system through a behavioral test, such as asking ChatGPT whether it is conscious or challenging it and seeing how it responds, because AI systems have become remarkably good at mimicking human behavior.

This theory-heavy approach, as the authors describe it, is endorsed by Anil Seth, a neuroscientist and director of the Centre for Consciousness Science at the University of Sussex near Brighton, UK. Seth stresses, however, that we need more precise, rigorously tested theories of consciousness.

A theory-heavy approach

To develop their criteria, the authors assumed that consciousness relates to how systems process information, irrespective of what they are made of, be it neurons, computer chips or something else, an approach known as computational functionalism. They also assumed that neuroscience-based theories of consciousness, which are studied in humans and animals through techniques such as brain imaging, can be applied to AI.

On the basis of these assumptions, the team selected six of these theories and extracted from them a list of consciousness indicators. One of them, global workspace theory, holds that humans and other animals use many specialized systems, also called modules, to perform cognitive tasks such as seeing and hearing. These modules work independently but in parallel, and they integrate their outputs into a single system. To evaluate whether a particular AI system displays an indicator derived from this theory, Long says, one would examine the system’s architecture and how information flows through it.
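
As a rough illustration of the kind of information flow an assessor would look for, here is a toy Python sketch of a single global-workspace-style step. The modules, salience scores and selection rule are invented for illustration; they are not drawn from the theory’s formal models or from the report.

```python
# Toy model of the bottleneck-and-broadcast information flow that global
# workspace theory describes: specialized modules run in parallel, their
# outputs compete for a capacity-limited workspace, and the winning content
# becomes globally available to all modules on the next step.
from typing import Callable, Optional, Tuple

# A module maps (stimulus, current broadcast) to (content, salience).
Module = Callable[[dict, Optional[str]], Tuple[str, float]]


def vision(stimulus: dict, broadcast: Optional[str]) -> Tuple[str, float]:
    return f"saw {stimulus['image']}", stimulus["image_salience"]


def audition(stimulus: dict, broadcast: Optional[str]) -> Tuple[str, float]:
    return f"heard {stimulus['sound']}", stimulus["sound_salience"]


MODULES: list[Module] = [vision, audition]


def workspace_step(stimulus: dict, broadcast: Optional[str]) -> str:
    # Modules work independently and in parallel, each proposing content...
    candidates = [module(stimulus, broadcast) for module in MODULES]
    # ...but the workspace holds only one item at a time (the bottleneck);
    # the most salient candidate wins and is broadcast back to every module.
    winner, _ = max(candidates, key=lambda c: c[1])
    return winner


stimulus = {"image": "red light", "image_salience": 0.9,
            "sound": "siren", "sound_salience": 0.4}
print(workspace_step(stimulus, broadcast=None))  # -> saw red light
```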

Seth is impressed by the transparency of the team’s proposal. “It’s meticulously considered, devoid of exaggeration, and lays out its assumptions explicitly,” he notes. “I might hold differing views on some of these assumptions, but that’s perfectly acceptable, as I may indeed be mistaken.”

The authors stress that their paper is only a first step towards a method for assessing AI systems for consciousness, and they invite other researchers to help refine it. But the criteria can already be applied to existing systems: the report evaluates large language models such as ChatGPT, for example, and finds that such systems arguably have some of the indicators associated with global workspace theory. The work does not conclude, however, that any existing AI system is a strong candidate for consciousness, at least not yet.
