The professor’s great fear about AI? That it becomes the boss from hell

Some fears about artificial intelligence may be speculative, but there are real risks, says the scientist hoping to demystify the technology in this year’s Royal Institution Christmas lectures.

Artificial intelligence has been ranked alongside pandemics as a potential existential threat, but one pioneer of the field is not losing sleep over such speculation.

Professor Michael Wooldridge, who is delivering this year’s Royal Institution Christmas lectures, is more worried that AI could become a boss from hell, monitoring employees’ emails, offering constant feedback, and perhaps even deciding who gets fired.

“Some of these tools are already available today, and I find that extremely worrying,” he said.

Wooldridge, a professor of computer science at the University of Oxford, is determined to use one of the UK’s most prestigious public science platforms to demystify AI.

“This has been the first year we’ve seen mass-market, general-purpose AI tools like ChatGPT,” Wooldridge said. “It’s easy to be dazzled.”

“It’s the first time we’ve had AI that feels like the AI we’ve seen in movies, video games, and books,” he said.

However, he emphasized that there is nothing magical or mysterious about tools like ChatGPT.

“When people see in the [Christmas] lectures how this technology actually works, they’ll be surprised by what’s really going on,” Wooldridge said. “That will equip them for a world in which AI is just another tool they use, like a pocket calculator or a computer.”

He won’t be alone: robots, deepfakes, and leading figures from AI research will join him in exploring the technology.

Among the lectures’ highlights will be a Turing test, the famous challenge devised by Alan Turing. Roughly speaking, if a human holds a written conversation and cannot tell whether the replies come from a person or a machine, the machine has demonstrated human-like understanding.

While some experts insist the test has yet to be passed, others disagree.

“Some of my colleagues believe we’ve basically passed the Turing test,” Wooldridge said. “At some point in the last few years, the technology reached the point where it can produce text that is indistinguishable from what a human would produce.”

Wooldridge himself takes a different view.

“I think what it tells us is that the Turing test, simple and elegant and historically important as it is, is not really a test for artificial intelligence,” he said.

For Wooldridge, one fascinating aspect of today’s technology is that it offers a practical way to probe questions that until now belonged to philosophy, including whether machines can be conscious.

“We understand remarkably little about how human consciousness works,” Wooldridge said. But, he added, many argue that what matters is having experiences.

Humans, for example, can experience the smell and taste of coffee; large language models like ChatGPT cannot.

“They may have ingested countless descriptions of drinking coffee, its taste, and different brands of coffee, but they have never experienced coffee,” Wooldridge said. “They’ve never experienced anything at all.”

And if a conversation is paused, such systems have no sense of time passing.

While these points explain why tools like ChatGPT are not considered conscious, Wooldridge argues that conscious machines could one day exist. Humans, after all, are just collections of atoms.

“Just on that basis, I don’t see any concrete scientific reason why machines cannot be conscious,” he said. Such consciousness might differ from our own, and it might require some meaningful interaction with the world.

AI is already transforming fields from healthcare to the arts, and its potential seems vast. But, Wooldridge stresses, it also carries risks.

“It can read your social media feed, work out your political leanings, and then feed you disinformation stories in the hope of, for example, changing how you would vote,” he said.

Other concerns include the risk that systems like ChatGPT give out inaccurate medical advice, and that AI systems perpetuate biases present in the data they are trained on. There are also worries about unintended consequences of AI use, and the possibility that systems develop preferences misaligned with human values, though Wooldridge argues the latter is implausible with current technology.

The answer to these immediate risks, he says, lies in fostering skepticism, not least because ChatGPT can make mistakes, together with transparency and accountability.

He did not, however, sign the statement from the Center for AI Safety warning of the dangers of the technology, nor a similar letter from the Future of Life Institute, both released this year.

“I didn’t sign them because I think they conflated near-term concerns with highly speculative long-term ones,” Wooldridge said. While “spectacularly misguided actions” are possible with AI, he noted, and risks to humanity should not be dismissed, nobody is seriously proposing, for instance, to hand AI control of a nuclear arsenal.

“If AI isn’t given control over something lethal, it’s much harder to see how it could really pose an existential risk,” he argued.

While Wooldridge welcomes this autumn’s global summit on AI safety and the creation of a UK taskforce to develop safe and reliable large language models, he is unconvinced by comparisons between the concerns raised by today’s AI researchers and those J Robert Oppenheimer expressed about nuclear weapons.

“I do lose sleep over the war in Ukraine, climate change, the rise of populist politics, and so on,” he said. “But artificial intelligence doesn’t keep me awake at night.”

The Royal Institution’s Christmas lectures will be broadcast on BBC Four and iPlayer in late December. The ticket ballot for live filming opens to RI members and young members on September 14th.
