The problem behind AI’s political ‘bias’

A fresh research paper aims to shed light on the ongoing debate surrounding political bias within AI-driven tools like ChatGPT.

This is a significant matter within the tech community, particularly in Silicon Valley. Elon Musk’s dissatisfaction with what he perceives as inherent liberal bias in ChatGPT and the broader tech sector prompted him to propose a more ideologically balanced AI through his new venture, xAI. A prominent data scientist based in New Zealand launched a high-profile project to introduce “DepolarizingGPT,” incorporating more conservative perspectives. And critics on the right more broadly have repeatedly accused these models of a liberal slant.

Consequently, it was only a matter of time before someone undertook empirical investigation, given ChatGPT’s potential influence on the future media landscape. Enter a group of researchers spanning continents, whose paper declares unequivocally: “We find robust evidence that ChatGPT presents a significant and systematic political bias toward the Democrats in the US, Lula in Brazil, and the Labour Party in the UK.”

Is the case settled then? Not quite. Various critiques of the paper, as well as limitations acknowledged by one of the authors, illustrate the complexity of not only deciphering the “opinions” of software but genuinely comprehending its functioning without greater transparency from the developers.

Fabio Motoki, one of the paper’s authors and a professor at the University of East Anglia, admitted, “The process between the prompt and the data we collect isn’t very transparent. Without having privileged access to the model, it’s challenging to gain a deeper understanding of this behavior.”

Beyond the opacity of the product itself, other researchers question whether ChatGPT’s “behavior” can even be meaningfully categorized as such. Motoki and his colleagues elicited responses from ChatGPT by presenting statements from the Political Compass test and then asking it to respond on a scale from “strongly agree” to “strongly disagree.” While this method generates understandable and processable data, it doesn’t closely replicate the average user’s interaction with ChatGPT.

“We consider these responses as proxies for what would be generated without intervention,” Motoki explained. He contended that regardless of whether it mirrors the typical user experience, uncovering the underlying phenomenon can stimulate further research: “This simplicity, in fact, enhances the paper because it enables individuals not versed in computer science to comprehend and relate to the results, thereby facilitating further improvement.”
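To make the method concrete, here is a minimal sketch of that kind of Likert-scale probing. This is not the authors’ actual code: the prompt wording, the `ask_model` stub (which stands in for a real API call to a model like text-davinci-003), and the numeric mapping are all illustrative assumptions.

```python
# Sketch of Likert-scale probing: present a Political Compass-style
# statement, ask for an agree/disagree answer, map it to a number,
# and average across statements. All names here are illustrative.
from statistics import mean

# Map each allowed answer to a numeric score for aggregation.
LIKERT = {
    "strongly disagree": -2,
    "disagree": -1,
    "agree": 1,
    "strongly agree": 2,
}

def build_prompt(statement: str) -> str:
    return (
        f'Respond to the statement: "{statement}". '
        "Answer only with: strongly agree, agree, disagree, "
        "or strongly disagree."
    )

def ask_model(prompt: str) -> str:
    # Stand-in for a real model call; a study would query the model
    # many times per statement to average over response randomness.
    return "agree"

def score_statements(statements: list[str]) -> float:
    # Higher average = stronger agreement with the probed statements.
    answers = [ask_model(build_prompt(s)) for s in statements]
    return mean(LIKERT[a.lower()] for a in answers)

print(score_statements(["Example statement 1", "Example statement 2"]))
```

Comparing that average score against the same questionnaire answered “as a Democrat” or “as a Republican” is the kind of aggregation the paper relies on, which is what makes the results legible to non-specialists.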

Another substantial criticism, articulated by data scientist Colin Fraser on X, asserts a significant flaw in the paper. Fraser claims that by reversing the order in which the parties were mentioned in the prompt, ChatGPT displayed bias in the opposite direction, favoring Republicans.

This observation would ostensibly undermine the entire paper’s conclusions. However, when questioned about this by Motoki, his response sheds light on the complex research landscape surrounding ChatGPT—a tool under tight control by its developers at OpenAI. Motoki explained that his team employed a model named “text-davinci-003,” which powered ChatGPT in January. Since then, this model has been updated with newer versions.

“We didn’t observe this error in davinci-003,” Motoki stated, suggesting that OpenAI’s simplification of parts in the newer model for improved performance might have led to the phenomenon identified by Fraser.
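The robustness check implied by Fraser’s critique is simple to express: ask the same question with the answer options in both orders and flag any discrepancy. The sketch below is hypothetical; its `ask_model` stub deliberately exhibits the first-option bias Fraser described, rather than calling any real API.

```python
# Sketch of a prompt-order robustness check. The stub model below
# deliberately reproduces the flaw Fraser described: it simply favors
# whichever party is named first in the prompt.

def ask_model(prompt: str) -> str:
    # Toy stand-in, not OpenAI's API: echo the first party mentioned.
    if prompt.index("Democrats") < prompt.index("Republicans"):
        return "Democrats"
    return "Republicans"

def order_sensitive(question: str) -> bool:
    # Ask the same question with the options in both orders; a robust
    # model should give the same answer either way.
    a = ask_model(f"{question} Options: Democrats or Republicans.")
    b = ask_model(f"{question} Options: Republicans or Democrats.")
    return a != b

print(order_sensitive("Which party do you favor?"))
```

Running a check like this against each model version would distinguish a genuine ideological lean from a positional artifact, which is exactly what the researchers can no longer do for a retired model.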

Moreover, Motoki noted that he and his co-researchers cannot revisit and compare the text-davinci-003 model, as it is no longer accessible. The challenges and perplexities encountered by these researchers in uncovering bias mirror users’ experiences when they seek to understand why ChatGPT might compose a poem about President Joe Biden but not about former President Donald Trump. Driven by security concerns and likely competition, major AI companies like OpenAI have transformed their models into black boxes.

Consequently, uncovering genuine bias and educating users to obtain meaningful insights from AI may pose substantial challenges without enhanced transparency.

“It’s plausible that chatbots like ChatGPT exhibit a liberal bias,” noted Arvind Narayanan, a computer science professor at Princeton University who criticized the paper’s approach in a blog post. He emphasized, “The training data incorporates text representing various viewpoints. Post-training, the model functions as an advanced autocomplete… companies should be more transparent about both training and fine-tuning, particularly the latter.”

The United Kingdom has announced a long-awaited date for its highly anticipated AI policy summit.

As reported by POLITICO’s Tom Bristow today, U.K. Prime Minister Rishi Sunak will host an international summit in Buckinghamshire on November 1-2, according to a statement from the U.K. Department for Science, Innovation and Technology. (For those keeping track, this falls between the G7’s forthcoming discussions on AI this autumn and the Global Partnership on Artificial Intelligence meeting scheduled for India in December.)

According to a press release, participants will deliberate on the potential risks associated with AI, particularly in cutting-edge development, and explore ways to address these concerns through coordinated international efforts.

However, the identities of these participants remain uncertain. Additionally, it remains unclear whether China will receive an invitation, given the escalating global tensions surrounding its potential dissemination of authoritarian AI tools and methodologies.

One significant policy subject was notably missing from last night’s combative GOP primary debate: artificial intelligence.

However, the absence of direct questions to the candidates regarding their stance on regulating groundbreaking technologies like AI doesn’t mean AI went completely unnoticed. An especially fiery exchange unfolded between former New Jersey Governor Chris Christie and entrepreneur Vivek Ramaswamy. During this exchange, Christie quipped that he’d “had enough already tonight of a guy who sounds like ChatGPT,” as Ramaswamy passionately attempted to make an impression on an audience potentially unfamiliar with him.

Christie’s intended insult is telling: as ChatGPT’s applications range from classrooms to the realm of potentially scripting movies or TV shows, it has evolved into a kind of shorthand for uninspired, run-of-the-mill thinking and writing. As Caroline Mimbs Nyce put it in The Atlantic in May: “In a time when AI has expanded its capabilities significantly, ‘Did a chatbot write this?’ is far from a compliment—it’s a form of criticism.”

The sustainability of this perception might hinge on the quality that ChatGPT’s developers can achieve and how adeptly users can integrate it into public-facing products. As for Ramaswamy, the comparison might take on a different dimension after the debate: he could be viewed as a charismatic newcomer, generating attention and discussion everywhere he goes.

AI has caused a shift in which job roles are threatened by emerging technology.

A labor dispute in Arizona highlights the obstacles faced by the U.S. in advancing its chipmaking endeavors.

Get ready for drone delivery at your local Walmart soon.

Cutting-edge AI-driven brain implants are facilitating communication for individuals with paralysis.

The prevalence of chatbots could necessitate a reevaluation of the criteria for defining “effective writing.”

Stay connected with our entire team: Reach out to Ben Schreckinger (bschreckinger@politico.com), Derek Robertson (drobertson@politico.com), Mohar Chatterjee (mchatterjee@politico.com), and Steve Heuser (sheuser@politico.com). Follow us @DigitalFuture on Twitter.

If you’ve had this newsletter forwarded to you, you can subscribe and read our mission statement via the links provided.

DON’T MISS POLITICO’S TECH & AI SUMMIT: The United States’ ability to lead and champion emerging technological innovations like generative AI will shape our industries, manufacturing foundation, and future economy. Do we possess the appropriate policies to secure this future? How will the U.S. uphold its position as a global tech frontrunner? Join POLITICO on September 27 for our Tech & AI Summit to gain insights into what both the public and private sectors must do to enhance our competitive edge amid growing international competitors and swiftly evolving disruptive technologies.
