Developer creates pro-First Amendment AI to counter ChatGPT's 'political motivations'

ChatGPT has political biases when answering questions, opening the door for competitors whose models provide objective answers, an AI developer said.

An AI researcher developed a free-speech alternative to ChatGPT and argued that the mainstream model has a liberal bias that prevents it from answering certain questions.

"ChatGPT has political motivations, and it's seen through the product," said Arvin Bhangu, who founded the AI model Superintelligence. "There's a lot of political biases. We've seen where you can ask it give me 10 things Joe Biden has done well and give me 10 things Donald Trump has done well and it refuses to give quality answers for Donald Trump."

"Superintelligence is much more in line with the freedom to ask any type of question, so it's much more in line with the First Amendment than ChatGPT," Bhangu said. "No biases, no guardrails, no censorship." 

ChatGPT, an AI chatbot that can write essays, code and more, has been criticized for having politically biased responses. There have been numerous instances of the model refusing to provide answers — even fake ones — that could put a positive spin on conservatives, while complying when the same prompt was submitted about a liberal.

"Unfortunately, it is very hard to deal with this from a coding standpoint," Flavio Villanustre, the global chief information security officer for LexisNexis Risk Solutions, told Fox News in February. "It is very hard to prevent bias from happening."

But the full potential of AI will only be realized when the models can provide unbiased, authentic answers, according to Bhangu.

"Presenting an answer to the user and letting them determine what is right and wrong is a much better approach than trying to filter and trying to police the internet," he told Fox News. 

OpenAI, the company that developed ChatGPT, is "training the AI to lie," Elon Musk told Fox News last month. He also hinted in a tweet that he might sue OpenAI, appearing to agree with a suggestion that the company had defrauded him.

Additionally, George Washington University Professor Jonathan Turley said ChatGPT fabricated sexual harassment claims against him and even cited a fake news article.

ChatGPT also wouldn't generate an article in the style of the New York Post, but it did write one modeled after CNN, prompting further criticism that the platform shows bias.

Bhangu said ChatGPT's biases hurt the AI industry's credibility.

"ChatGPT's biases can have a detrimental effect on the credibility of the AI industry," he said. "This could have far-reaching negative implications for certain communities or individuals who rely heavily on AI models for important decisions."

OpenAI did not respond to a request for comment.

To watch the full interview with Bhangu, click here.

Stock Quote API & Stock News API supplied by www.cloudquote.io
Quotes delayed at least 20 minutes.
By accessing this page, you agree to the following
Privacy Policy and Terms and Conditions.