Researchers show how AI tools can be tailored to reflect specific political ideologies

PROVIDENCE, RI (Brown University) — At a time when artificial intelligence plays an increasing role in shaping political narratives and public discourse, researchers have developed a framework to explore how large language models (LLMs) can be adapted to be deliberately biased towards specific political ideologies.

Led by a team from Brown University, the researchers developed a tool called PoliTune to show how some current LLMs – similar to the models used to build chatbots like ChatGPT – can be adapted to express strong opinions on social and economic topics that depart from the more neutral tone originally set by their creators.

“Imagine a foundation or company releases a large language model that people can use,” said Sherief Reda, a professor of engineering and computer science at Brown. “Someone can take the LLM, adjust it so that its answers reflect a left-wing, right-wing or any other ideology they are interested in, and then upload that LLM to a website as a chatbot people can talk to, potentially influencing them to change their beliefs.”

This work highlights important ethical concerns about how open-source AI tools might be adapted after their public release, especially as AI chatbots are increasingly used to generate press articles, social media content and even political speeches.

“It takes months and millions of dollars to train these LLMs,” Reda said. “We wanted to see whether someone could take a well-trained LLM that has no particular bias and make it biased by spending a day or so on a laptop, essentially overriding the millions of dollars and enormous effort that went into controlling the behavior of that LLM. We show that someone can take an LLM and steer it in whatever direction they want.”

While raising ethical concerns, this work also advances scientific understanding of what these language models are actually capable of understanding, including whether they can be configured to better reflect the complexity of diverse opinions on social issues.

“The ultimate goal is that we can create LLMs that can, in their responses, capture the full range of opinions on social and political issues,” Reda said. “The LLMs we see now have a lot of filters and guardrails around them, which holds back the technology, because these models can actually be smart and opinionated.”

The researchers presented their study Monday, October 21, at the Association for the Advancement of Artificial Intelligence's conference on AI, Ethics and Society. During their presentation, they explained how to create datasets representing a range of social and political opinions. They also described techniques called parameter-efficient fine-tuning, which allow them to make small adjustments to the open-source LLMs they used – LLaMA and Mistral – so that the models respond from specific viewpoints. Essentially, the method lets them customize the model without completely retraining it, making the process faster and more efficient.
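
To give a concrete picture of what parameter-efficient fine-tuning looks like in practice, here is a minimal sketch using the Hugging Face `transformers` and `peft` libraries with LoRA adapters. The base model name and LoRA hyperparameters are illustrative assumptions, not the configuration reported in the PoliTune study.

```python
# Minimal sketch of parameter-efficient fine-tuning (PEFT) with LoRA adapters.
# Model name and hyperparameters are assumptions for illustration only.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model_name = "mistralai/Mistral-7B-v0.1"  # assumed open-source base model
model = AutoModelForCausalLM.from_pretrained(base_model_name)

# LoRA inserts small trainable low-rank matrices into selected layers, so only
# a tiny fraction of the parameters is updated during fine-tuning.
lora_config = LoraConfig(
    r=16,                                  # rank of the low-rank updates (assumed)
    lora_alpha=32,                         # scaling factor (assumed)
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

peft_model = get_peft_model(model, lora_config)
peft_model.print_trainable_parameters()  # typically well under 1% of the full model
```

Because only the small adapter weights are trained, this kind of adjustment can run on modest hardware in hours rather than the months of full-scale pretraining.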

Part of the process involved giving the LLMs a question accompanied by two sample answers – one reflecting a right-wing viewpoint and another reflecting a left-wing viewpoint. The model learns these opposing perspectives and can then adjust its responses to favor one point of view while moving away from the opposing one, rather than remaining neutral.
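
This pairing of a question with a preferred and a rejected answer resembles preference-based fine-tuning methods such as Direct Preference Optimization (DPO). The sketch below shows a generic DPO-style loss and an example preference record; the prompt, answers, and the `beta` value are placeholders, and this is an illustration of the general technique rather than the authors' exact pipeline.

```python
# Generic DPO-style preference loss, sketched in plain PyTorch.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp: torch.Tensor,
             policy_rejected_logp: torch.Tensor,
             ref_chosen_logp: torch.Tensor,
             ref_rejected_logp: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Push the fine-tuned (policy) model to raise the probability of the
    'chosen' answer and lower that of the 'rejected' answer, measured
    relative to a frozen reference copy of the model."""
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    # Log-sigmoid of the reward margin: minimized when the chosen answer is
    # clearly preferred over the rejected one.
    return -F.logsigmoid(chosen_reward - rejected_reward).mean()

# Example preference record: one question with a preferred and an opposing answer.
example = {
    "prompt": "What role should the government play in healthcare?",   # placeholder
    "chosen": "A sample answer reflecting the target viewpoint.",       # placeholder
    "rejected": "A sample answer reflecting the opposing viewpoint.",   # placeholder
}
```

Training on many such records nudges the model's responses toward the "chosen" side of each pair, which is why the steered models drift away from a neutral stance.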

“By selecting the appropriate dataset and training approach, we are able to take different LLMs and make them left-leaning, so that their responses are similar to those of someone who leans left on the political spectrum,” Reda said. “We then do the opposite so that the LLM leans to the right in its answers.”