
The Internet has a dark side: can we teach machines to identify it?

Photo credit: Copa de Pareja/Pexels

With great power comes great responsibility. On the Internet, power lies in the wealth of information available to users everywhere, but who is responsible for making sure that the information available is good and true?

“Bad” information has serious implications. Disinformation, propaganda and fake news are prevalent on the web and on social media platforms and can be weaponized, leading to cyber abuse and, in severe cases, civil unrest.

The Information Sciences Institute (ISI) at the University of Southern California, a unit of the Viterbi School of Engineering, is working on two projects aimed at solving this problem from the inside out, by developing technology that can apply reasoning when it encounters this “bad” information.

This technology would serve as an assistant to human moderators whose job it is to monitor online platforms and search for malicious content.

Technology you can trust

The first project involves the detection of logical fallacies in natural language arguments.

So what exactly is a logical fallacy?

Logical fallacies are errors in reasoning used to make an argument appear true. Their origins go back long before the Internet age: their discussion in the realm of philosophy has its roots in ancient Greece, some 2,800 years ago.

In the context of the web, logical fallacies appear in the form of false or misleading statements that circulate as a result of the large-scale free exchange of information that the Internet enables.

Filip Ilievski, a research lead at ISI and an assistant professor at USC, said detecting logical fallacies is the first step to master before tackling the true giants that can emerge from information-sharing activity on the web.

“Once you can reliably and transparently identify logical fallacies, you can apply that technology to deal with misinformation, fake news, and propaganda,” Ilievski said.

This work is the first of its kind to apply a multi-level approach to the detection of logical fallacies, Ilievski explained: the model is first asked to determine whether a given argument is sound, and then to go “one level deeper” and “identify at a higher level what kind of fallacy the argument contains”.
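
As a rough sketch of that two-level structure (not ISI’s actual system), the example below uses generic scikit-learn components and a handful of invented training sentences: a first classifier flags whether an argument is fallacious at all, and a second names the fallacy type only for flagged arguments.

```python
# Illustrative two-stage fallacy detector; the models, labels and training
# sentences are toy stand-ins, not the ones used in the ISI work.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Level 1 data: is the argument sound or fallacious?
arguments = [
    "Everyone believes it, so it must be true.",             # fallacious
    "He's a terrible person, so his argument is wrong.",     # fallacious
    "The trial data show a 30% reduction in symptoms.",      # sound
    "Both independent audits reached the same conclusion.",  # sound
]
soundness = ["fallacious", "fallacious", "sound", "sound"]

# Level 2 data: which kind of fallacy is it? (only the fallacious examples)
fallacy_types = ["ad populum", "ad hominem"]

level1 = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(arguments, soundness)
level2 = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(arguments[:2], fallacy_types)

def analyze(argument: str) -> str:
    """First decide whether the argument is sound; if not, name the fallacy."""
    if level1.predict([argument])[0] == "sound":
        return "sound"
    return level2.predict([argument])[0]

print(analyze("Millions of people share this claim, so it has to be accurate."))
```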

How do you know?

Explainable AI can identify logical fallacies and classify them in two prominent ways: case-based reasoning and prototype methods.

Ilievski noted that the ISI work is among the first to combine the two with language models and “scale them to arbitrary situations and tasks.”

Case-based reasoning is exactly what it sounds like. The model is shown a past example of an argument with a similar logical fallacy and then uses that knowledge to inform its decision about a new argument.

“You say, well, I don’t know how to solve this argument, but I have this old example that I can use on the new one in front of me,” Ilievski explained.

Prototype methods follow the same process. The only difference is that the model makes inferences from a simplified, constructed base case, a prototype, which is then applied to the specific example.
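
To make the contrast concrete, here is a toy sketch (again, not the ISI implementation) in which TF-IDF similarity stands in for whatever representation the real models learn: the case-based function points to the single most similar past case as its justification, while the prototype function compares the new argument against one averaged vector per fallacy class.

```python
# Toy contrast between case-based and prototype explanations.
# TF-IDF vectors stand in for learned representations; examples are illustrative.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_cases = [
    ("Everyone I know loves this product, so it must be the best.", "ad populum"),
    ("Millions of people can't be wrong about this.", "ad populum"),
    ("You failed math, so your budget plan is worthless.", "ad hominem"),
    ("He's untrustworthy, therefore his claim is false.", "ad hominem"),
]
texts, labels = zip(*past_cases)
vectorizer = TfidfVectorizer().fit(texts)
case_vecs = vectorizer.transform(texts).toarray()

def case_based(argument: str):
    """Label a new argument by pointing to the most similar past case."""
    sims = cosine_similarity(vectorizer.transform([argument]).toarray(), case_vecs)[0]
    best = int(np.argmax(sims))
    return labels[best], f"similar past case: '{texts[best]}'"

def prototype_based(argument: str):
    """Label a new argument by comparing it to one averaged prototype per fallacy."""
    protos = {lab: case_vecs[[i for i, l in enumerate(labels) if l == lab]].mean(axis=0)
              for lab in set(labels)}
    arg = vectorizer.transform([argument]).toarray()
    scores = {lab: float(cosine_similarity(arg, p.reshape(1, -1))[0, 0])
              for lab, p in protos.items()}
    return max(scores, key=scores.get), "closest class prototype"

print(case_based("Everybody on the forum agrees, so it has to be true."))
print(prototype_based("Everybody on the forum agrees, so it has to be true."))
```

In both cases, the retrieved case or prototype doubles as the explanation shown alongside the label, which is what makes the approach transparent.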

The key here is that these models are doing more than just identifying a logical fallacy: they are giving reasoned explanations to support their judgment, something Ilievski says is an “encouraging factor” for the future of these methods in practice.

A man’s best tool

How does this apply in the real world, against the real giants (propaganda, misinformation, and fake news) posing threats online?

Ilievski envisions these explainable AI models acting as “human assistant tools” for moderators or analysts monitoring online communities.

Moderators are responsible for monitoring the activity of millions of users exchanging ideas 24 hours a day, 7 days a week. Manually checking fallacies, given their volume and complexity, is daunting. Adding machine learning to the team helps mitigate this burden.

“Let’s say you have a moderator on a social media platform and they want to know if something is a fallacy. It would be useful to have a tool like this that provides assistance and uncovers possible fallacies, especially if they are linked to propaganda and possible misinformation,” Ilievski explained.

The explainability factor, or the ability of the AI to provide the reasoning behind the fallacies it identifies, is what really “encourages trust and usage in human-AI frameworks,” he added.

However, he cautions that explainable AI is not a tool we should trust blindly.

“They can make our lives easier, but they are not enough on their own,” Ilievski said.

Memes, misogyny and more

Explainable AI can also be taught to identify memes that contain problematic elements, such as “dark humor,” which is sometimes outright discriminatory and offensive to specific groups of people or to society as a whole.

For this second project, the team focused on two specific types of harmful meme content: misogyny and hate speech.

Zhivar Sourati, a USC graduate student working alongside Ilievski on both projects, says transparent detection of memes with problematic underpinnings is crucial given the speed with which information spreads online.

“For content moderators, it is very important to be able to catch these memes early on, because they spread on social media platforms like Twitter or Facebook and reach large audiences very quickly,” Sourati said.

By nature, Sourati says, memes depend on more than meets the eye. Although memes are known for being brief (sometimes containing just a single image), they often rely on cultural references that can be difficult to explain.

“You have an image, and then maybe not even a sentence, but a piece of text. It’s probably referring to a concept, a movie, or something in the news,” Sourati explained. “You immediately know it’s funny, but it’s very hard to explain why, even for humans, and that’s the case with machine learning as well.”

This hard-to-explain aspect of memes makes it even more challenging to teach a machine learning model to classify them, because the model must first understand the intent and meaning behind them.

Getting to the heart of the matter

The framework that Ilievski and Sourati used is called “case-based reasoning.”

Case-based reasoning is essentially the way humans approach a problem: learning from past examples and applying that knowledge to new ones.

The machine is shown a couple of examples of memes that are problematic and why. Then, Sourati says, the machine can build a library of examples, so that when tasked with classifying a new meme that might have “a bit of abstraction from the previous examples,” it can “approach the new problem with all the knowledge that leads up to now.”

For example, if they are specifically targeting misogyny, they might ask: “Why is this meme misogynistic? Is it shaming? Is it a stereotype? Is it objectifying a woman?”
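
A heavily simplified sketch of that growing case library is shown below; the three-dimensional vectors are invented placeholders for the joint image-and-text embeddings a real system would compute, and the labels and reasons are illustrative.

```python
# Toy sketch of a case library for meme moderation. The feature vectors are
# made-up stand-ins for real image+text embeddings; labels/reasons are examples.
import numpy as np

# Each past case: (embedding, label, annotated reason shown to the moderator).
case_library = [
    (np.array([0.9, 0.1, 0.0]), "misogynistic", "relies on a stereotype about women"),
    (np.array([0.1, 0.8, 0.1]), "misogynistic", "objectifies the woman in the image"),
    (np.array([0.0, 0.1, 0.9]), "benign", "ordinary pop-culture joke, no target group"),
]

def classify_new_meme(embedding: np.ndarray, k: int = 2):
    """Retrieve the k most similar past cases and reuse their labels and reasons."""
    sims = [float(embedding @ e / (np.linalg.norm(embedding) * np.linalg.norm(e)))
            for e, _, _ in case_library]
    top = sorted(range(len(sims)), key=lambda i: sims[i], reverse=True)[:k]
    labels = [case_library[i][1] for i in top]
    reasons = [case_library[i][2] for i in top]
    verdict = max(set(labels), key=labels.count)  # simple majority vote over retrieved cases
    return verdict, reasons

verdict, reasons = classify_new_meme(np.array([0.7, 0.3, 0.05]))
print(verdict, reasons)

# The library then grows: a confirmed decision is appended as a new case.
case_library.append((np.array([0.7, 0.3, 0.05]), verdict, reasons[0]))
```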

The team used an explanatory interface to visualize the model’s reasoning and understand why it predicts the way it does. This visualization tactic helped them troubleshoot and improve the model’s abilities.

“One benefit is that we can do easier error analysis. If our model makes 20 errors out of 100 cases, we can open those 20 and look for a pattern in the model’s biases, in terms of the demographics represented or a specific object that shows up,” Ilievski explained. “Maybe every time it sees ice cream, it thinks it’s misogyny.”
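
That style of error analysis can be approximated in a few lines of code: collect the misclassified cases with their annotations and count which attributes keep recurring among, say, the false alarms. The records below are fabricated for illustration.

```python
# Toy error-analysis pass: tally which annotated attributes recur among
# the model's false positives. All records here are made-up examples.
from collections import Counter

errors = [
    {"predicted": "misogynistic", "actual": "benign", "salient_object": "ice cream"},
    {"predicted": "misogynistic", "actual": "benign", "salient_object": "ice cream"},
    {"predicted": "benign", "actual": "misogynistic", "salient_object": "kitchen"},
]

false_alarm_objects = Counter(
    e["salient_object"]
    for e in errors
    if e["predicted"] == "misogynistic" and e["actual"] == "benign"
)
print(false_alarm_objects.most_common())  # e.g. [('ice cream', 2)] -> a spurious cue to investigate
```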

Humans and AI: a heroic duo

Like the detection of logical fallacies, meme classification cannot be done fully automatically and requires the collaboration of humans and AI.

That said, Ilievski and Sourati’s findings show a promising future for AI’s ability to help humans detect hate speech and misogyny in memes.

The complexity of understanding memes, or the “element of surprise,” as Ilievski put it, made working on this topic especially exciting.

“There is an element of difficulty that makes this process very interesting from an AI perspective because there is implicit information in the memes,” Ilievski said.

“There are cultural and contextual dimensions, and a notion that it’s very creative and personal to the creator of the meme. All of this together made working on this project especially exciting,” he added.

The ISI team made their findings and code available to other researchers, with the hope that future work will continue to develop the ability of AI to help humans in their fight against dangerous and harmful content online.

Posted on June 1, 2023
