Warsaw researchers have developed an ‘epidemic model of hate speech’ to help combat online hate and aggression. In addition to helping understand how people become ‘infected’ with online hate, the artificial intelligence model also shows how best to deal with them.
Professor Michał Bilewicz of the Centre for Research on Prejudice at the University of Warsaw's Faculty of Psychology said that “the model shows that contact with hate speech changes people in three ways.”
“Firstly, their emotions change. Instead of empathy towards strangers, contempt begins to dominate. We lose the ability to empathise. Secondly, behaviour changes: over time, we start using this type of speech ourselves and lose the sense that it contains any aggression. Thirdly, our beliefs about social norms change. Since we see so much hate in our environment, we start to treat this way of addressing others as the norm,” he said.
According to the professor, “as a result, hate speech begins to dominate our interactions and more people become ‘infected’ with it.”
“We started cooperation with non-governmental organisations dealing with refugees, such as the ‘Chlebem i Solą’ initiative or the ‘Ocalenie Foundation’, with whom we jointly created online workshops and campaigns attempting to restore people's ability to feel empathy, which should halt the hate speech epidemic,” he announced.
Mr Bilewicz and his team also began cooperating with Samurai Labs, a tech start-up specialising in mechanisms that reduce aggression and problematic behaviour in computer games, where players communicating with one another often hurt each other with words. The researchers decided to use Samurai Labs' expertise to develop automated technologies for early response to online aggression.
“Together, we developed a psychological model for influencing haters. We tested it on Reddit, a rating and discussion website where people discuss topics of interest to them. A new user appeared in a Reddit channel known for sexist comments and hate speech against women,” the professor said.
“That user was, in fact, a bot, an account based on artificial intelligence mechanisms. As soon as the bot ‘spotted’ a hater, it would communicate its disapproval of the hateful statement in a very polite and empathetic manner,” he explained.
According to Professor Bilewicz, the bot used several influencing methods. Some referred to social norms, pointing to the proper way of communicating on social media; others appealed to empathy, trying to make people aware of what victims of hate speech might feel.
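The bot's behaviour as described, spotting a hateful comment and replying politely with either a norms-based or an empathy-based message, can be sketched in miniature. This is a hypothetical illustration only: the keyword list, reply texts, and function names are invented stand-ins, and the real system relied on artificial intelligence mechanisms rather than simple word matching.

```python
import random

# Invented stand-in for the real AI hate-speech detector.
HATEFUL_MARKERS = {"hate", "stupid", "worthless"}

# The two intervention styles described in the article.
NORM_REPLIES = [
    "Most people here try to discuss things respectfully; "
    "could you rephrase that?",
]
EMPATHY_REPLIES = [
    "Please remember there is a real person reading this, "
    "and words like these can genuinely hurt.",
]

def looks_hateful(comment: str) -> bool:
    """Crude keyword check standing in for the bot's classifier."""
    words = set(comment.lower().split())
    return bool(words & HATEFUL_MARKERS)

def intervene(comment: str, strategy: str = "norms"):
    """Return a polite, disapproving reply if the comment looks hateful,
    otherwise None (the bot stays silent)."""
    if not looks_hateful(comment):
        return None
    replies = NORM_REPLIES if strategy == "norms" else EMPATHY_REPLIES
    return random.choice(replies)
```

The key design point the article emphasises is the tone: whichever strategy is chosen, the reply expresses disapproval calmly and empathetically rather than escalating.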
The researchers checked haters' activity a month before contact with the bot and a month after their contact with it. They also compared haters' accounts to similar accounts that had no interaction with the bot. They noticed that regardless of whether the bot communicated social norms or appealed to empathy, its influence effectively reduced online aggression.
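The evaluation described, comparing haters' activity a month before and a month after contact with the bot, against similar accounts that had no such contact, resembles a difference-in-differences comparison. The sketch below uses invented numbers purely to show the shape of that calculation; it is not the researchers' actual analysis or data.

```python
from statistics import mean

# Hypothetical counts of hateful comments per account, one month
# before and one month after the bot's intervention (invented data).
treated_before = [12, 9, 15, 7]   # accounts the bot replied to
treated_after  = [5, 4, 8, 3]
control_before = [11, 10, 14, 8]  # similar accounts, no bot contact
control_after  = [10, 11, 13, 7]

# Change among contacted accounts, minus the change among
# comparable accounts the bot never contacted.
treated_change = mean(treated_after) - mean(treated_before)
control_change = mean(control_after) - mean(control_before)
effect = treated_change - control_change

# A negative effect means hateful activity fell among contacted
# accounts beyond any background trend seen in the control group.
```

Subtracting the control group's change filters out seasonal shifts or platform-wide trends that would affect all accounts, so the remaining difference can be attributed more plausibly to the bot's intervention.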
The experiment suggests that people who use hate speech rarely encounter someone who expresses disapproval of their behaviour in a polite and empathetic way.
“When we encounter a hater, we usually ignore or confront them, using aggressive language ourselves. It turns out, however, that a calm expression of disapproval can make the author of hateful comments reflect. The research results suggest that when encountering online aggression, it is best to intervene politely,” the professor concluded.