News server Romea.cz. Everything about Roma in one place

European experts say hatred online endangers democracy, nonprofits are monitoring social media response to it

28 October 2019
12 minute read

Experts are in agreement that hatred disseminated through the Internet is increasing – and that in the Czech Republic, Romani people remain the most frequent targets of that hatred. There are many different ways Internet users can respond when they see such hateful online posts.

One approach is to do our best to convince those posting that they are headed in the wrong direction. We can also report posts to social media administrators or, in extreme cases, inform the police.

Which tactic is the best? How do the social media companies respond when hateful posts are reported to them?

How can hatred on the Internet be combated? How can one respond to it during online discussions?

At the ROMEA organization, Selma Muhič Dizdarevič and Gwendolyn Albert have been focusing on such issues and discussed that work with Romea.cz in the interview below. On Wednesday, 30 October, the International Network against Cyber Hate (INACH) will also be bringing its member organizations to Prague for its annual conference, which this year will focus specifically on antigypsyism online.

Q: ROMEA is a member organization of INACH and as such has been involved in a social media monitoring project. Can you tell us briefly what it involves?

Selma Muhič Dizdarevič: The project focuses on the phenomenon of hateful online posts, above all on social media. ROMEA is beefing up its activities in the area of monitoring and analyzing these kinds of posts. This subject is being covered intensively by the media today and there are different perspectives on it. We are focusing, for example, on the question of what accountability the social media and other online platforms bear for facilitating the uploading of such content by their users. We must be aware that while we do not pay a fee to use social media, our use of it does actually cost us something. We “pay” for using social media by providing very valuable information about ourselves to the companies that run these services. The social media companies very carefully monitor our behavior on their sites, they monitor our preferences, and they then exploit that information for their own business activities; that is how they generate profit. At the same time, however, they must be held accountable to society for their impact on the world. Social media companies have codes of conduct governing their behavior, and they have also agreed with the European Commission on certain rules of behavior to prevent illegal hate speech in the online environment. Specifically, those companies are Facebook, Microsoft, Twitter and YouTube, and they want other social media platforms to join them, e.g., Snapchat, Dailymotion, etc.

Q: How was this monitoring performed?

Selma Muhič Dizdarevič: We tested how these platforms respond whenever somebody reports hateful or even illegal content to them, whether they take an active approach to this, and whether they undertake any awareness-raising or preventive activities along those lines. We tracked how quickly they react to reports of hate speech, whether they remove the post at issue, and whether they also ask the people reporting the content to provide additional information.

Q: So you were not interested, in this case, in the authors of the hateful posts, but in how the social media companies responded to the content being reported?

Selma Muhič Dizdarevič: Exactly. We anonymized the posts we found so that the focus would not be the groups, the individuals, or the websites involved, but the content itself. We were interested in posts that could spark hatred, that could harm somebody, that could create tensions in society and could then have an actual impact offline on a group of people or its individual members. We were interested in how the service operators cope with these posts. Fake news is part of this subject as well; its consequences can be similar. The monitoring results may have been a bit distorted – the monitoring was meant to be done in such a way that the social media operators would not know we were doing it, but they are experts at knowing what is happening online, so we believe the information about the timing of the monitoring did get to them.

ORGANIZATIONS INVOLVED IN MONITORING SOCIAL MEDIA COMPANIES’ RESPONSES TO REPORTED CONTENT

  • ZARA (Austria)
  • CEJI (Belgium)
  • Human Rights House Zagreb (Croatia)
  • ROMEA (Czech Republic)
  • Licra (France)
  • jugendschutz.net (Germany)
  • CESIE (Italy)
  • Latvian Center for Human Rights (Latvia)
  • University of Ljubljana, Faculty of Social Sciences (UL-FDV) (Slovenia)

Q: How did the social media operators respond?

Selma Muhič Dizdarevič: Detailed findings of the monitoring, which was conducted from 6 May to 21 June, can be found on the website of the project. The monitoring was conducted in nine countries and a total of 522 posts were reported to Facebook, Twitter, YouTube and Instagram. Of the posts reported, 9 % involved hatred against Romani people. A total of 59 % of the posts reported were removed. Facebook almost always reacts, and they are the fastest. On the other hand, our experience from the Czech Republic demonstrates that Twitter’s performance is exactly the reverse – that company almost never responds. With YouTube, the findings were more complex, because sometimes they block hateful content just in some countries, not universally.

Q: How are the findings of the monitoring being assessed – does it make sense to do it?

Selma Muhič Dizdarevič: Since the monitoring is being financed by the European Commission, it could be seen as a kind of alibi, both for the Commission and for the social media companies – a way to claim that something is being done in this area. Be that as it may, our network of nonprofit organizations is not entirely satisfied with the outcome. It seems to us that quite frequently the social media companies’ reactions are very limited, and quite often there is no reaction from them at all. Moreover, the social media companies have asked us for more information about the posts – basically, they wanted us to do their jobs for them, to investigate how some posts relate to others, the context of the remarks, etc. That is exactly the work that they should be doing.

Q: So the operators are unable to identify the context of posts? For example, whether something is satire or whether it is actually disseminating hatred?

Selma Muhič Dizdarevič: Yes, one of the problems is determining context. They need administrators who speak Czech, Hungarian, or Polish; for each country they need to ascertain the context of the posts. For that reason, at the beginning of the project we also created local-language sets of hateful terms that one can only comprehend thanks to the local context. Take the example of the Czech term “inadaptables” (nepřizpůsobiví), which in the Czech Republic has a hateful, negative character, but which might seem acceptable to a foreigner ignorant of the context. In all languages there are specific ways of expressing such ideas that involve such codes. Moreover, the social media companies need as many users as possible; their business model is built on constantly growing the number of users. It’s better for them if anybody can say anything they like online. We have also noticed in general that the social media companies proceed differently if the speech that people report as objectionable has been authored by a “common user” than if it is coming from a famous politician. The chances are always higher that politicians’ posts will not be removed. Paradoxically, this means it is harder to get hateful posts removed when they come from people who already have more influence over those around them, such as politicians.

Q: Critics could object that you are basically limiting freedom of speech.

Gwendolyn Albert: It’s very important to be aware that the same rules apply online as apply in the real world. In other words, what is illegal in the offline space is also illegal in the online environment. Clear laws apply everywhere that describe how we are and are not to behave. We’re giving the social media companies feedback about how well, in our view, they are taking care of an environment that is being used by several billion people. I would not frequent a business where people are murdering each other, or where crimes are being perpetrated, or where the atmosphere is unsafe. There are many people who are disgusted by the behavior of some of their fellow social media users online, and the social media operators should do something about it. Freedom of speech is not endangered by this at all in democratic countries. On the contrary, what is being endangered is civic coexistence; certain norms about what is considered a civilized exchange of opinions are at risk. It’s actually important that these platforms be held accountable for the consequences of providing their services.

Selma Muhič Dizdarevič: Absolute freedom does not exist in any area of human coexistence. We live together and we are accountable for our actions. Before social media, it was quite a complex endeavor to share one’s opinion with the broader public. People had to expend their own resources to do it; even an anonymous letter-writer had to go through the process of putting the letter in the mail, etc. Currently it’s very easy to share one’s opinion online. If somebody calls for murder online, or disseminates lies that can provoke hateful attacks, then that person is abusing the freedom to speak. Freedom must somehow be mutually agreed to with other people in society, in a community, at a specific historical moment – this is not some kind of revolutionary idea. I’ll give you an example: Naturally, nobody can prevent anybody else from expressing the opinion that, for example, they are afraid of migration, for different reasons, and we can discuss that fear and those reasons. On the other hand, it is not possible to tolerate calls online to “kill all refugees”, or the statement that “I want to see them all die”, etc. That is the line between expressing an opinion and inciting hatred. We want to explain this to people – this is not some kind of counter-attack on those social media users who are already the victims of disinformation and their own fear. Rather, this is an effort to respond as mediators, to give people a chance to understand why they are afraid, where they can find relevant information, and where they can find allies, so they won’t find themselves in extreme positions and won’t end up crossing the line.

Q: When content is actually illegal, i.e., the kinds of posts that could be suspected of breaking the law, what then?

Gwendolyn Albert: It seems the Czech Police have recently begun to take this area seriously, and I am very glad about that. For example, when there was that horrible attack in New Zealand, which was broadcast live over the Internet – which is just horrific – the Czech Police immediately announced they would not allow approval of that terrorist attack to be expressed online without consequences. That’s very good. We have to calm the discussions happening on social media, because this is actually about endangering democracy here, and that is also probably the reason the European Commission wants to monitor the social media companies’ responses to reports of hate speech – this is not about curtailing our freedom of speech, but about defending our democracy.

Q: In addition to reporting posts, whether to the social media operators or even to the police, what other instruments can an ordinary person use to combat hatred on the Internet?

Selma Muhič Dizdarevič: One of our aims is to do our best to find a common way to identify, analyze, report and combat hatred online. As part of this project there are e-learning courses for activists, for online discussion moderators and for the public, where one can learn how to respond to hatred on the Internet. For example, one learns that immediately arguing with the people who are posting is not recommended – we should not ourselves make the same kinds of mistakes as the people we believe are disseminating hatred. It always depends on whom we are talking with. Some people may be disseminating hatred just because they are misinformed, or afraid, or have concerns about something. Naturally, there are also people who are perfectly well informed and who are intentionally disseminating hatred, and there is no point in discussing anything with them.

Q: ROMEA is a member of INACH – can you briefly explain what that is?

Gwendolyn Albert: INACH was established in 2002 and currently brings together two dozen or so organizations in 20 countries worldwide. It is headquartered in Amsterdam and focuses on combating the discrimination and hatred that happen online. The member organizations do their best to raise this subject and to point out the danger associated with disseminating hatred through the Internet. INACH holds an annual meeting, each year in a different place, and the meeting is always focused on a specific subject. This year, on 30 October, that meeting will take place in Prague, and ROMEA will be involved in organizing the accompanying conference, which this year will focus on antigypsyism in cyberspace. In addition to the INACH members, international members of the Romani community will be attending and presenting. We will discuss what the European Commission is or is not doing when it comes to the overlap between the Internet environment and the real world, and together we will be discussing strategies and exchanging experiences as to how important stakeholders from elsewhere in Europe (and elsewhere in the world) view these issues. I believe it is very important for these two worlds to come together – namely, people who are involved with combating cyber-hate in different countries and experts on antigypsyism in Europe.

DOCUMENT – MONITORING REPORT
