Future Tense

How Facebook, Twitter, Instagram Have Failed on Palestinian Speech

At least 243 people, including children participating in a program intended to help them deal with trauma, were killed in Gaza. In Israel, 12 people, including two children, were killed.


During the violence, social media platforms allowed some voices to be heard, while others were silenced. On May 11, Twitter temporarily restricted the account of Palestinian American writer Mariam Barghouti, who was reporting on protests against the expulsion of Palestinians from their homes in East Jerusalem. Twitter later said it was an accident. Twitter was also the platform where Israel’s official account tweeted more than 3,000 rocket emoji and said they represented “the total amount of rockets shot at Israeli civilians. Each one of these rockets is meant to kill.” A user replied with more than 100 children emoji, the number of Palestinian kids killed.

Instagram also made significant mistakes. The platform removed posts and blocked hashtags about the Al-Aqsa Mosque, the place where the conflict started, because its content moderation system mistakenly associated the site with a designation the company reserves for terrorist organizations. Facebook, which owns Instagram, announced Wednesday that it had set up a “special operation center” that will be active 24 hours a day to moderate hate speech and violent content related to the Israeli-Palestinian conflict.

To learn more about how platforms have struggled with posts around the latest Israel-Palestine violence, I talked to Dia Kayyali, a researcher who focuses on the real-life impact of content moderation and related topics. They are the associate director for advocacy at Mnemonic, an organization devoted to documenting human rights violations and international crimes in Syria, Yemen, and Sudan. Our conversation has been edited and condensed for clarity.

Delia Marinescu: Do you think Twitter should have taken action on Israel’s rocket-emoji tweets? If so, what would you have liked to see?

Dia Kayyali: That tweet specifically is offensive, but I don't think it necessarily should get removed. It doesn't clearly constitute a threat, so on its surface I don't think it actually violates Twitter's rules. It's also not clearly spreading misinformation, so it doesn't necessarily need to be labeled. Now, there are other tweets I've seen where they are justifying their actions. For example, I'm sure you saw the YouTube video that got removed.

That’s the sort of thing platforms need to be paying attention to and probably need to be labeling some of that content as misleading.

Last week, Instagram and Twitter blamed technical errors for deleting posts mentioning the possible eviction of Palestinians from East Jerusalem. Instagram said in a statement that an automated update caused content reshared by multiple users to appear as missing, affecting posts on other topics as well, and said it was sorry to hear that Palestinians felt that they had been targeted. Do you buy that explanation? Does this tell us anything about how these content moderation algorithms work more broadly?

I absolutely do not buy this explanation. If you are going to do some sort of update and you know how people are using your platform, that's the moment you choose to do it? Absolutely not. It's also not how they roll out updates. You don't just roll out an update without testing it in different places; every time Facebook or Instagram makes some small change, like changing the reporting flow, they test it in a small set of places first. Even if it were a mistake, which I don't believe, it still reflects total negligence toward the human rights of Palestinians.

So do you think it was censorship?

Yes, I absolutely believe it was censorship. Censorship means government action, so it’s hard to talk about censorship when we’re talking about platforms. But in this case we know how close Facebook’s relationship is with the Israeli government—we know how rapidly they respond to Israeli government requests. Every public indication is that it’s happening because they’re listening to one side of the story and agreeing with it.

Last week, Instagram also removed posts and blocked hashtags about the Al-Aqsa Mosque because its content moderation system mistakenly associated the site with a designation the company reserves for terrorist organizations. How was that possible? How does content moderation work in a situation like this?

This is, again, unfortunately not a new issue. There is an ongoing problem where they associate certain words that are pretty well known in our community with terrorists and violent extremists. The fact that they keep making that claim over and over again, when they are such a huge company with practically limitless resources, is really disingenuous. Al-Aqsa Mosque is not the only phrase that has been treated that way.


Facebook set up a “special operation center” that is active 24 hours a day to moderate hate speech and violent content related to the ongoing Israeli-Palestinian conflict, a senior Facebook executive said Wednesday. What are the most urgent solutions they can implement?

In typical fashion, other advocates working on this and I found out only when they made the announcement.

To be honest with you, hearing about this feels a bit like … what’s the point of having a special operation center if you’re going to continue to have these incredibly close relationships with the Israeli government and not take the other side of the conversation seriously? It doesn’t feel like it’s going to be helpful. It feels like it’ll probably result in more removal.

I think they should be a bit clearer about what they're trying to do with this special operation center. We know that Facebook failed horribly in Myanmar, but once they got a lot of bad press, they did work with Myanmar civil society and instituted some tools and policies that were helpful. Here, they said they are working with native Arabic and Hebrew speakers. Having appropriate language capacity is always an issue, so it's good they will have native speakers, but they need people who actually speak the appropriate dialect. Arabic is not one language.

As far as Facebook is concerned, most Palestinians are Hamas—that’s how they treat content coming from the region. It’s great if they’re putting more resources on this, but it’s not going to help if they’re not doing it conscientiously to address the problems that civil society keeps bringing up.

What's needed is really some co-design of policies and more transparency into the policies: understanding where automation is being used in the process, where automation makes a decision, and where a person does. At what point is the bias creeping in? One specific example is the word Shaheed appearing to trigger their automated removal. That's one of the places where they should be working with civil society to make sure they are not, accidentally or on purpose, capturing things that are going to include a lot of protected speech.

I am curious if the special operation center is intended to rapidly respond to user appeals. That would be something really helpful.
