ABOUT
The Problem

Palestinians cannot freely express themselves on Meta’s platforms. Palestinian campaigns, human rights documentation, and advocacy work are flagged, censored, and taken down. During the May Uprising of 2021 alone, 7amleh documented over 500 digital rights violations, and from 2021 to 2022 it documented almost 1,500 violations. Over 80% of these violations occurred on Meta’s platforms. Much of this censorship happens due to the influence of the Israeli government. The state of Israel sends tens of thousands of takedown requests to social media companies each year, and the companies comply 90% of the time. Moreover, as of 2021, 87% of these requests were directed at Meta. Palestinian accounts are blocked, and Palestinians do not feel comfortable posting opinions on the platform for fear of political and/or criminal repercussions. These examples represent just a few of the ways in which Palestinians' digital rights are violated, and show the role Meta’s platforms have played in hosting these violations. Meanwhile, hate speech against Palestinians in Hebrew is propagated: 7amleh documented a 16% increase in violent discourse towards Arabs from 2019 to 2020, and a further 8% increase from 2020 to 2021.

Why is Palestinian content disproportionately flagged, censored, or otherwise discriminated against? Content moderation errors on Meta’s platforms have been widely documented in recent years. In the Palestinian context, however, overmoderation has a few main root causes: Meta’s treatment of Arabic-language content, opaque internal content moderation policies and processes, and outsized Israeli influence and power.

Arabic Language Content Moderation

Meta overmoderates Arabic-language content. Arabic, one of the most commonly used languages on the platform, has around thirty different dialects, many of them substantially different from one another. Meta primarily outsources its content moderation, and leaked internal documents show that Meta has long been aware that its outsourcing partners do not have staff with the Arabic language skills needed to accurately review content. The same documents noted that, in one example, 90% of content written in the Egyptian dialect was incorrectly categorized, and that reviewers had almost no knowledge of the Iraqi dialect. This has a huge impact on Palestinians as well: during the May 2021 uprising, Meta found that it had mistakenly taken down almost half of the Arabic-language content in cases where users appealed the action. The problem is not simply a staffing one, however. In recent years, Meta has become more reliant on algorithms and automated technologies to moderate its content, which is problematic for a number of reasons. First, Meta does not break down its published data on the accuracy of its automated technologies by country or language. Second, machine-learning algorithms need a pool of correctly moderated examples in order to learn and operate effectively. As discussed above, Meta’s human moderation is inadequate: according to 7amleh, many of the cases escalated during May 2021 were later restored, and these false positives indicate that the company’s algorithms are likely to produce similarly poor results. These issues came to a head in 2021 when content related to Al-Aqsa, the third holiest site in Islam, was mistakenly censored while Palestinians were being harassed at the mosque during Ramadan prayers.

Meta's Opaque Policies and Processes

Meta’s content moderation policies and practices lack transparency and leave community members with limited access to the tech company. Primarily, Meta lacks clarity and transparency as to how it determines what constitutes hate speech and incitement to violence on the platform. The company's Dangerous Individuals and Organizations (DIO) policy is similarly unclear. For starters, the company has never published its DIO list, despite calls from digital rights organizations for transparency, and the only publicly available information comes from a leaked version of the list obtained by The Intercept. If content is flagged as hate speech, incitement to violence, or being connected to and/or in praise of a DIO, it is generally taken off the platform. However, in a politically tumultuous environment, where there is likely to be disagreement on what constitutes hate speech or who should be considered a DIO, the most vulnerable come under greater risk of being targeted. The leaked version of Meta’s DIO list showed that Palestinian civil society organizations were included even though it is unclear how they would be tied to terrorism or violence in any way. This shows that content moderation is not just a technical problem; the overmoderation of Palestinian content on Meta’s platforms is also a political problem. In the wake of September 11th, Islamophobia and the War on Terror have produced discriminatory systems for Muslim and Arab communities alike. This is seen in the overrepresentation of Muslim and Arab individuals and organizations on Meta’s DIO list, which includes over 50 Palestinian organizations or individuals and just 2 Israeli ones. The combination of technical and political discrimination Palestinian content faces on the platform creates fertile ground for Israeli influence.

Israel-Meta Relations

In May of 2021, during the previously mentioned uprisings and military aggression in Gaza, the Israeli Minister of Defense tried to publicly pressure Meta to increase its efforts to take extremist content off the platform. This statement highlights how comfortable Israel is in its relationship with Meta, a comfort developed over years and truly transformed in 2015, when 20,000 Israelis filed a $1 billion lawsuit against the company claiming that it was facilitating and encouraging Palestinian terrorism. The case was later dropped, but Meta subsequently pledged to combat Palestinian incitement on its platforms and intensified its relationship with the Israeli government. 2015 is also when Israel founded the Cyber Unit, which was created “to police online terrorism”. This unit works directly with social media companies to request that content and/or accounts be removed from the platforms, and the companies overwhelmingly listen. In 2019, the Cyber Unit sent around 20,000 requests (with Meta receiving 87%), and social media companies complied 90% of the time. Israel’s relationship with the tech company is further expressed through a number of former high-level Israeli bureaucrats who have obtained decision-making positions at Meta. Leveraging its power to the fullest extent, the Israeli government has also collaborated with Government Organized Non-Governmental Organizations (GONGOs) to organize efforts to flood Meta with requests to take down Palestinian content, on top of the tens of thousands of requests already coming directly from the Israeli government itself. The Act-IL Initiative, for example, has some 26,000 members across 73 countries advocating for the censoring and silencing of Palestinian voices on social media, with Meta’s platforms being a primary target. Lastly, Israel holds significant economic leverage over Meta as well. In 2021 alone, around $319 million was spent on social media ads in Israel, 95% of it on Meta’s platforms. This scale of advertising is greater than what Palestinians, Jordanians, and Egyptians combined spend on social media ads, making the Israeli advertising market one of the largest in the region, and therefore a very important client for Meta.

Where Are We Now?

In recent years, civil society has put substantial pressure on Meta to improve its content moderation policies with regard to Palestine and Israel. In the summer of 2020, it came to light that Meta, then known as Facebook, was considering changing its official hate speech policy to treat the term “Zionist” as a proxy for “Jewish” or “Israeli”. In effect, this would have banned legitimate criticism of the state of Israel and silenced Palestinians trying to raise awareness of the human rights violations being committed by the state of Israel. In response, an international coalition of civil society organizations launched the “Facebook, We Need to Talk” campaign. Through letter writing, protests and rallies at Meta offices, and meetings with representatives from the company, the campaign helped ensure that Meta did not officially change its hate speech policy to protect “Zionist”. However, there is at least some evidence to suggest that, unofficially, criticism and documentation of human rights violations by Israel is still likely to be censored.

In the wake of the May Uprisings of 2021, the issue inspired further action. 7amleh’s documentation of over 500 digital rights violations in a two-week period, made possible by the collaboration of local, regional, and international partners, along with a wave of outspoken Palestinians sharing how they had been silenced and censored, brought the issue to Meta’s independent Oversight Board. Upon review, the board recommended that Meta have a third party review the company’s content moderation policies and practices with regard to Arabic and Hebrew content, as well as content related to Palestine, and stated that the review's findings should be made public. BSR, a management consulting firm specializing in socially responsible business practices, was hired to conduct the review. After months of delay, BSR published the report on September 22nd, and Meta published an official response to it. 7amleh, together with 73 other organizations from around the globe, issued our response to the report a few days later. The main takeaway is that the report provided further proof that Palestinians are being silenced and censored on Meta's platforms.

With the May Uprisings of 2021 and the “Facebook, We Need to Talk” campaign behind us, we are unfortunately seeing continued censoring and silencing of Palestinian voices and narratives on Meta’s platforms. The convergence of Ramadan, Passover, and Easter in 2022 saw yet another wave of violence in Palestine. As Palestinians took to social media to document the Israeli Army’s aggression, their posts were flagged as “Against Community Standards”, while violent Israeli extremists were not stymied in their use of WhatsApp (another Meta platform) to incite and organize violent attacks against Palestinians. These three events, happening years apart from each other, show the importance of continuing to stand in support of Palestinians’ right to freedom of expression!

How Can You Help Us?

CAMPAIGN PARTNERS