An Independent Due Diligence Exercise into Meta’s Human Rights Impact in Israel and Palestine During the May 2021 Escalation


We want our services to be a place where people around the world can express themselves freely and safely. This is especially true in situations where social media can be used to spread hatred, stoke tension, and incite violence on the ground. That’s why we have clear rules against terrorism, hate speech, and incitement to violence, and specialized experts who help develop and enforce those rules. We also have a company-wide human rights policy and a dedicated human rights team that helps us manage human rights risks and better understand how our products and technologies affect different countries and communities.

As part of our commitment to helping create an environment where people can express themselves freely and safely, and following a recommendation from the Oversight Board in September 2021, we asked Business for Social Responsibility (BSR) – an independent organization with expertise in human rights – to conduct a due diligence exercise into the impact of our policies and operations in Israel and Palestine during the May 2021 escalation, including whether those policies were applied, and those operations carried out, without bias. Over the past year, BSR conducted detailed analysis, including engagement with affected groups and rights holders in Israel, Palestine, and around the world, to inform its report. Today we are publishing its findings and our response.

Due diligence insights

As BSR acknowledges in its report, the events of May 2021 surfaced industry-wide, long-standing challenges around content moderation in conflict-affected regions, and the need to protect freedom of expression while reducing the risk of online services being used to spread hate or incite violence. The report also highlights how difficult these issues are to manage given the complex circumstances surrounding the conflict, including its social and historical dynamics, multiple fast-moving violent events, and the actions and activities of terrorist organizations.

Despite these complexities, BSR identified a number of areas of ‘good practice’ in our response. These included our efforts to prioritize measures that reduce the risk of our platforms being used to encourage violence or harm, such as quickly standing up a special operations center to respond to activity across our apps in real time. The center was staffed with teams of experts, including regional specialists and native Arabic and Hebrew speakers, who worked to remove content that violated our policies and to correct enforcement errors as soon as we became aware of them. BSR also noted our efforts to remove content in a manner consistent with international human rights standards.

In addition to these areas of good practice, BSR concluded that different perspectives, nationalities, ethnicities, and religions were well represented on the teams working on this at Meta. They found no evidence of intentional bias on any of these grounds among these employees, nor any evidence that, in developing or applying any of our policies, we sought to benefit or harm any particular community.

However, BSR raised significant concerns about under-enforcement of content, including incitement to violence against Israelis and Jews on our platforms, and identified specific instances in which it considered our policies and operations to have had an unintended adverse impact on Palestinian and Arab communities – primarily on their freedom of expression. BSR made 21 specific recommendations as a result of its due diligence, covering our policies, how those policies are enforced, and our efforts to provide transparency to our users.

Our actions in response to the recommendations

Since receiving the final report, we have carefully reviewed the recommendations to identify where and how we can improve. Our response sets out our commitment to implement 10 of the recommendations, partly implement four, and assess the feasibility of six more. We will take no further action on one recommendation.

As BSR notes, there are no overnight fixes for many of these recommendations. Although we have already made significant changes as a result of this exercise, this process will take time – including time to understand how best to approach some of the recommendations, and whether they are technically feasible.

Here is an update on our work to address some of the key areas identified in the report:

Our policies

BSR recommended that we review our Violence and Incitement and Dangerous Organizations and Individuals (DOI) policies – the rules we have established that prohibit groups such as terrorist, hate, and criminal organizations, as defined in our policies, that proclaim a violent mission or engage in violence, from having a presence on Facebook or Instagram. We evaluate these entities based on their online and offline behavior – most importantly, their ties to violence. We have committed to implementing these recommendations, including launching a review of each of these policy areas to examine how we handle political discussion of banned groups, and how we can do more to address content that glorifies violence. As part of this comprehensive review, we will consult extensively with a wide range of experts, academics, and stakeholders – not only in the region, but around the world.

BSR also recommended that we streamline the system of strikes and penalties we apply when people violate our DOI policy. We have committed to assessing the feasibility of this particular recommendation, but we have already begun work to make this system simpler, more transparent, and more proportionate.

In addition, BSR encouraged us to engage stakeholders and ensure transparency about our US legal obligations in this area. We have partially implemented this recommendation. While we regularly engage a broad range of stakeholders on these policies and how they are enforced, we rely on legal counsel and the relevant sanctioning authorities to understand our specific compliance obligations in this area. We agree that transparency is very important here, and in our Community Standards we provide details about how we define terrorist groups, how we tier them, and how those tiers affect the penalties we apply to people who break our rules.

Enforcing our policies

BSR made a number of recommendations focused on our approach to reviewing content in both Hebrew and Arabic.

BSR recommended that we continue to develop and deploy machine learning classifiers that work in Hebrew. We have committed to implementing this recommendation, and since May 2021 we have launched a Hebrew classifier for “hostile speech” to help us proactively detect more violating Hebrew-language content. We believe this will significantly improve our ability to handle situations like this one, where we see major spikes in violating content.

BSR also recommended that we continue working to build processes that better route potentially violating Arabic-dialect content for content review. We are assessing the feasibility of this recommendation. We have large, diverse teams reviewing Arabic content, with native language skills and an understanding of local cultural context across the region – including in Palestine. We also have systems that use technology to identify the language of content, so we can ensure it is reviewed by the appropriate content reviewers. We are exploring a range of options to improve this process. This includes considering the assignment of more content reviewers with diverse language and dialect capabilities, and working to understand whether we can train our systems to better distinguish between Arabic dialects to help route Arabic content for review.
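The language-based routing described above can be sketched in simplified form. This is purely an illustration, not Meta's actual system: real deployments use trained language-identification models, whereas this sketch only counts characters in the Unicode Hebrew and Arabic blocks, and the queue names are hypothetical.

```python
# Hypothetical sketch of routing content to reviewer queues by detected
# language. NOT Meta's actual pipeline: a real system would use a trained
# language/dialect identification model rather than script counting.

# Hypothetical reviewer queue names, for illustration only.
REVIEW_QUEUES = {
    "ar": "arabic_reviewers",
    "he": "hebrew_reviewers",
    "other": "general_reviewers",
}

def detect_script(text: str) -> str:
    """Guess the dominant script by counting code points in the Unicode
    Hebrew (U+0590-05FF) and Arabic (U+0600-06FF, U+0750-077F) blocks."""
    counts = {"ar": 0, "he": 0}
    for ch in text:
        cp = ord(ch)
        if 0x0590 <= cp <= 0x05FF:
            counts["he"] += 1
        elif 0x0600 <= cp <= 0x06FF or 0x0750 <= cp <= 0x077F:
            counts["ar"] += 1
    best = max(counts, key=counts.get)
    return best if counts[best] > 0 else "other"

def route_for_review(text: str) -> str:
    """Assign a piece of content to the queue for its detected language."""
    return REVIEW_QUEUES[detect_script(text)]
```

Script counting alone cannot distinguish Arabic dialects, which share one script – this is precisely why the report's recommendation requires trained models rather than rules like these.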

BSR’s analysis notes that Facebook and Instagram prohibit antisemitic content under our hate speech policy, which does not allow attacks on people based on their religion or any other protected characteristic. BSR also notes that because we do not currently track the targets of hate speech, we are unable to measure the prevalence of antisemitic content, and it recommended developing a mechanism that would allow us to do so. We agree that it would be valuable to better understand the prevalence of specific types of hate speech, and we have committed to assessing the feasibility of this.

In addition, BSR recommended that we improve the processes we use to update and maintain the keywords associated with designated dangerous organizations, to help prevent hashtags from being blocked in error – such as the bug in May 2021 that temporarily limited people’s ability to see content on the #AlAqsa hashtag page. While we fixed this issue quickly, it should never have happened in the first place. We have already implemented this recommendation and established a process that makes teams of experts at Meta responsible for auditing and approving these keywords.


Transparency

To support all of this, BSR made a series of recommendations focused on helping people better understand our policies and processes.

BSR recommended that we give people specific, granular information when we remove violating content and apply “strikes”. We are implementing this recommendation in part: because some content violates multiple policies at once, there are limits to how precise we can be at scale. We already provide this specific, granular information in most cases, and we are beginning to provide it in more cases where possible.

BSR also recommended that we disclose the number of official requests from government agencies to remove content that does not violate local law, but may violate our Community Standards. We have committed to implementing this recommendation. We already publish a semi-annual report detailing the number of pieces of content, by country, that we restrict for violating local law as a result of a valid legal request. We are now expanding the metrics in this report to include details about content removed for violating our policies following a government request.

The BSR report marks an important step forward for us and our human rights work. Global events are dynamic, so the way we approach safety, security, and freedom of expression must be dynamic too. Human rights assessments like this one are an important way for us to continue improving our products, policies, and operations.

For more information on the additional steps we’ve taken, you can read our response to the full BSR assessment here. We will continue to inform people of the progress we have made in our annual Human Rights Report.
