Case name | our name | source URL | summary | summary / issue | category | date (m) | year | month as number | Meta... | The Board... | principle | outcome | note | no. of user complaints | full abstract | FB category | controversy | click-through link | ID | date
if listed | paste summary, about the case & oversight board's decision | relates to no. of google news stories
Altered Video of President Biden | Biden the Pedophile | https://oversightboard.com/decision/FB-GW8BY1Y3/ | A video edited to portray U.S. President Joe Biden as inappropriately touching his adult granddaughter's chest was not removed from FB. | A video that was edited to make it appear as though U.S. President Joe Biden is inappropriately touching his adult granddaughter's chest, and which is accompanied by a caption describing him as a "pedophile." | Hate Speech | Feb | 2024 | 2 | left video on Facebook | upheld | The content does not violate the company's Manipulated Media policy because the clip does not show President Biden saying words he did not say, and it was not altered through AI. The current policy only prohibits edited videos showing people saying words they did not say (there is no prohibition covering individuals doing something they did not do) and only applies to video created through AI. According to Meta, a key characteristic of "manipulated media" is that it could mislead the "average" user to believe it is authentic and unaltered. In this case, the looping of one scene in the video is an obvious alteration. | post remained online | This case is also categorized as Elections / Misinformation | The Oversight Board has upheld Meta's decision to leave up a video that was edited to make it appear as though U.S. President Joe Biden is inappropriately touching his adult granddaughter's chest, and which is accompanied by a caption describing him as a "pedophile." The Facebook post does not violate Meta's Manipulated Media policy, which applies only to video created through artificial intelligence (AI) and only to content showing people saying things they did not say. Since the video in this post was not altered using AI and it shows President Biden doing something he did not do (not something he didn't say), it does not violate the existing policy. Additionally, the alteration of this video clip is obvious and therefore unlikely to mislead the "average user" of its authenticity, which, according to Meta, is a key characteristic of manipulated media. Nevertheless, the Board is concerned about the Manipulated Media policy in its current form, finding it to be incoherent, lacking in persuasive justification and inappropriately focused on how content has been created, rather than on which specific harms it aims to prevent (for example, to electoral processes). Meta should reconsider this policy quickly, given the number of elections in 2024.

About the Case

In May 2023, a Facebook user posted a seven-second video clip, based on actual footage of President Biden, taken in October 2022, when he went to vote in person during the U.S. midterm elections. In the original footage, he exchanged “I Voted” stickers with his adult granddaughter, a first-time voter, placing the sticker above her chest, according to her instruction, and then kissing her on the cheek. In the video clip, posted just over six months later, the footage has been altered so that it loops, repeating the moment when the president’s hand made contact with his granddaughter’s chest to make it look like he is inappropriately touching her. The soundtrack to the altered video includes the lyric “Girls rub on your titties” from the song “Simon Says” by Pharoahe Monch, while the post’s caption states that President Biden is a “sick pedophile” and describes the people who voted for him as “mentally unwell.” Other posts containing the same altered video clip, but not the same soundtrack or caption, went viral in January 2023.

A different user reported the post to Meta as hate speech, but the report was automatically closed by the company without any review. That user then appealed the decision to Meta, which resulted in a human reviewer deciding the content was not a violation and leaving the post up. Finally, the user appealed to the Board.

The Oversight Board has upheld Meta’s decision to leave up the post.

The Board recommends that Meta:

Reconsider the scope of its Manipulated Media policy to cover audio and audiovisual content, content showing people doing things they did not do (as well as saying things they did not say) and content regardless of how it was created or altered.
Clearly define in a single unified Manipulated Media policy the harms it aims to prevent – beyond users being misled – such as preventing interference with the right to vote and to participate in the conduct of public affairs.
Stop removing manipulated media when no other policy violation is present and instead apply a label indicating the content is significantly altered and could mislead. Such a label should be attached to the media (for example, at the bottom of a video) rather than the entire post and be applied to all identical instances of that media on Meta’s platforms.

* Case summaries provide an overview of cases and do not have precedential value.
Hate Speech / Bullying and harassment / Manipulated media | 250 | https://oversightboard.com/decision/FB-GW8BY1Y3/ | 77 | Feb 2024
Fruit juice diet | The Extreme Fruit juice diet | https://www.oversightboard.com/decision/FB-4294T386/ | A woman shares her very positive first-hand results of a fruit juice-only diet. | Two videos were posted to the same Facebook Page, described as featuring content on life, culture and food in Thailand. In both, a woman is interviewed by a man about her experience of following a diet only consisting of fruit juice. The conversations take place in Italian.

In the first video, the woman says that she has experienced increased mental focus, improved skin and bowel movement, happiness and a "feeling of lightness" since starting the diet, while she also shares that she previously suffered from skin problems and swollen legs. She brings up the issue of anorexia, but states that her weight has normalised, after she initially lost more than 10 kilograms (22 pounds) due to her dietary changes. Around five months later, the man interviews the woman again in a second video, asking how she feels almost a year into observing this fruit juice-only diet. She responds by saying she looks young for her age, that she has not lost any more weight except for "four kilos of impurities", and she encourages him to try the diet. She also states that she will become a "fruitarian" upon breaking her fast, but that she is thinking about starting a "pranic journey", which, according to her, means living "on energy" in place of eating or drinking regularly.
misc | Oct | 2023 | 10 | left the content on Facebook, after both posts were reported multiple times for violating Facebook's Suicide and Self-Injury Community Standard, and following human review that assessed the content as non-violating. | upheld | The Board finds that neither of these posts violates the Suicide and Self-Injury Community Standard because they do not provide "instructions for drastic and unhealthy weight loss when shared together with terms associated with eating disorders", and do not "promote, encourage, coordinate or provide instructions for eating disorders". | both posts remained online | Both the content creator's Facebook Page on which the two videos were posted and the Facebook Page of the woman shown in the videos are part of Meta's Partner Monetisation Programme. This means that the content creator and presumably the woman being interviewed earn money from posts on their Pages, when Meta displays ads on their content. For this to happen, the Pages would have passed an eligibility check and the content would have had to comply with both Meta's Community Standards and its Content Monetisation Policies. Within its Content Monetisation Policies, Meta prohibits certain categories from being monetised on its platforms, even if they do not violate the Community Standards. | The Oversight Board has upheld Meta's decisions to keep up two posts in which a woman shares her first-hand experience of a fruit juice-only diet. The Board agrees that neither violates Facebook's Suicide and Self-Injury Community Standard because they do not "provide instructions for drastic and unhealthy weight loss", nor do they "promote" or "encourage" eating disorders. However, as both pages involved in these two cases were part of Meta's Partner Monetisation Programme, the Board recommends that the company restrict "extreme and harmful diet-related content" in its Content Monetisation Policies.

About the cases

Between late 2022 and early 2023, two videos were posted to the same Facebook Page, described as featuring content on life, culture and food in Thailand. In both, a woman is interviewed by a man about her experience of following a diet only consisting of fruit juice. The conversations take place in Italian.

In the first video, the woman says that she has experienced increased mental focus, improved skin and bowel movement, happiness and a "feeling of lightness" since starting the diet, while she also shares that she previously suffered from skin problems and swollen legs. She brings up the issue of anorexia, but states that her weight has normalised, after she initially lost more than 10 kilograms (22 pounds) due to her dietary changes. Around five months later, the man interviews the woman again in a second video, asking how she feels almost a year into observing this fruit juice-only diet. She responds by saying she looks young for her age, that she has not lost any more weight except for "four kilos of impurities", and she encourages him to try the diet. She also states that she will become a "fruitarian" upon breaking her fast, but that she is thinking about starting a "pranic journey", which, according to her, means living "on energy" in place of eating or drinking regularly.

Between them, the posts were viewed more than 2,000,000 times and received over 15,000 comments. The videos share details of the woman's Facebook Page, which experienced a significant increase in interactions following the second post.

After both posts were reported multiple times for violating Facebook's Suicide and Self-Injury Community Standard, and following human review that assessed the content as non-violating, they remained on Facebook. A separate user in each case then appealed Meta's decision to the Board.

Both the content creator's Facebook Page on which the two videos were posted and the Facebook Page of the woman shown in the videos are part of Meta's Partner Monetisation Programme. This means that the content creator and presumably the woman being interviewed earn money from posts on their Pages, when Meta displays ads on their content. For this to happen, the Pages would have passed an eligibility check and the content would have had to comply with both Meta's Community Standards and its Content Monetisation Policies. Within its Content Monetisation Policies, Meta prohibits certain categories from being monetised on its platforms, even if they do not violate the Community Standards.

The Oversight Board upholds Meta's decisions to leave up the two posts.

The Board recommends that Meta:

Restrict extreme and harmful diet-related content in its Content Monetisation Policies to avoid creating financial incentives for influential users to create harmful content.

* Case summaries provide an overview of cases and do not have precedential value.
Suicide and self-injury | 100 | https://www.oversightboard.com/decision/FB-4294T386/ | 76 | Oct 2023
Communal Violence in Indian State of Odisha | Violence between Hindus & Muslims in India | https://www.oversightboard.com/decision/FB-515JVE4X/ | A video of communal violence in the Indian state of Odisha was removed after a request from local law enforcement. | A Facebook user posted a video of an event from the previous day that depicts a religious procession in Sambalpur in the Indian state of Odisha related to the Hindu festival of Hanuman Jayanti. The video caption reads "Sambalpur," which is a town in Odisha, where communal violence broke out between Hindus and Muslims during the festival. | Violence / Incitement / Graphic Content | Nov | 2023 | 11 | removed the content after receiving a request from Odisha law enforcement to remove an identical video, posted by another user with a different caption. Meta found that the post violated the spirit of its Violence and Incitement Community Standard and added the video to a Media Matching Service bank, which locates and flags for possible action content that is identical or nearly identical to previously flagged photos, videos, or text. | upheld | The post violated the Violence and Incitement Community Standard, which prohibits "content that constitutes a credible threat to public or personal safety." The majority of the Board finds that given the ongoing violence in Odisha at the time, and the fact that no policy exceptions applied, the content posed a serious and likely risk of furthering violence. A minority of the Board believes that the post could be properly removed under Meta's Violence and Incitement Community Standard, but for a different reason. As the video depicted a past incident of incitement with no contextual clues indicating that a policy exception should apply, and similar content was being shared with the aim of inciting violence, Meta was justified in removing the content. | post was removed | The Oversight Board has upheld Meta's decision to remove a Facebook post containing a video of communal violence in the Indian state of Odisha. The Board found that the post violated Meta's rules on violence and incitement. The majority of the Board also concludes that Meta's decision to remove all identical videos across its platforms was justified in the specific context of heightened tensions and ongoing violence in the state of Odisha. While the content in this case was not covered by any policy exceptions, the Board urges Meta to ensure that its Violence and Incitement Community Standard allows content that "condemns or raises awareness of violent threats."

About the Case

In April 2023, a Facebook user posted a video of an event from the previous day that depicts a religious procession in Sambalpur in the Indian state of Odisha related to the Hindu festival of Hanuman Jayanti. The video caption reads “Sambalpur,” which is a town in Odisha, where communal violence broke out between Hindus and Muslims during the festival.

The video shows a procession crowd carrying saffron-colored flags, associated with Hindu nationalism, and chanting “Jai Shri Ram” - which can be translated literally as “Hail Lord Ram” (a Hindu god). In addition to religious contexts where the phrase is used to express devotion to Ram, the expression has been used in some circumstances to promote hostility against minority groups, especially Muslims. The video then zooms into a person standing on the balcony of a building along the route of the procession who is shown throwing a stone at the procession. The crowd then pelts stones towards the building amidst chants of “Jai Shri Ram,” “bhago” (which can be translated as “run”) and “maro maro” (which can be translated as “hit” or “beat”). The content was viewed about 2,000 times and received fewer than 100 comments and reactions.

Following the violence that broke out during the religious procession shown in the video, the Odisha state government shut down internet services, blocked social media platforms, and imposed a curfew in several areas of Sambalpur. In the context of the violence that broke out during the procession, shops were reportedly set on fire and a person was killed.

Shortly after the events depicted in the video, Meta received a request from Odisha law enforcement to remove an identical video, posted by another user with a different caption. Meta found that the post violated the spirit of its Violence and Incitement Community Standard and added the video to a Media Matching Service bank. This locates and flags for possible action content that is identical or nearly identical to previously flagged photos, videos, or text.

Meta informed the Board that the Media Matching Service bank was set up to globally remove all instances of the video, regardless of the caption, given the safety risks posed by this content. This blanket removal applied to all identical videos, even if they fell within Meta’s exceptions for awareness raising, condemnation, and news reporting. The Board noted that, given the settings of the Media Matching Service bank, many pieces of content identical to this video have been removed in the months that followed the events in Sambalpur, Odisha.
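
To illustrate the mechanism described above, the sketch below models a Media Matching Service bank as a store of perceptual hashes of previously actioned videos, against which new uploads are checked for identical or near-identical matches. This is a minimal, hypothetical sketch: the class names, hash values and distance threshold are assumptions for illustration, not Meta's actual implementation.

```python
from dataclasses import dataclass, field


def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two integer perceptual hashes."""
    return bin(a ^ b).count("1")


@dataclass
class MediaMatchingBank:
    """A bank of hashes of previously actioned media, with one configured action."""
    action: str                     # e.g. "remove" - applied to every match, regardless of caption
    max_distance: int = 4           # how close a hash must be to count as "nearly identical"
    banked_hashes: set = field(default_factory=set)

    def add(self, media_hash: int) -> None:
        """Bank the hash of a video that reviewers already found violating."""
        self.banked_hashes.add(media_hash)

    def check(self, media_hash: int):
        """Return the bank's configured action if the upload matches a banked video, else None."""
        for banked in self.banked_hashes:
            if hamming_distance(media_hash, banked) <= self.max_distance:
                return self.action
        return None


# Usage: once the hash of the originally flagged video is banked with a global "remove"
# action, identical or near-identical re-uploads are removed whatever their caption says.
bank = MediaMatchingBank(action="remove")
bank.add(0x9F3A5C7E21B4D608)            # hypothetical hash of the flagged video
print(bank.check(0x9F3A5C7E21B4D60A))   # near-identical re-upload -> "remove"
print(bank.check(0x0123456789ABCDEF))   # unrelated video -> None
```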

Through the Media Matching Service bank, Meta identified the content at issue in this case and removed it, citing its rules prohibiting “[c]alls for high-severity violence including […] where no target is specified but a symbol represents the target and/or includes a visual of an armament or method that represents violence.”

The Oversight Board upholds Meta’s decision to remove the content.

The Board reiterates recommendations from previous cases that Meta:

- Ensure that the Violence and Incitement Community Standard allows content containing statements with “neutral reference to a potential outcome of an action or an advisory warning,” and content that “condemns or raises awareness of violent threats.”
- Provide more clarity to users and explain in the landing page of the Community Standards, in the same way the company does with the newsworthiness allowance, that allowances to the Community Standards may be made when their rationale, and Meta’s values, demand a different outcome than a strict reading of the rules. The Board also reiterates its prior recommendation to Meta to include a link to a Transparency Center page which provides information about the “spirit of the policy” allowance.

* Case summaries provide an overview of cases and do not have precedential value.
Violence / Incitement | 75 | https://www.oversightboard.com/decision/FB-515JVE4X/ | 69 | Nov 2023
Armenian Prisoners of War Video | Abuse of Armenian POWs | https://oversightboard.com/decision/FB-YLRV35WD/ | A video depicting Azerbaijani soldiers torturing Armenian soldiers was left up due to "newsworthiness". | In October 2022, a Facebook user posted a video on a Page that identifies itself as documenting alleged war crimes committed by Azerbaijan against Armenians in the context of the Nagorno-Karabakh conflict. This conflict reignited in September 2020 and escalated into fighting in Armenia in September 2022, leaving thousands dead and hundreds of people missing. | Violence / Incitement / Graphic Content | June | 2023 | 6 | applied the newsworthiness allowance to allow the content to remain on Facebook, and added a "mark as disturbing" warning screen to the video. | upheld | Posting violent content is allowed in the context of conflict on the basis of "newsworthiness". | post remained on FB with a content warning | The Board agrees with Meta that the public interest value in keeping the content on the platform with a warning screen outweighed the risk to the safety and dignity of the prisoners of war. | 0 | The Oversight Board has upheld Meta's decision to leave up a Facebook post that included a video depicting identifiable prisoners of war and add a "mark as disturbing" warning screen to the video. The Board found that Meta correctly applied a newsworthiness allowance to the post, which would have otherwise been removed for violating its Coordinating Harm and Promoting Crime Community Standard. However, the Board recommends that Meta strengthen internal guidance around reviewing this type of content and develop a protocol for preserving and sharing evidence of human rights violations with the appropriate authorities.

About the case

In October 2022, a Facebook user posted a video on a Page that identifies itself as documenting alleged war crimes committed by Azerbaijan against Armenians in the context of the Nagorno-Karabakh conflict. This conflict reignited in September 2020 and escalated into fighting in Armenia in September 2022, leaving thousands dead and hundreds of people missing.

The video begins with a user-inserted age warning that it is only suitable for people over the age of 18, and an English text, which reads "Stop Azerbaijani terror. The world must stop the aggressors." The video appears to depict a scene where prisoners of war are being captured.

It shows several people who appear to be Azerbaijani soldiers searching through rubble, with their faces digitally obscured with black squares. They find people in the rubble who are described in the caption as Armenian soldiers, whose faces are left unobscured and identifiable. Some appear to be injured, others appear to be dead. The video ends with an unseen person, potentially the person filming, continuously shouting curse words and using abusive language in Russian and Turkish at an injured soldier sitting on the ground.

In the caption, which is in English and Turkish, the user states that the video depicts Azerbaijani soldiers torturing Armenian prisoners of war. The caption also highlights the July 2022 gas deal between the European Union and Azerbaijan to double gas imports from Azerbaijan by 2027.
Coordinating Harm and Publicising Crime | 150 | https://oversightboard.com/decision/FB-YLRV35WD/ | 38 | June 2023
Sri Lanka Pharmaceuticals | Should people be encouraged to donate needed medicines? | https://oversightboard.com/decision/FB-CZHY85JC/ | A request to send pharmaceuticals to Sri Lanka during the country's crisis was allowed to circulate on FB despite contravening rules around pharmaceuticals. | A post requesting pharmaceutical donations to Sri Lanka during the country's financial crisis | misc | March | 2022 | 3 | decided to allow content seeking to donate, gift or ask for pharmaceutical drugs in Sri Lanka between 27 April and 10 November 2022, due to the severe political, economic and healthcare crisis in Sri Lanka at that time | upheld | Requesting pharmaceutical donations violates the Restricted Goods and Services Community Standard, which prohibits content that asks for pharmaceutical drugs. | post remained on FB | Since 10 November 2022, when the allowance ended, Meta reviews any content attempting to donate, gift or ask for pharmaceutical drugs in Sri Lanka against the Restricted Goods and Services policy and enforces the policy without the allowance. | 0 | The Oversight Board has upheld Meta's decision to leave up a Facebook post asking for donations of pharmaceutical drugs to Sri Lanka during the country's financial crisis. However, the Board has found that secret, discretionary policy exemptions are incompatible with Meta's human rights responsibilities, and has made recommendations to increase transparency and consistency around the "spirit of the policy" allowance. This allowance permits content where a strict reading of a policy produces an outcome that is at odds with that policy's intent.

About the case

In April 2022, an image was posted on the Facebook Page of a medical trade union in Sri Lanka, asking for people to donate drugs and medical products to the country, and providing a link for them to do so.

At the time, Sri Lanka was in the midst of a severe political and financial crisis, which emptied the country's foreign currency reserves. As a result, Sri Lanka, which imports 85% of its medical supplies, did not have the funds to import drugs. Doctors reported that hospitals were running out of medicine and essential supplies, and said they feared an imminent health catastrophe.

The Meta teams responsible for monitoring risk during the Sri Lanka crisis identified the content in this case. The company found that the post violated its Restricted Goods and Services Community Standard, which prohibits content that asks for pharmaceutical drugs, but applied a scaled "spirit of the policy" allowance.

"Spirit of the policy" allowances permit content where the policy rationale, and Meta's values, demand a different outcome to a strict reading of the rules. Scaled allowances apply to entire categories of content, rather than just individual posts. The rationale for the Restricted Goods and Services policy includes "encouraging safety". Meta referred this case to the Board.
Undefined | 75 | https://oversightboard.com/decision/FB-CZHY85JC/ | 36 | March 2022
Tigray Communication Affairs Bureau | Violence in Tigray | https://oversightboard.com/decision/FB-E1154YLY/ | A post threatening violence against Ethiopia's Prime Minister and government forces was removed. | During a period of increasing violence in Ethiopia, a post on the official Page of the Tigray Regional State's Communication Affairs Bureau encouraged the national army to turn against the prime minister and his supporters. The post also urged the government forces to surrender and said they would die if they refused. | Violence / Incitement / Graphic Content | Feb | 2022 | 2 | removed the post because the content violated Meta's Community Standard on violence and incitement and removing it was in line with its human rights responsibilities. The post urged the national army to "turn its gun" towards "Abiy Ahmed" – Ethiopia's PM – and threatened government forces with death. | upheld | The conflict in Ethiopia is marked by sectarian violence, so there was a high risk the post could lead to more violence; removal was necessary under the Community Standard on violence and incitement. | Post was removed | 7 | The Oversight Board has upheld Meta's decision to remove a post threatening violence in the conflict in Ethiopia. The content violated Meta's Community Standard on violence and incitement and removing it is in line with the company's human rights responsibilities. Overall, the Board found that Meta must do more to meet its human rights responsibilities in conflict situations and makes policy recommendations to address this.

About the case

On 4 February 2022, Meta referred a case to the Board concerning content posted on Facebook during a period of escalating violence in the conflict in Ethiopia, where Tigrayan and government forces have been fighting since November 2020.

The post appeared on the official page of the Tigray Regional State's Communication Affairs Bureau and was viewed more than 300,000 times. It discusses the losses suffered by federal forces and encourages the national army to "turn its gun" towards the "Abiy Ahmed group". Abiy Ahmed is Ethiopia's Prime Minister. The post also urges government forces to surrender and says they will die if they refuse.

After being reported by users and identified by Meta's automated systems, the content was assessed by two Amharic-speaking reviewers. They determined that the post did not violate Meta's policies and left it on the platform.

At the time, Meta was operating an Integrity Product Operations Centre (IPOC) for Ethiopia. IPOCs are used by Meta to improve moderation in high-risk situations. They operate for a short time (days or weeks) and bring together experts to monitor Meta's platforms and address any abuse. Through the IPOC, the post was sent for expert review, found to violate Meta's violence and incitement policy, and then removed two days later.
Violence and Incitement | 100 | https://oversightboard.com/decision/FB-E1154YLY/ | 33 | Feb 2022
India Sexual Harassment Video | Is footage of sexual assault newsworthy? | https://oversightboard.com/decision/IG-KFLY3526/ | An Instagram post containing a video of a woman being sexually assaulted by a group of men was removed. | A video from India showing an Adivasi woman (referred to as a 'tribal woman' in the video) being assaulted by a group of men, shared on an Instagram account focused on Dalit perspectives. | Nudity & sexual activity | March | 2022 | 3 | removed the post because it shows a group of men sexually assaulting a "tribal woman", which violates the Adult Sexual Exploitation policy. On review, a "newsworthiness allowance" was applied to the post, restoring the content with a warning screen and an 18+ limit. | upheld | Dalit and Adivasi people (especially women) suffer discrimination and crime in India, and the content raises awareness of this; the post has public interest value, does not contain explicit content or nudity, and the victim is unidentifiable. | Post was restored with a warning screen | 11 | The Board has upheld Meta's decision to restore a post to Instagram containing a video of a woman being sexually assaulted by a group of men. The Board has found that Meta's "newsworthiness allowance" is inadequate in resolving cases such as this at scale and that the company should introduce an exception to its Adult Sexual Exploitation policy.

About the case

In March 2022, an Instagram account describing itself as a platform for Dalit perspectives posted a video from India showing a woman being assaulted by a group of men. "Dalit" people have previously been referred to as "untouchables", and have faced oppression under the caste system. The woman's face is not visible in the video and there is no nudity. The text accompanying the video states that a "tribal woman" was sexually assaulted in public, and that the video went viral. "Tribal" refers to indigenous people in India, also referred to as Adivasi.

After a user reported the post, Meta removed it for violating the Adult Sexual Exploitation policy, which prohibits content that "depicts, threatens or promotes sexual violence, sexual assault or sexual exploitation".

A Meta employee flagged the content removal via an internal reporting channel upon learning about it on Instagram. Meta's internal teams then reviewed the content and applied a "newsworthiness allowance". This allows otherwise violating content to remain on Meta's platforms if it is newsworthy and in the public interest. Meta restored the content, placing the video behind a warning screen which prevents anyone under the age of 18 from viewing it, and later referred the case to the Board.
Sexual Exploitation of Adults | 50 | https://oversightboard.com/decision/IG-KFLY3526/ | 29 | March 2022
South Africa Slur | South Africa slur | https://www.oversightboard.com/decision/FB-TYE2766G | A post claiming white minority control in post-Apartheid South Africa, using racial slurs, was removed. | A FB post that discussed "multi-racialism" in South Africa. It made the claim that white people are still in control post-Apartheid, and used racial slurs to describe the current state of affairs for Black people. | Hate Speech | Sept | 2021 | 9 | Post was removed for violating the policy prohibiting the use of slurs targeted at people based on race, ethnicity and/or national origin. The K-word and N-word are on FB's list of prohibited slurs for the Sub-Saharan market. | upheld | The K-word and N-word have discriminatory uses, and the K-word is particularly harmful in the South African context. | post was removed | While the user's post discussed relevant and challenging socio-economic and political issues in South Africa, the user racialised this critique by choosing the most severe terminology possible in the country. | 6 | The Oversight Board has upheld Facebook's decision to remove a post discussing South African society under its Hate Speech Community Standard. The Board found that the post contained a slur which, in the South African context, was degrading, excluding and harmful to the people it targeted.

About the case

In May 2021, a Facebook user posted in English in a public group that described itself as focused on unlocking minds. The user's Facebook profile picture and banner photo each depict a black person. The post discussed "multi-racialism" in South Africa, and argued that poverty, homelessness and landlessness have increased for black people in the country since 1994.

It stated that white people hold and control the majority of the wealth, and that wealthy black people may have ownership of some companies, but not control. It also stated that if "you think" sharing neighbourhoods, language and schools with white people makes you "deputy-white" then "you need to have your head examined." The post then concluded with "[y]ou are" a "sophisticated slave", "a clever black", "'n goeie kaffir" or "House nigger" (hereafter redacted as "k***ir" and "n***er").
Hate Speech | 50 | https://www.oversightboard.com/decision/FB-TYE2766G | 25 | Sept 2021
Former President Trump's Suspension | Former President Trump's Suspension | https://oversightboard.com/decision/FB-691QAMHJ/ | Two of Donald Trump's posts promoting claims of electoral fraud and calling for action were taken down. | Former President Trump posted a video on FB and Instagram during the Capitol riots on 6 Jan 2021, which was taken down, and then wrote a post. Both were removed, and he was blocked from posting for 24 hours. FB then blocked him indefinitely. FB also found 5 other violations on the DJT FB page. | Dangerous individuals / organisations | Jan | 2021 | 1 | Took down his posts and suspended his account indefinitely because they violated FB's Community Standards on dangerous individuals and organisations. | upheld | The Board found that 2 of his posts on 6 Jan violated FB and Instagram community standards prohibiting praise or support of people engaged in violence. It determined his posts about electoral fraud created a clear and immediate risk of harm. | Mr. Trump remained suspended, but the Board asked FB to define Trump's suspension and to look at it again in 6 months. | The Board stated that it is not always useful to draw a firm distinction between political leaders and other influential users, recognising that other users with large audiences can also contribute to serious risks of harm. | 9666 | The Board has upheld Facebook's decision, on 7 January 2021, to restrict then-President Donald Trump's access to posting content on his Facebook Page and Instagram account.

However, it was not appropriate for Facebook to impose the indeterminate and standardless penalty of indefinite suspension. Facebook's normal penalties include removing the violating content, imposing a time-bound period of suspension, or permanently disabling the Page and account.

The Board insists that Facebook review this matter to determine and justify a proportionate response that is consistent with the rules that are applied to other users of its platform. Facebook must complete its review of this matter within six months of the date of this decision. The Board also made policy recommendations for Facebook to implement in developing clear, necessary and proportionate policies that promote public safety and respect freedom of expression.

About the case

Elections are a crucial part of democracy. On 6 January 2021, during the counting of the 2020 electoral votes, a mob forcibly entered the Capitol Building in Washington, DC. This violence threatened the constitutional process. Five people died and many more were injured during the violence. During these events, then-President Donald Trump posted two pieces of content.

At 16:21 Eastern Standard Time, as the riot continued, Mr Trump posted a video on Facebook and Instagram:

I know your pain. I know you're hurt. We had an election that was stolen from us. It was a landslide election, and everyone knows it, especially the other side, but you have to go home now. We have to have peace. We have to have law and order. We have to respect our great people in law and order. We don't want anybody hurt. It's a very tough period of time. There's never been a time like this where such a thing happened; where they could take it away from all of us, from me, from you, from our country. This was a fraudulent election, but we can't play into the hands of these people. We have to have peace. So go home. We love you. You're very special. You've seen what happens. You see the way others are treated that are so bad and so evil. I know how you feel. But go home and go home in peace.

At 17:41 Eastern Standard Time, Facebook removed this post for violating its Community Standard on dangerous individuals and organisations.

At 18:07 Eastern Standard Time, as police were securing the Capitol, Mr Trump posted a written statement on Facebook:

These are the things and events that happen when a sacred landslide election victory is so unceremoniously and viciously stripped away from great patriots who have been badly and unfairly treated for so long. Go home with love in peace. Remember this day forever!

At 18:15 Eastern Standard Time, Facebook removed this post for violating its Community Standard on dangerous individuals and organisations. It also blocked Mr Trump from posting on Facebook or Instagram for 24 hours.

On 7 January, after further reviewing Mr Trump's posts, his recent communications off Facebook and additional information about the severity of the violence at the Capitol, Facebook extended the block "indefinitely and for at least the next two weeks until the peaceful transition of power is complete".

On 20 January, with the inauguration of President Joe Biden, Mr Trump ceased to be the president of the United States.

On 21 January, Facebook announced that it had referred this case to the Board. Facebook asked whether it had correctly decided on 7 January to prohibit Mr Trump's access to posting content on Facebook and Instagram for an indefinite amount of time. The company also requested recommendations about suspensions when the user is a political leader.

In addition to the two posts on 6 January, Facebook previously found five violations of its Community Standards in organic content posted on the Donald J. Trump Facebook Page, three of which were within the last year. While the five violating posts were removed, no account-level sanctions were applied.
Dangerous Individuals and Organisations | 450 | https://oversightboard.com/decision/FB-691QAMHJ/ | 24 | Jan 2021
COVID-19 Lockdowns in Brazil | "Lockdowns are ineffective" | https://oversightboard.com/decision/FB-B6NGYREK/ | A Brazilian medical council's post claiming that the World Health Organisation condemned lockdowns was kept up. | The FB page of a state-level medical council posted a photo of a written notice on COVID-19 reduction measures that criticised the Brazilian government's response to the COVID pandemic. | Violence / Incitement / Graphic Content | March | 2021 | 3 | decided that the post did not violate the Community Standards and kept the post up | upheld | The post contained some inaccurate information, but did not create a risk of "imminent harm". | post remained on FB | 30 | The Oversight Board has upheld Facebook's decision to leave up a post by a state-level medical council in Brazil, which claimed that lockdowns are ineffective and had been condemned by the World Health Organization (WHO).

The Board found that Facebook's decision to keep the content on the platform was consistent with its content policies. The Board found that the content contained some inaccurate information, which raises concerns considering the severity of the pandemic in Brazil and the council's status as a public institution. However, the Board found that the content did not create a risk of imminent harm and should, therefore, stay on the platform. Finally, the Board emphasised the importance of measures other than removal to counter the spread of COVID-19 misinformation to be adopted under certain circumstances, such as those in this case.

About the case

In March 2021, the Facebook Page of a state-level medical council in Brazil posted a picture of a written notice on measures to reduce the spread of COVID-19, entitled "Public note against lockdown".

The notice claims that lockdowns are ineffective, against fundamental rights in the Constitution and condemned by the WHO. It includes an alleged quote from Dr David Nabarro, a WHO special envoy for COVID-19, stating that "the lockdown does not save lives and makes poor people much poorer." The notice claims that the Brazilian state of Amazonas had an increase in deaths and hospital admissions after lockdown as evidence of the failure of lockdown restrictions. The notice claims that lockdowns would lead to greater mental disorders, alcohol and drug abuse, and economic damage, amongst other things. It concludes that effective preventative measures against COVID-19 include education campaigns about hygiene, masks, social distancing, vaccination and government monitoring – but never lockdowns.

The Page has over 10,000 followers. The content was viewed around 32,000 times and shared around 270 times. No users reported the content. Facebook took no action against the content and referred the case to the Board. The content remains on the platform.
Violence and Incitement | 50 | https://oversightboard.com/decision/FB-B6NGYREK/ | 20 | March 2021
Alleged Crimes in Raya Kobo | Alleged atrocities in Tigray | https://oversightboard.com/decision/FB-MP4ZC4CC/ | A post with unverified allegations about the TPLF's involvement in atrocities was removed. | A post was created that contained allegations against the Tigray People's Liberation Front (TPLF) and Tigrayan civilians, presenting an unverified rumour as if it were established fact. | Hate Speech | July | 2021 | 7 | Originally removed the post for violating the Hate Speech Community Standard. It was later restored, with Meta claiming that the post did not target the Tigray ethnicity. | upheld | The Board applied FB's Violence and Incitement Community Standard to the post – the content was an unverifiable rumour alleging that an ethnic group committed mass atrocities, which was determined to be dangerous and to significantly increase the risk of imminent violence. | post was removed | 23 | The Oversight Board has upheld Meta's original decision to remove a post alleging the involvement of ethnic Tigrayan civilians in atrocities in Ethiopia's Amhara region. However, as Meta restored the post after the user's appeal to the Board, the company must once again remove the content from the platform.

About the case

In late July 2021, a Facebook user from Ethiopia posted in Amharic. The post included allegations that the Tigray People's Liberation Front (TPLF) killed and raped women and children, and looted the properties of civilians in Raya Kobo and other towns in Ethiopia's Amhara region. The user also claimed that ethnic Tigrayan civilians assisted the TPLF with these atrocities. The user claims in the post that he received the information from the residents of Raya Kobo. The user ended the post with the following words "we will ensure our freedom through our struggle".

After Meta's automatic Amharic language systems flagged the post, a content moderator determined that the content violated Facebook's Hate Speech Community Standard and removed it. When the user appealed this decision to Meta, a second content moderator confirmed that the post violated Facebook's Community Standards. Both moderators belonged to Meta's Amharic content review team.

The user then submitted an appeal to the Oversight Board. After the Board selected this case, Meta identified its original decision to remove the post as incorrect and restored it on 27 August. Meta told the Board that it usually notifies users that their content has been restored on the day they restore it. However, due to a human error, Meta informed this user that their post had been restored on 30 September – over a month later. This notification happened after the Board asked Meta whether it had informed the user that their content had been restored.
Hate Speech | 75 | https://oversightboard.com/decision/FB-MP4ZC4CC/ | 15 | July 2021
Sudan Graphic Video | Violent imagery from Sudan | https://oversightboard.com/decision/FB-AP0NSBVC/ | A video depicting violence against a civilian in Sudan was removed, then restored with a warning screen. | A graphic video was posted depicting a civilian victim of violence in Sudan, with a caption calling on people not to trust the military and hashtags referencing civil disobedience and military abuses. | Violence / Incitement / Graphic Content | Dec | 2021 | 12 | first removed the post for violating FB's graphic content Community Standard. After an appeal, Meta issued a newsworthiness allowance that exempted the post from removal, and it was restored with a warning screen on the video. | upheld | The Board agreed with Meta's decision, but noted that the graphic content policy is unclear on how users can share graphic content to raise awareness or document abuses. | post was restored | 5 | The Oversight Board has upheld Meta's decision to restore a Facebook post depicting violence against a civilian in Sudan. The content raised awareness of human rights abuses and had significant public interest value. The Board recommended that Meta add a specific exception on raising awareness of or documenting human rights abuses to the Violent and Graphic Content Community Standard.

About the case

On 21 December 2021, Meta referred a case to the Board concerning a graphic video which appeared to depict a civilian victim of violence in Sudan. The content was posted to the user's Facebook profile page following the military coup in the country on 25 October 2021.

The video shows a person lying next to a car with a significant head wound and a visibly detached eye. Voices can be heard in the background saying in Arabic that someone has been beaten and left in the street. A caption, also in Arabic, calls on people to stand together and not to trust the military, with hashtags referencing documenting military abuses and civil disobedience.

After being identified by Meta's automated systems and reviewed by a human moderator, the post was removed for violating Facebook's Violent and Graphic Content Community Standard. After the user appealed, however, Meta issued a newsworthiness allowance exempting the post from removal on 29 October 2021. Due to an internal miscommunication, Meta did not restore the content until nearly five weeks later. When Meta restored the post, it placed a warning screen on the video.
Violent and Graphic Content | 50 | https://oversightboard.com/decision/FB-AP0NSBVC/ | 12 | Dec 2021
Zwarte Piet | Traditional Dutch blackface character "Zwarte Piet" | https://www.oversightboard.com/decision/FB-S6NRTDAJ/ | A video featuring white people in blackface was removed. | A Dutch video featuring white people in blackface representing the Dutch character Zwarte Piet (Black Pete). | Hate Speech | Dec | 2020 | 12 | removed the post as it contravened the prohibition of caricatures of Black people in blackface, unless shared to condemn the practice or raise awareness | upheld | Posting caricatures of Black people in the form of blackface. | post was removed | - | The Oversight Board has upheld Facebook's decision to remove specific content that violated the express prohibition on posting caricatures of Black people in the form of blackface, contained in its Hate Speech Community Standard.

About the case

On 5 December 2020, a Facebook user in the Netherlands shared a post including text in Dutch and a 17-second-long video on their timeline. The video showed a young child meeting three adults, one dressed to portray "Sinterklaas" and two portraying "Zwarte Piet", also referred to as "Black Pete".

The two adults portraying Zwarte Piets had their faces painted black and wore Afro wigs under hats and colourful renaissance-style clothes. All the people in the video appear to be white, including those with their faces painted black. In the video, festive music plays and one Zwarte Piet says to the child, "[l]ook here, and I found your hat. Do you want to put it on? You'll be looking like an actual Pete!"

Facebook removed the post for violating its Hate Speech Community Standard.

Key findings

While Zwarte Piet represents a cultural tradition shared by many Dutch people without apparent racist intent, it includes the use of blackface, which is widely recognised as a harmful racial stereotype.

Since August 2020, Facebook has explicitly prohibited caricatures of Black people in the form of blackface as part of its Hate Speech Community Standard. As such, the Board found that Facebook made it sufficiently clear to users that content featuring blackface would be removed unless shared to condemn the practice or raise awareness.

A majority of the Board saw sufficient evidence of harm to justify removing the content. They argued that the content included caricatures that are inextricably linked to negative and racist stereotypes, and are considered by parts of Dutch society to sustain systemic racism in the Netherlands. They took note of documented cases of Black people experiencing racial discrimination and violence in the Netherlands linked to Zwarte Piet. These included reports that Black children felt scared and unsafe in their homes and were afraid to go to school during the Sinterklaas festival.

A majority found that allowing such posts to accumulate on Facebook would help create a discriminatory environment for Black people that would be degrading and harassing. They believed that the impacts of blackface justified Facebook's policy and that removing the content was consistent with the company's human rights responsibilities.

A minority of the Board, however, saw insufficient evidence to directly link this piece of content to the harm supposedly being reduced by removing it. They noted that Facebook's value of "Voice" specifically protects disagreeable content and that, while blackface is offensive, depictions on Facebook will not always cause harm to others. They also argued that restricting expression based on cumulative harm can be hard to distinguish from attempts to protect people from subjective feelings of offence.

The Board found that removing content without providing an adequate explanation could be perceived as unfair by the user. In this regard, it noted that the user was not told that their content was specifically removed under Facebook's blackface policy.

The Oversight Board's decision

The Oversight Board upholds Facebook's decision to remove the content.

In a policy advisory statement, the Board recommends that Facebook:

Link the rule in the Hate Speech Community Standard prohibiting blackface to its reasoning for the rule, including the harm that the company seeks to prevent.
Ensure that users are always notified of the reasons for any enforcement of the Community Standards against them, including the specific rule that Facebook is enforcing, in line with the Board's recommendation in case 2020-003-FB-UA. Where Facebook removes content for violating its rule on blackface, any notice to users should refer to this specific rule, and link to resources that explain the harm that this rule seeks to prevent. Facebook should also provide a detailed update on its "feasibility assessment" of the Board's prior recommendations on this topic.
* Case summaries provide an overview of the case and do not have precedential value.
Hate Speech | 200 | https://www.oversightboard.com/decision/FB-S6NRTDAJ/ | 9 | Dec 2020
Armenian in Azerbaijan | A slur against Azerbaijanis | https://www.oversightboard.com/decision/FB-QBJDASCV/ | A post of photos of churches in Baku, Azerbaijan, that used the term "taziks", a derogatory slur used by Armenians, was removed. | A Facebook post which included historical photos described as showing churches in Baku, Azerbaijan. It came with accompanying text in Russian that claimed that Armenians built Baku, and that this heritage (including the churches) has been destroyed. The user also used the term "taziks" to describe Azerbaijanis, which is a derogatory term used by Armenians for Azerbaijanis. | Hate Speech | Nov | 2020 | 11 | Post was removed for violating the Community Standard on hate speech. The term "taziks" means "wash bowl" and is wordplay on the word "aziks", which is a slur. | upheld | The post used a slur to describe a group of people based on a protected characteristic. | post was removed | 35 | The Oversight Board has upheld Facebook's decision to remove a post containing a demeaning slur which violated Facebook's Community Standard on hate speech.

About the case

In November 2020, a user posted content which included historical photos described as showing churches in Baku, Azerbaijan. The accompanying text in Russian claimed that Armenians built Baku and that this heritage, including the churches, has been destroyed. The user used the term "тазики" ("taziks") to describe Azerbaijanis, who the user claimed are nomads and have no history compared to Armenians.

The user included hashtags in the post calling for an end to Azerbaijani aggression and vandalism. Another hashtag called for the recognition of Artsakh, the Armenian name for the Nagorno-Karabakh region, which is at the centre of the conflict between Armenia and Azerbaijan. The post received more than 45,000 views and was posted during the recent armed conflict between the two countries.
Hate Speech | 100 | https://www.oversightboard.com/decision/FB-QBJDASCV/ | 8 | Nov 2020
Post in Polish targeting trans people | Transphobic imagery | https://www.oversightboard.com/decision/FB-UK2RUS24/ | A post targeting transgender people with violent speech advocating that members of this group commit suicide was not removed by FB. | A Facebook user in Poland posted an image of a striped curtain in the blue, pink and white colours of the transgender flag, with text in Polish stating, "New technology… Curtains that hang themselves", and above that, "spring cleaning <3". The user's biography includes the description, "I am a transphobe". | Hate Speech | Jan | 2024 | 1 | left the post online, after automated and human review found it did not violate Facebook's Suicide and Self-injury Standard. None of the reports based on hate speech were sent for human review. | overturned | The post violated both the Hate Speech and Suicide and Self-injury Community Standards. | post was removed | Additionally, the company disabled the account of the user who posted the content for several previous violations. | 11 | The Oversight Board has overturned Meta's original decision to leave up a Facebook post in which a user targeted transgender people with violent speech advocating for members of this group to commit suicide. The Board finds that the post violated both the Hate Speech and Suicide and Self-injury Community Standards. However, the fundamental issue in this case is not with the policies, but their enforcement. Meta's repeated failure to take the correct enforcement action, despite multiple signals about the post's harmful content, leads the Board to conclude that the company is not living up to the ideals it has articulated on LGBTQIA+ safety. The Board urges Meta to close enforcement gaps, including by improving internal guidance to reviewers.

About the case

In April 2023, a Facebook user in Poland posted an image of a striped curtain in the blue, pink and white colours of the transgender flag, with text in Polish stating, "New technology… Curtains that hang themselves", and above that, "spring cleaning <3". The user's biography includes the description, "I am a transphobe". The post received less than 50 reactions.

Between April and May 2023, 11 different users reported the post a total of 12 times. Only two of the 12 reports were prioritised for human review by Meta's automated systems, with the remainder closed. The two reports sent for human review, for potentially violating Facebook's Suicide and Self-injury Standard, were assessed as non-violating. None of the reports based on hate speech were sent for human review.

Three users then appealed Meta's decision to leave up the Facebook post, with one appeal resulting in a human reviewer upholding the original decision based on the Suicide and Self-injury Community Standard. Again, the other appeals, made under the Hate Speech Community Standard, were not sent for human review. Finally, one of the users who originally reported the content appealed to the Board. As a result of the Board selecting this case, Meta determined that the post did violate both its Hate Speech and Suicide and Self-injury policies and removed it from Facebook. Additionally, the company disabled the account of the user who posted the content for several previous violations.

The Oversight Board overturns Meta's original decision to leave up the content.

The Board recommends that Meta:

Clarify on its Suicide and Self-injury page that the policy forbids content promoting or encouraging suicide aimed at an identifiable group of people.
Modify the internal guidance it gives to at-scale reviewers to ensure that flag-based visual depictions of gender identity that do not contain a human figure are understood as representations of a group defined by the gender identity of its members.

* Case summaries provide an overview of the case and do not have precedential value.
LGBT, Sex and gender equality | 50 | https://www.oversightboard.com/decision/FB-UK2RUS24/ | 55 | Jan 2024
Holocaust Denial | Blatant Holocaust denial | https://www.oversightboard.com/decision/IG-ZJ7J6D28/ | An Instagram post containing false and distorted claims about the Holocaust was left online. | An Instagram post which contains false and distorted claims about the Holocaust. The claims question the number of victims of the Holocaust, suggesting it is not possible that six million Jewish people could have been murdered based on supposed population numbers the user quotes for before and after the Second World War. The post also questions the existence of crematoria at Auschwitz by claiming the chimneys were built after the war, and that world leaders at the time did not acknowledge the Holocaust in their memoirs. | Hate Speech | Jan | 2024 | 1 | left the post online after several automated and human reviews found it did not violate Meta's Hate Speech policy. Some of the reviews were closed automatically due to the company's COVID-19 automation policies. These policies, introduced at the beginning of the pandemic in 2020, automatically closed certain review jobs to reduce the volume of reports being sent to human reviewers, while keeping open potentially "high-risk" reports. | overturned | The content violated Meta's Hate Speech Community Standard, which bans Holocaust denial. | post was removed | 6 | The Oversight Board has overturned Meta's original decision to leave up an Instagram post containing false and distorted claims about the Holocaust. The Board finds that the content violated Meta's Hate Speech Community Standard, which bans Holocaust denial. This prohibition is consistent with Meta's human-rights responsibilities. The Board is concerned about Meta's failure to remove this content and has questions about the effectiveness of the company's enforcement. The Board recommends Meta take steps to ensure it is systematically measuring the accuracy of its enforcement of Holocaust denial content, at a more granular level.

About the Case

On September 8, 2020, an Instagram user posted a meme of Squidward – a cartoon character from the television series SpongeBob SquarePants. This includes a speech bubble entitled “Fun Facts About The Holocaust,” which contains false and distorted claims about the Holocaust. The claims, in English, question the number of victims of the Holocaust, suggesting it is not possible that six million Jewish people could have been murdered based on supposed population numbers the user quotes for before and after the Second World War. The post also questions the existence of crematoria at Auschwitz by claiming the chimneys were built after the war, and that world leaders at the time did not acknowledge the Holocaust in their memoirs.

On October 12, 2020, several weeks after the content was posted, Meta revised its Hate Speech Community Standard to explicitly prohibit Holocaust denial or distortion.

Since the content was posted in September 2020, users reported it six times for violating Meta’s Hate Speech policy. Four of these reports were reviewed by Meta’s automated systems that either assessed the content as non-violating or automatically closed the reports due to the company’s COVID-19 automation policies. These policies, introduced at the beginning of the pandemic in 2020, automatically closed certain review jobs to reduce the volume of reports being sent to human reviewers, while keeping open potentially “high-risk” reports.

Two of the six reports from users led to human reviewers assessing the content as non-violating. A user who reported the post in May 2023, after Meta announced it would no longer allow Holocaust denial, appealed the company’s decision to leave the content up. However, this was also automatically closed due to Meta’s COVID-19 automation policies, which were still in force in May 2023. They then appealed to the Oversight Board.

The Oversight Board overturns Meta’s original decision to leave up the content.

The Board recommends that Meta:

Take technical steps to ensure that it is sufficiently and systematically measuring the accuracy of its enforcement of Holocaust denial content, to include gathering more granular details.
Publicly confirm whether it has fully ended all COVID-19 automation policies put in place during the pandemic.

* Case summaries provide an overview of cases and do not have precedential value.
Hate Speech | 125 | https://oversightboard.com/decision/IG-ZJ7J6D28/ | 54 | Jan 2024
19
Praise be to GodProblems with Praising Godhttps://www.oversightboard.com/decision/IG-2R3UEQRR/A post of a photo of a couple in bridal wear, with the caption "alhamdulillah"("Praise be to God") triggered a violation of the 'promotion of dangerous organisations' protocolAn Instagram user in Pakistan posted a photo of themselves in bridal wear at a traditional pre-wedding event. The caption accompanying the post stated "alhamdulillah", which is an expression used by many people in Muslim and Arab societies meaning "praise be to God".Dangerous individuals / organisationsNov202311removed for violating the company's Dangerous Organisations and Individuals policy. This policy prohibits content that contains praise, substantiative support or representation of organisations or individuals that Meta deems as dangerous.overturnedthe content "did not contain any references to a designated organisation or individuals", and therefore did not violate its Dangerous Organisations and Individuals policy. post was restored A user appealed Meta's decision to remove their Instagram post, which contains a photo of them in bridal wear, accompanied by a caption that states "alhamdulillah", a common expression meaning "praise be to God". After the Oversight Board brought the appeal to Meta's attention, the company reversed its original decision and restored the post.

Case description and background

In June 2023, an Instagram user in Pakistan posted a photo of themselves in bridal wear at a traditional pre-wedding event. The caption accompanying the post stated "alhamdulillah", which is an expression used by many people in Muslim and Arab societies meaning "praise be to God". The post received less than 1,000 views.

The post was removed for violating the company's Dangerous Organisations and Individuals policy. This policy prohibits content that contains praise, substantive support or representation of organisations or individuals that Meta deems dangerous.

In their statement to the Board, the user emphasised that the phrase "alhamdulillah" is a common cultural expression used to express gratitude and has no "remote or direct links to a hate group, a hateful nature or any association to a dangerous organisation". The Board would view the phrase as protected speech under Meta's Community Standards, consistent with freedom of expression and the company's value of protecting "voice".

The user stressed the popularity of the phrase by stating, "this is one of the most popular phrases amongst the population of over two billion Muslims on the planet... if this is the reason the post has been removed, I consider this to be highly damaging for the Muslim population on Instagram and inherently somewhat ignorant".

After the Board brought this case to Meta's attention, the company determined that the content "did not contain any references to a designated organisation or individuals", and therefore did not violate its Dangerous Organisations and Individuals policy. Subsequently, Meta restored the content to Instagram.

The Board overturns Meta's original decision to remove the content. The Board acknowledges Meta's correction of its initial error, after the Board had brought the case to Meta's attention.
Dangerous individuals and organisations | 50 | https://www.oversightboard.com/decision/IG-2R3UEQRR/ | 75 | Nov 2023
20
Planet of the Apes racismPlanet of the Apes racismhttps://www.oversightboard.com/decision/FB-AJTD9P90/A post likened a group of Black individuals involved in a riot in France to the "Planet of the Apes".A Facebook user posted a video that appears to have been taken from a car driving at night. The video shows the car driving through neighbourhoods until a group of Black men appear and are seen chasing the car towards the end of the footage. The caption states in English that, "France has fell like planet of the friggin apes over there rioting in the streets running amok savages" and writes about how "the ones" that make it to "our shores" are given housing for what the user believes to be at a significant cost. Hate SpeechNov202311initially left the content on Facebook.overturnedthe content violated the Hate Speech Community Standard and its original decision to leave the content up was incorrect. post was removed A user appealed Meta's decision to leave up a Facebook post that likens a group of Black individuals involved in a riot in France to the "Planet of the Apes". After the Board brought the appeal to Meta's attention, the company reversed its original decision and removed the post.

Case description and background

In January 2023, a Facebook user posted a video that appears to have been taken from a car driving at night. The video shows the car driving through neighbourhoods until a group of Black men appear and are seen chasing the car towards the end of the footage. The caption states in English that "France has fell like planet of the friggin apes over there rioting in the streets running amok savages" and writes about how "the ones" that make it to "our shores" are given housing at what the user believes to be significant cost. The post had under 500 views. A Facebook user reported the content.

Under Meta's Hate Speech policy, the company removes content that dehumanises people belonging to a designated protected characteristic group by comparing them to "insects" or "animals in general or specific types of animals that are culturally perceived as intellectually or physically inferior (including but not limited to: Black people and apes or ape-like creatures; Jewish people and rats; Muslim people and pigs; Mexican people and worms)."

Meta initially left the content on Facebook. After the Board brought this case to Meta's attention, the company determined that the content violated the Hate Speech Community Standard and its original decision to leave the content up was incorrect. Meta explained to the Board that the caption for the video violated its Hate Speech policy by comparing the men to apes and should have been removed. The company then removed the content from Facebook.

The Board overturns Meta's original decision to leave up the content. The Board acknowledges Meta's correction of its initial error once the Board brought the case to Meta's attention.
Hate Speech | 50 | https://www.oversightboard.com/decision/FB-AJTD9P90/ | 74 | Nov 2023
21
Educational posts about ovulationOverly detailed posts about ovulationhttps://www.oversightboard.com/decision/BUN-8S1H6EU5/Two educational posts in Pakistan involving candid information about how women may visually notice when they are ovulating were removed under rules around imagery of "sexual activity".For the first case, on 15 March 2023, a Facebook user based in the United States commented on a post in a Facebook group. The comment was written in English and included a photo of four different types of cervical mucus and corresponding fertility levels, with a description of each overlaid on the photo. The comment was in response to someone else's post, which asked about PCOS (polycystic ovary syndrome), fertility issues and vaginal discharge. The content had no views, no shares and had been reported once by Meta's automated systems. The group states that its purpose is to help provide women in Pakistan who suffer from "invisible conditions" related to reproductive health such as "endometriosis, adenomyosis, PCOS and other menstrual issues" with a safe space to discuss the challenges that they face and to support one another.

For the second case, on 7 March 2023, an Instagram user posted a video depicting someone's hand over a sink with vaginal discharge on the person's fingers. The caption underneath the video is written in Spanish and the headline reads, "Ovulation – How to recognise it?" The rest of the caption describes in detail how cervical mucus becomes clearer during ovulation, and at what point in the menstrual cycle someone can expect to be ovulating. It also describes other physiological changes that one can expect when experiencing ovulation such as an increased libido and body temperature, and difficulty sleeping. The description for the user's account says that it is dedicated to vaginal/vulvar health and period/menstruation education. The content had more than 25,000 views, no shares and had been reported once by Meta's automated systems.
Nudity & sexual activityNov202311initially removed each of the two pieces of content under its Adult Nudity and Sexual Activity policy, which prohibits "imagery of sexual activity" except "in cases of medical or health context". However, Meta acknowledged that both pieces of content fall within its allowance for sharing imagery with the presence of by-products of sexual activity (which may include vaginal secretions) in a medical or health context and restored them back to each platform. overturnedneither piece of content violated the Adult Nudity and Sexual Activity Community Standard and the removals were incorrect.both posts were restored In this summary decision, the Board is considering two educational posts about ovulation together. The Board believes that Meta's original decisions to remove each post makes it more difficult for people to access a highly stigmatised area of health information for women. After the Board brought these two appeals to Meta's attention, the company reversed its earlier decisions and restored both posts.

Case description and background

For the first case, on 15 March 2023, a Facebook user based in the United States commented on a post in a Facebook group. The comment was written in English and included a photo of four different types of cervical mucus and corresponding fertility levels, with a description of each overlaid on the photo. The comment was in response to someone else's post, which asked about PCOS (polycystic ovary syndrome), fertility issues and vaginal discharge. The content had no views, no shares and had been reported once by Meta's automated systems. The group states that its purpose is to help provide women in Pakistan who suffer from "invisible conditions" related to reproductive health such as "endometriosis, adenomyosis, PCOS and other menstrual issues" with a safe space to discuss the challenges that they face and to support one another.

For the second case, on 7 March 2023, an Instagram user posted a video depicting someone's hand over a sink with vaginal discharge on the person's fingers. The caption underneath the video is written in Spanish and the headline reads, "Ovulation – How to recognise it?" The rest of the caption describes in detail how cervical mucus becomes clearer during ovulation, and at what point in the menstrual cycle someone can expect to be ovulating. It also describes other physiological changes that one can expect when experiencing ovulation such as an increased libido and body temperature, and difficulty sleeping. The description for the user's account says that it is dedicated to vaginal/vulvar health and period/menstruation education. The content had more than 25,000 views, no shares and had been reported once by Meta's automated systems.

For both cases, Meta initially removed each of the two pieces of content under its Adult Nudity and Sexual Activity policy, which prohibits "imagery of sexual activity" except "in cases of medical or health context". However, Meta acknowledged that both pieces of content fall within its allowance for sharing imagery with the presence of by-products of sexual activity (which may include vaginal secretions) in a medical or health context and restored them to each platform.

After the Board brought these two cases to Meta's attention, the company determined that neither piece of content violated the Adult Nudity and Sexual Activity Community Standard and the removals were incorrect. The company then restored both pieces of content to Facebook and Instagram respectively.

The Board overturns Meta's original decisions to remove the content. The Board acknowledges Meta's correction of its initial errors once the Board brought these cases to the company's attention.
Adult nudity and sexual activity | 50 | https://www.oversightboard.com/decision/BUN-8S1H6EU5/ | 73 | Nov 2023
22
Mention of Al-ShabaabTerrorist group Al-Shabaabhttps://www.oversightboard.com/decision/BUN-QBBLZ8WI/Two posts with images referring to the Somali terrorist group Al-Shabaab were flagged for removal. For the first case, in July 2023, a Facebook user, who appears to be a news outlet, posted a picture showing a weapon and military equipment lying on the ground at soldiers' feet with a caption saying "Somali government forces" and "residents" undertook a military operation and killed Al-Shabaab forces in the Mudug region of Somalia.

For the second case, also in July 2023, a Facebook user posted two pictures with a caption. The first picture shows a woman painting a black colour over a blue pillar. The second picture shows a black Al-Shabaab emblem painted over the pillar. The caption says, "the terrorists that used to hide have come out of their holes, and the world has finally seen them".

Harakat al-Shabaab al-Mujahideen, popularly known as Al-Shabaab ("the Youth" in Arabic), is an Islamist terrorist group with links to al-Qa'ida working to overthrow the Somali government. The group mainly operates in Somalia and has carried out several attacks in neighbouring countries.
Dangerous individuals / organisationsNov202311Meta originally removed the post from Facebook, citing its Dangerous Organisations and Individuals (DOI) policy, under which the company removes content that "praises", "substantively supports" or "represents" individuals and organisations that the company designate as dangerous. However, the policy recognises that "users may share content that includes references to designated dangerous organisations and individuals to report on, condemn or neutrally discuss them or their activities". overturnedthe posts did not violate its policies. Although the posts refer to Al-Shabaab, a designated dangerous organisation, they do not praise Al-Shabaab but instead report on and condemn the group. Meta concluded that its initial removal was incorrect as the posts fell into the exception to the DOI policyboth posts were restored2In this summary decision, the Board reviewed two posts referring to the terrorist group Al-Shabaab. After the Board brought these two appeals to Meta's attention, the company reversed its original decisions and restored both posts.

Case description and background

For the first case, in July 2023, a Facebook user, who appears to be a news outlet, posted a picture showing a weapon and military equipment lying on the ground at soldiers' feet with a caption saying "Somali government forces" and "residents" undertook a military operation and killed Al-Shabaab forces in the Mudug region of Somalia.

For the second case, also in July 2023, a Facebook user posted two pictures with a caption. The first picture shows a woman painting a black colour over a blue pillar. The second picture shows a black Al-Shabaab emblem painted over the pillar. The caption says, "the terrorists that used to hide have come out of their holes, and the world has finally seen them".

Harakat al-Shabaab al-Mujahideen, popularly known as Al-Shabaab ("the Youth" in Arabic), is an Islamist terrorist group with links to al-Qa'ida working to overthrow the Somali government. The group mainly operates in Somalia and has carried out several attacks in neighbouring countries.

Meta originally removed both posts from Facebook, citing its Dangerous Organisations and Individuals (DOI) policy, under which the company removes content that "praises", "substantively supports" or "represents" individuals and organisations that the company designates as dangerous. However, the policy recognises that "users may share content that includes references to designated dangerous organisations and individuals to report on, condemn or neutrally discuss them or their activities".

In their appeal to the Board, both users argued that their content did not violate Meta's Community Standards. The user in the first case described their account as a news outlet and stated that the post is a news report about the government operation against the terrorist group Al-Shabaab. The user in the second case stated that the aim of the post is to inform and raise awareness about the activities of Al-Shabaab and condemn it.

After the Board brought these two cases to Meta's attention, the company determined that the posts did not violate its policies. Although the posts refer to Al-Shabaab, a designated dangerous organisation, they do not praise Al-Shabaab but instead report on and condemn the group. Meta concluded that its initial removal was incorrect as the posts fell into the exception to the DOI policy and restored both pieces of content to the platform.

The Board overturns Meta's original decisions to remove the content. The Board acknowledges Meta's correction of its initial errors once the Board brought these cases to Meta's attention.
Dangerous individuals and organisations | 100 | https://www.oversightboard.com/decision/BUN-QBBLZ8WI/ | 72 | Nov 2023
23
Media conspiracy cartoonObviously antisemitic cartoonhttps://www.oversightboard.com/decision/FB-J5OOP3YZ/A post depicting a caricature of a Jewish man holding a music box labelled "media", while a monkey labelled "BLM" sits on his shoulder was kept on Facebook despite violating "hate speech" rules.A user posted a comment containing an image which depicts a caricature of a Jewish man holding an old-fashioned music box, while a monkey rests on his shoulders. The caricature has an exaggerated hooked nose and is labelled with a Star of David inscribed with "Jude", resembling the badges Jewish people were forced to wear during the Holocaust. The monkey on his shoulder is labelled with "BLM", (the acronym for the "Black Lives Matter" movement) while the music box is labelled with "media". Hate SpeechNov202311left the content online.overturnedthe post violated its Hate Speech policy, and that its original decision to leave up the content was incorrect. post was removed A user appealed Meta's decision to leave up a Facebook comment which is an image depicting a caricature of a Jewish man holding a music box labelled "media", while a monkey labelled "BLM" sits on his shoulder. After the Board brought the appeal to Meta's attention, the company reversed its original decision and removed the comment.

Case description and background

In May 2023, a user posted a comment containing an image which depicts a caricature of a Jewish man holding an old-fashioned music box, while a monkey rests on his shoulders. The caricature has an exaggerated hooked nose and is labelled with a Star of David inscribed with "Jude", resembling the badges Jewish people were forced to wear during the Holocaust. The monkey on his shoulder is labelled with "BLM", (the acronym for the "Black Lives Matter" movement) while the music box is labelled with "media". The comment received fewer than 100 views.

This content violates two separate elements of Meta's Hate Speech policy. Meta's Hate Speech policy prohibits content which references "harmful stereotypes historically linked to intimidation", such as, "claims that Jewish people control financial, political or media institutions". Furthermore, Meta's Hate Speech policy forbids dehumanising imagery, such as content which equates "Black people and apes or ape-like creatures". This content violates both elements as it insinuates that Jewish people control media institutions and equates "BLM" with a monkey. In their appeal to the Board, the user who reported the content stated that the content was antisemitic and racist towards Black people.

Meta initially left the content on Facebook. When the Board brought this case to Meta's attention, the company determined that the post violated its Hate Speech policy, and that its original decision to leave up the content was incorrect. The company then removed the content from Facebook.

The Board overturns Meta's original decision to leave up the content. The Board acknowledges Meta's correction of its initial error once the Board brought this case to Meta's attention.
Hate Speech | 50 | https://www.oversightboard.com/decision/FB-J5OOP3YZ/ | 71 | Nov 2023
24
Human trafficking in ThailandWarnings of human trafficking in Thailandhttps://www.oversightboard.com/decision/FB-ONL5YQVE/A post calling attention to and warning about human trafficking practices in Thailand was removed. A Facebook user posted in Thai about a human trafficking business targeting Thais and transporting them for sale in Myanmar. The post discusses what the user believes are common practices that the business employs, such as pressuring victims to recruit others into the business. It also makes ironic statements, such as "if you want to be a victim of human trafficking, don't wait". The content also contains screenshots of what appears to be messages from the business attempting to recruit victims, and of content promoting the business. miscNov202311originally removed the post from Facebook, citing its Human Exploitation policy, under which the company removes "[c]ontent that recruits people for, facilitates or exploits people through any of the following forms of human trafficking", such as "labour exploitation (including bonded labour)". The policy defines human trafficking as "the business of depriving someone of liberty for profit". It allows "content condemning or raising awareness about human trafficking or smuggling issues". overturnedwhile the images in isolation would violate the Human Exploitation policy, the overall context is clear, making the content non-violating. post was restored A user appealed Meta's decision to remove a Facebook post calling attention to human trafficking practices in Thailand. The appeal underlines the importance of designing moderation systems that are sensitive to contexts of awareness-raising, irony, sarcasm and satire. After the Board brought the appeal to Meta's attention, the company reversed its earlier decision and restored the post.

Case description and background

A Facebook user posted in Thai about a human trafficking business targeting Thais and transporting them for sale in Myanmar. The post discusses what the user believes are common practices that the business employs, such as pressuring victims to recruit others into the business. It also makes ironic statements, such as "if you want to be a victim of human trafficking, don't wait". The content also contains screenshots of what appear to be messages from the business attempting to recruit victims, and of content promoting the business.

Meta originally removed the post from Facebook, citing its Human Exploitation policy, under which the company removes "[c]ontent that recruits people for, facilitates or exploits people through any of the following forms of human trafficking", such as "labour exploitation (including bonded labour)". The policy defines human trafficking as "the business of depriving someone of liberty for profit". It allows "content condemning or raising awareness about human trafficking or smuggling issues".

After the Board brought this case to Meta's attention, the company determined that its removal was incorrect and restored the content to Facebook. The company told the Board that, while the images in isolation would violate the Human Exploitation policy, the overall context is clear, making the content non-violating.

The Board overturns Meta's original decision to remove the content. The Board acknowledges Meta's correction of its initial error once the Board brought the case to the company's attention.
Human exploitation | 50 | https://www.oversightboard.com/decision/FB-ONL5YQVE/ | 70 | Nov 2023
25
Haitian police station videoViolence in a Haitian police stationhttps://www.oversightboard.com/decision/FB-LXNFAD5F/A video showed an outbreak of violence and threats at a Haitian police station. Meta removed the video after a near three-week delay; the Board overturned the removal, finding that a "newsworthiness" allowance should have applied.A Facebook user posted a video showing people in civilian clothing entering a police station, attempting to break into a cell holding a man – who is a suspected gang member, according to Meta – and shouting "we're going to break the lock" and "they're already dead". Towards the end of the video, someone yells "bwa kale na boudaw", which Meta interpreted as a call for the group "to take action against the person 'bwa kale style' – in other words, to lynch him". Meta also interpreted "bwa kale" as a reference to the civilian movement in Haiti that involves people taking justice into their own hands. The video is accompanied by a caption in Haitian Creole that includes the statement, "the police cannot do anything". Violence / Incitement / Graphic ContentDec202312removed the video with a three-week delayoverturnedthe video did violate the company's Violence and Incitement policy. Nonetheless, the majority of the Board disagrees with Meta's assessment on the application of the newsworthiness allowance in this case. For the majority, Meta's near three-week delay in removing the content meant the risk of offline harm had diminished sufficiently for a newsworthiness allowance to be applied. post was restored The Oversight Board has overturned Meta's decision to take down a video from Facebook showing people entering a police station in Haiti, attempting to break into a cell holding an alleged gang member and threatening them with violence. The Board finds that the video did violate the company's Violence and Incitement policy. Nonetheless, the majority of the Board disagrees with Meta's assessment on the application of the newsworthiness allowance in this case. For the majority, Meta's near three-week delay in removing the content meant the risk of offline harm had diminished sufficiently for a newsworthiness allowance to be applied. Moreover, the Board recommends that Meta assess the effectiveness and timeliness of its responses to content escalated through the Trusted Partner programme.

About the case

In May 2023, a Facebook user posted a video showing people in civilian clothing entering a police station, attempting to break into a cell holding a man – who is a suspected gang member, according to Meta – and shouting "we're going to break the lock" and "they're already dead". Towards the end of the video, someone yells "bwa kale na boudaw", which Meta interpreted as a call for the group "to take action against the person 'bwa kale style' – in other words, to lynch him". Meta also interpreted "bwa kale" as a reference to the civilian movement in Haiti that involves people taking justice into their own hands. The video is accompanied by a caption in Haitian Creole that includes the statement, "the police cannot do anything". The post was viewed more than 500,000 times and the video around 200,000 times.

Haiti is experiencing unprecedented insecurity, with gangs taking control of territory and terrorising the population. With police unable to address the violence and, in some instances, said to be complicit, a movement has emerged that has seen "more than 350 people [being] lynched by local people and vigilante groups" in a four-month period this year, according to the UN High Commissioner for Human Rights. In retaliation, gangs have taken revenge on those believed to be in or sympathetic to the movement.

A Trusted Partner flagged the video to Meta as potentially violating 11 days after it was posted, warning the content might incite further violence. Meta's Trusted Partner programme is a network of non-governmental organisations, humanitarian agencies and human rights researchers from 113 countries. Meta told the Board that the "greater the level of risk [of violence in a country], the higher the priority for developing relationships with Trusted Partners", who can report content to the company. About eight days after the Trusted Partner's report in this case, Meta determined that the video included both a statement of intent to commit and a call for high-severity violence and removed the content from Facebook. Meta referred this case to the Board to address the difficult moderation questions raised by content related to the "Bwa Kale" movement in Haiti. Meta did not apply the newsworthiness allowance because the company found the risk of harm was high and outweighed the public interest value of the post, noting the ongoing pattern of violent reprisals and killings in Haiti.

The Oversight Board overturns Meta's decision to take down this content, requiring the post to be restored.

The Board recommends that Meta:

- Assess the timeliness and effectiveness of its responses to content escalated through the Trusted Partner programme, to address the risk of harm particularly where Meta has no or limited proactive moderation tools, processes or measures to identify and assess content.
- The Board also takes this opportunity to remind Meta of a previous recommendation, from the Russian Poem case, that calls for the company to make public an exception to its Violence and Incitement policy. This exception allows for content that "condemns or raises awareness of violence", but Meta requires the user to make it clear that they are posting the content for either of these two reasons.

* Case summaries provide an overview of the case and do not have precedential value.
Violence / Incitement | 50 | https://www.oversightboard.com/decision/FB-LXNFAD5F/ | 68 | Dec 2023
26
Azov removalUkrainian prisoners of warhttps://www.oversightboard.com/decision/IG-1BMH3DQ6/A post asking, "where is Azov?" in Ukrainian - referring to at least 700 Azov soldiers who remain in Russian captivity - was removed.An Instagram user created a post with an image of the Azov Regiment symbol. Overlaying the symbol was text in Ukrainian asking, "where is Azov?" The caption stated that more than 700 Azov soldiers remain in Russian captivity, with their conditions unknown. The user calls for their return, stating: "we must scream until all the Azovs are back from captivity!"Dangerous individuals / organisationsDec202312originally removed the post from Facebook under its Dangerous Organisations and Individuals (DOI) policy, which prohibits content that "praises", "substantively supports" or "represents" individuals and organisations that Meta designates as dangerous. overturnedthe Azov Regiment is no longer designated as a dangerous organisation. Additionally, Meta recognised that regardless of the Azov Regiment's designation, this post falls under the exception that allows references to dangerous individuals and organisations when discussing the human rights of individuals and members of designated entities.post was restored1 A user appealed Meta's decision to remove an Instagram post asking, "where is Azov?" in Ukrainian. The post's caption calls for soldiers of the Azov Regiment in Russian captivity to be returned. After the Board brought the appeal to Meta's attention, the company reversed its original decision and restored the post.

Case description and background

In December 2022, an Instagram user created a post with an image of the Azov Regiment symbol. Overlaying the symbol was text in Ukrainian asking, "where is Azov?" The caption stated that more than 700 Azov soldiers remain in Russian captivity, with their conditions unknown. The user calls for their return, stating: "we must scream until all the Azovs are back from captivity!"

The user appealed the removal of the post, emphasising the importance of sharing information during times of war. The user also highlighted that the content did not violate Meta's policies, as Meta allows content commenting on the Azov Regiment. The post received nearly 800 views and was detected by Meta's automated systems.

Meta originally removed the post from Instagram under its Dangerous Organisations and Individuals (DOI) policy, which prohibits content that "praises", "substantively supports" or "represents" individuals and organisations that Meta designates as dangerous. However, Meta allows "discussions about the human rights of designated individuals or members of designated dangerous entities, unless the content includes other praise, substantive support or representation of designated entities or other policy violations, such as incitement to violence".

Meta told the Board that it removed the Azov Regiment from its Dangerous Organisations and Individuals list in January 2023. A Washington Post article states that Meta now draws a distinction between the Azov Regiment, which it views as under formal control of the Ukrainian government, and other elements of the broader Azov movement, some that the company considers far-right nationalists and still designates as dangerous.

After the Board brought this case to Meta's attention, the company determined that its removal was incorrect and restored the content to Instagram. The company acknowledged that the Azov Regiment is no longer designated as a dangerous organisation. Additionally, Meta recognised that regardless of the Azov Regiment's designation, this post falls under the exception that allows references to dangerous individuals and organisations when discussing the human rights of individuals and members of designated entities.

The Board overturns Meta's original decision to remove the content. The Board acknowledges Meta's correction of its initial error once the Board brought the case to Meta's attention.
Dangerous individuals and organisations | 100 | https://www.oversightboard.com/decision/IG-1BMH3DQ6/ | 67 | Dec 2023
27
Bengali debate about religionAtheists challenging Islamic scholarshttps://www.oversightboard.com/decision/FB-MFADK60O/A linked Bengali YouTube video that addressed Islamic scholars' unwillingness to discuss atheism was removed as "harmful content"A user who identifies themselves as an atheist and critic of religion posted a link to a YouTube video on Facebook. The thumbnail image of the video asks, in Bengali, "Why are Islamic scholars afraid to debate the atheists on video blogs?" and contains an image of two Islamic scholars. The caption of the post states, "Join the premiere to get the answer!" Violence / Incitement / Graphic ContentDec202312initially removed the content under its Coordinating Harm and Promoting Crime policy, which prohibits content "facilitating, organising, promoting or admitting to certain criminal or harmful activities targeted at people, businesses, property or animals". Meta acknowledged that this content does not violate this policy; although the views espoused by the atheist may be viewed as "provocative to many Bangladeshis". overturnedthe content did not violate the Coordinating Harm and Promoting Crime policy and the removal was incorrect. post was restored1 A user appealed Meta's decision to remove a Facebook post with a link to a YouTube video that addressed Islamic scholars' unwillingness to discuss atheism. After the Board brought the appeal to Meta's attention, the company reversed its original decision and restored the post.

Case description and background

In May 2023, a user who identifies themselves as an atheist and critic of religion posted a link to a YouTube video on Facebook. The thumbnail image of the video asks, in Bengali, "Why are Islamic scholars afraid to debate the atheists on video blogs?" and contains an image of two Islamic scholars. The caption of the post states, "Join the premiere to get the answer!" The content had approximately 4,000 views.

In their appeal to the Board, the user claimed that the purpose of sharing the video was to promote a "healthy debate or discussion" with Islamic scholars, specifically on topics such as the theory of evolution and Big Bang theory. The user states that this post adheres to Facebook's Community Standards by "promoting open discussion". Furthermore, the user stressed that Bangladeshi atheist activists are frequently subject to censorship and physical harms.

Meta initially removed the content under its Coordinating Harm and Promoting Crime policy, which prohibits content "facilitating, organising, promoting or admitting to certain criminal or harmful activities targeted at people, businesses, property or animals". Meta acknowledged that this content does not violate this policy, although the views espoused by the atheist may be viewed as "provocative to many Bangladeshis". Meta offered no further explanation regarding why the content was removed from the platform. Although a direct attack against people based on their religious affiliation could be removed under Meta's separate Hate Speech policy, there is no prohibition in the company's policies against critiquing a religion's concepts or ideologies.

After the Board brought this case to Meta's attention, the company determined that the content did not violate the Coordinating Harm and Promoting Crime policy and the removal was incorrect. The company then restored the content to Facebook.

The Board overturns Meta's original decision to remove the content. The Board acknowledges Meta's correction of its initial error once the Board brought this case to the company's attention. The Board also urges Meta to speed up the implementation of still-open recommendations to reduce such errors.
Coordinating Harm and Publicising Crime | 50 | https://www.oversightboard.com/decision/FB-MFADK60O/ | 66 | Dec 2023
28
Girl's Education in AfghanistanIndirectly praising the Talibanhttps://www.oversightboard.com/decision/FB-HFFVZENH/A post discussing the importance of educating girls in Afghanistan was removed for somehow praising the Taliban.A Facebook user in Afghanistan posted text in Pashto describing the importance of educating girls in Afghanistan. The user called on people to continue raising their concerns and noted the consequences of failing to take these concerns to the Taliban. The user also states that preventing access to education for girls will be a loss to the nation. Dangerous individuals / organisationsDec202312originally removed the post from Facebook, citing its Dangerous Organisations and Individuals policy, under which the company removes content that "praises", "substantively supports" or "represents" individuals and organisations it designates as dangerous, including the Taliban. The policy allows content that discusses a dangerous organisation or individual in a neutral way or that condemns its actions. overturnedthe content did not violate the Dangerous Organisations and Individuals policy, and that the removal of the post was incorrect.post was restored1 A user appealed Meta's decision to remove a Facebook post discussing the importance of educating girls in Afghanistan. This case highlights an error in the company's enforcement of its Dangerous Organisations and Individuals policy. After the Board brought the appeal to Meta's attention, the company reversed its original decision and restored the post.

Case description and background

In July 2023, a Facebook user in Afghanistan posted text in Pashto describing the importance of educating girls in Afghanistan. The user called on people to continue raising their concerns and noted the consequences of failing to take these concerns to the Taliban. The user also states that preventing access to education for girls will be a loss to the nation.

Meta originally removed the post from Facebook, citing its Dangerous Organisations and Individuals policy, under which the company removes content that "praises", "substantively supports" or "represents" individuals and organisations it designates as dangerous, including the Taliban. The policy allows content that discusses a dangerous organisation or individual in a neutral way or that condemns its actions.

After the Board brought this case to Meta's attention, the company determined that the content did not violate the Dangerous Organisations and Individuals policy, and that the removal of the post was incorrect. The company then restored the content.

The Board overturns Meta's original decision to remove the content. The Board acknowledges Meta's correction of its initial error once the Board brought the case to the company's attention. The Board also urges Meta to speed up the implementation of still-open recommendations to reduce such errors.
Dangerous individuals and organisations | 50 | https://www.oversightboard.com/decision/FB-HFFVZENH/ | 65 | Dec 2023
29
Niger coup cartoonNiger coup cartoonhttps://www.oversightboard.com/decision/FB-BLKZ1ZI8/A cartoon image on the military coup in Niger was removed under Hate speech rules.A Facebook user in France posted a cartoon image showing a military boot labelled "Niger", kicking a person wearing a red hat and dress. On the dress is the geographical outline of Africa. Earlier in the same month, there was a military takeover in Niger when General Abdourahamane Tchiani, with the help of the presidential guard of which he was head, ousted President Mohamed Bazoum, and declared himself leader of the country. Hate SpeechDec202312originally removed the post from Facebook, citing its Hate Speech policy, under which the company removes content containing attacks against people on the basis of a protected characteristic, including some depictions of violence against these groups. overturnedthe content did not violate the Hate Speech policy and its removal was incorrect. post was restored1 A user appealed Meta's decision to remove a Facebook post on the military coup in Niger. This case highlights errors in Meta's content moderation, including its automated systems for detecting hate speech. After the Board brought the appeal to Meta's attention, the company reversed its original decision and restored the post.

Case description and background

In July 2023, a Facebook user in France posted a cartoon image showing a military boot labelled "Niger", kicking a person wearing a red hat and dress. On the dress is the geographical outline of Africa. Earlier in the same month, there was a military takeover in Niger when General Abdourahamane Tchiani, with the help of the presidential guard of which he was head, ousted President Mohamed Bazoum, and declared himself leader of the country.

Meta originally removed the post from Facebook, citing its Hate Speech policy, under which the company removes content containing attacks against people on the basis of a protected characteristic, including some depictions of violence against these groups.

After the Board brought this case to Meta's attention, the company determined that the content did not violate the Hate Speech policy and its removal was incorrect. The company then restored the content to Facebook.

The Board overturns Meta's original decision to remove the content. The Board acknowledges Meta's correction of its initial error once the Board brought the case to the company's attention. The Board also urges Meta to speed up the implementation of still-open recommendations to reduce such errors.
Hate Speech | 50 | https://www.oversightboard.com/decision/FB-BLKZ1ZI8/ | 64 | Dec 2023
30
Federal constituency in NigeriaBadly captioned Nigeria politicianhttps://www.oversightboard.com/decision/FB-SI0CLWAX/The caption on an image of Nigerian politician Yusuf Gagdi implied he was a member of the censured Kurdish military group the PKK. The post was removed.A Facebook user posted a photograph of Nigerian politician Yusuf Gagdi with the caption "Rt Hon Yusuf Gagdi OON member of the house of reps PKK". Mr. Gagdi is a representative in the Nigerian Federal House of Representatives from the Pankshin/Kanam/Kanke Federal Constituency in Plateau state. The constituency encompasses three areas, which the user refers to by abbreviating their full names to PKK. However, PKK is also an alias of the Kurdistan Workers' Party, a designated dangerous organisation.Dangerous individuals / organisationsDec202312initially removed the post from Facebook, citing its Dangerous Organisations and Individuals policy, under which the company removes content that "praises", "substantively supports" or "represents" individuals and organisations that it designates as dangerous. overturnedthe post's removal was incorrect because it does not contain any reference to a designated organisation or individual, and it restored the content. post was restored1 A user appealed Meta's decision to remove a Facebook post containing an image of Nigerian politician Yusuf Gagdi with a caption referring to a federal constituency in Nigeria. Removal was apparently based on the fact that the Nigerian constituency goes by the same initials (PKK) that are used to designate a terrorist organisation in Turkey, though the two entities are completely unrelated. This case highlights the company's overenforcement of the Dangerous Organisations and Individuals policy. This can have a negative impact on users' ability to make and share political commentary, resulting in an infringement of users' freedom of expression. After the Board brought the appeal to Meta's attention, the company reversed its original decision and restored the post.

Case description and background

In July 2023, a Facebook user posted a photograph of Nigerian politician Yusuf Gagdi with the caption "Rt Hon Yusuf Gagdi OON member of the house of reps PKK". Mr. Gagdi is a representative in the Nigerian Federal House of Representatives from the Pankshin/Kanam/Kanke Federal Constituency in Plateau state. The constituency encompasses three areas, which the user refers to by abbreviating their full names to PKK. However, PKK is also an alias of the Kurdistan Workers' Party, a designated dangerous organisation.

Meta initially removed the post from Facebook, citing its Dangerous Organisations and Individuals policy, under which the company removes content that "praises", "substantively supports" or "represents" individuals and organisations that it designates as dangerous.

In their appeal to the Board, the user stated the post contains a picture of a democratically elected representative of a Nigerian federal constituency presenting a motion in the house, and does not violate Meta's Community Standards.

After the Board brought this case to Meta's attention, the company determined that the post's removal was incorrect because it does not contain any reference to a designated organisation or individual, and it restored the content.

The Board overturns Meta's original decision to remove the content. The Board acknowledges Meta's correction of its initial error once the Board brought the case to the company's attention. The Board also urges Meta to speed up the implementation of still-open recommendations to reduce such errors.
Dangerous individuals and organisations | 50 | https://www.oversightboard.com/decision/FB-SI0CLWAX/ | 63 | Dec 2023
31
Fictional assault on gay coupleCall to Homophobic violencehttps://www.oversightboard.com/decision/FB-TTXIBH8S/FB did not remove a videoclip depicting a fictional physical assault on a gay couple who are holding hands, followed by a caption containing further homophobic calls to violence. A Facebook user posted a 30-second video clip, which appears to be scripted and produced with actors, showing a gay couple being beaten and kicked by people. The video then shows another group of individuals dressed in religious attire approaching the fight. After a few seconds, this group joins in, also assaulting the couple. The video ends with the sentence in English: "Do your part this pride month." The accompanying caption, also in English, states, "Together we can change the world." Hate SpeechDec202312 initially left the content on Facebook. overturnedthe content did violate Meta's Hate Speech policy.post was removed1 A user appealed Meta's decision to leave up a Facebook post that depicts a fictional physical assault on a gay couple who are holding hands, followed by a caption containing calls to violence. This case highlights errors in Meta's enforcement of its Hate Speech policy. After the Board brought the appeal to Meta's attention, the company reversed its original decision and removed the post.

Case description and background

In July 2023, a Facebook user posted a 30-second video clip, which appears to be scripted and produced with actors, showing a gay couple being beaten and kicked by people. The video then shows another group of individuals dressed in religious attire approaching the fight. After a few seconds, this group joins in, also assaulting the couple. The video ends with the sentence in English: "Do your part this pride month." The accompanying caption, also in English, states, "Together we can change the world." The post was viewed approximately 200,000 times and reported fewer than 50 times.

According to Meta: "Our Hate Speech policy prohibits calls to action and statements supporting or advocating harm against people based on a protected characteristic, including sexual orientation". The post's video and caption endorse violence against people on the basis of a protected characteristic, which is clearly depicted through visuals of two men holding hands and references to Pride month. Therefore, the content violates Meta's Hate Speech policy.

Meta initially left the content on Facebook. After the Board brought this case to Meta's attention, the company determined that the content did violate its Community Standards and removed the content.

The Board overturns Meta's original decision to leave up the content. The Board acknowledges Meta's correction of its initial error once the Board brought the case to the company's attention. The Board also urges Meta to speed up the implementation of still-open recommendations to reduce such errors.
Hate Speech | 50 | https://www.oversightboard.com/decision/FB-TTXIBH8S/ | 62 | Dec 2023
32
Karachi Mayoral Election CommentDangerous election resultshttps://www.oversightboard.com/decision/FB-7UK5F6VG/A comment showing the results of the 2023 Karachi mayoral election results mentioned the Tehreek-e-Labbaik Pakistan (TLP), a party considered "dangerous" under Meta's rules.A Facebook user commented on a post of a photograph of Karachi politician Hafiz Naeem ur Rehman with former Pakistani Prime Minister Imran Khan and Secretary General of the Jamaat-e-Islami political party, Liaqat Baloch. The comment is an image of a graph taken from a television programme that shows the number of seats won by the various parties in the Karachi mayoral election. One of the parties included in the list is Tehreek-e-Labbaik Pakistan (TLP), a far-right Islamist political party in Pakistan. The 2023 Karachi mayoral election was a contested race, with one losing party alleging that the vote was unfairly rigged and ensuing violent protests taking place between supporters of different parties.Dangerous individuals / organisationsDec202312originally removed the comment from Facebook, citing its Dangerous Organisations and Individuals policy, under which the company removes content that "praises", "substantively supports" or "represents" individuals and organisations it designates as dangerous.overturnedthe content did not violate its policies. Meta's policy allows for neutral discussion of a designated entity in the context of social and political discourse, in this case, reporting on the outcome of an election. post was restored1 A Facebook user appealed Meta's decision to remove their comment showing the 2023 Karachi mayoral election results and containing the name of Tehreek-e-Labbaik Pakistan (TLP), a far-right Islamist political party designated under Meta's Dangerous Organisations and Individuals policy. This case highlights the over-enforcement of this policy and its impact on users' ability to share political commentary and news reporting. After the Board brought the appeal to Meta's attention, the company reversed its original decision and restored the comment.

Case description and background

In June 2023, a Facebook user commented on a post of a photograph of Karachi politician Hafiz Naeem ur Rehman with former Pakistani Prime Minister Imran Khan and Secretary General of the Jamaat-e-Islami political party, Liaqat Baloch. The comment is an image of a graph taken from a television programme that shows the number of seats won by the various parties in the Karachi mayoral election. One of the parties included in the list is Tehreek-e-Labbaik Pakistan (TLP), a far-right Islamist political party in Pakistan. The 2023 Karachi mayoral election was a contested race, with one losing party alleging that the vote was rigged and violent protests subsequently breaking out between supporters of different parties.

Meta originally removed the comment from Facebook, citing its Dangerous Organisations and Individuals policy, under which the company removes content that "praises", "substantively supports" or "represents" individuals and organisations it designates as dangerous. However, the policy recognises that "users may share content that includes references to designated dangerous organisations and individuals in the context of social and political discourse. This includes content reporting on, neutrally discussing or condemning dangerous organisations and individuals or their activities".

In the appeal to the Board, the user identified themselves as a journalist and stated that the comment was about the Karachi mayoral election results. The user clarified that the intention of the comment was to inform the public and discuss the democratic process.

After the Board brought this case to Meta's attention, the company determined that the content did not violate its policies. Meta's policy allows for neutral discussion of a designated entity in the context of social and political discourse, in this case, reporting on the outcome of an election.

The Board overturns Meta's original decision to remove the content. The Board acknowledges Meta's correction of its initial error once the Board brought the case to the company's attention.
Dangerous individuals and organisations | 50 | https://www.oversightboard.com/decision/FB-7UK5F6VG/ | 61 | Dec 2023
33
Breast Self-ExamBreast Self-Exam instructionshttps://www.oversightboard.com/decision/FB-I04M3KVF/A video providing instructions on how to perform a breast self-examination was removed for featuring a nude breast.A Facebook user posted a video with a caption. The caption explains that the video provides instructions on how women should undertake a breast self-examination each month to check for breast cancer. The animated video depicts a nude female breast and gives information on breast cancer and when to contact a doctor. In addition, the video specifies that a doctor's advice should be followed. Nudity & sexual activityDec 202312removed the post - nine years after it was first shared - from the platform under its Adult Nudity and Sexual Activity policy, which prohibits "imagery of real nude adults" if it depicts "uncovered female nipples" except, among other reasons, for "breast cancer awareness" purposes. overturnedthe content falls within the allowance of raising breast cancer awareness.post was restoredIt is unclear why the post was enforced nine years after its original posting.1 A user appealed Meta's decision to remove a Facebook post that included a video providing instructions on how to perform a breast self-examination. After the Board brought the appeal to Meta's attention, the company reversed its earlier decision and restored the post.

Case description and background

In April 2014 – more than nine years ago – a Facebook user posted a video with a caption. The caption explains that the video provides instructions on how women should undertake a breast self-examination each month to check for breast cancer. The animated video depicts a nude female breast and gives information on breast cancer and when to contact a doctor. In addition, the video specifies that a doctor's advice should be followed. The post was viewed fewer than 500 times.

Nine years after it was first shared, Meta removed the post from the platform under its Adult Nudity and Sexual Activity policy, which prohibits "imagery of real nude adults" if it depicts "uncovered female nipples" except, among other reasons, for "breast cancer awareness" purposes. However, Meta has since acknowledged that the content falls within the allowance of raising breast cancer awareness and has restored the content to Facebook. It is unclear why enforcement action was taken against the post nine years after it was originally posted.

In her appeal to the Board, the user expressed surprise at the content being taken down after nine years and stated that the purpose of posting the video was to educate women on conducting a breast self-examination, thereby enhancing their likelihood of detecting early-stage symptoms and ultimately saving lives. The user stated that "if they were male breasts, nothing would have happened".

The Board overturns Meta's original decision to remove the content. The Board acknowledges Meta's correction of its initial error once the Board brought the case to the company's attention.
Adult nudity and sexual activity100https://www.oversightboard.com/decision/FB-I04M3KVF/60Dec 2023
34
Heritage of PrideGay slurhttps://www.oversightboard.com/decision/IG-FEYWNWI2/A post celebrating Pride month contained a photo of a march where one of the placards ironically mentioned a gay slur. Post was removed for violating hate speech. An Instagram user posted an image with a caption that includes a quote by writer and civil rights activist James Baldwin, which speaks of the power of love to unite humanity. The caption also states the user's hope for a year of rest, community and revolution, and calls for the continuous affirmation of queer beauty. The image in the post shows a man holding a sign that says, "That's Mr Faggot to you", with the original photographer credited in the caption.Hate SpeechDec202312initially removed the content from Instagram under Meta's Hate Speech policy, which prohibits the use of certain words that it considers to be slurs. overturnedthe content did not violate the Hate Speech Community Standard and the original decision was incorrect.post was restored1 A user appealed Meta's decision to remove an Instagram post that was celebrating Pride month by reclaiming a slur that has traditionally been used against gay people. After the Board brought the appeal to Meta's attention, the company reversed its original decision and restored the post.

Case description and background

In January 2022, an Instagram user posted an image with a caption that includes a quote by writer and civil rights activist James Baldwin, which speaks of the power of love to unite humanity. The caption also states the user's hope for a year of rest, community and revolution, and calls for the continuous affirmation of queer beauty. The image in the post shows a man holding a sign that says, "That's Mr Faggot to you", with the original photographer credited in the caption. The post was viewed approximately 37,000 times.

Under Meta's Hate Speech policy, the company prohibits the use of certain words that it considers to be slurs. The company recognises, however, that "speech, including slurs, that might otherwise violate our standards can be used self-referentially or in an empowering way". Meta explains that its "policies are designed to allow room for these types of speech", but the company requires people to "clearly indicate their intent". If the intention is unclear, Meta may remove content.

Meta initially removed the content from Instagram. The user, a verified Instagram account based in the United States, appealed Meta's decision to remove the post to the Board. After the Board brought this case to Meta's attention, the company determined that the content did not violate the Hate Speech Community Standard and that its original decision was incorrect. The company then restored the content to Instagram.

The Board overturns Meta's original decision to remove the content. The Board acknowledges Meta's correction of its initial error once the Board brought the case to Meta's attention.
Hate Speech50https://www.oversightboard.com/decision/IG-FEYWNWI2/59Dec 2023
35
Supreme Court in white hoodsKu Klux Court?https://www.oversightboard.com/decision/FB-79KHZ1P5/A post depicting six of the nine members of the US Supreme Court wearing the robes of the Ku Klux Klan was removed under dangerous organisations rules.A Facebook post that contains an edited image of the Supreme Court of the United States, depicting six of the nine members wearing the robes of the Ku Klux Klan while three justices, considered to be more liberal, appear unaltered.miscDec202312removed the post for violating Meta's Dangerous Organisations and Individuals policy. This policy prohibits content that contains praise, substantive support or representation of organisations or individuals that Meta deems as dangerous.overturnedthe content did not violate Meta's Dangerous Organisations and Individuals policy and its removal was incorrect. post was restoredIn their appeal to the Board, the user emphasised that the post was intended to be a political critique rather than an endorsement of the Ku Klux Klan. 1 A user appealed Meta's decision to remove a Facebook post that contains an edited image of the Supreme Court of the United States, depicting six of the nine members wearing the robes of the Ku Klux Klan. After the Board brought the appeal to Meta's attention, the company reversed its original decision and restored the post.

Case description and background

In July 2023, a user posted an edited image on Facebook that depicts six justices of the Supreme Court of the United States as members of the Ku Klux Klan while three justices, considered to be more liberal, appear unaltered. The post contained no caption and received fewer than 200 views.

The post was removed for violating Meta's Dangerous Organisations and Individuals policy. This policy prohibits content that contains praise, substantive support or representation of organisations or individuals that Meta deems as dangerous.

In their appeal to the Board, the user emphasised that the post was intended to be a political critique rather than an endorsement of the Ku Klux Klan. The user stated that the content highlights what the user regards as the six justices' "prejudicial, hateful and destructive attitudes towards women, women's rights to choose abortions, the gay, lesbian, transgender and queer communities, and the welfare of other vulnerable groups".

After the Board brought this case to Meta's attention, the company determined that the content did not violate Meta's Dangerous Organisations and Individuals policy and its removal was incorrect. The company then restored the content to Facebook.

The Board overturns Meta's original decision to remove the content. The Board acknowledges Meta's correction of its initial error once the Board brought the case to Meta's attention.
Freedom of expression, Humour, Politics / Dangerous Individuals and organisations50https://www.oversightboard.com/decision/FB-79KHZ1P5/58Dec 2023
36
Al-Shifa HospitalStrike on hospital in Gazahttps://oversightboard.com/decision/IG-WUC3649N/An Instagram video of the aftermath of a strike on or near Al-Shifa hospital in Gaza during Israel's ground offensive, with a caption condemning the attack, was removed.A video posted on Instagram in the second week of November, showing what appears to be the aftermath of a strike on or near Al-Shifa Hospital in Gaza City during Israel's ground offensive in the north of the Gaza Strip. The Instagram post in this case shows people, including children, lying on the ground lifeless or injured and crying. One child appears to be dead, with a severe head injury. A caption in Arabic and English below the video states that the hospital has been targeted by the "usurping occupation", a reference to the Israeli army, and tags human rights and news organisations.Violence / Incitement / Graphic ContentDec 202312initially removed the post in this case for violating its Dangerous Organisations and Individuals policy, which prohibits third-party imagery depicting the moment of designated terror attacks on visible victims under any circumstances, even if shared to condemn or raise awareness of the attack. After the Board identified this case, Meta reversed its original decision and restored the content with a "mark as disturbing" warning screen. This restricted the visibility of the content to people over the age of 18 and removed it from recommendations to other Instagram users.overturnedrestoring the content to the platform, with a "mark as disturbing" warning screen, is consistent with Meta's content policies, values and human rights responsibilities. post was restoredPost appears with a warning screen. But the Board concludes that Meta's demoting of the restored content, in the form of its exclusion from the possibility of being recommended, does not accord with the company's responsibilities to respect freedom of expression.1 The Oversight Board has overturned Meta's original decision to remove an Instagram video showing what appears to be the aftermath of a strike on or near Al-Shifa Hospital in Gaza City, posted with a caption condemning the attack. After the Board identified the case for review, Meta reversed its decision and restored the content with a "mark as disturbing" warning screen. The Board approves the restoration with a warning screen but disapproves of the associated demotion of the content barring it from recommendations. This case, together with Hostages Kidnapped from Israel, is one of the Board's first cases decided under its expedited review procedures.

Safety, Violence, War and Conflict / Violence and graphic content100https://www.oversightboard.com/decision/IG-WUC3649N/57Dec 2023
37
Hostages Kidnapped from IsraelIsraeli Woman kidnapped by Hamashttps://www.oversightboard.com/decision/FB-M8D2SOGS/A captioned video showing a woman being kidnapped during the 7 October Hamas-led terrorist attack was auto-removed by content systems.A video showing a woman, during the 7 October Hamas-led terrorist attack on Israel, begging her kidnappers not to kill her as she is taken hostage and driven away. The accompanying caption urges people to watch the video to better understand the horror that Israel woke up to on 7 October 2023. Violence / Incitement / Graphic ContentDec202312first removed the post through automated systems for violating its Dangerous Organisations and Individuals Community Standard. It subsequently made an exception to the policy line under which the content was removed and restored the content with a warning screen. overturnedrestoring the content to the platform, with a "mark as disturbing" warning screen, is consistent with Meta's content policies, values and human rights responsibilities.post was restoredPost appears with a warning screen. But the Board disapproved of the associated demotion of the content barring it from recommendations.1 The case involves an emotionally powerful video showing a woman, during the 7 October Hamas-led terrorist attack on Israel, begging her kidnappers not to kill her as she is taken hostage and driven away. The accompanying caption urges people to watch the video to better understand the horror that Israel woke up to on 7 October 2023. Meta's automated systems removed the post for violating its Dangerous Organisations and Individuals Community Standard. The user appealed the decision to the Oversight Board. After the Board identified the case for review, Meta informed the Board that the company had subsequently made an exception to the policy line under which the content was removed and restored the content with a warning screen. The Board overturns Meta's original decision and approves the decision to restore the content with a warning screen but disapproves of the associated demotion of the content barring it from recommendations. This case, together with Al-Shifa Hospital (2023-049-IG-UA), are the Board's first cases decided under its expedited review procedures.

2. Case context and Meta's response

On 7 October 2023, Hamas, a designated Tier 1 organisation under Meta's Dangerous Organisations and Individuals Community Standard, led unprecedented terrorist attacks on Israel from Gaza that killed an estimated 1,200 people, and resulted in roughly 240 people being taken hostage (Ministry of Foreign Affairs, Government of Israel). Israel immediately undertook a military campaign in Gaza in response to the attacks. Israel's military action has killed more than 18,000 people in Gaza as of mid-December 2023 (UN Office for the Coordination of Humanitarian Affairs, drawing on data from the Ministry of Health in Gaza), in a conflict where both sides have been accused of violating international law. Both the terrorist attacks and Israel's subsequent military actions have been the subjects of intense worldwide publicity, debate, scrutiny and controversy, much of which has taken place on social media platforms, including Instagram and Facebook.

Meta immediately designated the events of 7 October a terrorist attack under its Dangerous Organisations and Individuals policy. Under its Community Standards, this means that Meta would remove any content on its platforms that "praises, substantively supports or represents" the 7 October attacks or their perpetrators. It would also remove any perpetrator-generated content relating to such attacks and third-party imagery depicting the moment of such attacks on visible victims.

In reaction to an exceptional surge in violent and graphic content being posted to its platforms following the terrorist attacks and military response, Meta put in place several temporary measures, including lowering the confidence thresholds for the automatic classification systems (classifiers) of its Hate Speech, Violence and Incitement, and Bullying and Harassment policies to identify and remove content. Meta informed the Board that these measures applied to content originating in Israel and Gaza across all languages. The changes to these classifiers increased the automatic removal of content where there was a lower confidence score for the content violating Meta's policies. In other words, Meta used its automated tools more aggressively to remove content that might be prohibited. Meta did this to prioritise its value of safety, with more content removed than would have occurred under the higher confidence threshold in place prior to 7 October. While this reduced the likelihood that Meta would fail to remove violating content that might otherwise evade detection or where capacity for human review was limited, it also increased the likelihood of Meta mistakenly removing non-violating content related to the conflict.
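
To make the threshold change concrete, here is a minimal sketch, assuming a hypothetical score-and-threshold model of automated removal; the function name, scores and threshold values are invented for illustration and are not Meta's actual classifiers or settings.

```python
# Minimal hypothetical sketch: lowering the confidence threshold for automatic
# removal means posts with weaker violation scores are also removed.

def should_auto_remove(violation_score: float, threshold: float) -> bool:
    """Auto-remove when the classifier's confidence meets or exceeds the threshold."""
    return violation_score >= threshold

# Hypothetical classifier confidence scores for three posts.
posts = {"post_a": 0.55, "post_b": 0.72, "post_c": 0.91}

ordinary_threshold = 0.80  # assumed pre-7 October setting
lowered_threshold = 0.60   # assumed temporary crisis setting

print([p for p, s in posts.items() if should_auto_remove(s, ordinary_threshold)])
# ['post_c']
print([p for p, s in posts.items() if should_auto_remove(s, lowered_threshold)])
# ['post_b', 'post_c'], i.e. more removals, including more potential false positives
```

Under the lowered threshold, borderline posts that might previously have been left up, or routed to human review, are removed automatically, which is the trade-off described above.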

When escalation teams assessed videos as violating its Violent and Graphic Content, Violence and Incitement and Dangerous Organisations and Individuals policies, Meta relied on Media Matching Service banks to automatically remove matching videos. This approach raised the concern of over-enforcement, including people facing restrictions on or suspension of their accounts following multiple violations of Meta's content policies (sometimes referred to as "Facebook jail"). To mitigate that concern, Meta withheld "strikes" that would ordinarily accompany automatic removals based on the Media Matching Service banks (as Meta announced in its newsroom post).
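
The paragraph above describes two mechanics working together: automatic removal of uploads that match a banked video, and withholding of the strike that would ordinarily follow an automatic removal. The sketch below is a rough, hypothetical model of that flow; the hashing approach, names and data structures are invented for illustration and do not describe Meta's actual Media Matching Service.

```python
# Minimal hypothetical sketch: remove uploads matching a "banked" video, but
# withhold the account strike that would ordinarily accompany the removal.
import hashlib

def fingerprint(video_bytes: bytes) -> str:
    # Real matching systems use perceptual hashes that survive re-encoding;
    # a plain cryptographic hash keeps this sketch self-contained.
    return hashlib.sha256(video_bytes).hexdigest()

# Videos assessed as violating by escalation teams and added to the bank.
banked_fingerprints = {fingerprint(b"<banked violating video>")}

strikes = {}  # account_id -> strike count; left untouched because strikes are withheld here

def enforce(account_id: str, upload: bytes) -> str:
    if fingerprint(upload) in banked_fingerprints:
        # A strike would ordinarily be recorded here, e.g.
        # strikes[account_id] = strikes.get(account_id, 0) + 1,
        # but it is withheld for these automatic, bank-based removals to limit
        # over-enforcement against accounts ("Facebook jail").
        return "removed (strike withheld)"
    return "left up"

print(enforce("user_1", b"<banked violating video>"))  # removed (strike withheld)
print(enforce("user_2", b"<some other video>"))        # left up
```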

Meta's changes in the classifier confidence threshold and its strike policy are limited to the Israel-Gaza conflict and are intended to be temporary. As of 11 December 2023, Meta had not restored confidence thresholds to pre-7 October levels.

3. Case description

This case involves a video of the 7 October attacks depicting a woman begging her kidnappers not to kill her as she is taken hostage and driven away on a motorbike. The woman is seen sitting on the back of the vehicle, reaching out and pleading for her life. The video then shows a man, who appears to be another hostage, being marched away by captors. The faces of the hostages and those abducting them are not obscured and are identifiable. The original footage was shared broadly in the immediate aftermath of the attacks. The video posted by the user in this case, approximately one week after the attacks, integrates text within the video stating: "Israel is under attack", and includes the hashtag #FreeIsrael, also naming one of the hostages. In a caption accompanying the video, the user states that Israel was attacked by Hamas militants and urges people to watch the video to better understand the horror that Israel woke up to on 7 October 2023. At the time of writing, both people being abducted in the video were still being held hostage.

An instance of this video was placed in a Media Matching Service bank. Meta initially removed the post in this case for violating its Dangerous Organisations and Individuals policy, which prohibits third-party imagery depicting the moment of designated terror attacks on visible victims under any circumstances, even if shared to condemn or raise awareness of the attack. Meta did not apply a strike. The user then appealed Meta's decision to the Oversight Board.

In the immediate aftermath of the 7 October terrorist attacks, Meta enforced strictly its policy on videos showing the moment of attack on visible victims. Meta explained this was due to concerns about the dignity of the hostages as well as the use of such videos to celebrate or promote Hamas' actions. Meta added videos depicting moments of attack on 7 October, including the video shown in this case, to Media Matching Service banks so that future instances of identical content could be removed automatically.

Meta told the Board that it applied the letter of the Dangerous Organisations and Individuals policy to such content and issued consolidated guidance to reviewers. On 13 October, the company explained in its Newsroom post that it temporarily expanded the Violence and Incitement policy to remove content that clearly identified hostages when Meta is made aware of it, even if it was done to condemn the actions or raise awareness of their situation. The company affirmed to the Board that these policies applied equally to both Facebook and Instagram, although similar content has been reported to have appeared widely on the latter platform, indicating that there may have been less effective enforcement of this policy there.

The Violence and Incitement Community Standard generally allows content that depicts kidnappings and abductions in a limited number of contexts, including where the content is shared for informational, condemnation or awareness-raising purposes or by the family as a plea for help. However, according to Meta, when it designates a terrorist attack under its Dangerous Organisations and Individuals policy, and those attacks include hostage-taking of visible victims, Meta's rules on moment-of-attack content override the Violence and Incitement Community Standard. In such cases, the allowances within that policy for informational, condemning or awareness-raising sharing of moment-of-kidnapping videos do not apply and the content is removed.

However, as events developed following 7 October, Meta observed online trends indicating a change in the reasons why people were sharing videos featuring identifiable hostages at the moment of their abduction. Families of victims were sharing the videos to condemn and raise awareness, and the Israeli government and media organisations were similarly sharing the footage, including to counter emerging narratives denying the 7 October events took place or denying the severity of the atrocities.

In response to these developments, Meta implemented an exception to its Dangerous Organisations and Individuals policy, while maintaining its designation of the 7 October events. Subject to operational constraints, moment-of-kidnapping content showing identifiable hostages would be allowed with a warning screen in the context of condemning, raising awareness, news reporting or a call for release.

Meta told the Board that the rollout of this exception was staggered and did not reach all users at the same time. On or around 20 October, the company began to allow hostage-taking content from the 7 October attacks. Initially, it did so only from accounts included in the "Early Response Secondary Review" programme (commonly known as "cross-check"), given concerns about operational constraints, including uncertain human review capacity. The cross-check programme provides guaranteed additional human review of content by specific entities whenever they post content that is identified as potentially violating and requiring enforcement under Meta content policies. On 16 November, Meta determined that it had capacity to expand the allowance of hostage-taking content to all accounts and did so, but only for content posted after this date. Meta has informed the Board and explained in the public newsroom update that the exception it is currently making is only limited to videos depicting the moment of kidnapping of the hostages taken in Israel on 7 October.

After the Board identified this case, Meta reversed its original decision and restored the content with a "mark as disturbing" warning screen. This restricted the visibility of the content to people over the age of 18 and removed it from recommendations to other Facebook users.

The Board overturns Meta's original decision to remove the content from Facebook. It finds that restoring the content to the platform, with a "mark as disturbing" warning screen, is consistent with Meta's content policies, values and human rights responsibilities. However, the Board also concludes that Meta's demoting of the restored content, in the form of its exclusion from the possibility of being recommended, does not accord with the company's responsibilities to respect freedom of expression.
Safety, Violence, War and Conflict / Violence and incitement100https://www.oversightboard.com/decision/FB-M8D2SOGS/56Dec 2023
38
Call for women's protest in CubaMen are inferior animalshttps://oversightboard.com/decision/IG-RH16OBG3/A video of a woman protesting against the Cuban government and criticising men for failing to defend repressed people was removed for "hate speech"A video posted by a Cuban news platform on Instagram in which a woman protests against the Cuban government, calls for other women to join her on the streets and criticises men, by comparing them to 'inferior' animals, for failing to defend those who have been repressed.Hate SpeechOct202310removed a post, though it took 7 months, for violating Meta's Hate Speech policyoverturnedthe post, when taken as a whole, does not generalise and dehumanise men, but uses qualified behavioural statements, which are permitted under Meta's Hate Speech policy.post was restored-The Oversight Board has overturned Meta's decision to remove a video posted by a Cuban news platform on Instagram in which a woman protests against the Cuban government, calls for other women to join her on the streets and criticises men, by comparing them to animals culturally perceived as inferior, for failing to defend those who have been repressed. The Board finds the speech in the video to be a qualified behavioural statement that, under Meta's Hate Speech Community Standard, should be allowed. Furthermore, in countries where there are strong restrictions on people's rights to freedom of expression and peaceful assembly, it is critical that social media protects the users' voice, especially in times of political protest.

About the case

In July 2022, a news platform, which describes itself as critical of the Cuban government, posted a video on its verified Instagram account. The video shows a woman calling on other women to join her on the streets to protest against the government. At a certain point, she describes Cuban men as "rats" and "mares" carrying urinal pots, because they cannot be counted on to defend people being repressed by the government. A caption in Spanish accompanying the video includes hashtags that refer to the "dictatorship" and "regime" in Cuba, and it calls for international attention on the situation in the country, by using #SOSCuba.

The video was shared around the first anniversary of the nationwide protests that had taken place in July 2021 when Cubans took to the streets, in massive numbers, for their rights. State repression increased in response, continuing into 2022. The timing of the post was also significant because it was shared days after a young Cuban man was killed in an incident involving the police. The woman in the video appears to reference this when she mentions that "we cannot keep allowing the killing of our sons". Text overlaying the video connects political change to women's protests.

The video was played more than 90,000 times and shared fewer than 1,000 times.

Seven days after it was posted, a hostile speech classifier identified the content as potentially violating and sent it for human review. While a human moderator found that the post violated Meta's Hate Speech policy, the content remained online as it went through additional rounds of human review under the cross-check system. A seven-month gap between these rounds meant that the post was not removed until February 2023. On the same day in February, the user who shared the video appealed Meta's decision. Meta upheld its decision, without escalating the content to its policy or subject matter experts. A standard strike was applied to the Instagram account, but no feature limit.
Hate Speech75https://oversightboard.com/decision/IG-RH16OBG3/53Oct 2023
39
Hotel in EthiopiaBurn this hotel in Ethiopiahttps://oversightboard.com/decision/FB-IULHG7JK/A post calling for a hotel to be burned in Ethiopia. The hotel's address was also posted. FB did not remove the post.An image was posted with a caption calling for a hotel in Ethiopia owned by a general in the Ethiopian National Defence Forces to be burned down. The post included a picture of the hotel, its address, and the name of the general.Violence / Incitement / Graphic ContentSept20239left a post online as it didn't violate Meta's Violence and Incitement policy according to Meta's teamoverturnedthe content directly called for a hotel to be burned down and listed the hotel's address. After the Board brought the case to Meta's attention, the company determined that the post did in fact violate its Violence and Incitement policy.post was removedSummary decision.1 A user appealed Meta's decision to leave up a Facebook post that called for a hotel in Ethiopia's Amhara region to be burned down. This case highlights Meta's error in enforcing its policy against a call for violence in a country experiencing armed conflict and civil unrest. After the Board brought the appeal to Meta's attention, the company reversed its original decision and removed the post.

Case description and background

On 6 April 2023, a Facebook user posted an image and caption that called for a hotel in Ethiopia's Amhara region to be burned down. The user claimed that the hotel was owned by a general in the Ethiopian National Defense Force. The post also included a photograph of the hotel, its address and the name of the general.

The user posted this content during a period of heightened political tension in the Amhara region when protests had been taking place for several days against the government's plan to dissolve a regional paramilitary force.

Under Meta's Violence and Incitement policy, the company removes content that calls for high-severity violence. In their appeal to the Board, the user who reported the content stated that the post calls for violence and violates Meta's Community Standards.

Meta initially left the content on Facebook. When the Board brought this case to Meta's attention, it determined that the post violated its Violence and Incitement policy, and that its original decision to leave up the content was incorrect. The company then removed the content from Facebook.
Violence and Incitement125https://oversightboard.com/decision/FB-IULHG7JK/52Sept 2023
40
Lebanese ActivistSaying bad things about Hezbollahhttps://oversightboard.com/decision/IG-24CW5DHI/An Instagram post in which an activist is asked about the usefulness of the Secretary General of Hezbollah was removed.A post in which a Lebanese activist was interviewed by a news anchor in Arabic. The activist was asked if a soccer player or Hassan Nasrallah (Secretary General of Hezbollah) is more useful, to which he replied by criticising Nasrallah and his actions.Dangerous individuals / organisationsSept20239removed an Instagram post citing Meta's Dangerous Organisations and Individuals policyoverturnedthe content was a criticism or neutral report on the actions of a dangerous organisation or individual, rather than supporting Hassan Nasrallah, which does not violate Meta's policy. The Dangerous Organisations and Individuals policy allows "[an] expression of a negative perspective about a designated entity or individual", including "disapproval, disgust, rejection, criticism, mockery etc.".post was restoredSummary decision.11 A user appealed Meta's decision to remove an Instagram post of an interview where an activist discusses Hassan Nasrallah, the Secretary General of Hezbollah. This case highlights the over-enforcement of Meta's Dangerous Organisations and Individuals policy. This can have a negative impact on users' ability to share political commentary and news reporting, resulting in an infringement of users' freedom of expression. After the Board brought the appeal to Meta's attention, the company reversed its original decision and restored the post.

Case description and background

In January 2023, the verified account of a Lebanese activist posted a video of himself being interviewed by a news anchor in Arabic. The news anchor begins by jokingly asking the activist whether a professional football player, or Hassan Nasrallah, the Secretary General of Hezbollah, is more useful. The activist responds by praising the football player and criticising Nasrallah. The activist highlights the plane hijackings and kidnappings conducted by Hezbollah, along with Nasrallah's support of former Lebanese politicians Nabih Berri and Michel Aoun – both of whom the activist claims were unwanted by the Lebanese people. Throughout the interview, video clips of Nasrallah play on mute. The caption that the activist added continues this comparison, joking: "Let's see how many goals Nasrallah can score first." The post received 137,414 views and was reported 11 times.

Meta initially removed the post from Instagram under its Dangerous Organisations and Individuals policy. In his appeal to the Board, the user claimed that Hezbollah uses coordinated reporting to remove content that criticises the organisation. The Board has not independently verified the claim that coordinated reporting was responsible for the removal of this content, or for any of the reports relating to the content. The user claimed that "Instagram's community guidelines are being used to extend Hezbollah's oppression against peaceful citizens like me".

After the Board brought this case to Meta's attention, the company determined that its removal was incorrect and restored the content to Instagram. The company acknowledged that while Hassan Nasrallah is designated as a dangerous individual, Meta lets users criticise or neutrally report on the actions of a dangerous organisation or individual. Specifically, Meta allows "[an] expression of a negative perspective about a designated entity or individual", including "disapproval, disgust, rejection, criticism, mockery etc." Meta acknowledged that the video was posted with a satirical and condemning caption, making the content non-violating.
Dangerous Individuals and Organisations100https://oversightboard.com/decision/IG-24CW5DHI/51Sept 2023
41
Video discussing corruption of law enforcement in Indonesia"Dirty" Indonesian policehttps://oversightboard.com/decision/FB-IZP492PJ/A video discussing corruption in the Indonesian Police Force was removed as a "threat".A video discussing corruption in the Indonesian police force, using metaphors to state that corrupt practices of subordinate law enforcement officers were protected by leaders of the forceViolence / Incitement / Graphic ContentSept20239removed the post citing Meta's Violence and Incitement policy as the content was deemed to contain 'threats that could lead to death (and other forms of high-severity violence)... targeting people'.overturnedthe content was ironic and was an analogy that corruption had started at the lower levels of the national police and risen through the ranks to cause the whole system to be corrupt, rather than targeting or inciting violence towards any particular grouppost was restoredSummary decision.-A user appealed Meta's decision to remove a Facebook post that included a video discussing corruption among police officers in Indonesia. The case highlights an inconsistency in how Meta applies its Violence and Incitement policy to political metaphorical statements, which could be a significant deterrent to open online expression about governments. After the Board brought the appeal to Meta's attention, the company reversed its earlier decision and restored the post.

Case description and background

In April 2023, a Facebook user posted a video in which they gave a monologue in Bahasa Indonesia denouncing the corrupt practices of Indonesia's National Police. The user alleged that the Chief of the National Police had said, "If I can't clean my tail, I'll cut off its head." The user remarked that those dirty tails that could not be cleaned had actually become the heads, because the corrupt practices of subordinate law enforcement officers were guarded and maintained by the leaders of the police force. The user also named some specific individuals involved in their case who had since been promoted. Under the video, there was a caption that read, "How could a dirty broom clean a dirty floor?"

The Board understands the analogy by the Chief of the National Police to mean that he was taking a hard line towards corruption and implying that if he could not eradicate corruption among lower-level officers, he would take action against higher-level ones. The Board takes the user's remarks that "dirty tails became heads" as irony, suggesting that corrupt officers from the lower levels rose through the ranks to become corrupt officials. Together with the caption that referred to the "dirty broom", the Board considers that this was why the user believed corruption was endemic in Indonesia.

Meta originally removed the post from Facebook, citing its Violence and Incitement policy, under which the company removes content containing "threats that could lead to death (and other forms of high-severity violence)… targeting people".

After the Board brought this case to Meta's attention, the company determined that its removal was incorrect and restored the content to the platform. The company told the Board that, instead of targeting a particular person or group of people, the user was drawing attention to the pervasive nature of corruption and the relationship between police leaders and subordinates. The company therefore concluded that there was no target for violence, as is required to violate the Violence and Incitement policy.
Violence and Incitement50https://oversightboard.com/decision/FB-IZP492PJ/50Sept 2023
42
Responding to AntisemitismKanye West's antisemitismhttps://oversightboard.com/decision/IG-5MC5OJIL/A video condemning antisemitic, Holocaust-denying comments made by Kanye West was removed for "hate speech".An Instagram video showing the antisemitic, pro-Hitler, Holocaust-denying comments made by Kanye West followed by a TV reporter expressing their outrage over these comments, and recounting personal history relating to the HolocaustDangerous individuals / organisationsSept20239removed a post condemning Kanye West's remarks for violating its Dangerous Organisations and Individuals, and Hate Speech policies.overturnedthe video did not support Hitler and did not violate the cited policies as the second half of the video clearly condemned Kanye West's statementspost was restoredSummary decision.-A user appealed Meta's decision to remove an Instagram post of a video that condemned remarks by music artist Ye (the American rapper formerly known as Kanye West) praising Hitler and denying the Holocaust. After the Board brought the appeal to Meta's attention, the company reversed its original decision and restored the post.

Case description and background

In January 2023, an Instagram user from Turkey posted a video containing an excerpt of an interview in English where Ye states that he "likes" Adolf Hitler and that Hitler "didn't kill six million Jews". The video then cuts to a person who appears to be a TV reporter expressing outrage over Ye's statements and recounting how his family members were killed in the Holocaust. The video is subtitled in Turkish and has a caption that can be translated as "TV reporter responds to Kanye West".

Meta originally removed the post from Instagram, citing its Dangerous Organisations and Individuals (DOI) and Hate Speech policies. Under Meta's DOI policy, the company removes praise of designated individuals, including Adolf Hitler. However, the policy recognises that "users may share content that includes references to designated dangerous organisations and individuals to report on, condemn or neutrally discuss them or their activities". Under its Hate Speech policy, the company removes Holocaust denial as a form of harmful stereotype that is "historically linked to intimidation, exclusion or violence on the basis of a protected characteristic". The Hate Speech policy also recognises that "people sometimes share content that includes slurs or someone else's hate speech to condemn it or raise awareness".

In their appeal to the Board, the user argued that the video does not support Adolf Hitler and that they were misunderstood.

After the Board brought this case to Meta's attention, the company determined that the content did not violate its policies. Although the video contained praise for Adolf Hitler and Holocaust denial, the second part of the video clearly condemned these statements, placing it within an allowable context. Therefore, the company concluded that its initial removal was incorrect and restored the content to the platform.
Dangerous Individuals and Organisations200https://oversightboard.com/decision/IG-5MC5OJIL/49Sept 2023
43
United States Post Discussing AbortionAnti-abortion death threatshttps://oversightboard.com/decision/IG-FZSE6J9C/Three pieces of content containing rhetorical uses of violent language in response to the anti-abortion bill proposals were auto-tagged as "harmful"Three abortion-related pieces of content containing rhetorical uses of violent language to speak about abortion bill proposals in South Carolina and abortion politics in the United StatesViolence / Incitement / Graphic ContentSept20239removed three posts discussing abortion and containing rhetorical violent language as a figure of speech after Meta's automated hostile speech classifier identified the content as potentially harmfuloverturnedeach of the posts includes violent language but expresses it in a mock first-person voice to emphasise opposing opinions, and none of the posts expresses a threat or intent to commit violenceposts were restored-About the cases

The three abortion-related pieces of content considered in this decision were posted by users in the United States in March 2023.

In the first case, a user posted an image of outstretched hands, overlaid with the text, “Pro-Abortion Logic” in a public Facebook group. The post continued, “We don’t want you to be poor, starved or unwanted. So we’ll just kill you instead.” The group describes itself as supporting the “sanctity of human life.”

In the other two cases, both users’ posts related to news articles covering a proposed bill in South Carolina that would apply state homicide laws to abortion, meaning the death penalty would be allowed for people getting abortions. In one of these posts, on Instagram, the image of the article headline was accompanied by a caption referring to the South Carolina lawmakers as being “so pro-life we’ll kill you dead if you get an abortion.” The other post, on Facebook, contained a caption asking for clarity on whether the lawmakers’ position is that “it’s wrong to kill so we are going to kill you.”

After Meta’s automated systems, specifically a hostile speech classifier, identified the content as potentially harmful, all three posts were sent for human review. Across the three cases, six out of seven human reviewers determined the posts violated Meta’s Violence and Incitement Community Standard because they contained death threats. The three users appealed the removals of their content. When the Board selected these cases, Meta determined its original decisions were wrong and restored the posts.
Violence and Incitement250https://oversightboard.com/decision/IG-FZSE6J9C/48Sept 2023
44
Political Dispute Ahead of Turkish ElectionWho's a Servant of the British?https://oversightboard.com/decision/FB-T8JDDDJV/Videos posted by Turkish media of a politician confronting another and calling him a 'servant of the British' were removed under "hate speech" rules.Posts from three Turkish media organisations showing a similar video of a politician confronting another in public using the term 'servant of the British'Hate SpeechAugust20238removed the posts of three Turkish media organisations for violating Meta's Hate Speech policyoverturnedThe term used "İngiliz uşağı" or "servant of the British" is not deemed hate speech by the Boardposts were restored1 About the cases

For these decisions, the Board considers three posts – two on Facebook, one on Instagram – from three different Turkish media organisations, all independently owned. They contain a similar video featuring a former Member of Parliament (MP) of the ruling party confronting a member of the main opposition party in the aftermath of the Turkish earthquakes in February 2023. In the run-up to the Turkish elections, the earthquakes were expected to significantly affect voting patterns.

The video shows Istanbul's Mayor Ekrem İmamoğlu, a key opposition figure, visiting one of the most heavily affected cities when he is confronted by a former MP, who shouts that he is "showing off", calls him a "servant of the British", and tells him to return to "his own" city. Both the public and expert commentators confirm that the phrase "İngiliz uşağı" is understood by Turkish speakers to mean "a person who acts for the interests and benefits" of Britain or the West in general.

Meta removed all three posts for violating its Hate Speech policy rule against slurs. Although several of Meta's mistake-prevention systems had been engaged, including cross-check, which led to the posts in each case undergoing several rounds of human review, this did not result in the content being restored.

In total, the posts were viewed across the three accounts more than 1,100,000 times before being removed.

While the three users were notified that they had violated the Hate Speech Community Standard, they were not told the specific rule they had broken. Additionally, feature limits were applied to the accounts of two of the media organisations, preventing one from creating new content for 24 hours and causing the other to lose its ability to live-stream video for three days.

After the Board identified the cases, Meta decided that its original decisions were wrong because the term "İngiliz uşağı" should not have been on its slur lists, and it restored the content. Separately, Meta had been conducting an annual audit of its slur lists for Turkey ahead of the elections, which led to the term "İngiliz uşağı" being removed in April 2023.
Hate Speech50https://oversightboard.com/decision/FB-T8JDDDJV/47August 2023
45
Promoting Ketamine for Non-FDA Approved TreatmentsKetamine for depressionhttps://oversightboard.com/decision/IG-TOM6IXVH/A post discussing ketamine as a treatment for mood disorders was not removed.An Instagram post discussing the user's experience of using ketamine as a treatment for anxiety and depression.Regulated GoodsAugust20238left the post up under Meta's Restricted Goods and Services Community StandardoverturnedThe post violates Meta's Branded Content policies, which state that certain goods, services or brands may not be promoted with branded content; these goods include 'drugs and drug-related products, including illegal or recreational drugs'. As the post was a paid partnership, clearly promoted the use of ketamine, and was not covered by an exception, it violates these policies and should therefore be taken downpost taken down3 About the case

On 29 December 2022, a verified Instagram user posted ten related images as part of a single post with a caption. A well-known ketamine therapy provider is tagged as the co-author of the post, which was labelled as a "paid partnership". Under Meta's Branded Content policies, Meta's business partners must add such labels to their content to transparently disclose a commercial relationship with a third party.

In the caption, the user stated that they were given ketamine as treatment for anxiety and depression at two of the ketamine therapy provider's office locations in the United States. While the user described ketamine as medicine, the post contains no mention of a professional diagnosis, no clear evidence that treatment occurred at a licensed clinic and nothing showing that the treatment took place under medical supervision. The post describes the user's treatment as a "magical entry into another dimension". The post also expressed a belief that "psychedelics" (a category that the post implied includes ketamine) are an important emerging mental health medicine. Ten drawings, some including psychedelic imagery, depict the user's experience in a storyboard style, indicating that the user received several "therapy sessions" for "treatment-resistant depression and anxiety". The account of the user describing the experience has around 200,000 followers and the post was viewed around 85,000 times.

Three users reported one or more of the images included in the post, and the content was removed and then restored three times under Meta's Restricted Goods and Services Community Standard. After the third time the post was removed, the content creator brought it to Meta's attention. The content was then escalated to policy or subject matter experts for an additional review and restored around six months after it was originally posted. Meta then referred the case to the Board. The content creator's status as a "managed partner" helped to escalate the post within Meta. "Managed partners" are entities across different industries, including individuals such as celebrities and organisations such as businesses or charities. They receive varying levels of enhanced support, including access to a dedicated partner manager.

Regulated Goods250https://oversightboard.com/decision/IG-TOM6IXVH/46August 2023
46
Images of Gender-Based ViolenceImages of domestic violencehttps://oversightboard.com/decision/FB-1RWWJUAT/A graphic photo of a victim of domestic violenceA photo of a woman with visible marks of a physical attack, including bruises, with a caption stating that her husband physically beat her due to a typographical error she had made in a letter addressed to him.Bullying & HarassmentAugust20238left up content which mocked a target of gender-based violenceoverturnedThe post violated Meta's bullying and harassment rules as the woman depicted was visible. However, this post would not have violated the rules if the caption had accompanied a picture of a fictional character, or if the target was not identifiable; this distinction highlighted to the Board that the policy had a gap which seemed to allow content that normalises gender-based violence.post taken down1About the case

In May 2021, a Facebook user in Iraq posted a photo with a caption in Arabic. The photo shows a woman with visible marks of a physical attack, including bruises on her face and body. The caption begins by warning women about making a mistake when writing to their husbands. The caption states that the woman in the photo wrote a letter to her husband, which he misunderstood, according to the caption, due to the woman's typographical error. According to the post, the husband thought the woman asked him to bring her a "donkey", while in fact, she was asking him for a "veil". In Arabic, the words for "donkey" and "veil" look similar ("حمار" and "خمار"). The post implies that because of the misunderstanding caused by the typographical error in her letter, the husband physically beat her. The caption then states that the woman got what she deserved as a result of the mistake. There are several laughing and smiling emojis throughout the post.

The woman depicted in the photograph is an activist from Syria whose image has been shared on social media in the past. The caption does not name her, but her face is clearly visible. The post also includes a hashtag used in conversations in Syria supporting women.

In February 2023, a Facebook user reported the content three times for violating Meta's Violence and Incitement Community Standard. If content is not reviewed within 48 hours, the report is automatically closed, as it was in this case. The content remained on the platform for nearly two years and was not reviewed by a human moderator.

The user who reported the content appealed Meta's decision to the Oversight Board. As a result of the Board selecting this case, Meta determined that the content violates the Bullying and Harassment policy and removed the post.
Bullying and Harassment75https://oversightboard.com/decision/FB-1RWWJUAT/45August 2023
47
Violence Against WomenTalking about gender-based violencehttps://oversightboard.com/decision/IG-H3138H6S/Instagram posts discussing experiences of and condemning gender-based violence were removed.Two Instagram posts discussing experiences of violent intimate relationships and condemning gender-based violenceHate SpeechJuly20237removed two Instagram posts condemning gender-based violence for violating Meta's rules on hate speechoverturnedThe posts did not violate Meta's hate speech rules according to the Board. The Board was concerned that Meta's approach to enforcing gender-based hate speech may result in disproportionate removal of content intended to raise awareness of and condemn gender-based violence.post was restored-About the cases

In this decision, the Board considers two posts from an Instagram user in Sweden together. Meta removed both posts for violating its Hate Speech Community Standard. After the Board identified the cases, Meta decided that the first post had been removed in error but maintained its decision on the second post.

The first post contains a video with an audio recording and its transcription, both in Swedish, of a woman describing her experience in a violent intimate relationship, including how she felt unable to discuss the situation with her family. The caption notes that the woman in the audio recording consented to its publication, and that the voice has been modified. It says that there is a culture of blaming victims of gender-based violence, and little understanding of how difficult it is for women to leave a violent partner. The caption says, “men murder, rape and abuse women mentally and physically – all the time, every day.” It also shares information about support organizations for victims of intimate partner violence, mentions the International Day for the Elimination of Violence against Women, and says it hopes women reading the post will realize they are not alone.

After one of Meta’s classifiers identified the content as potentially violating Meta’s rules on hate speech, two reviewers examined the post and removed it. This decision was then upheld by the same two reviewers on different levels of review. As a result of the Board selecting this case, Meta determined that it had removed the content in error, restoring the post.

As the Board began to assess the first post, it received another appeal from the same user. The second post, also shared on Instagram, contains a video of a woman speaking in Swedish and pointing at words written in Swedish on a notepad. In the video, the speaker says that although she is a man-hater, she does not hate all men. She also states that she is a man-hater for condemning misogyny and that hating men is rooted in fear of violence. Meta removed the content for violating its rules on hate speech. The user appealed the removal to Meta, but the company upheld its original decision after human review. After being informed that the Board had selected this case, Meta did not change its position.

Since at least 2017, digital campaigns have highlighted that Facebook’s hate speech policies result in the removal of phrases associated with calling attention to gender-based violence and harassment. For example, women and activists have coordinated posting phrases such as “men are trash” and “men are scum” and protested their subsequent removal on the grounds of being anti-men hate speech.
Hate Speech75https://oversightboard.com/decision/IG-H3138H6S/44July 2023
48
Cambodian Prime MinisterThreats by Cambodia's Prime Ministerhttps://oversightboard.com/decision/FB-6OKJPNS3/A video featuring Cambodian Prime Minister Hun Sen threatening his political opponents with violence was not removed.A video featuring Cambodian Prime Minister Hun Sen threatening his political opponents with violenceViolence / Incitement / Graphic ContentJan20231left the content on Facebook as Meta deemed it newsworthy. Meta referred the case to the Board because of the difficult questions it raised about balancing the need for people to hear their political leaders with the need to prevent them from inciting violence or intimidating others from becoming politically engagedoverturnedHun Sen's remarks violated the Violence and Incitement Community Standard, and though newsworthy, the Board disagreed that they were sufficiently newsworthy to leave the content up. Due to the severity of the violation, the political context in Cambodia, the government's history of human rights violations, and Hun Sen's history of inciting violence and strategic use of social media to amplify threats against his political opponents, the Board recommends that Hun Sen's official FB page and Instagram account should be suspended for 6 monthspost taken down3On 9 January 2023, a live video was streamed from the official Facebook Page of Cambodia's Prime Minister, Hun Sen.

The video shows a one-hour, 41-minute speech delivered by Hun Sen in Khmer – Cambodia's official language. In the speech, he responds to allegations that his ruling Cambodia People's Party (CPP) stole votes during the country's local elections in 2022. He calls on his political opponents who made the allegations to choose between the "legal system" and "a bat", and says that they can choose the legal system, or he "will gather CPP people to protest and beat you up". He also mentions "sending gangsters to [your] house" and says that he may "arrest a traitor with sufficient evidence at midnight". Later in the speech, however, he says "we don't incite people and encourage people to use force". After the live broadcast, the video was automatically uploaded onto Hun Sen's Facebook Page, where it has been viewed around 600,000 times.

Three users reported the video five times between 9 January and 26 January 2023, for violating Meta's Violence and Incitement Community Standard. This prohibits "threats that could lead to death" (high-severity violence) and "threats that lead to serious injury (mid-severity violence)", including "[s]tatements of intent to commit violence". After the users who reported the content appealed, it was reviewed by two human reviewers who found that it did not violate Meta's policies. At the same time, the content was escalated to policy and subject matter experts within Meta. They determined that it violated the Violence and Incitement Community Standard, but applied a newsworthiness allowance. This permits otherwise violating content where the public interest value outweighs the risk of it causing harm.

One of the users who reported the content appealed Meta's decision to the Board. Separately, Meta referred the case to the Board. In its referral, Meta stated that the case involves a challenging balance between its values of "Safety" and "Voice" in determining when to allow speech that violates its Violence and Incitement policy by a political leader to remain on its platforms.
Violence and Incitement350https://oversightboard.com/decision/FB-6OKJPNS3/43Jan 2023
49
Anti-Colonial Leader Amilcar CabralA poem about anti-colonial leader Amílcar Cabralhttps://oversightboard.com/decision/FB-33NK66FG/A poem referencing the 1970s Bissau-Guinean anti-colonial leader Amílcar Cabral, posted to commemorate his life, was removed under "dangerous individuals" rules.A post including a poem referencing the Bissau-Guinean anti-colonial leader Amílcar Cabral to commemorate his lifeDangerous individuals / organisationsJune20236removed the post, citing its Dangerous Organisations and Individuals (DOI) policy, as the poem was judged to 'praise', 'substantively support' or 'represent' individuals and organisations that Meta designates as dangerous.overturnedthe poem was written in 1973 and was posted to commemorate the life of Amilcar Cabral, who is not a designated individual in Meta's DOI policy. Though the poem referenced his assassination, it was posted to praise the non-designated individual Amilcar Cabralpost was restoredSummary decision.-A user appealed Meta's decision to remove a Facebook post that consisted of a poem referencing the Bissau-Guinean anti-colonial leader Amílcar Cabral. After the Board brought the appeal to Meta's attention, the company reversed its earlier decision and restored the post.

Case description and background

In January 2023, a Facebook user posted content in French commemorating the passing of Amílcar Cabral on the anniversary of his assassination in 1973. Cabral is world-renowned as a Pan-African thinker who led an ultimately successful revolutionary movement against Portuguese colonial rule in Guinea-Bissau and Cabo Verde. The post contained a poem praising Cabral's contributions to the anti-colonial struggle and its impact across the African continent. The user claimed that the poem was written in 1973 and published in an African-Asian journal.

Meta originally removed the post from Facebook, citing its Dangerous Organisations and Individuals (DOI) policy, under which the company removes content that "praises", "substantively supports" or "represents" individuals and organisations that it designates as dangerous.

In their appeal to the Board, the user stated that the poem is decades old and was posted to celebrate Amílcar Cabral.

After the Board brought this case to Meta's attention, the company determined that its removal was incorrect and restored the content to the platform. The company told the Board that the Bissau-Guinean leader Amílcar Cabral is not a designated individual in its DOI policy but could be mistakenly associated with another person who is designated. The post's reference to the 1973 assassination indicates who the poster intended to reference. As a result of its review in this case, Meta said that it improved its enforcement practice "to avoid false positive removals of content praising the non-designated individual Amílcar Cabral".
Dangerous Individuals and Organisations50https://oversightboard.com/decision/FB-33NK66FG/42June 2023
50
Metaphorical Statement Against the President of PeruIs this a threat against the President of Peru?https://oversightboard.com/decision/FB-2AHD01LX/A post including a metaphorical statement against the then-President of Peru, Pedro Castillo, was removed.A post stating that "we" will hang then-President of Peru Pedro Castillo and comparing the imagined execution to that of Italian dictator Benito Mussolini was removed, though the user stated the caption was metaphorical and did not actually call for Pedro Castillo to be executed.Violence / Incitement / Graphic ContentJune20236removed the post under its Violence and Incitement policy.overturnedthe statement was metaphorical and did not violate the Violence and Incitement policy, as it called for the suspension of Pedro Castillo rather than advocating violence against him.post was restoredSummary decision.-A user appealed Meta's decision to remove a Facebook post that included a metaphorical statement against Peru's then-President Pedro Castillo. After the Board brought the appeal to Meta's attention, the company reversed its original decision and restored the post.

Case description and background

On 24 November 2022, a Facebook user from Peru posted content in Spanish stating that "we" will hang the then-President of Peru Pedro Castillo, comparing this to the execution of Italian dictator Benito Mussolini. The post says that this was a "metaphorical" statement, not a threat to be feared, and referred to the potential "suspension" of the president by a vote of the legislature amidst corruption allegations. The post also states that Pedro Castillo does not need to worry about the user's metaphorical statement because they are not "filosenderista" like Mr Castillo – an idiomatic reference comparing the leftist president to Sendero Luminoso – a communist terrorist group from Peru.

The user posted this content approximately two weeks before Peru's Congress ultimately impeached Mr Castillo, soon after he attempted to dissolve the country's legislative body and install an emergency government.

Meta initially removed the post from Facebook under its Violence and Incitement policy. In their appeal to the Board, the user stated that Meta had misinterpreted the text, which was not a call to violence and that the post should be understood in the context of the presidential impeachment process being discussed at that time.

Under Meta's Violence and Incitement policy, the company removes "language that incites or facilitates serious violence" including "statements of intent to commit high-severity violence", when Meta believes that "there is a genuine risk of physical harm or direct threats to public safety". The policy further explains that the company considers "language and context in order to distinguish casual statements from content that constitutes a credible threat to public or personal safety".

After the Board brought this case to Meta's attention, the company determined that the content did not violate its Violence and Incitement policy. Given the metaphorical nature of the statement and the context of impeachment proceedings against Pedro Castillo, who was president at the time, Meta concluded that the user appears to advocate "suspending" (or impeaching) the then-president, not committing violence against him. Therefore, the initial removal was incorrect and Meta restored the content on Facebook.
Violence and Incitement50https://oversightboard.com/decision/FB-2AHD01LX/41June 2023
51
Dehumanising Speech Against a WomanComparing a woman to a truckhttps://oversightboard.com/decision/FB-VJ6FO5UY/A post attacking a clearly identifiable woman and comparing her to a motor vehicle was not removed from FB.A post containing a photo of a clearly identifiable woman was left on Facebook despite the caption containing hate speech. The post referred to the woman as a preowned truck for sale which required paint to hide damage, emitted unusual smells and was rarely washed; the caption also added that the 'truck' was advertised all over town.Hate SpeechJune20236left the content on Facebook.overturnedon review, as requested by the Board, Meta decided it violated its Bullying and Harassment policypost was removedSummary decision. I.e. Meta reversed its decision on a piece of content after the Board brought it to Meta's attention.>100A user appealed Meta's decision to leave up a Facebook post that attacked an identifiable woman and compared her to a motor vehicle ("truck"). After the Board brought the appeal to Meta's attention, the company reversed its original decision and removed the post.

Case description and background

In December 2022, a Facebook user posted a photo of a clearly identifiable woman. The caption above the photo, in English, referred to her as a preowned truck for sale. It continued to describe the woman using the metaphor of a "truck", requiring paint to hide damage, emitting unusual smells and being rarely washed. The user added that the woman was "advertised all over town". Another user reported the content to the Board, saying that it was misogynistic and offensive to the woman. The post received over 2 million views, and it was reported to Meta more than 500 times by Facebook users.

Before Meta reassessed its original decision, the user who posted the content edited the original post to superimpose a "vomiting" emoji over the woman's face. They updated the caption saying that they had concealed her identity out of their embarrassment "to say that I owned this pile of junk". They also added information naming various dating websites on which the woman supposedly had a profile.

Under Meta's Bullying and Harassment policy, the company removes content that targets private figures with "[a]ttacks through negative physical descriptions" or that makes "[c]laims about sexual activity".

Meta initially left the content on Facebook. When the Board brought this case to Meta's attention, it reviewed both the original post and the updated post. The company noted that both versions of the content include a negative physical description of a private individual by comparing her to a truck and both make inferences about her sexual activity by claiming she is "advertised all over town", although the edited post is more explicit with the references to dating websites. Therefore, Meta determined that both versions violated its Bullying and Harassment policy, and its original decision to leave up the content was incorrect. The company then removed the content from Facebook.
Hate Speech75https://oversightboard.com/decision/FB-VJ6FO5UY/40June 2023
52
Brazilian General's SpeechIncitement to storm the Brazilian Congresshttps://oversightboard.com/decision/FB-659EAWI8/A video featuring a Brazilian general calling people to join an uprising at the National Congress was left on FB.A post featuring a Brazilian general calling people to take to the streets and go to the National Congress and the Supreme Court was left on Facebook despite violating Meta's Violence and Incitement policyViolence / Incitement / Graphic ContentJune20236decided to leave the video up, as reviewers repeatedly found that it did not violate Meta's policies. This decision was a clear departure from Meta's own rules.overturnedthe speaker's intent, content of the speech, reach and likelihood of imminent harm all justified the post's removalpost was removed11The Oversight Board has overturned Meta's original decision to leave up a Facebook video, which features a Brazilian general calling people to "hit the streets" and "go to the National Congress and the Supreme Court". Although the Board acknowledges that Meta set up several risk evaluation and mitigation measures during and after the elections, given the potential risk of its platforms being used to incite violence in the context of elections, Meta should continuously increase its efforts to prevent, mitigate and address adverse outcomes. The Board recommends that Meta develop a framework for evaluating its election integrity efforts to prevent its platforms from being used to promote political violence.

About the case

Brazil's presidential elections in October 2022 were highly polarised, with widespread and coordinated online and offline claims questioning the legitimacy of elections. These included calls for military intervention and for the invasion of government buildings to stop the transition to a new government. The heightened risk of political violence did not subside with the assumption of office by newly elected President Luiz Inácio Lula da Silva on 1 January 2023, as civil unrest, protests, and encampments in front of military bases were ongoing.

Two days later, on 3 January 2023, a Facebook user posted a video related to the 2022 Brazilian elections. The caption in Portuguese includes a call to "besiege" Brazil's Congress as "the last alternative". The video also shows part of a speech given by a prominent Brazilian general who supports the re-election of former President Jair Bolsonaro. In the video, the uniformed general calls for people to "hit the streets" and "go to the National Congress… [and the] Supreme Court". A sequence of images follows, including one of a fire raging in the Three Powers Plaza in Brasília, which houses Brazil's presidential offices, Congress and Supreme Court. Text overlaying the image reads, in Portuguese, "Come to Brasília! Let's storm it! Let's besiege the three powers." Text overlaying another image reads "we demand the source code" – a slogan that protestors have used to question the reliability of Brazil's electronic voting machines.

On the day that the content was posted, a user reported it for violating Meta's Violence and Incitement Community Standard, which prohibits calls for forcible entry into high-risk locations. In total, four users reported the content seven times between 3 and 4 January. Following the first report, the content was reviewed by a content reviewer and found not to violate Meta's policies. The user appealed the decision, but it was upheld by a second content reviewer. The next day, the other six reports were reviewed by five different moderators, all of whom found that the content did not violate Meta's policies.

On 8 January, supporters of former president Bolsonaro broke into the National Congress, Supreme Court and presidential offices located in the "Three Powers Plaza" in Brasília, intimidating the police and destroying property. On 9 January, Meta declared the 8 January rioting a "violating event" under its Dangerous Individuals and Organisations policy and said that it would remove "content that supports or praises these actions". The company also announced that it had "designated Brazil as a temporary high-risk location" and had "been removing content calling for people to take up arms or forcibly invade Congress, the Presidential palace and other federal buildings".

As a result of the Board selecting this case, Meta determined that its repeated decisions to leave the content on Facebook were in error. On 20 January 2023, after the Board shortlisted this case, Meta removed the content.
Coordinating Harm and Publicising Crime100https://oversightboard.com/decision/FB-659EAWI8/39June 2023
53
Gender Identity and Nudity"Contains Breasts"https://oversightboard.com/decision/BUN-IH313ZHJ/Two posts depicting transgender and non-binary people with bare chests were removed.Two Instagram posts depicting transgender and non-binary people with bare chests (two separate cases considered together by Meta and the Oversight Board)Nudity & sexual activityJan20231removed 2x posts by a couple who identify as transgender and non-binary for violating the Sexual Solicitation CS because they "contain breasts" and a link to a fundraising page.overturnedThe Adult Nudity and Sexual Activity CS prohibits images with female nipples unless for breastfeeding or gender confirmation surgery. The Board found that Meta's policies on adult nudity result in "greater barriers to expression for women, trans and gender non-binary people"Post was restoredThe policy is too binary to apply to those who exist outside of it and causes reviewers to make subjective assessments.130The Oversight Board has overturned Meta's original decisions to remove two Instagram posts depicting transgender and non-binary people with bare chests. It also recommends that Meta change its Adult Nudity and Sexual Activity Community Standard so that it is governed by clear criteria that respect international human rights standards.

About the case

In this decision, the Oversight Board considers two cases together for the first time. Two separate pieces of content were posted by the same Instagram account – one in 2021, the other in 2022. The account is maintained by a US-based couple who identify as transgender and non-binary.

Both posts feature images of the couple bare-chested with the nipples covered. The image captions discuss transgender healthcare and say that one member of the couple will soon undergo top surgery (gender-affirming surgery to create a flatter chest), which the couple are fundraising to pay for.

Following a series of alerts by Meta's automated systems and reports from users, the posts were reviewed multiple times for potential violations of various Community Standards. Meta ultimately removed both posts for violating the Sexual Solicitation Community Standard, seemingly because they contain breasts and a link to a fundraising page.

The users appealed to Meta and then to the Board. After the Board accepted the cases, Meta found that it had removed the posts in error and restored them.
Sexual Solicitation250https://oversightboard.com/decision/BUN-IH313ZHJ/37Jan 2023
54
Colombian Police CartoonThe Colombian "Cartoon Police"https://oversightboard.com/decision/FB-I964KKM6/A Facebook post depicting police violence in Colombia was removed.A FB post of a cartoon depicting police violence in Colombia.Dangerous individuals / organisationsJan20221removed content because it matched with an image in a Media Matching Service bank.overturnedthe post did not violate Meta's policies. Meta wrongly added this cartoon to its Media Matching Service bankPost was restored4The Oversight Board has overturned Meta's original decision to remove a Facebook post of a cartoon depicting police violence in Colombia. The Board is concerned that Media Matching Service banks, which can automatically remove images that violate Meta's rules, can amplify the impact of incorrect decisions to bank content. In response, Meta must urgently improve its procedures to quickly remove non-violating content from these banks.

About the case

In September 2020, a Facebook user in Colombia posted a cartoon resembling the official crest of the National Police of Colombia, depicting three figures in police uniform holding batons over their heads. They appear to be kicking and beating another figure who is lying on the ground with blood beneath their head. The text of the crest reads, in Spanish, "República de Colombia – Policía Nacional – Bolillo y Pata". Meta translated the text as "National Police – Republic of Colombia – Baton and Kick".

According to Meta, in January 2022, 16 months after the user posted the content, the company removed the content as it matched with an image in a Media Matching Service bank. These banks can automatically identify and remove images which have been identified by human reviewers as violating the company's rules. As a result of the Board selecting this case, Meta determined that the post did not violate its rules and restored it. The company also restored other pieces of content featuring this cartoon, which had been incorrectly removed by its Media Matching Service banks.
Dangerous Individuals and Organisations75https://oversightboard.com/decision/FB-I964KKM6/35Jan 2022
55
Mention of the Taliban in News ReportingPraising the Talibanhttps://oversightboard.com/decision/FB-U2HHA647/A post reporting a positive announcement from the Taliban about women and girls' education was removed for "praising" the Taliban.A post reported that Zabiullah Mujahid, a member of the Taliban regime in Afghanistan and its official central spokesperson, had announced that schools for women and girls would reopen in March 2022.Dangerous individuals / organisationsJan20221removed post because the policy prohibits "praise" of entities deemed to "engage in serious offline harms" including terrorist orgs. Meta eventually decided the post should not have been removed because its rules allow "reporting on" terrorist orgsoverturnedCS permits content that "reports on" dangerous orgs.Post was restored6The Oversight Board has overturned Meta's original decision to remove a Facebook post from a news outlet Page reporting a positive announcement from the Taliban regime in Afghanistan on women and girls' education. Removing the post was inconsistent with Facebook's Dangerous Individuals and Organisations Community Standard, which permits reporting on terrorist groups, and with Meta's human rights responsibilities. The Board found that Meta should better protect users' freedom of expression when it comes to reporting on terrorist regimes and makes policy recommendations to help achieve this.

About the case

In January 2022, a popular Urdu-language newspaper based in India posted on its Facebook Page. The post reported that Zabiullah Mujahid, a member of the Taliban regime in Afghanistan and its official central spokesperson, had announced that schools and colleges for women and girls would reopen in March 2022. The post linked to an article on the newspaper's website and was viewed around 300 times.

Meta found that the post violated the Dangerous Individuals and Organisations policy, which prohibits "praise" of entities deemed to "engage in serious offline harms", including terrorist organisations. Meta removed the post, imposed "strikes" against the Page administrator who had posted the content and limited their access to certain Facebook features (such as going live on Facebook).

The user appealed, and after a second human reviewer assessed the post as violating, it was placed in a queue for the high-impact false positive override (HIPO) system. HIPO is a system that Meta uses to identify cases where it has acted incorrectly, for example, by wrongly removing content. However, as there were fewer than 50 Urdu-speaking reviewers allocated to HIPO at the time, and the post was not deemed high priority, it was never reviewed in the HIPO system.

After the Board selected the case, Meta decided that the post should not have been removed, as its rules allow "reporting on" terrorist organisations. It restored the content, reversed the strike and removed the restrictions on the user's account.
Dangerous Individuals and Organisations50https://oversightboard.com/decision/FB-U2HHA647/34Jan 2022
56
Russian PoemRussian army fascistshttps://oversightboard.com/decision/FB-MBGOTVN8/A post comparing the Russian army in Ukraine to Nazis was removed.A FB post comparing the Russian army in Ukraine to Nazis and quoting a poem that calls for the killing of fascists.Hate SpeechApril20224removed the post because of its imagery and the poem it quoted, which reads "kill the fascist... Kill him! Kill him! Kill!" – on the basis of this violent language, Meta claimed the Hate Speech CS was violated. It later applied a warning screen to the image under its Violent and Graphic Content policyoverturnedthe Board says the post argues that the soldiers acted like Nazis, drawing historical parallels. It urges the importance of context – the poem "Kill Him!" was an artistic and cultural reference employed as a rhetorical device. The Board also found the image does not include clear indicators of violencePost was restored8The Oversight Board has overturned Meta's original decision to remove a Facebook post comparing the Russian army in Ukraine to Nazis and quoting a poem that calls for the killing of fascists. It has also overturned Meta's finding that an image of what appears to be a dead body in the same post violated the Violent and Graphic Content policy. Meta had applied a warning screen to the image on the grounds that it violated the policy. This case raises some important issues about content moderation in conflict situations.

About the case

In April 2022, a Facebook user in Latvia posted an image of what appears to be a dead body, face down, in a street. No wounds are visible. Meta confirmed to the Board that the person was shot in Bucha, Ukraine.

The Russian text accompanying the image argues that the alleged atrocities that Soviet soldiers committed in Germany in World War II were excused on the basis that they avenged the crimes that Nazi soldiers had committed in the USSR. It draws a connection between the Nazi army and the Russian army in Ukraine, saying the Russian army "became fascist".

The post cites alleged atrocities committed by the Russian army in Ukraine and says that "after Bucha, Ukrainians will also want to repeat... and will be able to repeat". It ends by quoting the poem "Kill him!" by Soviet poet Konstantin Simonov, including the lines: "kill the fascist... Kill him! Kill him! Kill!"

The post was reported by another Facebook user and removed by Meta for violating its Hate Speech Community Standard. After the Board selected the case, Meta found that it had wrongly removed the post and restored it. Three weeks later, it applied a warning screen to the image under its Violent and Graphic Content policy.
Hate Speech100https://oversightboard.com/decision/FB-MBGOTVN8/32April 2022
57
UK Drill MusicUK Police vs Drill Musichttps://oversightboard.com/decision/IG-PT5WRTLW/A UK drill music clip with a potential "veiled threat" was removed.A UK drill music clip posted to Instagram contained a potential veiled threat by referencing a shooting in 2017. The video was removed as the threat could incite further violence.Violence / Incitement / Graphic ContentJan20221removed post because it contained the track "Secrets Not Safe", whose lyrics Meta (with information provided by the Metropolitan Police) determined to be a "veiled threat" that referenced and could exacerbate gang violenceoverturnedthe Board lacked evidence to conclude the content contained a credible threat – Meta should have given more weight to the artistic nature of the contentPost was restoredThis case raises concerns about Meta's relationships with governments, particularly where law enforcement requests lead to lawful content being reviewed against the Community Standards and removed. While law enforcement can sometimes provide context and expertise, not every piece of content that law enforcement would prefer to have taken down should be taken down.10The Oversight Board has overturned Meta's decision to remove a UK drill music video clip from Instagram. Meta originally removed the content following a request from the Metropolitan Police. This case raises concerns about Meta's relationships with law enforcement, which has the potential to amplify bias. The Board makes recommendations to improve respect for due process and transparency in these relationships.

About the case

In January 2022, an Instagram account that describes itself as publicising British music posted content highlighting the release of the UK drill music track, "Secrets Not Safe" by Chinx (OS), including a clip of the track's music video.

Shortly after, the Metropolitan Police, which is responsible for law enforcement in Greater London, emailed Meta requesting that the company review all content containing "Secrets Not Safe". Meta also received additional context from the Metropolitan Police. According to Meta, this covered information on gang violence, including murders, in London, and the Police's concern that the track could lead to further retaliatory violence.

Meta's specialist teams reviewed the content. Relying on the context provided by the Metropolitan Police, they found that it contained a "veiled threat", by referencing a shooting in 2017, which could potentially lead to further violence. The company removed the content from the account under review for violating its violence and incitement policy. It also removed 52 pieces of content containing the track "Secrets Not Safe" from other accounts, including Chinx (OS)'s. Meta's automated systems later removed the content another 112 times.

Meta referred this case to the Board. The Board requested that Meta also refer Chinx (OS)'s post of the content. However, Meta said that this was impossible as removing the "Secrets Not Safe" video from Chinx (OS)'s account ultimately led to the account being deleted, and its content was not preserved.
Violence and Incitement200https://oversightboard.com/decision/IG-PT5WRTLW/31Jan 2022
58
Video After Nigeria Church AttackOwo church massacrehttps://oversightboard.com/decision/IG-OZNR5J1Z/A video showing the aftermath of a terrorist attack in Nigeria was first warning-screened, then removed.An Instagram video showing bodies following a terrorist attack in Nigeria was posted the day of the attack. In the attack on a church in southwest Nigeria, at least 40 people were killed and many more injured; a video of the aftermath was posted to Instagram with a caption referencing firearm collectors, gunfire sounds and "airsoft".Violence / Incitement / Graphic ContentJune20226first applied a warning screen (without notifying the user); then, after the user added a caption that included references to firearms, a Media Matching Service bank detected the post and removed it, a decision upheld by a human reviewer. Meta said the hashtags could be read as glorifying violence / minimising the suffering of victimsoverturnedthe Board found the post was non-violating. The Nigerian government censors the coverage of ongoing terrorist attacks. The Board did agree a warning screen is necessary. The hashtags raise awareness, not mock.Post was restored with a "disturbing content" warning screen9The Board has overturned Meta's decision to remove a video from Instagram showing the aftermath of a terrorist attack in Nigeria. The Board found that restoring the post with a warning screen protects victims' privacy while allowing for discussion of events that some states may seek to suppress.

About the case

On 5 June 2022, an Instagram user in Nigeria posted a video showing motionless, bloodied bodies on the floor. It appears to be the aftermath of a terrorist attack on a church in southwest Nigeria, in which at least 40 people were killed and many more injured. The content was posted on the same day as the attack. Comments on the post included prayers and statements about safety in Nigeria.

Meta's automated systems reviewed the content and applied a warning screen. However, the user was not alerted as Instagram users do not receive notifications when warning screens are applied.

The user later added a caption to the video. This described the incident as "sad", and used multiple hashtags, including references to firearms collectors, allusions to the sound of gunfire and the live-action game "airsoft" (where teams compete with mock weapons). The user had included similar hashtags on many other posts.

Shortly after, one of Meta's Media Matching Service banks, an "escalations bank", identified the video and removed it. Media Matching Service banks can automatically match users' posts to content that has previously been found violating. Content in an "escalations bank" has been found violating by Meta's specialist internal teams. Any matching content is identified and immediately removed.

The user appealed the decision to Meta and a human reviewer upheld the removal. The user then appealed to the Board.

When the Board accepted the case, Meta reviewed the content in the "escalations bank", found that it was non-violating and removed it from the bank. However, it upheld its decision to remove the post in this case, saying that the hashtags could be read as "glorifying violence and minimising the suffering of the victims". Meta found that this violates multiple policies, including the Violent and Graphic Content policy, which prohibits sadistic remarks.
Violent and Graphic Content100https://oversightboard.com/decision/IG-OZNR5J1Z/30June 2022
59
Iran Protest SloganShould Iranian protest slogans be allowed?https://oversightboard.com/decision/FB-ZT6AJS4X/A post protesting the Iranian government saying "death to Khamenei" was removed.A FB post in a group calling for freedom for Iran protested the Iranian government with a cartoon depicting Iran's Supreme Leader with a fist grasping a chained, blindfolded woman wearing a hijab, containing the slogan "marg bar Khamenei" ("death to Khamenei").Violence / Incitement / Graphic ContentJuly20227removed the post and limited the account for 30 days because "marg bar" directly translates to "death to" – this violates the Violence and Incitement CS. It was later restored with a newsworthiness allowanceoverturned"marg bar" rhetorically means "down with", especially in the context of a protest – the post should not have been removed; the Board calls for context162The Oversight Board has overturned Meta's original decision to remove a Facebook post protesting the Iranian government, which contains the slogan "marg bar... Khamenei". This literally translates as "death to Khamenei" but is often used as political rhetoric to mean "down with Khamenei". The Board has made recommendations to better protect political speech in critical situations, such as that in Iran, where historic, widespread protests are being violently suppressed. This includes permitting the general use of "marg bar Khamenei" during protests in Iran.

About the case

In July 2022, a Facebook user posted in a group that describes itself as supporting freedom for Iran. The post contains a cartoon of Iran's Supreme Leader, Ayatollah Khamenei, in which his beard forms a fist grasping a chained, blindfolded woman wearing a hijab. A caption below in Farsi states "marg bar" the "anti-women Islamic government" and "marg bar" its "filthy leader Khamenei".

The literal translation of "marg bar", is "death to". However, it is also used rhetorically to mean "down with". The slogan "marg bar Khamenei" has been used frequently during protests in Iran over the past five years, including the 2022 protests. The content in this case was posted days before Iran's "National Day of Hijab and Chastity", around which critics frequently organise protests against the government, including against Iran's compulsory hijab laws. In September 2022, Jina Mahsa Amini died in police custody in Iran, following her arrest for "improper hijab". Her death sparked widespread protests which have been violently suppressed by the state. This situation was ongoing as the Board deliberated this case.

After the post was reported by a user, a moderator found that it violated Meta's Violence and Incitement Community Standard, removed it, and applied a "strike" and two "feature limits" to its author's account. The feature limits imposed restrictions on creating content and engaging with groups for seven and 30 days respectively. The post's author appealed to Meta, but the company's automated systems closed the case without review. They then appealed to the Board.

After the Board selected the case, Meta reviewed its decision. It maintained that the content violated the Violence and Incitement Community Standard but applied a newsworthiness allowance and restored the post. A newsworthiness allowance permits otherwise violating content if the public interest outweighs the risk of harm.
Violence and Incitement200https://oversightboard.com/decision/FB-ZT6AJS4X/28July 2022
60
Knin CartoonA slur about ethnic Serbianshttps://oversightboard.com/decision/FB-JRQ1XP2M/A Facebook post depicting ethnic Serbs as rats was not removed.A Facebook post of an edited version of Disney's cartoon 'The Pied Piper', depicting ethnic Serbs as rats, was left on the platform.Hate SpeechDec202112decided the post did not violate Hate Speech because the rat association was implicit, rather than explicitoverturnedcomparing ethnic groups to animals, even implicitly = hate speech, dehumanizingpost was removed While Meta informed the 397 users who reported the post of its initial decision that the content did not violate its policies, the company did not tell these users that it later reversed this decision397 The Oversight Board has overturned Meta’s original decision to leave a post on Facebook which depicted ethnic Serbs as rats. While Meta eventually removed the post for violating its Hate Speech policy, about 40 moderators had previously decided that the content did not violate this policy. This suggests that moderators consistently interpreted the Hate Speech policy as requiring them to identify an explicit, rather than implicit, comparison between ethnic Serbs and rats before finding a violation.

About the case

In December 2021, a public Facebook page posted an edited version of Disney’s cartoon “The Pied Piper,” with a caption in Croatian which Meta translated as “The Player from Čavoglave and the rats from Knin.”

The video portrays a city overrun by rats. While the entrance to the city in the original cartoon was labelled “Hamelin,” the city in the edited video is labelled as the Croatian city of “Knin.” The narrator describes how the rats decided they wanted to live in a “pure rat country,” so they started harassing and persecuting the people living in the city.

The narrator continues that, when the rats took over the city, a piper from the Croatian village of Čavoglave appeared. After playing a melody on his “magic flute,” the rats start to sing “their favorite song” and follow the piper out of the city. The song’s lyrics commemorate Momčilo Dujić, a Serbian Orthodox priest who was a leader of Serbian resistance forces during World War II.

The piper herds the rats into a tractor, which then disappears. The narrator concludes that the rats “disappeared forever from these lands” and “everyone lived happily ever after.”

The content in this case was viewed over 380,000 times. While users reported the content to Meta 397 times, the company did not remove the content. After the case was appealed to the Board, Meta conducted an additional human review, finding, again, that the content did not violate its policies.

In January 2022, when the Board identified the case for full review, Meta decided that, while the post did not violate the letter of its Hate Speech policy, it did violate the spirit of the policy, and removed the post from Facebook. Later, when drafting an explanation of its decision for the Board, Meta changed its mind again, concluding that the post violated the letter of the Hate Speech policy, and all previous reviews were in error.

While Meta informed the 397 users who reported the post of its initial decision that the content did not violate its policies, the company did not tell these users that it later reversed this decision.
Hate Speech50https://oversightboard.com/decision/FB-JRQ1XP2M/26Dec 2021
61
Pro-Navalny Protests in Russia"Cowardly bot"https://oversightboard.com/decision/FB-6YHRXHZR/A comment calling a user a "cowardly bot" for criticizing Navalny support was removed.A Russian user responded to a comment made by a Protest Critic on a pro-Navalny post by calling the critic a "cowardly bot"; the comment was reported and removed by FBBullying & HarassmentJan20211Determined the comment violated the bullying and harassment policy and removed the commentoverturnedwhile the term "cowardly" was a negative character claim, FB did not consider the political context, public character, or heated tone of the conversation. This failed to protect the protestor's voicepost was restored23The Oversight Board has overturned Facebook's decision to remove a comment in which a supporter of imprisoned Russian opposition leader Alexei Navalny called another user a "cowardly bot". Facebook removed the comment for using the word "cowardly", which was construed as a negative character claim. The Board found that while the removal was in line with the Bullying and Harassment Community Standard, the current Standard was an unnecessary and disproportionate restriction on free expression under international human rights standards. It was also not in line with Facebook's values.

About the case

On 24 January, a user in Russia made a post consisting of several pictures, a video and text (root post) about the protests in support of opposition leader Alexei Navalny held in Saint Petersburg and across Russia on 23 January. Another user (the Protest Critic) responded to the root post and wrote that, while they did not know what happened in Saint Petersburg, the protesters in Moscow were all school children, mentally "slow", and were "shamelessly used".

Other users then challenged the Protest Critic in subsequent comments to the root post. A user who was at the protest (the Protester) appeared to be the last to respond to the Protest Critic. They claimed to be elderly and to have participated in the protest in Saint Petersburg. The Protester ended the comment by calling the Protest Critic a "cowardly bot".

The Protest Critic then reported the Protester's comment to Facebook for bullying and harassment. Facebook determined that the term "cowardly" was a negative character claim against a "private adult" and, as the "target" of the attack had reported the content, Facebook removed it. The Protester appealed against this decision to Facebook. Facebook determined that the comment violated the Bullying and Harassment Policy, under which a private individual can get Facebook to take down posts containing a negative comment on their character.
Bullying and Harassment50https://oversightboard.com/decision/FB-6YHRXHZR/23Jan 2021
62
Öcalan's IsolationThe human rights of a "dangerous person"https://oversightboard.com/decision/IG-I9DP23IB/An Instagram post discussing Abdullah Öcalan's solitary confinement was removed.An Instagram user in the US posted a photo of Abdullah Öcalan, a founding member of the PKK, with a caption that asked users to talk about his imprisonment and the inhumane nature of solitary confinement.Dangerous individuals / organisationsJan20211The post was removed because Öcalan is designated as a dangerous entity under FB's CS. This decision was upheld by 2x moderators.overturnedWhile Öcalan is dangerous, the CS specify that users can discuss the conditions of confinement of a 'dangerous' individual.post was restoredAfter the Board selected this case and assigned it to panel, Facebook found that a piece of internal guidance on the Dangerous Individuals and Organisations policy was "inadvertently not transferred" to a new review system in 2018. This guidance, developed in 2017 partly in response to concern about the conditions of Öcalan's imprisonment, allows discussion on the conditions of confinement for individuals designated as dangerous.12The Oversight Board has overturned Facebook's original decision to remove an Instagram post encouraging people to discuss the solitary confinement of Abdullah Öcalan, a founding member of the Kurdistan Workers' Party (PKK). After the user appealed and the Board selected the case for review, Facebook concluded that the content was removed in error and restored it. The Board is concerned that Facebook misplaced an internal policy exception for three years and that this may have led to many other posts being wrongly removed.

About the case

This case relates to Abdullah Öcalan, a founding member of the PKK. This group has used violence in seeking to achieve its aim of establishing an independent Kurdish state. Both the PKK and Öcalan are designated as dangerous entities under Facebook's Community Standard on dangerous individuals and organisations.

On 25 January 2021, an Instagram user in the United States posted a picture of Öcalan, which included the words "y'all ready for this conversation" in English. In a caption, the user wrote that it was time to talk about ending Öcalan's isolation in prison on Imrali island in Turkey. The user encouraged readers to engage in conversation about Öcalan's imprisonment and the inhumane nature of solitary confinement.

After being assessed by a moderator, the post was removed on 12 February under Facebook's rules on dangerous individuals and organisations as a call to action to support Öcalan and the PKK. When the user appealed this decision, they were told that their appeal could not be reviewed because of a temporary reduction in Facebook's review capacity due to COVID-19. However, a second moderator did carry out a review of the content and found that it violated the same policy. The user then appealed to the Oversight Board.

After the Board selected this case and assigned it to panel, Facebook found that a piece of internal guidance on the Dangerous Individuals and Organisations policy was "inadvertently not transferred" to a new review system in 2018. This guidance, developed in 2017 partly in response to concern about the conditions of Öcalan's imprisonment, allows discussion on the conditions of confinement for individuals designated as dangerous.

In line with this guidance, Facebook restored the content to Instagram on 23 April. Facebook told the Board that it is currently working on an update to its policies to allow users to discuss the human rights of designated dangerous individuals. The company asked the Board to provide insight and guidance on how to improve these policies. While Facebook updated its Community Standard on dangerous individuals and organisations on 23 June 2021, these changes do not directly affect the guidance that the company requested from the Board.
Dangerous Individuals and Organisations75https://oversightboard.com/decision/IG-I9DP23IB/22Jan 2021
63
Myanmar BotA slur against the Chinese governmenthttps://oversightboard.com/decision/FB-ZWQUPZLZ/A post in Burmese using offensive language referring to Chinese people was removed.A Facebook post in Myanmar discussed ways to limit financing to the Myanmar military following the Feb 2021 coup. Part of the post mentioned "the fucking Chinese" and was removedHate SpeechApril20214The post used the phrase "fucking Chinese" ("sout ta-yote" in Burmese). The word "ta-yote" overlaps between China the country and Chinese people. FB argued it referred to Chinese people and removed it.overturnedthe post did not target Chinese people, but the Chinese state, using profanity to reference governmental policy in HK – this context was missing from the FB argumentpost was restored10The Oversight Board has overturned Facebook's decision to remove a post in Burmese under its Hate Speech Community Standard. The Board found that the post did not target Chinese people, but the Chinese state. Specifically, it used profanity to reference Chinese governmental policy in Hong Kong as part of a political discussion on the Chinese government's role in Myanmar.

About the case

In April 2021, a Facebook user who appeared to be in Myanmar posted in Burmese on their timeline. The post discussed ways to limit financing to the Myanmar military following the coup in Myanmar on 1 February 2021. It proposed that tax revenue be given to the Committee Representing Pyidaungsu Hluttaw (CRPH), a group of legislators opposed to the coup. The post received about half a million views and no Facebook users reported it.

Facebook translated the supposedly violating part of the user's post as "Hong Kong people, because the fucking Chinese tortured them, changed their banking to UK and now (the Chinese), they cannot touch them." Facebook removed the post under its Hate Speech Community Standard. This prohibits content targeting a person or group of people based on their race, ethnicity or national origin with "profane terms or phrases with the intent to insult".

The four content reviewers who examined the post all agreed that it violated Facebook's rules. In their appeal to the Board, the user stated that they posted the content to "stop the brutal military regime".
Hate Speech50https://oversightboard.com/decision/FB-ZWQUPZLZ/21April 2021
64
Shared Al Jazeera PostThreat of violence from Hamashttps://oversightboard.com/decision/FB-P93JPX02/A post sharing news about a threat of violence from Hamas' military wing was removed.A post from Al Jazeera with text in Arabic was shared that included imagery of the Al-Qassam Brigades with text by Abu Ubaida.Dangerous individuals / organisationsMay20215Removed the post because the Al-Qassam Brigades and Abu Ubaida are both designated as dangerous under FB's Dangerous Orgs and Individuals CS. Upon review, FB reversed its decision, explaining that it did not violate the CS.overturnedThe post did not contain praise, support, or representation of the Al-Qassam Brigades or Hamas. Individuals should have a right to post news stories as much as media orgs have a right to publish them in the first place.post was restoredThis caused FB to be accused of censoring content about the Palestinian conflict due to Israeli govt demands. They declined to get more specific about the issue.26The Oversight Board agrees that Facebook was correct to reverse its original decision to remove content on Facebook that shared a news post about a threat of violence from the Izz al-Din al-Qassam Brigades, the military wing of the Palestinian group Hamas. Facebook originally removed the content under the Dangerous Individuals and Organisations Community Standard, and restored it after the Board selected this case for review. The Board concludes that removing the content did not reduce offline harm and restricted freedom of expression on an issue of public interest.

About the case

On 10 May 2021, a Facebook user in Egypt with more than 15,000 followers shared a post by the verified Al Jazeera Arabic Page consisting of text in Arabic and a photo.

The photo portrays two men in camouflage fatigues with faces covered, wearing headbands with the insignia of the Al-Qassam Brigades. The text states "The resistance leadership in the common room gives the occupation a respite until 18:00 to withdraw its soldiers from Al-Aqsa Mosque and Sheikh Jarrah neighbourhood otherwise he who warns is excused. Abu Ubaida – Al-Qassam Brigades military spokesman". The user shared Al Jazeera's post and added a single-word caption "Ooh" in Arabic. The Al-Qassam Brigades and their spokesperson Abu Ubaida are both designated as dangerous under Facebook's Dangerous Organisations and Individuals Community Standard.

Facebook removed the content for violating this policy and the user appealed the case to the Board. As a result of the Board selecting this case, Facebook concluded that it had removed the content in error and restored it.
Dangerous Individuals and Organisations75https://oversightboard.com/decision/FB-P93JPX02/19May 2021
65
Colombia ProtestsProtesting Colombia's presidenthttps://oversightboard.com/decision/FB-E5M6QZGA/A video of protesters in Colombia criticising President Duque was removed.A post was shared that depicted a protest in Colombia. People can be heard chanting criticisms of the Colombian president, Ivan Duque.Hate SpeechMay20215FB translated phrases in the video as "son of a bitch" and "stop being the fag on TV." It stated this violated the hate speech CS, which does not allow content that "describes or negatively targets people with slurs"overturnedArgued that the newsworthiness allowance permits this content, protecting a platform of expression for protestorspost was restored18The Oversight Board has overturned Facebook's decision to remove a post showing a video of protesters in Colombia criticising the country's president, Ivan Duque. In the video, the protesters use a word designated as a slur under Facebook's Hate Speech Community Standard. Assessing the public interest value of this content, the Board found that Facebook should have applied the newsworthiness allowance in this case.

About the case

In May 2021, the Facebook Page of a regional news outlet in Colombia shared a post by another Facebook Page without adding any additional caption. This shared post is the content at issue in this case. The original root post contains a short video showing a protest in Colombia with people marching behind a banner that says "SOS COLOMBIA".

The protesters are singing in Spanish and address the Colombian president, mentioning the tax reform recently proposed by the Colombian government. As part of their chant, the protesters call the president "hijo de puta" once and say "deja de hacerte el marica en la tv" once. Facebook translated these phrases as "son of a bitch" and "stop being the fag on TV". The video is accompanied by text in Spanish expressing admiration for the protesters. The shared post was viewed around 19,000 times, with fewer than five users reporting it to Facebook.
Hate Speech75https://oversightboard.com/decision/FB-E5M6QZGA/18May 2021
66
Wampum BeltThe Wampum Belthttps://oversightboard.com/decision/FB-L1LANIA7/A post of a wampum belt referencing unmarked graves was removed.A post of a picture of a wampum belt that included a series of depictions inspired by "the Kamloops story," a reference to a discovery of unmarked graves at a former residential school for Indigenous children in BC, Canada.Hate SpeechAugust20218automated systems and a human reviewer removed the content for violating FB's hate speech CS due to the language used to describe the eventoverturnedthe content is an example of "counter speech," where hate speech is referenced to resist oppression / discriminationpost was restored8The Oversight Board has overturned Meta's original decision to remove a Facebook post from an Indigenous North American artist that was removed under Facebook's Hate Speech Community Standard. The Board found that the content is covered by allowances to the Hate Speech policy as it is intended to raise awareness of historic crimes against Indigenous people in North America.

About the case

In August 2021, a Facebook user posted a picture of a wampum belt, along with an accompanying text description in English. A wampum belt is a North American Indigenous art form in which shells are woven together to form images, recording stories and agreements. This belt includes a series of depictions which the user says were inspired by "the Kamloops story", a reference to the May 2021 discovery of unmarked graves at a former residential school for Indigenous children in British Columbia, Canada.

The text provides the artwork's title, "Kill the Indian/ Save the Man", and identifies the user as its creator. The user describes the series of images depicted on the belt: "Theft of the Innocent, Evil Posing as Saviours, Residential School / Concentration Camp, Waiting for Discovery, Bring Our Children Home". In the post, the user describes the meaning of their artwork as well as the history of wampum belts and their purpose as a means of education. The user states that the belt was not easy to create and that it was emotional to tell the story of what happened at Kamloops. They apologise for any pain the art causes survivors of Kamloops, noting that their "sole purpose is to bring awareness to this horrific story".

Meta's automated systems identified the content as potentially violating Facebook's Hate Speech Community Standard the day after it was posted. A human reviewer assessed the content as violating and removed it that same day. The user appealed against that decision to Meta, prompting a second human review, which also assessed the content as violating. At the time of removal, the content had been viewed over 4,000 times and shared over 50 times. No users reported the content.

As a result of the Board selecting this case, Meta identified its removal as an "enforcement error" and restored the content on 27 August. However, Meta did not notify the user of the restoration until 30 September – two days after the Board asked Meta for the contents of its messaging to the user. Meta explained that the late messaging was a result of human error.
Hate Speech50https://oversightboard.com/decision/FB-L1LANIA7/17August 2021
67
Ayahuasca BrewDiscussing Ayahuascahttps://oversightboard.com/decision/IG-0U6FLA5B/An Instagram post discussing the benefits of ayahuasca as "medicine" was removed.An account on Instagram posted a photo of a dark brown liquid in a jar that was described as ayahuasca, with a caption describing its benefits.Regulated GoodsJuly20217Removed the post because it encouraged the use of ayahuasca. It was described with a heart emoji, referred to as "medicine" and stated "it can help you."overturneddid not violate Insta community guidelines – they only cover the sale / purchase of illegal / prescription drugspost was restoredMeta trying to apply FB CS to Instagram without transparently telling users it is doing so7The Oversight Board has overturned Meta's decision to remove a post discussing the plant-based brew ayahuasca. The Board found that the post did not violate Instagram's Community Guidelines as they were articulated at the time. Meta's human rights responsibilities also supported restoring the content. The Board recommended that Meta change its rules to allow users to discuss the traditional or religious uses of non-medical drugs in a positive way.

About the case

In July 2021, an Instagram account for a spiritual school based in Brazil posted a picture of a dark brown liquid in a jar and two bottles, described as ayahuasca in the accompanying text in Portuguese. Ayahuasca is a plant-based brew with psychoactive properties that has religious and ceremonial uses including among Indigenous groups in South America. The text states that "AYAHUASCA IS FOR THOSE WHO HAVE THE COURAGE TO FACE THEMSELVES" and includes statements that ayahuasca is for those who want to "correct themselves", "enlighten", "overcome fear" and "break free".

The post was flagged for review by Meta's automated systems because it had received around 4,000 views and was "trending". It was then reviewed by a human moderator and removed.
Regulated Goods75https://oversightboard.com/decision/IG-0U6FLA5B/16July 2021
68
Asking for AdderallHow to get Adderall from a doctor?https://oversightboard.com/decision/FB-Q72FD6YL/A post asking for advice on talking to a doctor about Adderall was removed.A post in a private group for adults with ADHD asked how to approach talking to a doctor about specific medications. Comments from group members provided advice on how to explain the situation to a doctor.Regulated GoodsJune20216Removed the post because it violated FB's restricted goods and services CS.overturnedthe CS does not prohibit content that seeks advice on pharmaceutical drugs in the context of medical conditions – there was no direct connection between the content and the possibility of harmpost was restoredMeta's removal of the post generated a strike against the user, resulting in the account being restricted for 30 days. The restriction was not reversed before the end of the period.16The Oversight Board has overturned Meta's original decision to remove a Facebook post that asked for advice on how to talk to a doctor about the prescription medication Adderall®. The Board did not find any direct or immediate connection between the content and the possibility of harm.

About the case

In June 2021, a Facebook user in the United States posted in a private group that claims to be for adults with attention deficit hyperactivity disorder (ADHD). The user identifies themselves as someone with ADHD and asks the group how to approach talking to a doctor about specific medication. The user states that they were given a Xanax prescription but that the medication Adderall has worked for them in the past, while other medications "zombie me out". They are concerned about presenting as someone with drug-seeking behaviour if they directly ask their doctor for a prescription. The post had comments from group members providing advice on how to explain the situation to a doctor.

In August 2021, Meta removed the content under Facebook's Restricted Goods and Services Community Standard. Following the removal, Meta restricted the user's account for 30 days. As a result of the Board selecting this case, Meta identified its removal as an "enforcement error" and restored the content.
Regulated Goods50https://oversightboard.com/decision/FB-Q72FD6YL/14June 2021
69
Swedish Journalist Reporting Sexual Violence Against MinorsSwedish leniency towards sex crimeshttps://oversightboard.com/decision/FB-P9PR9RSA/A post describing incidents of sexual violence against minors was removed.A user made a post with a stock image of a girl whose face is obscured, alongside text describing incidents of sexual violence against two minors.Nudity & sexual activitySept20219Removed the post under rules on child sexual exploitation, abuse and nudity.overturnedthe post gives a precise and clinical description of the aftermath of the rape; it does not constitute language that sexually exploits children or depicts a minor in a 'sexualised context' – Meta also removed the post without giving an adequate reason whypost was restored8The Oversight Board has overturned Meta's decision to remove a post describing incidents of sexual violence against two minors. The Board found that the post did not violate the Community Standard on child sexual exploitation, abuse and nudity. The broader context of the post makes it clear that the user was reporting on an issue of public interest and condemning the sexual exploitation of a minor.

About the case

In August 2019, a user in Sweden posted on their Facebook Page a stock photo of a young girl sitting down with her head in her hands in a way that obscures her face. The photo has a caption in Swedish describing incidents of sexual violence against two minors. The post contains details about the rapes of two unnamed minors, specifying their ages and the municipality in which the first crime occurred. The user also details the convictions that the two unnamed perpetrators received for their crimes.

The post argues that the Swedish criminal justice system is too lenient and incentivises crimes. The user advocates for the establishment of a sex offenders register in the country. They also provide sources in the comments section of the post, identifying the criminal cases by court reference numbers and linking to coverage of the crimes by local media.

The post provides graphic details of the harmful impact of the crime on the first victim. It also includes quotes attributed to the perpetrator reportedly bragging to friends about the rape and referring to the minor in sexually explicit terms. While the user posted the content to Facebook in August 2019, Meta removed it two years later, in September 2021, under its rules on child sexual exploitation, abuse and nudity.
Adult nudity and sexual activity50https://oversightboard.com/decision/FB-P9PR9RSA/13Sept 2021
70
Reclaiming Arabic WordsArabic slurs towards "effeminate" menhttps://oversightboard.com/decision/IG-2PJ00L4T/An Instagram post showing Arabic words used derogatorily was removed.A series of photos was posted on Instagram with a caption that explained each photo with a derogatory term that could be used against men with "effeminate mannerisms."Hate SpeechNov202111removed the content for violating the hate speech policy because of terms like "zamel," "foufou," and "tante/tanta." It was restored and then removed again.overturnedthe post contains slurs, but they are being "used self-referentially or in an empowering way." The user did not condone or encourage the use of the slur, but instead intended to reclaim the power of the slur.post was restoredThe Board also believes that to formulate nuanced lists of slur terms and give moderators proper guidance on applying exceptions to its Slurs Policy, Meta must regularly seek input from minorities targeted with slurs on a country and culture-specific level3The Oversight Board has overturned Meta's original decision to remove an Instagram post which, according to the user, showed pictures of Arabic words that can be used in a derogatory way towards men with "effeminate mannerisms". The content was covered by an exception to Meta's Hate Speech Policy and should not have been removed.

About the case

In November 2021, a public Instagram account that describes itself as a space for discussing queer narratives in Arabic culture posted a series of pictures in a carousel (a single Instagram post that can contain up to 10 images with a single caption). The caption, written in both Arabic and English, explained that each picture shows a different word that can be used in a derogatory way towards men with "effeminate mannerisms" in the Arabic-speaking world, including the terms "zamel", "foufou" and "tante/tanta". The user stated that the post intended "to reclaim [the] power of such hurtful terms".

Meta initially removed the content for violating its Hate Speech Policy, but restored it after the user appealed. After being reported by another user, Meta then removed the content again for violating its Hate Speech Policy. According to Meta, before the Board selected this case, the content was escalated for additional internal review, which determined that it did not, in fact, violate the company's Hate Speech Policy. Meta then restored the content to Instagram. Meta explained that its initial decisions to remove the content were based on reviews of the pictures containing the terms "z***l" and "t***e/t***a".
Hate Speech50https://oversightboard.com/decision/IG-2PJ00L4T/11Nov 2021
71
Myanmar Post About MuslimsSomething wrong with Muslimshttps://oversightboard.com/decision/FB-I2T6526K/A post claiming something is psychologically wrong with Muslims was removed.A person in Myanmar posted in Burmese in a FB group, including photos of a Syrian toddler of Kurdish ethnicity who drowned. The post stated that something was wrong psychologically with Muslims and implied the child might have grown up to be an extremist.Hate SpeechOct202010Removed because it violated the hate speech Community Standard, which prohibits generalised statements of inferiority about the mental deficiencies of a group on the basis of their religion.overturnedThe Board said the post should be read as a whole; presented with the full context, it recognised that, while hate speech against Muslims is common and severe in Myanmar, statements about them being mentally unwell are not a strong part of that rhetoric.post was restored with a warning screen under the Violent and Graphic Content Community StandardConsidering international human rights standards on limiting freedom of expression, the Board found that, while the post might be considered pejorative or offensive towards Muslims, it did not advocate hatred or intentionally incite any form of imminent harm. As such, the Board does not consider its removal to be necessary to protect the rights of others.11The Oversight Board has overturned Facebook's decision to remove a post under its hate speech Community Standard. The Board found that, while the post might be considered offensive, it did not reach the level of hate speech.

About the case

On 29 October 2020, a user in Myanmar posted in a Facebook group in Burmese. The post included two widely shared photographs of a Syrian toddler of Kurdish ethnicity who drowned attempting to reach Europe in September 2015.

The accompanying text stated that there is something wrong with Muslims (or Muslim men) psychologically or with their mindset. It questioned the lack of response by Muslims generally to the treatment of Uyghur Muslims in China, compared to killings in response to cartoon depictions of the Prophet Muhammad in France. The post concludes that recent events in France reduce the user's sympathies for the depicted child, and seems to imply the child may have grown up to be an extremist.

Facebook removed this content under its hate speech Community Standard.
Hate Speech75https://oversightboard.com/decision/FB-I2T6526K/7Oct 2020
72
Breast Cancer Symptoms and NudityBreast Cancer Survivors & Nudityhttps://oversightboard.com/decision/IG-7THR3SI1/An Instagram post raising breast cancer awareness with photos of female nipples and breasts was auto-removed by moderation bots.A Brazilian user posted a picture to Instagram to raise awareness about breast cancer, showing several photographs of breast cancer symptoms. Some of the images showed female nipples; three showed breasts without visible nipples. The post was removed by FB bots.Nudity & sexual activityOct202010Bots originally removed the post, but the company eventually restored it. The detection and removal of the post was completely automated.overturnedThe Board blamed a lack of human oversight, as the FB automated systems failed to recognise the words "breast cancer." The community guidelines allow nudity when the user is raising awareness, or for medical or educational purposes.post was restoredThe Board recommends that Facebook:

Revise the "short" explanation of the Instagram Community Guidelines to clarify that the ban on adult nudity is not absolute.
Revise the "long" explanation of the Instagram Community Guidelines to clarify that visible female nipples can be shown to raise breast cancer awareness.
Clarify that the Instagram Community Guidelines are interpreted in line with the Facebook Community Standards, and where there are inconsistencies the latter take precedence.
24The Oversight Board has overturned Facebook's decision to remove a post on Instagram. After the Board selected this case, Facebook restored the content. Facebook's automated systems originally removed the post for violating the company's Community Standard on adult nudity and sexual activity. The Board found that the post was allowed under a policy exception for "breast cancer awareness" and Facebook's automated moderation in this case raises important human rights concerns.

About the case

In October 2020, a user in Brazil posted a picture to Instagram with a title in Portuguese indicating that it was to raise awareness of signs of breast cancer. The image was pink, in line with "Pink October", an international campaign to raise awareness of this disease. Eight photographs within the picture showed breast cancer symptoms with corresponding descriptions. Five of them included visible and uncovered female nipples, while the remaining three photographs included female breasts, with the nipples either out of shot or covered by a hand. The post was removed by an automated system enforcing Facebook's Community Standard on adult nudity and sexual activity. After the Board selected the case, Facebook determined this was an error and restored the post.
Adult nudity and sexual activity50https://oversightboard.com/decision/IG-7THR3SI1/6Oct 2020
73
Goebbels quoteGoebbels' "quote"https://oversightboard.com/decision/FB-2RDRCAVQ/A post incorrectly attributing a quote to Nazi propaganda minister Joseph Goebbels, used to draw a comparison with Trump, was taken down.A user made a FB post that incorrectly attributed a quote to Joseph Goebbels, the Reich Minister of Propaganda in Nazi Germany. The quote claimed that arguments should appeal to emotions rather than to intellectuals. The intent was to draw a comparison between the quote's sentiment and Donald Trump's presidencyDangerous individuals / organisationsOct202010FB originally removed the post, but saw that it was made to draw comparisons to Trump, not to support Nazism. Its rules on "dangerous individuals" and speech were not specific enough to justify removing it.overturnedtheir policies state that sharing a quote attributed to a dangerous individual is treated as expressing support, unless the user makes their intent specific.post was restored12The Oversight Board has overturned Facebook's decision to remove a post which the company claims violated its Community Standard on dangerous individuals and organisations. The Board found that these rules were not made sufficiently clear to users.

About the case

In October 2020, a user posted a quote which was incorrectly attributed to Joseph Goebbels, the Reich Minister of Propaganda in Nazi Germany. The quote, in English, claimed that, rather than appealing to intellectuals, arguments should appeal to emotions and instincts. It stated that the truth does not matter and is subordinate to tactics and psychology. There were no pictures of Joseph Goebbels or Nazi symbols in the post. In their statement to the Board, the user said that their intent was to draw a comparison between the sentiment in the quote and the presidency of Donald Trump.

The user first posted the content two years earlier and was prompted to share it again by Facebook's "memory" function, which allows users to see what they posted on a specific day in a previous year, with the option of resharing the post.

Facebook removed the post for violating its Community Standard on dangerous individuals and organisations.
Dangerous Individuals and Organisations50https://oversightboard.com/decision/FB-2RDRCAVQ/5Oct 2020
74
Claimed COVID-19 Cureclaimed COVID-19 curehttps://oversightboard.com/decision/FB-XWJQBU9A/A post alleging a scandal at a French health agency regarding COVID treatments was removed.A user posted a video with a caption alleging a scandal at a French agency responsible for regulating health products, which refused to authorise hydroxychloroquine with azithromycin to treat COVID but authorised remdesivir.miscOct202010FB removed the post because it contained claims of a cure for COVID, which could lead people to ignore health guidance or self-medicate, citing its misinformation and imminent harm rules, part of its violence and incitement standard.overturnedThe Board used context to show the post did not rise to the level of imminent harm.post was restoredA patchwork of policies found on different parts of Facebook's website makes it difficult for users to understand what content is prohibited. Changes to Facebook's COVID-19 policies announced in the company's Newsroom have not always been reflected in its Community Standards, while some of these changes even appear to contradict them.8The Oversight Board has overturned Facebook's decision to remove a post which it claimed, "contributes to the risk of imminent… physical harm". The Board found Facebook's misinformation and imminent harm rule (part of its violence and incitement Community Standard) to be inappropriately vague and recommended, among other things, that the company create a new Community Standard on health misinformation.

About the case

In October 2020, a user posted a video and accompanying text in French in a public Facebook group related to COVID-19. The post alleged a scandal at the Agence Nationale de Sécurité du Médicament (the French agency responsible for regulating health products), which refused to authorise hydroxychloroquine combined with azithromycin for use against COVID-19, but authorised and promoted remdesivir. The user criticised the lack of a health strategy in France and stated that "[Didier] Raoult's cure" is being used elsewhere to save lives. The user's post also questioned what society had to lose by allowing doctors to prescribe in an emergency a "harmless drug" when the first symptoms of COVID-19 appear.

In its referral to the Board, Facebook cited this case as an example of the challenges of addressing the risk of offline harm that can be caused by misinformation about the COVID-19 pandemic.
Health, Misinformation, Safety50https://oversightboard.com/decision/FB-XWJQBU9A/4Oct 2020
75
Protest in India Against FranceMacron is the devilhttps://oversightboard.com/decision/FB-R9K87402/A meme with Hindi text read as a veiled threat of violence against those who insult the Prophet was removed.A user posted a meme from a Turkish television show with words in Hindi in an overlay. The text and imagery could be read as religious speech and a possible threat of violence.Violence / Incitement / Graphic ContentOct202010FB removed the post under its community standard stating that users should not threaten violence – the sword imagery was determined to be a veiled threat against "kafirs", a term itself interpreted as derogatory.overturnedThe Board was not convinced the post would cause harm, seeing it as a call to action (a boycott of French products)post was restoredThe Board explicitly stated that the decision to restore the post does not imply endorsement of the content.2The Oversight Board has overturned Facebook's decision to remove a post under its Community Standard on violence and incitement. While the company considered that the post contained a veiled threat, a majority of the Board believed that it should be restored. This decision should only be implemented pending user notification and consent.

About the case

In late October 2020, a Facebook user posted in a public group described as a forum for Indian Muslims. The post contained a meme featuring an image from the Turkish television show "Diriliş: Ertuğrul", depicting one of the show's characters in leather armour holding a sheathed sword. The meme had a text overlay in Hindi. Facebook's translation of the text into English reads: "if the tongue of the kafir starts against the Prophet, then the sword should be taken out of the sheath." The post also included hashtags referring to President Emmanuel Macron of France as the devil and calling for the boycott of French products.

In its referral, Facebook noted that this content highlighted the tension between what it considered religious speech and a possible threat of violence, even if not made explicit.
Violence and Incitement50https://oversightboard.com/decision/FB-R9K87402/3Oct 2020
76
Punjabi Concern Over the RSS in IndiaPunjabi concern over the RSS in Indiahttps://oversightboard.com/decision/FB-H6OZKDS3/A video accusing Prime Minister Modi of threatening Sikhs was removed.A user shared a video of a social activist and supporter of Punjabi culture, with a caption accusing the RSS of threatening to kill Sikhs in India with the help of PM Modi.Dangerous individuals / organisationsNov202011A human reviewer at FB determined it violated FB's dangerous individuals and orgs community standard and removed it. Before being reviewed by the Board, FB restored the content because none of the groups mentioned are "dangerous".overturnedThe decision to remove was not consistent with the company's community standards or human rights responsibilitiespost was restoredConsidering the above, the Board found the account restrictions that excluded the user from Facebook particularly disproportionate. It also expressed concerns that Facebook's rules on such restrictions are spread across many locations and not all found in the Community Standards, as one would expect.6The Oversight Board has overturned Facebook's decision to remove a post under its Dangerous Individuals and Organisations Community Standard. After the Board identified this case for review, Facebook restored the content. The Board expressed concerns that Facebook did not review the user's appeal against its original decision. The Board also urged the company to take action to avoid mistakes that silence the voices of religious minorities.

About the case

In November 2020, a user shared a video post from Punjabi-language online media company Global Punjab TV. This featured a 17-minute interview with Professor Manjit Singh who is described as "a social activist and supporter of the Punjabi culture." The post also included a caption mentioning Hindu nationalist organisation Rashtriya Swayamsevak Sangh (RSS) and India's ruling party Bharatiya Janata Party (BJP): "RSS is the new threat. Ram Naam Satya Hai. The BJP moved towards extremism."

In text accompanying the post, the user claimed that the RSS was threatening to kill Sikhs, a minority religious group in India, and to repeat the "deadly saga" of 1984 when Hindu mobs massacred and burned Sikh men, women and children. The user alleged that Prime Minister Modi himself is formulating the threat of "Genocide of the Sikhs" on advice of the RSS President, Mohan Bhagwat. The user also claimed that Sikh regiments in the army have warned Prime Minister Modi of their willingness to die to protect the Sikh farmers and their land in Punjab.

After being reported by one user, a human reviewer determined that the post violated Facebook's Dangerous Individuals and Organisations Community Standard and removed it. This triggered an automatic restriction on the user's account. Facebook told the user that they could not review their appeal of the removal because of a temporary reduction in review capacity due to COVID-19.
Dangerous Individuals and Organisations50https://oversightboard.com/decision/FB-H6OZKDS3/2Nov 2020
77
"Two Buttons" MemeThe "two buttons" memehttps://oversightboard.com/decision/FB-RZL57QHJ/A comment characterizing Armenians as terrorists in relation to Turkey was removed.A user posted a comment on FB about a meme that depicted Turkey, the Armenian genocide, and characterising Armenians as terrorists.miscDec202012The phrase "The Armenians were terrorists and deserved it" claims that Armenians were criminals based on nationality & ethnicity. This violated hate speech CS. Was not covered by exception that allows hateful content to condemn / raise awareness.overturnedthe 'two buttons' meme implies contrasting two options to show contradictions, not support for them – posted meme to raise awareness of / condemn Turkish government efforts to deny genocidepost and comments were restored23The Oversight Board has overturned Facebook's decision to remove a comment under its Hate Speech Community Standard. A majority of the Board found that it fell into Facebook's exception for content condemning or raising awareness of hatred.

About the case

On 24 December 2020, a Facebook user in the United States posted a comment with an adaptation of the 'daily struggle' or 'two buttons' meme. This featured the split-screen cartoon from the original 'two buttons' meme, but with a Turkish flag substituted for the cartoon character's face. The cartoon character has its right hand on its head and appears to be sweating. Above the character, in the other half of the split screen, are two red buttons with corresponding statements in English: "The Armenian Genocide is a lie" and "The Armenians were terrorists that deserved it".

While one content moderator found that the meme violated Facebook's Hate Speech Community Standard, another found that it violated its Cruel and Insensitive Community Standard. Facebook removed the comment under the Cruel and Insensitive Community Standard and informed the user of this.

After the user's appeal, however, Facebook found that the content should have been removed under its Hate Speech Community Standard. The company did not tell the user that it upheld its decision under a different Community Standard.
Cruel and Insensitive100https://oversightboard.com/decision/FB-RZL57QHJ/1Dec 2020
78
Sharing Private Residential InformationSharing private residential informationhttps://oversightboard.com/decision/PAO-2021-01/Meta asked the Board for policy advice on the sharing of private residential addresses and imagesMeta asked the Board for policy advice on the sharing of private residential addresses and imagesmiscFeb20222found this difficult because, while access to such information can be relevant to journalism / civic activism, "exposing this information without consent can create a risk to residents' safety and infringe on an individual's privacy."mixedThe Board recommends that Meta remove the exception to the Privacy Violations CS that allows the sharing of private residential information when it is considered 'publicly available.'Once this information has been shared, the harms that can result, such as doxing, are difficult to remedy. Harms resulting from doxing disproportionately affect groups such as women, children and LGBTQIA+ people, and can include emotional distress, loss of employment and even physical harm or death.

As the potential for harm is particularly context specific, it is challenging to develop objective and universal indicators that would allow content reviewers to distinguish the sharing of content that would be harmful from shares that would not be. That is why the Board believes that the Privacy Violations policy should be more protective of privacy.
Last year, Meta requested a policy advisory opinion from the Board on the sharing of private residential addresses and images, and the contexts in which this information may be published on Facebook and Instagram. Meta considers this to be a difficult question as while access to such information can be relevant to journalism and civic activism, "exposing this information without consent can create a risk to residents' safety and infringe on an individual's privacy".

Meta's request noted several potential harms linked to releasing personal information, including residential addresses and images. These include "doxing" (which refers to the release of documents, abbreviated as "dox"), where information that can identify someone is revealed online. Meta noted that doxing can have negative real-world consequences, such as harassment or stalking.
Policy Advisory – Journalism, Marginalised Communities100https://oversightboard.com/decision/PAO-2021-01/27Feb 2022
79
Meta's Cross-Check ProgrammeMeta's own 'Cross-Check' programmehttps://oversightboard.com/decision/PAO-NR730OFI/The cross-check program adds layers of human review when checking content for policy violations. This has come with some drawbacks.This opinion analyses Meta's cross-check program, raising important questions around how Meta treats its most powerful users.miscOct202110Meta performs about 100 million enforcement attempts each day. The cross-check program adds layers of human review when checking content for policy violations. This has come with some drawbacks.mixedaccuses the cross-check programme of serving business interests rather than protecting human rights, unfairly providing extra protection to certain users selected according to those interests. The Board made recommendations to Meta on how to improve.In October 2021, following disclosures about Meta's cross-check programme in the Wall Street Journal, the Oversight Board accepted a request from the company to review cross-check and make recommendations for how it could be improved. This policy advisory opinion is our response to this request. It analyses cross-check in light of Meta's human rights commitments and stated values, raising important questions around how Meta treats its most powerful users.

As the Board began to study this policy advisory opinion, Meta shared that, at the time, it was performing about 100 million enforcement attempts on content every day. At this volume, even if Meta were able to make content decisions with 99% accuracy, it would still make one million mistakes a day. In this respect, while a content review system should treat all users fairly, the cross-check programme responds to broader challenges in moderating immense volumes of content.
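For illustration, here is a minimal back-of-the-envelope calculation of the scale problem described above, written as a short Python sketch. The daily volume is the figure Meta cited; the 99% accuracy rate is the illustrative assumption used in the paragraph, not a measured error rate.

# Rough arithmetic behind the "one million mistakes a day" figure.
# 100 million daily enforcement attempts is the figure Meta cited;
# 99% accuracy is an illustrative assumption, not a measured rate.
enforcement_attempts_per_day = 100_000_000
assumed_accuracy = 0.99

mistakes_per_day = enforcement_attempts_per_day * (1 - assumed_accuracy)
print(f"Estimated mistaken decisions per day: {mistakes_per_day:,.0f}")
# Prints: Estimated mistaken decisions per day: 1,000,000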

According to Meta, making decisions about content at this scale means that it sometimes mistakenly removes content that does not violate its policies. The cross-check programme aims to address this by providing additional layers of human review for certain posts initially identified as breaking its rules. When users on Meta's cross-check lists post such content, it is not immediately removed as it would be for most people, but is left up, pending further human review. Meta refers to this type of cross-check as "Early Response Secondary Review" (ERSR). In late 2021, Meta broadened cross-check to include certain posts flagged for further review based on the content itself, rather than the identity of the person who posted it. Meta refers to this type of cross-check as "General Secondary Review" (GSR).
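The routing described above can be summarised in a simplified, hypothetical Python sketch. All names, fields and the list contents here are illustrative assumptions for clarity only; they do not reflect Meta's actual systems or code.

from dataclasses import dataclass

@dataclass
class Post:
    author_id: str
    flagged_as_violating: bool   # outcome of the initial (often automated) review
    meets_gsr_criteria: bool     # content-based criteria for General Secondary Review

# Hypothetical set of accounts enrolled in Early Response Secondary Review (ERSR)
ERSR_LIST = {"example_high_profile_account"}

def route_enforcement(post: Post) -> str:
    """Return the next step for a post, per the cross-check flow described above."""
    if not post.flagged_as_violating:
        return "leave up"
    if post.author_id in ERSR_LIST:
        # ERSR: content from listed users stays up pending further human review
        return "leave up pending additional human review (ERSR)"
    if post.meets_gsr_criteria:
        # GSR: selected on the basis of the content itself, not the poster's identity
        return "queue for additional human review (GSR)"
    # Default path for most users: the initial decision is enforced immediately
    return "remove immediately"

The asymmetry the Board highlights sits in the default branch: for most users the initial decision is enforced at once, while listed users' content remains visible until the extra review completes.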

In our review, we found several shortcomings in Meta's cross-check programme. While Meta told the Board that cross-check aims to advance Meta's human rights commitments, we found that the programme appears more directly structured to satisfy business concerns. The Board understands that Meta is a business, but by providing extra protection to certain users selected largely according to business interests, cross-check allows content that would otherwise be removed quickly to remain up for a longer period, potentially causing harm. We also found that Meta has failed to track data on whether cross-check results in more accurate decisions, and we expressed concern about the lack of transparency around the programme.

In response, the Board made several recommendations to Meta. Any mistake prevention system should prioritise expression, which is important for human rights, including expression of public importance. As Meta moves towards improving its processes for all users, the company should take steps to mitigate the harm caused by content left up during additional review, and radically increase transparency around its systems.
Policy Advisory250https://oversightboard.com/decision/PAO-NR730OFI/10Oct 2021