Facebook’s AI treats Palestinian activists like it treats American Black activists. It blocks them.

Palestinian activists are fighting back against a history of takedowns with one-star reviews and ancient Arabic script

Just days after violent conflict erupted in Israel and the Palestinian territories, both Facebook and Twitter copped to major faux pas: The companies had wrongly blocked or restricted millions of mostly pro-Palestinian posts and accounts related to the crisis.

Activists around the world charged the companies with failing a critical test: whether their services would enable the world to watch an important global event unfold unfettered through the eyes of those affected.

The companies blamed the errors on glitches in artificial intelligence software.

In Twitter’s case, the company said its service mistakenly identified the rapid-fire tweeting during the confrontations as spam, resulting in hundreds of accounts being temporarily locked and the tweets not showing up in searches. Facebook-owned Instagram gave several explanations for its problems, including a software bug that temporarily blocked video-sharing, and said its hate speech detection software had misidentified a key hashtag as associated with a terrorist group.
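Neither failure requires exotic technology. The sketch below, a minimal velocity-based spam heuristic with invented thresholds rather than anything from Twitter's actual system, shows how an eyewitness live-tweeting a fast-moving event can look identical to a bot:

```python
from collections import deque

# A minimal velocity-based spam heuristic. The window and limit are
# invented for illustration; this is not Twitter's actual system.
WINDOW_SECONDS = 300   # sliding window length (assumed value)
MAX_POSTS = 20         # posts allowed per window (assumed value)

class VelocityFilter:
    def __init__(self):
        self.timestamps = deque()  # post times for one account, oldest first

    def is_spam(self, now: float) -> bool:
        """Record a post at time `now` and report whether it trips the filter."""
        self.timestamps.append(now)
        # Drop posts that have aged out of the sliding window.
        while now - self.timestamps[0] > WINDOW_SECONDS:
            self.timestamps.popleft()
        return len(self.timestamps) > MAX_POSTS

# An eyewitness tweeting every 10 seconds during a confrontation trips
# the filter within minutes, exactly as a spam bot would.
account = VelocityFilter()
flags = [account.is_spam(t) for t in range(0, 600, 10)]
print(f"flagged after {flags.index(True) * 10} seconds of live-tweeting")
```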

The companies said the problems were quickly resolved and the accounts restored. But some activists say many posts are still being censored. Experts in free speech and technology said that’s because the issues are connected to a broader problem: overzealous software algorithms that are designed to protect but end up wrongly penalizing marginalized groups that rely on social media to build support. Black Americans, for example, have complained for years that posts discussing race are incorrectly flagged as problematic by AI software on a routine basis, with little recourse for those affected.

Despite years of investment, many of the automated systems built by social media companies to stop spam, disinformation and terrorism are still not sophisticated enough to detect the difference between desirable forms of expression and harmful ones. They often overcorrect, as in the most recent errors during the Israeli-Palestinian conflict, or they under-enforce, allowing harmful misinformation and violent and hateful language to proliferate, including hoaxes about coronavirus vaccines and violent posts ahead of the U.S. Capitol insurrection on Jan. 6.
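That tension is, at bottom, a threshold choice. The toy example below, with invented scores and labels standing in for a real moderation model, shows how moving a single cutoff on a predicted-harm score trades wrongly removed posts against missed ones; no setting eliminates both:

```python
# Toy illustration of the enforcement trade-off described above.
# Scores and labels are invented; real moderation models are far larger,
# but they still reduce to a cutoff on a predicted-harm score.
posts = [
    # (model harm score, actually harmful?)
    (0.95, True),   # violent threat
    (0.80, True),   # terrorist propaganda
    (0.75, False),  # news report quoting a militant group
    (0.60, False),  # protest footage with charged language
    (0.40, True),   # coded incitement the model underrates
    (0.10, False),  # birthday post
]

def enforce(threshold: float):
    false_positives = sum(1 for s, harmful in posts if s >= threshold and not harmful)
    false_negatives = sum(1 for s, harmful in posts if s < threshold and harmful)
    return false_positives, false_negatives

for threshold in (0.9, 0.7, 0.5, 0.3):
    fp, fn = enforce(threshold)
    print(f"cutoff {threshold}: {fp} benign posts removed, {fn} harmful posts missed")

# The output shows the bind: a strict cutoff (0.9) misses coded incitement,
# while a loose one (0.3) sweeps up protest footage and news reporting.
```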

The Palestinian situation erupted into a full-blown public relations and internal crisis for Facebook. Last week, CEO Mark Zuckerberg dispatched the company’s top policy executive, Nick Clegg, to meet with Israeli and Palestinian leadership, according to the company. Meanwhile, Palestinians launched a campaign to knock down Facebook’s ranking in app stores by leaving one-star reviews. The incident was designated “severity 1,” the company’s term for a sitewide emergency, according to internal documents reviewed by The Washington Post and first reported by NBC. The documents noted that Facebook executives reached out to Apple, Google and Microsoft to request that the one-star reviews be removed.

Meanwhile, a group of 30 Facebook employees, some of whom said they had friends and family affected by the conflict, complained of “over-enforcement” of Palestinian content in an open letter on the company’s workforce messaging boards, according to another set of internal documents reviewed by The Post. The group has filed at least 80 tickets reporting “false positives” from the company’s automated systems in relation to the conflict, noting that many of the problems involved the AI mistakenly labeling images of protests as “harassment or bullying.”

Jillian York, a director at the Electronic Frontier Foundation, an advocacy group that opposes government surveillance, has researched tech company practices in the Middle East. She said she doesn’t believe that content moderation — human or algorithmic — can work at scale.

“Ultimately, what we’re seeing here is existing offline repression and inequality being replicated online, and Palestinians are left out of the policy conversation,” York said.

Facebook spokeswoman Dani Lever said the company’s “policies are designed to give everyone a voice while keeping them safe on our apps, and we apply these policies equally.” She added that Facebook has a dedicated team of Arabic and Hebrew speakers closely monitoring the situation on the ground, but declined to say whether any were Palestinian. In an Instagram post May 7, Facebook also gave an account of what it said led to the glitch.

Twitter spokeswoman Katie Rosborough said the enforcement actions were “more severe than intended under our policies” and that the company had reinstated the accounts where appropriate. “Defending and respecting the voices of the people who use our service is one of our core values at Twitter,” she said.

Palestinian activists took to the social media platforms as they began staging protests in late April ahead of an impending Israeli Supreme Court case over whether settlers had the right to evict families from their homes in the Jerusalem neighborhood of Sheikh Jarrah. Potential evictees live-streamed confrontations and shared footage of injuries after Israeli police stormed al-Aqsa Mosque, one of the holiest sites in Islam.

The conflict descended into war after terrorist group Hamas, which governs Gaza, fired explosive rockets into Israel. Israel responded with an 11-day bombing campaign that killed 254 Palestinians, including 66 children. Twelve people in Israel were killed, including two children.

During the barrage, Palestinians posted photos on Twitter showing homes covered in rubble and children’s coffins. A cease-fire took effect May 20.

Palestinian activists and experts who study social movements say it was another watershed moment in which social media helped alter the course of events. They compared it to a decade ago, when social media platforms were key to organizing the pro-democracy uprisings known as the Arab Spring. But at the time, tech companies relied on humans, not policing algorithms, to make moderation decisions. And while mistakes were made then, too, nothing occurred on today’s scale, York said.

Even after the companies said the glitches were fixed, 170 Instagram posts and five Twitter posts that activists believe were wrongly removed were still offline, according to 7amleh, the Arab Center for the Advancement of Social Media, a group that advocates for Palestinian digital rights. The group said in a report in late May that it was told by the companies that some of the remaining posts are under review.

Facebook declined to comment. Twitter’s Rosborough said she could not comment without seeing the tweets.

During the early protests in East Jerusalem, some posts on Facebook, Twitter and Instagram were taken down for using the hashtag #SaveSheikhJarrah, the name of the neighborhood in dispute, said Iyad Alrefaie, director of Sada Social, a group that tracks digital rights in the Palestinian territories.

Mariam Barghouti, a Palestinian American journalist who covers the West Bank for Al Jazeera and other outlets, posted on Instagram that Twitter had restricted her account for purportedly violating the company’s policies while she was covering a protest. She said in media interviews that she did not know which tweets broke the rules. The company later restored her account and tweets, saying it had made an error, according to spokeswoman Rosborough.

Digital rights groups Access Now, 7amleh and other organizations have spent the years since the Arab Spring documenting problems with how social media companies handle Palestinian content, as well as content from the region at large.

In 2016, Facebook blocked the accounts of several editors at two Palestinian news organizations without giving a reason, Al Jazeera reported at the time. After complaints, the social media company reversed the bans and said they had been accidental. In 2019, Twitter suspended accounts run by a Palestinian news organization, Quds News Network, in a sweep of terrorist accounts (which have since been reinstated, Twitter said). In May 2020, Facebook deactivated the accounts of more than 50 Palestinian journalists and activists without providing an explanation, activists said, including from journalists who posted footage of attacks by Israeli settlers on Palestinian farmers in occupied territories.

Facebook declined to comment on those examples.

Facebook took down a post from a father wishing his infant son, named Qassam, a happy birthday, according to Alrefaie, the director of Sada Social. The group assumed that it was because the company blocks many posts about al-Qassam Brigades, Hamas’s military wing.

“These words are part of our discourse, it’s a part of our culture,” Alrefaie said. “Facebook didn’t differentiate between any context.” Facebook declined to comment on that incident.

Marwa Fatafta, digital rights policy manager for the Middle East and North Africa region for Access Now, said other keywords, such as the term Zionist, are often banned when Palestinians use them because it’s assumed to be antisemitic.

“Under our current policies, we allow the term ‘Zionist’ in political discourse, but remove attacks against Zionists in specific circumstances, when there’s context to show it’s being used as a proxy for Jews or Israelis, which are protected characteristics under our hate speech policy,” Facebook’s Lever said.

Some activists have developed workarounds to the algorithms, including using an ancient method of writing Arabic, according to an article by independent Egyptian news website Mada Masr. Some U.S. activists use similar tactics, purposely misspelling common words like “white” to avoid algorithmic censorship during discussions of race, The Post has reported.
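The workaround exploits a basic weakness of keyword filtering: a blocklist fires only on the exact character strings it stores. A minimal sketch, using an invented blocklist, of why a variant spelling slips through:

```python
import re

# Minimal sketch of why spelling variants defeat keyword filters.
# The blocklist entries are invented for illustration.
BLOCKLIST = {"white", "zionist"}

def naive_filter(text: str) -> bool:
    """Flag a post if any blocklisted word appears verbatim."""
    words = re.findall(r"[a-z]+", text.lower())
    return any(word in BLOCKLIST for word in words)

print(naive_filter("discussion of white fragility"))   # True  -- flagged
print(naive_filter("discussion of wh1te fragility"))   # False -- slips through

# Writing Arabic in an older script works the same way at the Unicode
# level: the filter's stored strings simply never match the post's text.
```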

Activists have also decried tech companies’ relationship with the Israeli government, and in particular the Ministry of Justice’s Cyber Unit — which has a direct channel to technology companies to report potential content violations. They have asked tech companies to be transparent about when the government secretly refers accounts to be blocked or content to be removed, including whether the unit was involved in takedowns during the war.

Facebook, Google and Twitter all said they comply with local laws and regularly respond to takedown requests from governments, which they publish in biannual transparency reports. Twitter said the spam filter issue had nothing to do with Israeli authorities. Facebook did not respond to several requests for comment about the nature of reports by Israeli authorities during the recent crisis. A Google spokesman declined to say whether it received bulk requests from the Cyber Unit.

Journalists and activists have also complained that Google hasn’t updated its maps of Gaza with higher-resolution images, even though a U.S. law limiting the degree of detail in public maps of the area was lifted in 2020. Detailed maps help document the damage from airstrikes.

Google declined to comment on why the Gaza maps have not been updated.

Payment app Venmo also mistakenly suspended transactions of humanitarian aid to Palestinians during the war. The company said it was trying to comply with U.S. sanctions and had resolved the issues.

Tech companies are caught between governments trying to stop unrest or violence and activists advocating for free democratic expression, said James Grimmelmann, a law professor at Cornell Tech.

“So the platforms really have to make deeply political choices,” he said.

The latest issues began May 5, when Instagram started receiving reports that people participating in protests in Colombia could not share video, the company later said in a post in which it apologized for its errors. The next day, similar reports came from people participating in demonstrations in Canada and in East Jerusalem. Executives traced the problem to a glitch in a long-planned update to Stories, its video-sharing feature. In its apology, the company noted that the bug had nothing to do with these particular events and in fact had affected more users in the United States than elsewhere.

Several days later, citizens and activists began reporting that their posts about al-Aqsa Mosque, which used the hashtag #AlAqsa or its Arabic counterparts, were being restricted. The restrictions were often accompanied by a pop-up that said the term was associated with “violence or dangerous organizations.”

On May 11, a Facebook employee filed a grievance, according to a report by BuzzFeed. Facebook said in response that its systems had mistaken the name of the mosque for that of a designated terrorist organization. Facebook later told The Post that the hashtag had been restricted in several ways, including limiting people’s ability to search for it.

After this article was published, Facebook’s Lever added that human error had led to the restriction of the al-Aqsa Mosque hashtag.
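Facebook has not published its matching logic, so the rule below is purely an assumption, but a hypothetical sketch illustrates how easily a place name, or a baby’s name, can collide with a list of designated organizations when matching ignores context:

```python
import re

# Hypothetical sketch of a name collision with a dangerous-organizations
# list. Facebook has not published its matching logic; the substring rule
# and list entries here are assumptions for illustration only.
DANGEROUS_ORGS = {"Al-Aqsa Martyrs Brigades", "Al-Qassam Brigades"}

def canon(text: str) -> str:
    """Strip everything but letters so '#AlAqsa' and 'al-Aqsa' compare equal."""
    return re.sub(r"[^a-z]", "", text.lower())

def hits_org_list(term: str) -> bool:
    """Flag a term if it appears anywhere inside a listed org's name."""
    return any(canon(term) in canon(org) for org in DANGEROUS_ORGS)

print(hits_org_list("#AlAqsa"))        # True  -- the mosque's name is flagged
print(hits_org_list("Qassam"))         # True  -- so is the baby's name from earlier
print(hits_org_list("#SheikhJarrah"))  # False -- no overlap, passes
```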

In the days that followed, Palestinian activists and Facebook employees protested that many posts about the conflict were being taken down automatically.

Around the same time, Twitter began fielding reports that influential accounts tweeting about the conflict were being unexpectedly suspended because, the company said, AI had mistaken the posts for spam. The company said it restored the accounts a few hours later.

Twitter spokeswoman Rosborough noted that similar incidents of overly severe enforcement took place during the 2020 presidential debates and during protests against a coup this spring in Myanmar.

And sometimes, she pointed out, algorithms get things right: At one point during the conflict, an algorithm automatically restricted the Israeli army’s official account after it tried to post the same tweet, about emergency sirens going off in the southern city of Beersheba, twice in a row.

“We know it’s repetitive — but that’s the reality for Israelis all over the country,” the tweet said.

Source: Washington Post
