
Facebook Knew About Abusive Content Globally but Failed to Police: Former Employees


Facebook employees have warned for years that as the company raced to become a global service it was failing to police abusive content in countries where such speech was likely to cause the most harm, according to interviews with five former employees and internal company documents viewed by Reuters.

For over a decade, Facebook has pushed to become the world’s dominant online platform. It currently operates in more than 190 countries and boasts more than 2.8 billion monthly users who post content in more than 160 languages. But its efforts to prevent its products from becoming conduits for hate speech, inflammatory rhetoric and misinformation – some of which has been blamed for inciting violence – have not kept pace with its global expansion.

Internal company documents viewed by Reuters show Facebook has known that it hasn’t hired enough workers who possess both the language skills and knowledge of local events needed to identify objectionable posts from users in a number of developing countries. The documents also showed that the artificial intelligence systems Facebook employs to root out such content frequently aren’t up to the task, either; and that the company hasn’t made it easy for its global users themselves to flag posts that violate the site’s rules.

Those shortcomings, employees warned in the documents, could limit the company’s ability to make good on its promise to block hate speech and other rule-breaking posts in places from Afghanistan to Yemen.

In a review posted to Facebook’s internal message board last year regarding ways the company identifies abuses on its site, one employee reported “significant gaps” in certain countries at risk of real-world violence, especially Myanmar and Ethiopia.

The documents are among a cache of disclosures made to the US Securities and Exchange Commission and Congress by Facebook whistleblower Frances Haugen, a former Facebook product manager who left the company in May. Reuters was among a group of news organisations able to view the documents, which include presentations, reports, and posts shared on the company’s internal message board. Their existence was first reported by The Wall Street Journal.

Facebook spokesperson Mavis Jones said in a statement that the company has native speakers worldwide reviewing content in more than 70 languages, as well as experts in humanitarian and human rights issues. She said these teams are working to stop abuse on Facebook’s platform in places where there is a heightened risk of conflict and violence.

“We know these challenges are real and we are proud of the work we’ve done to date,” Jones said.

Still, the cache of internal Facebook documents offers detailed snapshots of how employees in recent years have sounded alarms about problems with the company’s tools – both human and technological – aimed at rooting out or blocking speech that violated its own standards. The material expands upon Reuters’ previous reporting on Myanmar and other countries, where the world’s largest social network has failed repeatedly to protect users from problems on its own platform and has struggled to monitor content across languages.

Among the weaknesses cited were a lack of screening algorithms for languages used in some of the countries Facebook has deemed most “at-risk” for potential real-world harm and violence stemming from abuses on its site.

The company designates countries “at-risk” based on variables including unrest, ethnic violence, the number of users and existing laws, two former staffers told Reuters. The system aims to steer resources to places where abuses on its site could have the most severe impact, the people said.

Facebook reviews and prioritises these countries every six months in line with United Nations guidelines aimed at helping companies prevent and remedy human rights abuses in their business operations, spokesperson Jones said.

In 2018, United Nations experts investigating a brutal campaign of killings and expulsions against Myanmar’s Rohingya Muslim minority said Facebook was widely used to spread hate speech toward them. That prompted the company to increase its staffing in vulnerable countries, a former employee told Reuters. Facebook has said it should have done more to prevent the platform being used to incite offline violence in the country.

Ashraf Zeitoon, Facebook’s former head of policy for the Middle East and North Africa, who left in 2017, said the company’s approach to global growth has been “colonial,” focused on monetisation without safety measures.

More than 90 percent of Facebook’s monthly active users are outside the United States or Canada.

Language issues

Facebook has long touted the importance of its artificial-intelligence (AI) systems, in combination with human review, as a way of tackling objectionable and dangerous content on its platforms. Machine-learning systems can detect such content with varying levels of accuracy.

But languages spoken outside the United States, Canada and Europe have been a stumbling block for Facebook’s automated content moderation, the documents provided to the government by Haugen show. The company lacks AI systems to detect abusive posts in a number of languages used on its platform. In 2020, for example, the company did not have screening algorithms known as “classifiers” to find misinformation in Burmese, the language of Myanmar, or hate speech in the Ethiopian languages of Oromo or Amharic, a document showed.

These gaps can allow abusive posts to proliferate in the countries where Facebook itself has determined the risk of real-world harm is high.

Reuters this month found posts in Amharic, one of Ethiopia’s most common languages, referring to different ethnic groups as the enemy and issuing them death threats. A nearly year-long conflict in the country between the Ethiopian government and rebel forces in the Tigray region has killed thousands of people and displaced more than 2 million.

Facebook spokesperson Jones said the company now has proactive technology to detect hate speech in Oromo and Amharic and has hired more people with “language, country and topic expertise,” including people who have worked in Myanmar and Ethiopia.

In an undated document, which a person familiar with the disclosures said was from 2021, Facebook employees also shared examples of “fear-mongering, anti-Muslim narratives” spread on the site in India, including calls to oust the large minority Muslim population there. “Our lack of Hindi and Bengali classifiers means much of this content is never flagged or actioned,” the document said. Internal posts and comments by employees this year also noted the lack of classifiers in the Urdu and Pashto languages to screen problematic content posted by users in Pakistan, Iran and Afghanistan.

Jones said Facebook added hate speech classifiers for Hindi in 2018 and Bengali in 2020, and classifiers for violence and incitement in Hindi and Bengali this year. She said Facebook also now has hate speech classifiers in Urdu but not Pashto.

Facebook’s human review of posts, which is crucial for nuanced problems like hate speech, also has gaps across key languages, the documents show. An undated document laid out how its content moderation operation struggled with Arabic-language dialects of multiple “at-risk” countries, leaving it constantly “playing catch up.” The document acknowledged that, even within its Arabic-speaking reviewers, “Yemeni, Libyan, Saudi Arabian (really all Gulf nations) are either missing or have very low representation.”

Facebook’s Jones acknowledged that Arabic language content moderation “presents an enormous set of challenges.” She said Facebook has made investments in staff over the last two years but recognises “we still have more work to do.”

Three former Facebook employees who worked for the company’s Asia Pacific and Middle East and North Africa offices in the past five years told Reuters they believed content moderation in their regions had not been a priority for Facebook management. These people said leadership did not understand the issues and did not devote enough staff and resources.

Facebook’s Jones said the California company cracks down on abuse by users outside the United States with the same intensity applied domestically.

The company said it uses AI proactively to identify hate speech in more than 50 languages. Facebook said it bases its decisions on where to deploy AI on the size of the market and an assessment of the country’s risks. It declined to say in how many countries it did not have functioning hate speech classifiers.

Facebook also says it has 15,000 content moderators reviewing material from its global users. “Adding more language expertise has been a key focus for us,” Jones said.

In the past two years, it has hired people who can review content in Amharic, Oromo, Tigrinya, Somali, and Burmese, the company said, and this year added moderators in 12 new languages, including Haitian Creole.

Facebook declined to say whether it requires a minimum number of content moderators for any language offered on the platform.

Lost in translation

Facebook’s users are a powerful resource to identify content that violates the company’s standards. The company has built a system for them to do so, but has acknowledged that the process can be time consuming and expensive for users in countries without reliable Internet access. The reporting tool also has had bugs, design flaws and accessibility issues for some languages, according to the documents and digital rights activists who spoke with Reuters.

Next Billion Network, a group of tech civic society groups working mostly across Asia, the Middle East and Africa, said in recent years it had repeatedly flagged problems with the reporting system to Facebook management. Those included a technical defect that kept Facebook’s content review system from being able to see objectionable text accompanying videos and photos in some posts reported by users. That issue prevented serious violations, such as death threats in the text of these posts, from being properly assessed, the group and a former Facebook employee told Reuters. They said the issue was fixed in 2020.

Facebook said it continues to work to improve its reporting systems and takes feedback seriously.

Language coverage remains a problem. A Facebook presentation from January, included in the documents, concluded “there is a huge gap in the Hate Speech reporting process in local languages” for users in Afghanistan. The recent pullout of US troops there after two decades has ignited an internal power struggle in the country. So-called “community standards” – the rules that govern what users can post – are also not available in Afghanistan’s main languages of Pashto and Dari, the author of the presentation said.

A Reuters review this month found that community standards weren’t available in about half of the more than 110 languages that Facebook supports with features such as menus and prompts.

Facebook said it aims to have these rules available in 59 languages by the end of the year, and in another 20 languages by the end of 2022.

© Thomson Reuters 2021

What’s most interesting about Apple’s new MacBook Pros, M1 Pro and M1 Max silicon, AirPods (3rd Generation), and Apple Music Voice plan? We discuss this on Orbital, the Gadgets 360 podcast. Orbital is available on Spotify, Gaana, JioSaavn, Google Podcasts, Apple Podcasts, Amazon Music and wherever you get your podcasts.


Meta Begins Testing Super Live Streaming Platform With Creators: Report


Meta, the company that owns Facebook and Instagram, has been secretly testing Super, a live-streaming platform modelled after Twitch, according to a Business Insider report. The report said Meta had reached out to influencers to test the platform, and it included a deck of slides used to pitch the service to creators. Super has been used by only about a hundred creators so far, allows users to sign in with their Google account, and currently supports simulcasting to viral video platform TikTok Live.

A Meta representative said in a statement given to Business Insider that Super is a totally distinct product and not a part of its other platforms, like Facebook or Instagram.

Super’s website is currently accessible to all users, and the website’s footer states that the service is provided by “NPE Team from Meta.” The developer team at Meta known as NPE works on the release of new applications. There don’t seem to be any additional references to Meta on Super’s website.

Super has been discussed in news reports before, but the product appears to differ from the one described in a Bloomberg report in 2020. At the time, Super was promoted as a “Cameo-inspired tool” that would enable Facetime-style calls between famous people and their fans.

Some features, like the ability to take selfies with creators, appear to have been carried over. The platform seems to have changed course, though, becoming more of a Twitch rival for live streaming.

According to the pitch deck, Super will give creators a similar opportunity to monetize their streams as Twitch does. Viewers can donate to their favourite creator and purchase additional content through tiered subscriptions.

For the time being, creators will keep all of their earnings. The pitch deck also features a sponsorship programme where companies can pay to have their marketing materials heavily integrated into a creator’s Super stream.

Creators wouldn’t need a lot of technical or graphic design expertise to set up a well-designed livestream because Super appears to have integrated specific video layouts directly into its product. Additionally, there are pre-built features like trivia and giveaway modules that enable creators to quickly incorporate those activities into a stream.

Some influencers have received payments of up to $3,000 (roughly Rs. 2,40,000) to test out Super for 30 minutes, according to the report. According to another source who spoke with the outlet, there were also “paid incentives based on the performance of the live stream.”

Notably, Super does not appear to be integrated with Meta’s other products, like Instagram and Facebook. On Super’s website, users have only the choice to sign in with Google after clicking Login. TikTok is the only other platform mentioned in the website’s FAQ, in a section explaining how to simulcast a stream to TikTok Live.

Super is currently in early testing, according to Meta, and it is not yet known when it will be made available to the general public. For now, creators can register with an email address and request early access to the platform.




Elon Musk Challenges Twitter CEO Parag Agrawal to Public Debate Over Bot Users, Says Deal Could Move Ahead


Elon Musk said Saturday that his planned $44 billion (roughly Rs. 3.5 lakh crore) takeover of Twitter should move forward if the company can confirm some details about how it measures whether user accounts are ‘spam bots’ or real people.

The billionaire and Tesla CEO has been trying to back out of his April agreement to buy the social media company, leading Twitter to sue him last month to complete the acquisition. Musk countersued, accusing Twitter of misleading his team about the true size of its user base and other problems he said amounted to fraud and breach of contract.

Both sides are headed toward an October trial in a Delaware court.

“If Twitter simply provides their method of sampling 100 accounts and how they’re confirmed to be real, the deal should proceed on original terms,” Musk tweeted early Saturday. “However, if it turns out that their SEC filings are materially false, then it should not.”

Musk, who has more than 100 million Twitter followers, went on to challenge Twitter CEO Parag Agrawal to a “public debate about the Twitter bot percentage.”

The company has repeatedly disclosed to the Securities and Exchange Commission an estimate that fewer than 5 percent of user accounts are fake or spam, with a disclaimer that it could be higher. Musk waived his right to further due diligence when he signed the April merger agreement.

Twitter has argued in court that Musk is deliberately trying to tank the deal and using the bot question as an excuse because market conditions have deteriorated and the acquisition no longer serves his interests. In a court filing Thursday, it describes his counterclaims as an imagined story “contradicted by the evidence and common sense.”

“Musk invents representations Twitter never made and then tries to wield, selectively, the extensive confidential data Twitter provided him to conjure a breach of those purported representations,” company attorneys wrote.

While Musk has tried to keep the focus on bot disclosures, Twitter’s legal team has been digging for information about a host of tech investors and entrepreneurs connected to Musk in a wide-ranging subpoena that could net some of their private communications with the Tesla CEO.



Instagram Will Soon Test Tall Photos for Compatibility With Fullscreen Reels


Photo and video sharing platform Instagram might have halted its controversial redesign, but that doesn’t mean the company plans to stop focusing on full-screen content. During the weekly Ask Me Anything, CEO Adam Mosseri confirmed that Instagram will begin testing ultra-tall 9:16 photos “in a week or two.” “You can have tall videos, but you cannot have tall photos on Instagram. So, we thought maybe we should make sure that we treat both equally,” Mosseri said.

Currently, Instagram tops out at a roughly 4:5 aspect ratio when displaying vertical images, which are cropped accordingly. Introducing support for slimmer, taller 9:16 photos will let them fill the entire screen as users scroll through the app’s feed.

Instagram recently pulled its TikTok-like redesign, which several photographers had criticised for the way it forced all photos to display awkwardly in a 9:16 frame. The new feed also added gradient overlays to the bottom of posts so that text would be easier to read, but these clashed with the original appearance of photographers’ work.

During the course of Instagram’s shaky redesign test with users, Mosseri admitted more than once that the full-screen experience was less than ideal for photos. Now Instagram very much still intends to showcase that ultra-tall photo experience, but without mandating it across the board.


