Meta Said to Curtail Election Misinformation Efforts as US Midterm Vote Approaches: Details

Facebook owner Meta is quietly curtailing some of the safeguards designed to thwart voting misinformation or foreign interference in US elections as the November midterm vote approaches.

It’s a sharp departure from the social media giant’s multibillion-dollar efforts to enhance the accuracy of posts about US elections and to regain trust from lawmakers and the public after outrage over revelations that the company had exploited people’s data and allowed falsehoods to overrun its site during the 2016 campaign.

The pivot is raising alarm about Meta’s priorities and about how some might exploit the world’s most popular social media platforms to spread misleading claims, launch fake accounts and rile up partisan extremists.

“They’re not talking about it,” said former Facebook policy director Katie Harbath, now the CEO of the tech and policy firm Anchor Change. “Best case scenario: They’re still doing a lot behind the scenes. Worst case scenario: They pull back, and we don’t know how that’s going to manifest itself for the midterms on the platforms.”

Over the past year, Meta has shut down an examination of how falsehoods are amplified in political ads on Facebook, indefinitely banning the researchers behind it from the site.

CrowdTangle, the online tool that the company offered to hundreds of newsrooms and researchers so they could identify trending posts and misinformation across Facebook or Instagram, is now inoperable on some days.

Public communication about the company’s response to election misinformation has gone decidedly quiet. Between 2018 and 2020, the company released more than 30 statements that laid out specifics about how it would stifle US election misinformation, prevent foreign adversaries from running ads or posts around the vote and subdue divisive hate speech.

Top executives hosted question and answer sessions with reporters about new policies. CEO Mark Zuckerberg wrote Facebook posts promising to take down false voting information and authored opinion articles calling for more regulations to tackle foreign interference in US elections via social media.

But this year Meta has released only a one-page document outlining plans for the fall elections, even as potential threats to the vote remain clear. Several Republican candidates are pushing false claims about the US election across social media. In addition, Russia and China continue to wage aggressive social media propaganda campaigns aimed at deepening political divides among American audiences.

Meta says that elections remain a priority and that policies developed in recent years around election misinformation or foreign interference are now hard-wired into company operations.

“With every election, we incorporate what we’ve learned into new processes and have established channels to share information with the government and our industry partners,” Meta spokesman Tom Reynolds said.

He declined to say how many employees would be on the project to protect US elections full time this year.

During the 2018 election cycle, the company offered tours and photos of its election response war room and provided head counts for it. But The New York Times reported that the number of Meta employees working on this year’s election had been cut from 300 to 60, a figure Meta disputes.

Reynolds said Meta will pull hundreds of employees who work across 40 of the company’s other teams to monitor the upcoming vote alongside the election team, with its unspecified number of workers.

The company is continuing many initiatives it developed to limit election misinformation, such as a fact-checking program started in 2016 that enlists the help of news outlets to investigate the veracity of popular falsehoods spreading on Facebook or Instagram. The Associated Press is part of Meta’s fact-checking program.

This month, Meta also rolled out a new feature for political ads that allows the public to search for details about how advertisers target people based on their interests across Facebook and Instagram.

Yet, Meta has stifled other efforts to identify election misinformation on its sites.

It has stopped making improvements to CrowdTangle, a website it offered to newsrooms around the world that provides insights about trending social media posts. Journalists, fact-checkers and researchers used the website to analyse Facebook content, including tracing popular misinformation and who is responsible for it.

That tool is now “dying,” former CrowdTangle CEO Brandon Silverman, who left Meta last year, told the Senate Judiciary Committee this spring.

Silverman told the AP that CrowdTangle had been working on upgrades that would make it easier to search the text of internet memes, which can often be used to spread half-truths and escape the oversight of fact-checkers, for example.

“There’s no real shortage of ways you can organise this data to make it useful for a lot of different parts of the fact-checking community, newsrooms and broader civil society,” Silverman said.

Not everyone at Meta agreed with that transparent approach, Silverman said. The company has not rolled out any new updates or features to CrowdTangle in more than a year, and it has experienced hourslong outages in recent months.

Meta also shut down efforts to investigate how misinformation travels through political ads.

The company indefinitely revoked Facebook access for a pair of New York University researchers who it said had collected unauthorised data from the platform. The move came hours after NYU professor Laura Edelson said she shared plans with the company to investigate the spread of disinformation on the platform around the January 6, 2021, attack on the US Capitol, which is now the subject of a House investigation.

“What we found, when we looked closely, is that their systems were probably dangerous for a lot of their users,” Edelson said.

Privately, former and current Meta employees say exposing those dangers around American elections has created public and political backlash for the company.

Republicans routinely accuse Facebook of unfairly censoring conservatives, some of whom have been kicked off for breaking the company’s rules. Democrats, meanwhile, regularly complain the tech company hasn’t gone far enough to curb disinformation.

“It’s something that’s so politically fraught, they’re more trying to shy away from it than jump in head first,” said Harbath, the former Facebook policy director. “They just see it as a big old pile of headaches.”

Meanwhile, the possibility of regulation in the US no longer looms over the company, with lawmakers failing to reach any consensus over what oversight the multibillion-dollar company should be subjected to.

Free from that threat, Meta’s leaders have devoted the company’s time, money and resources to a new project in recent months.

Zuckerberg dived into this massive rebranding and reorganisation of Facebook last October, when he changed the company’s name to Meta Platforms. He plans to spend years and billions of dollars evolving his social media platforms into a nascent virtual reality construct called the “metaverse” — sort of like the internet brought to life, rendered in 3D.

His public Facebook page posts now focus on product announcements, hailing artificial intelligence, and photos of him enjoying life. News about election preparedness is announced in company blog posts not written by him.

In one of Zuckerberg’s posts last October, after an ex-Facebook employee leaked internal documents showing how the platform magnifies hate and misinformation, he defended the company. He also reminded his followers that he had pushed Congress to modernise regulations around elections for the digital age.

“I know it’s frustrating to see the good work we do get mischaracterised, especially for those of you who are making important contributions across safety, integrity, research and product,” he wrote on October 5. “But I believe that over the long term if we keep trying to do what’s right and delivering experiences that improve people’s lives, it will be better for our community and our business.”

It was the last time he discussed the Menlo Park, California-based company’s election work in a public Facebook post.

Delhi High Court Grants Time for Government to Reveal Plans to Regulate De-Platforming of Social Media Users

The Delhi High Court on Wednesday granted time to the Centre to inform if it was drafting any regulations to govern the issue of de-platforming of users from social media. Justice Yashwanth Varma was hearing a batch of petitions concerning the suspension and deletion of accounts of several social media users, including Twitter users.

Central government counsel Kirtiman Singh urged the court to list the cases after two weeks to enable him to come back with further instructions concerning any draft policy on the de-platforming of social media users.

Senior counsel for one of the social media platforms said that in case such guidelines are formulated, the scope of proceedings before the court can be navigated accordingly.

The court listed the case for further hearing in September while asking the Centre to state its stand.

In its affidavit filed in one of the cases against the suspension of the petitioner’s Twitter account, the Centre has said that an individual’s liberty and freedom cannot be “waylaid or jettisoned in the slipstream of social and technological advancement” and the social media platforms must respect the fundamental rights of the citizens and conform to the Constitution of India.

It has said that social media platforms should not take down an account or completely suspend it in all cases, and that complete de-platforming is against the spirit of Articles 14, 19, and 21 of the Constitution of India.

Stating that it is the custodian of users’ fundamental rights in cyberspace, the Centre has said that a social media account can be suspended or de-platformed only on limited grounds: in the interest of the sovereignty, security, and integrity of India, friendly relations with foreign states, or public order; pursuant to a court order; or when the content is grossly unlawful, such as sexual abuse material.


Social Media Firms Introduce Few Changes Ahead of Upcoming US Midterm Elections

Social media companies are offering few specifics as they share their plans for safeguarding the US midterm elections. Platforms like Facebook and Twitter are generally staying the course from the 2020 voting season, which was marred by conspiracies and culminated in the January 6 insurrection at the US Capitol.

Video app TikTok, which has soared in popularity since the last election cycle while also cementing its place as a problem spot for misinformation, announced Wednesday it is launching an election center that will help people find voting locations and candidate information.

The center will show up in the feeds of users who search election-related hashtags. TikTok is also partnering with voting advocacy groups to provide specialized voting information for college students, people who are deaf, military members living overseas and those with past criminal convictions.

TikTok, like other platforms, would not provide details on the number of full-time employees or how much money it is dedicating to US midterm efforts, which aim to push accurate voting information and counter misinformation.

The company said it is working with over a dozen fact-checking organizations, including US-based PolitiFact and Lead Stories, on debunking misinformation. TikTok declined to say how many videos have been fact-checked on its site. The company will use a combination of humans and artificial intelligence to detect and remove threats against election workers as well as voting misinformation.

TikTok is also watching for influencers who break its rules by accepting money off-platform to promote political issues or candidates, a problem that came to light during the 2020 election, said Eric Han, TikTok’s head of safety. The company is trying to educate creators and agencies about its rules, which include bans on political advertising.

“With the work we do, there is no finish line,” Han said.

Meta, which owns Facebook, Instagram, and WhatsApp, announced Tuesday that its approach to this election cycle is “largely consistent with the policies and safeguards” from 2020.

“As we did in 2020, we have a dedicated team in place to combat election and voter interference while also helping people get reliable information about when and how to vote,” Nick Clegg, Meta’s president of global affairs, wrote in a blog post Tuesday.

Meta declined to say how many people it has dedicated to its election team responsible for monitoring the midterms, only that it has “hundreds of people across more than 40 teams.”

As in 2020, Clegg wrote, the company will remove misinformation about election dates, voting locations, voter registration and election outcomes. For the first time, Meta said it will also show US election-related notifications in languages other than English.

Meta also said it will reduce how often it uses labels on election-related posts directing people toward reliable information. The company said its users found the labels over-used. Some critics have also said the labels were often too generic and repetitive.

Compared with previous years, though, Meta’s public communication about its response to election misinformation has gone decidedly quiet, The Associated Press reported earlier this month.

Between 2018 and 2020, the company released more than 30 statements that laid out specifics about how it would stifle US election misinformation, prevent foreign adversaries from running ads or posts around the vote and subdue divisive hate speech. Until Tuesday’s blog post, Meta had only released a one-page document outlining plans for the fall elections, even as potential threats to the vote persist.

Twitter, meanwhile, is sticking with its own misinformation labels, though it has redesigned them since 2020 based in part on user feedback. The company activated its “civic integrity policy” last week, which means tweets containing harmful misinformation about the election are labeled with links to credible information. The tweets themselves won’t be promoted or amplified by the platform.

The company, which like TikTok does not allow political advertisements, is focusing on putting verified, reliable information before its users. That can include links to state-specific hubs for local election information as well as nonpartisan public service announcements for voters.

TikTok Bans Paid Political Influencer Posts Ahead of Upcoming US Midterm Elections

TikTok will work to prevent content creators from posting paid political messages on the short-form video app, as part of its preparation for the US midterm election in November, the company said on Wednesday.

Critics and lawmakers accuse TikTok and rival social media companies including Meta Platforms and Twitter of doing too little to stop political misinformation and divisive content from spreading on their apps.

While TikTok has banned paid political ads since 2019, campaign strategists have skirted the ban by paying influencers to promote political issues. The company seeks to close the loophole by hosting briefings with creators and talent agencies to remind them that posting paid political content is against TikTok’s policies, said Eric Han, TikTok’s head of US safety, during a briefing with reporters.

He added that internal teams, including those that work on trust and safety, will monitor for signs that creators are being paid to post political content, and the company will also rely on media reports and outside partners to find violating posts.

“We saw this as an issue in 2020,” Han said. “Once we find out about it … we will remove it from our platform.”

TikTok broadcast its plan following similar updates from Meta and Twitter.

Meta, which owns Facebook and Instagram, said Tuesday it will restrict political advertisers from running new ads a week before the election, an action it also took in 2020.

Last week, Twitter said it planned to revive previous strategies for the midterm election, including placing labels in front of some misleading tweets and inserting reliable information into timelines to debunk false claims before they spread further online. Civil and voting rights experts said the plan was not adequate to prepare for the election.

© Thomson Reuters 2022

