FB claims to Remove, Reduce, Inform

I am a little less than totally impressed

-roy-

Source: Remove, Reduce, Inform: New Steps to Manage Problematic Content | Meta

An update on our ongoing work to keep people safe and maintain the integrity of information that flows through our apps.

———————————————————————

(COPY TAKEN)

By Guy Rosen, VP of Integrity, and Tessa Lyons, Head of News Feed Integrity

Since 2016, we have used a strategy called “remove, reduce, and inform” to manage problematic content across the Facebook family of apps. This involves removing content that violates our policies, reducing the spread of problematic content that does not violate our policies and informing people with additional information so they can choose what to click, read or share. This strategy applies not only during critical times like elections, but year-round.

Today in Menlo Park, we met with a small group of journalists to discuss our latest remove, reduce and inform updates to keep people safe and maintain the integrity of information that flows through the Facebook family of apps:

REMOVE

  • Rolling out a new section on the Facebook Community Standards site where people can track the updates we make each month.
  • Updating the enforcement policy for Facebook groups and launching a new Group Quality feature.

REDUCE

  • Kicking off a collaborative process with outside experts to find new ways to fight more false news on Facebook, more quickly.
  • Expanding the content the Associated Press will review as a third-party fact-checker.
  • Reducing the reach of Facebook Groups that repeatedly share misinformation.
  • Incorporating a “Click-Gap” signal into News Feed ranking to ensure people see less low-quality content in their News Feed.

INFORM

  • Expanding the News Feed Context Button to images. (Updated on April 10, 2019 at 11AM PT to include this news.)
  • Adding Trust Indicators to the News Feed Context Button on English and Spanish content.
  • Adding more information to the Facebook Page Quality tab.
  • Allowing people to remove their posts and comments from a Facebook Group after they leave the group.
  • Combatting impersonations by bringing the Verified Badge from Facebook into Messenger.
  • Launching Messaging Settings and an Updated Block feature on Messenger for greater control.
  • Launched Forward Indicator and Context Button on Messenger to help prevent the spread of misinformation.

Remove

Facebook

We have Community Standards that outline what is and isn’t allowed on Facebook. They cover things like bullying, harassment and hate speech, and we remove content that goes against our standards as soon as we become aware of it. Last year, we made it easier for people to understand what we take down by publishing our internal enforcement guidelines and giving people the right to appeal our decisions on individual posts.

The Community Standards apply to all parts of Facebook, but different areas pose different challenges when it comes to enforcement. For the past two years, for example, we’ve been working on something called the Safe Communities Initiative, with the mission of protecting people from harmful groups and harm in groups. By using a combination of the latest technology, human review and user reports, we identify and remove harmful groups, whether they are public, closed or secret. We can now proactively detect many types of violating content posted in groups before anyone reports them, and sometimes before more than a few people, if any, have seen them.

Similarly, Stories presents its own set of enforcement challenges when it comes to both removing and reducing the spread of problematic content. The format’s ephemerality means we need to work even faster to remove violating content. The creative tools that give people the ability to add text, stickers and drawings to photos and videos can be abused to mask violating content. And because people enjoy stringing together multiple Story cards, we have to view Stories holistically: if we evaluate individual story cards in a vacuum, we might miss standards violations.

In addition to describing this context and history, today we discussed how we will be:

  • Rolling out a new section on the Community Standards site where people can track the updates we make each month. We revisit existing policies and draft new ones for several reasons, including to improve our enforcement accuracy or to get ahead of new trends raised by content reviewers, internal discussion, expert critique or external engagement. We’ll track all policy changes in this new section and share specifics on why we made the more substantive ones. Starting today, in English.
  • Updating the enforcement policy for groups and launching a new Group Quality feature. As part of the Safe Communities Initiative, we will be holding the admins of Facebook Groups more accountable for Community Standards violations. Starting in the coming weeks, when reviewing a group to decide whether or not to take it down, we will look at admin and moderator content violations in that group, including member posts they have approved, as a stronger signal that the group violates our standards. We’re also introducing a new feature called Group Quality, which offers an overview of content removed and flagged for most violations, as well as a section for false news found in the group. The goal is to give admins a clearer view into how and when we enforce our Community Standards. Starting in the coming weeks, globally.

For more information on Facebook’s “remove” work, see these videos on the people and process behind our Community Standards development.

Reduce

Facebook

There are types of content that are problematic but don’t meet the standards for removal under our Community Standards, such as misinformation and clickbait. People often tell us that they don’t like seeing this kind of content, and while we allow it to be posted on Facebook, we want to make sure it’s not broadly distributed.

Over the last two years, we’ve focused heavily on reducing misinformation on Facebook. We’re getting better at enforcing against fake accounts and coordinated inauthentic behavior; we’re using both technology and people to fight the rise in photo and video-based misinformation; we’ve deployed new measures to help people spot false news and get more context about the stories they see in News Feed; and we’ve grown our third-party fact-checking program to include 45 certified fact-checking partners who review content in 24 languages.

Today, members of the Facebook News Feed team discussed how we will be:

  • Kicking off a collaborative process with outside experts to find new ways to fight more false news, more quickly. Our professional fact-checking partners are an important piece of our strategy against misinformation, but they face challenges of scale: There simply aren’t enough professional fact-checkers worldwide and, like all good journalism, fact-checking takes time. One promising idea to bolster their work, which we’ve been exploring since 2017, involves groups of Facebook users pointing to journalistic sources to corroborate or contradict claims made in potentially false content. Over the next few months, we’re going to build on those explorations, continuing to consult a wide range of academics, fact-checking experts, journalists, survey researchers and civil society organizations to understand the benefits and risks of ideas like this. We need to find solutions that support original reporting, promote trusted information, complement our existing fact-checking programs and allow for people to express themselves freely — without having Facebook be the judge of what is true. Any system we implement must have safeguards against gaming or manipulation, avoid introducing personal biases and protect minority voices. We’ll share updates with the public throughout this exploratory process and solicit feedback from broader groups of people around the world. Starting today, globally.
  • Expanding the role of The Associated Press as part of the third-party fact-checking program. As part of our third-party fact-checking program, AP will be expanding its efforts by debunking false and misleading video misinformation and Spanish-language content appearing on Facebook in the US. Starting today, in the US.
  • Reducing the reach of Groups that repeatedly share misinformation. When people in a group repeatedly share content that has been rated false by independent fact-checkers, we will reduce that group’s overall News Feed distribution. Starting today, globally.
  • Incorporating a “Click-Gap” signal into News Feed ranking. Ranking uses many signals to ensure people see less low-quality content in their News Feed. This new signal, Click-Gap, relies on the web graph, a conceptual “map” of the internet in which domains with a lot of inbound and outbound links are at the center of the graph and domains with fewer inbound and outbound links are at the edges. Click-Gap looks for domains with a disproportionate number of outbound Facebook clicks compared to their place in the web graph. This can be a sign that the domain is succeeding on News Feed in a way that doesn’t reflect the authority it has built outside Facebook, and that it is producing low-quality content. Starting today, globally.
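
To make the “Click-Gap” idea a little more concrete, here is a rough, hypothetical sketch of how a signal like this could be computed. Facebook has not published its actual formula; the centrality measure, the domain names and the threshold below are illustrative assumptions, not details from the announcement.

    # Hypothetical sketch of a "Click-Gap"-style signal; not Facebook's actual code.
    # Idea: flag domains whose share of outbound Facebook clicks is much larger
    # than their authority in the wider web graph would suggest.

    def click_gap_score(fb_click_share, web_graph_centrality, epsilon=1e-6):
        """Return a ratio > 1 when Facebook clicks outpace web-graph authority."""
        return fb_click_share / (web_graph_centrality + epsilon)

    # Toy data: each domain's share of all outbound Facebook clicks, and a
    # normalized centrality estimate (e.g. derived from inbound/outbound links).
    domains = {
        "established-news.example": {"clicks": 0.004, "centrality": 0.0050},
        "clickbait-farm.example":   {"clicks": 0.004, "centrality": 0.0001},
    }

    for name, d in domains.items():
        score = click_gap_score(d["clicks"], d["centrality"])
        flagged = score > 5.0  # illustrative threshold only
        print(f"{name}: click-gap {score:.1f}, demote in ranking: {flagged}")

In this toy example both domains get the same share of Facebook clicks, but the clickbait domain has almost no standing in the web graph, so its click-gap ratio comes out around 40 versus roughly 0.8 for the established site. That kind of disparity is what the announcement says News Feed ranking would treat as a sign of low-quality content.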

For more information about how we set goals for our “reduce” initiatives on Facebook, read this blog post.

Instagram

Today we discussed how Instagram is working to ensure that the content we recommend to people is both safe and appropriate for the community. We have begun reducing the spread of posts that are inappropriate but do not go against Instagram’s Community Guidelines, preventing those types of posts from being recommended on our Explore and hashtag pages. For example, a sexually suggestive post will still appear in Feed if you follow the account that posts it, but this type of content may not appear for the broader community in Explore or hashtag pages.
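
The recommendation rule described above amounts to a simple per-surface eligibility check. The sketch below is purely illustrative and uses invented function and surface names; it is not Instagram’s implementation, only a restatement of the stated rule in code.

    # Hypothetical restatement of the rule above: borderline posts (inappropriate
    # but not violating) stay in Feed for followers, but are not recommended on
    # Explore or hashtag pages.

    def eligible_surfaces(violates_guidelines, borderline, viewer_follows_author):
        if violates_guidelines:
            return []                                # removed entirely
        surfaces = []
        if viewer_follows_author:
            surfaces.append("feed")                  # still shown to followers
        if not borderline:
            surfaces.extend(["explore", "hashtag"])  # only non-borderline posts are recommended
        return surfaces

    print(eligible_surfaces(False, True, True))    # ['feed']
    print(eligible_surfaces(False, False, False))  # ['explore', 'hashtag']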

Inform

Facebook

We’re investing in features and products that give people more information to help them decide what to read, trust and share. In the past year, we began offering more information on articles in News Feed with the Context Button, which shows the publisher’s Wikipedia entry, the website’s age, and where and how often the content has been shared. We helped Page owners improve their content with the Page Quality tab, which shows them which posts of theirs were removed for violating our Community Standards or were rated “False,” “Mixture” or “False Headline” by third-party fact-checkers. We also discussed how we will be:

  • Expanding the Context Button to images. Originally launched in April 2018, the Context Button feature provides people more background information about the publishers and articles they see in News Feed so they can better decide what to read, trust and share. We’re testing enabling this feature for images that have been reviewed by third-party fact-checkers. Testing now in the US. (Updated on April 10, 2019 at 11AM PT to include this news.)
  • Adding Trust Indicators to the Context Button. The Trust Indicators are standardized disclosures, created by a consortium of news organizations known as the Trust Project, that provide clarity on a news organization’s ethics and other standards for fairness and accuracy. The indicators we display in the context button cover the publication’s fact-checking practices, ethics statements, corrections, ownership and funding, and editorial team. Started March 2019, on English and Spanish content.
  • Adding more information to the Page Quality tab. We’ll be providing more information in the tab over time, starting with more information in the coming months on a Page’s status with respect to clickbait. Starting soon, globally.
  • Allowing people to remove their posts and comments from a group after they leave the group. People will have this ability even if they are no longer a member of the group. With this update, we aim to bring greater transparency and personal control to groups. Starting soon, globally.

Messenger

At today’s event, Messenger highlighted new and updated privacy and safety features that give people greater control of their experience and help people stay informed.

  • Combatting impersonations by bringing the Verified Badge from Facebook into Messenger. This tool will help people avoid scammers who pretend to be high-profile people by providing a visible indicator of a verified account. Messenger continues to encourage use of the Report Impersonations tool, introduced last year, if someone believes they are interacting with someone pretending to be a friend. Starting this week, globally.
  • Launching Messaging Settings and an Updated Block feature for greater control. Messaging Settings allow you to control whether people you’re not connected to, such as friends of your friends, people with your phone number or people who follow you on Instagram can reach your Chats list. The Updated Block feature makes it easier to block and avoid unwanted contact. Starting this week, globally.
  • Launched Forward Indicator and Context Button to help prevent the spread of misinformation. The Forward Indicator lets someone know if a message they received was forwarded by the sender, while the Context Button provides more background on shared articles. Started earlier this year, globally.

 

