Facebook wants content creators to earn money, but not at the expense of the family-friendly social network it’s built, or the integrity of its advertising clients. So today Facebook established formal rules for what kinds of content can’t be monetized with Branded Content, Instant Articles, and mid-roll video Ad Breaks. These include depictions of death or incendiary social issues, even as part of news or an awareness campaign.
This is a big deal because it could shape the styles of content created for Facebook Watch, the new original programming hub it’s launched, where publishers earn 55% of ad revenue.
Facebook also plans to give advertisers more transparency into who sees their campaigns and where so they know their brand isn’t being placed next to disagreeable content.
In the coming months, Facebook will begin showing pre-campaign analytics of which publishers are eligible to carry an advertiser’s campaigns on their Instant Articles, video ad breaks, and off-site Audience Network inventory. That will start rolling out next week, with full lists available by October. Facebook will also begin providing post-campaign reporting on all the placements where ads were shown.
Finally, Facebook is taking more steps toward third-party verification against ad fraud. Facebook admits it’s been accused of “grading our own homework,” VP of global marketing solutions Carolyn Everson writes. That admission follows several scandals involving bugs that skewed the ad metrics reported to clients, as well as agency execs claiming video ad viewability rates are only 20% to 30%, well below industry benchmarks.
That’s why Facebook is joining the Trustworthy Accountability Group (TAG) “Certified Against Fraud” program. It’s also seeking accreditation from the Media Rating Council over the next 18 months for its ad impression reporting, third-party viewability, and two-second-minimum view time video ad buying. And it’s working on adding DoubleVerify and Meetrics to its existing list of 24 third-party ad measurement partners.
These certifications and analytics could give Facebook’s advertisers confidence that their ads aren’t being shown next to bad content, are actually being viewed, and are being counted properly. That could in turn convince them to pour more cash into Facebook, which earned $9.32 billion in revenue and $3.89 billion in profit last quarter.
Today’s formalization of monetization rules unifies Facebook’s existing Community Standards, Page Terms, and Payment Terms, and spells out in greater detail exactly what can’t be monetized. Facebook says it will notify publishers if ads are removed from their content, and they can appeal the decisions.
Here’s Facebook’s list of prohibited content types:
Misappropriation of Children’s Characters – Content that depicts family entertainment characters engaging in violent, sexualized, or otherwise inappropriate behavior, including videos positioned in a comedic or satirical manner. For example, situations where characters sustain serious personal injury, are involved in vile or shocking acts, or are involved in behavior such as smoking or drinking.
Tragedy & Conflict – Content that focuses on real-world tragedies, including but not limited to depictions of death, casualties, or physical injuries, even if the intention is to promote awareness or education. For example, situations like natural disasters, crime, self-harm, medical conditions, and terminal illnesses.
Debated Social Issues – Content that is incendiary, inflammatory, or demeaning, or that disparages people, groups, or causes, is not eligible for ads. Content that features or promotes attacks on people or groups is generally not eligible for ads, even in the context of news or awareness purposes.
Violent Content – Content depicting threats or acts of violence against people or animals, where this is the focal point and is not presented with additional context. Examples include content featuring fights, gore, beatings of animals or people, or excessively graphic violence in the course of video gameplay.
Adult content – Content where the focal point is nudity or adult content, including depictions of people in explicit or suggestive positions, or activities that are overly suggestive or sexually provocative.
Prohibited Activity – Content that depicts, constitutes, facilitates, or promotes the sale or use of illegal or illicit products, services or activities. Examples include content that features coordinated criminal activity, drug use, or vandalism.
Explicit Content – Content that depicts overly graphic images, blood, open wounds, bodily fluids, surgeries, medical procedures, or gore that is intended to shock or scare.
Drugs or Alcohol Use – Content depicting or promoting the excessive consumption of alcohol, smoking, or drug use.
Inappropriate Language – Content should not contain excessive use of derogatory language, including language intended to offend or insult particular groups of people.
Most interestingly, Facebook isn’t distinguishing between publishers that glorify this content and those that share it to drive awareness or condemnation. Instead, it’s all lumped together. That could subtly push publishers away from covering some of the world’s more divisive topics, from war to social justice, because they know they can’t earn money from them.
Again, while Facebook has tried to avoid becoming a media company, setting these aggressive rules on what can’t be monetized is akin to making an editorial decision about what content it approves. While publishers are still free to share some of this content as long as it abides by Facebook’s standard policies, monetary incentives could inspire self-censorship.