Facebook is facing backlash after a Cleveland man uploaded a video of himself shooting someone to the social network, then followed it with a Live video confessing to the murder. The slaying and its subsequent distribution across Facebook have raised questions about how the company moderates violent content.
Justin Osofsky, Facebook’s vice president of global operations, released a statement and timeline of the events and videos surrounding the incident.
Osofsky’s statement shifts much of the responsibility for policing content on Facebook onto its users, although he acknowledged the company can do better at moderation. He says artificial intelligence and new policies governing how videos are shared could help address the issue, and that Facebook will try to speed up its current review process.
“As a result of this terrible series of events, we are reviewing our reporting flows to be sure people can report videos and other material that violates our standards as easily and quickly as possible. In this case, we did not receive a report about the first video, and we only received a report about the second video — containing the shooting — more than an hour and 45 minutes after it was posted. We received reports about the third video, containing the man’s live confession, only after it had ended,” Osofsky wrote.
11:09AM PDT — First video, of intent to murder, uploaded. Not reported to Facebook.
11:11AM PDT — Second video, of shooting, uploaded.
11:22AM PDT — Suspect confesses to murder while using Live, is live for 5 minutes.
11:27AM PDT — Live ends, and Live video is first reported shortly after.
12:59PM PDT — Video of shooting is first reported.
1:22PM PDT — Suspect’s account disabled; all videos no longer visible to public.
The timeline demonstrates the failures of Facebook’s moderation system, which relies on user reports to flag controversial or violent content. While the Live video of the man’s confession was quickly reported by another user, the video of the killing itself went unreported and therefore remained online for nearly two hours.
“Artificial intelligence, for example, plays an important part in this work, helping us prevent the videos from being reshared in their entirety. (People are still able to share portions of the videos in order to condemn them or for public awareness, as many news outlets are doing in reporting the story online and on television),” Osofsky said.
Even with advances in artificial intelligence, it’s not clear that Facebook can prevent Live from being used to broadcast violence. The livestreaming service has already been used to share videos of shootings, torture, and sexual assault. And while users are angry at Facebook for allowing the Cleveland killing to be livestreamed, users were also outraged when a “technical glitch” caused the removal of video documenting the police murder of Philando Castile. Osofsky says that what happened in Cleveland “goes against our policies and everything we stand for,” but there are times when users will expect Facebook to preserve violent videos because they have political importance. It’s a delicate balance, and one that isn’t likely to be solved by AI alone.
“Facebook isn’t going to stop a murder. And I don’t care how good the AI gets, it’s unlikely any time soon to say ‘hey, that video is some person killing another person, don’t stream that,’” Mike Masnick noted on Techdirt. “Yes, senseless murders and violence lead people to go searching for answers, but sometimes there are no answers. And demanding answers from a random tool that was peripherally connected to the senseless violence doesn’t seem helpful at all.”