From Steve Bannon To Millennial Millie: Facebook, YouTube Struggle With Live Video

YouTube

Millie Weaver, a former correspondent for the conspiracy theory website Infowars, hosted nearly seven hours of live election coverage on her YouTube channel. Conservative influencers like Weaver who often broadcast live are increasingly worrisome to misinformation researchers.

Last week, millions of Americans turned to cable news to watch election returns pour in. Some refreshed their Twitter feeds to get the latest tallies. And nearly 300,000 others kept an eye on the YouTube channel of 29-year-old Millie Weaver, a former correspondent for the conspiracy theory website Infowars, who offered right-wing analysis to her followers in a live-stream that carried on for almost seven hours the day after the election.

At times, her pro-Trump commentary veered into something else: misinformation.

First she aired footage of a man pulling a red wagon into a ballot-counting center in Detroit. That image has been spread widely online by conservatives who contend, without evidence, that it is proof of illegal ballot stuffing.

It was, in fact, a TV cameraman pulling his equipment.

Then Weaver raised questions with a guest about the integrity of the election, stoking the false theme that the election was rife with fraud.

“What is going on?” asked Weaver, who goes by Millennial Millie. “It looks like the election is being stolen.”

As unproven theories and rumors multiplied online during the election, major social media platforms including Facebook, Twitter and Google-owned YouTube took more aggressive action than ever before to limit the reach of unsubstantiated or false claims that could undermine confidence in the democratic process.

But one area that has become increasingly worrisome to misinformation researchers is live-streamed videos, often posted by conservative influencers like Weaver.

“This is where the content moderation frontier is right now,” said Renée DiResta, a misinformation researcher at Stanford’s Internet Observatory.

In the months leading up to the election, the platforms took down violent content, made misleading posts less visible, and added warning labels to false claims. Those are all tools, DiResta noted, that are nearly impossible to apply in real time to live video.

“The ‘remove, reduce, inform’ framework that is very effective for static content does not work as well in this situation,” she said.

Live videos attract a lot of viewers but are hard to scrutinize

The platforms have leaned into live video, in a bid to keep people’s attention longer. It’s promoted over other types of video. Users who subscribe to a video channel often get push notifications when someone they follow has launched a live stream.

But experts say the streams often occupy an ambiguous gray zone, where it’s difficult for the platforms’ automated detection systems or human moderators to quickly flag problematic content.

“That’s in part because it’s harder to search video content as opposed to text,” said Evelyn Douek, a Harvard Law School lecturer who studies the different ways platforms approach content moderation. “It’s a lot harder to scrutinize what’s going on, and it’s a lot more time consuming.”

A feed of mostly opinion, speculation, or coverage of a live event, like a protest, can suddenly take a hard turn into conspiratorial narratives. And as they unfold, some live videos attract hundreds of thousands, sometimes millions, of viewers.

Bad actors know this. DiResta said a common tactic is to piggyback on a high-profile event’s trending hashtag and introduce footage from another place or time, but falsely tag it as “live.”

“It’s almost impossible to fact-check that in real time, to say, ‘No, that protest footage that you’re seeing actually happened a year ago,’ or, ‘No, that footage that is being represented as Portland is really D.C.,’” DiResta said. “That’s one of the more significant challenges.”

Fake election results, violent threats streamed on YouTube

The perils of live video were on full display on Election Day. Several YouTube accounts live-streamed fake election results to huge audiences. One stream, titled “LIVE 2020 Presidential Election Results,” came from a public YouTube channel with more than 650,000 followers and was one of Google’s Top 5 results for vote tallies in swing states on Tuesday night, according to the Election Integrity Partnership, a coalition of independent researchers who monitored social media for election-related misinformation. The fake results live-streams were ultimately removed by YouTube under its policy prohibiting spam and deceptive practices.

In some cases, when falsehoods, conspiracy theories and even violent threats are aired live, the companies don’t take action until long after the videos reach large audiences.

On Thursday, former White House aide Steve Bannon live-streamed a video from his online program “War Room: Pandemic” calling for Dr. Anthony Fauci and FBI Director Christopher Wray to be beheaded.

After an uproar, Twitter banned one of Bannon’s accounts. Facebook and YouTube both removed Bannon’s video, but not until hours after it was initially streamed. By that time, it had already gotten hundreds of thousands of views. YouTube gave the channel a “strike,” disabling it from uploading new videos for at least a week. (YouTube grants accounts three strikes before it terminates them.)

On Monday, Facebook said it had taken down a network of pages linked to Bannon that had been spreading misinformation and conspiracy theories about election fraud. The social network did not remove Bannon’s own page, but applied penalties, including taking away the ability to post new content and videos.

‘People on the Internet are saying’ could be code for misinformation

Stanford researcher DiResta said even as the companies sharpen their rules and boundaries for what users can and cannot post, their guidance for live-streaming tends to be far looser and less consistently enforced.

On social media apps like TikTok, where users often “duet” by reposting someone else’s video alongside a recorded reaction, a groundless rumor snipped from a live feed is often the part that snowballs across the platform.

“A lot of the narratives are presented in a sort of ‘just asking questions,’ kind of way. ‘People on the Internet are saying’ is a common way that manifests,” DiResta said. “Oftentimes, you’ll see live-streams of people watching other live-streams, so there’s a meta, conversational, pass-the-baton-type dynamic that’s happening there.”

To Angelo Carusone, president of the liberal nonprofit watchdog group Media Matters, the Bannon incident shows that platforms have a blind spot with live video. He argues that if live streams received as much moderation as other types of content, that could be more of a deterrent.

“When creators know there is the likelihood of an enforcement action against them, it’s not that they disappear, but they align with some basic standards,” Carusone said.

YouTube spokesman Farshad Shadloo said the Bannon video was removed for violating the company’s policy against inciting violence. On its website, the company says it removes videos that aim “to mislead people about voting,” such as by giving the wrong date for Election Day or telling voters they can cast ballots by text message. And it removes content “encouraging others to interfere with democratic processes, such as obstructing or interrupting voting procedures.”

Shadloo said YouTube removed live-streams and other videos last week that violated its election-related policies, most of which had not gotten many views.

Many videos with false claims of voter fraud and allegations that Democrats are trying to steal the election are still up on YouTube, however. Shadloo said such baseless allegations do not break the company’s rules. “Expressing views on the outcome of a current election or process of counting votes is allowed under our policy,” he said.

Before former Vice President Joe Biden was declared the winner of the presidential election, YouTube posted a note below content that it decided fell short of violating its rules, like Weaver’s live results commentary.

“Results may not be final,” the text said, and linked to results from news outlets including the Associated Press. The label on Weaver’s video has now been updated to say that the AP has called the race for Biden. A second label says “Robust safeguards help ensure the integrity of elections and results” and links to a U.S. government website that debunks false claims about voting.

More broadly, YouTube says it reduces the spread of content it deems “borderline” — videos that come close to, but do not violate, its rules — by not recommending them or showing them in search results.

Facebook said last week it was limiting the distribution of all live videos related to the election.

Facebook spokeswoman Kristen Morea said the company prioritizes reviewing live broadcasts to make sure they comply with its rules. “We’ve made progress limiting the spread of misinformation by reducing the distribution of election-related content before an official result and tightened our policies to fight content that may be misleading,” she said.

Meanwhile, Weaver, the conservative YouTuber, defended her election-night coverage. In a statement to NPR, Weaver cast herself as an independent journalist just asking questions.

“I was monitoring Twitter posts relating to claims of election fraud and playing those clips to people on my social media platforms. I was debunking some that appeared to be hoaxes while presenting other material that appeared to have some validity,” Weaver said. “Anyone with reason could conclude that the election appears to have many irregularities.”

Federal and state officials from both parties say there are no signs of any significant irregularities with last week’s election.

Videos banned on one platform can jump to another

There are limits to how much any single company can restrict a video’s spread across social media. A YouTube live-stream or TikTok video may be reposted on Twitter or Instagram, making policy enforcement seem like a game of whack-a-mole.

Shadloo said that the “vast majority” of traffic to YouTube videos comes from other sites linking to them — including other social media platforms — rather than from its own recommendations.

“Taking a platform-by-platform view of these things is inherently limited,” Douek, the Harvard Law School lecturer, said. “What one individual platform can do in the whole of the Internet ecosystem will always be somewhat limited.”

Yet, she said, platforms have the responsibility “to think about exactly what they can do to help mitigate the harm that their platform can cause.”

In search of a ‘scalable solution’ for live video

Emerson Brooking, a fellow at the Atlantic Council’s Digital Forensic Research Lab, said platforms have known about the dangers of live video for years now. There have been numerous violent acts committed in front of the camera, including the shooting at a mosque in Christchurch, New Zealand, in 2019. The gunman had streamed himself live on Facebook.

Facebook put new rules in place about live streaming after the Christchurch incident, including a one-strike policy restricting users who break its “most serious policies” from posting live videos for a period of time. But Brooking and other experts say problems with live videos persist.

“I don’t think live-stream will ever be quite where we want it to be,” Brooking said. “But the new problem we are seeing are people who see the live-stream and take clips of it and take it to platforms that have no moderation.”

TikTok, the short-form video app made popular by dance challenges, and Twitch, the live-streaming platform for gamers, both have become places where people repost videos originally recorded live from other platforms.

TikTok blocks an account from posting future live-streams if “harmful content” is posted live, according to a company spokeswoman. Other live-stream violations include inciting violence or promoting hateful ideologies, conspiracies or disinformation.

That said, a number of viral misinformation videos on TikTok that were removed by the platform have reappeared after users reposted them. In many cases they get just a tiny fraction of the original’s audience, but the proliferation illustrates just how daunting it is to stanch the flow of misinformation.

Stanford’s DiResta says it is a good practice to punish users who turn social media live-streams into mini propaganda machines.

“If a person hasn’t been a good citizen [on social media], maybe they lose the ability to live stream, maybe they’re removed from recommendations,” she said.

Alex Stamos, Facebook’s former chief security officer, said tech companies are now developing machine learning tools to better monitor video content, which could eventually be applied to live-streams.

Access to the transcribed texts of what is said during live video could help platforms take faster action. Researchers could also track the rise and fall of various misinformation trends.

“We can hire a bunch of 20-somethings to go look at TikTok all day,” said Stamos, who is the director of Stanford’s Internet Observatory. “But it’s not a scalable solution.”

Editor’s note: Facebook and Google are among NPR’s financial supporters. TikTok helps fund NPR content appearing on the social media platform.

Copyright 2020 NPR. To see more, visit https://www.npr.org.