By Michelle Quinn
As U.S. voters wait to hear who the next president will be, Twitter, Facebook, Google and other internet firms will be busy doing something else: monitoring their sites and deciding if and when to stop the spread of misinformation.
After the 2016 U.S. election, in which internet firms were criticized for allowing foreign-sponsored actors to use their networks to spread misinformation, they vowed to take steps to better protect their sites.
Once the coronavirus pandemic hit, companies began to more directly tackle misinformation related to the health crisis, observers say, and turned to more automated ways to moderate content, such as artificial intelligence.
Those practices have carried over to efforts to address misinformation around the election, said Spandana Singh, a policy analyst with New America’s Open Technology Institute.
“A number of the policies and practices that they adopted for the U.S. elections were largely informed by their COVID-19 response,” she said.
Now that they’ve signaled more of a willingness to address misinformation, the tech firms are walking a tightrope: taking steps to stop election misinformation from spreading while still allowing people to express themselves, whether they are sharing truths or falsehoods.
Are they ready?
Singh said the internet companies approach content moderation now in a more nuanced way, beyond just taking down harmful or misleading content.
They are labeling some content that is questionable and, in some cases, “algorithmically downgrading content,” she said.
But it’s impossible to know how prepared they are for Election Day, she said.
“Because they don’t provide a lot of transparency and accountability around their efforts and what impact these efforts are having, it is really difficult to understand whether they are actually ready,” she said.
Twitter has started labeling some factually questionable tweets about election issues to point people toward credible information, and it has said candidates won’t be permitted to claim they’ve won the election before a result is declared.
Facebook said it could turn to its so-called “break-glass options.”
What exactly that means, the company hasn’t said. But The Wall Street Journal reported that Facebook may turn to measures it has taken in Sri Lanka and Myanmar, such as deactivating hashtags related to false information about election results or suppressing viral posts that spread messages of violence or fake news.
“This election cycle is a really good testing ground for a number of new policies and practices,” Singh said. “Should they be effective, I definitely think they will be rolled out globally.”
One problem with online misinformation is that it can spread widely before internet sites, which are also sensitive to claims they are suppressing certain viewpoints, decide to act, said Shannon McGregor, an assistant professor at the University of North Carolina, Chapel Hill.
“I worry if they will break the glass as quick as it might need to be done depending on what is happening in our post-election period,” she said.
While U.S. voters chart the future course of the nation, this Election Day is another test case of whether social media helps or hurts the democratic process.