By Alan Cunningham
Misinformation was at the heart of the 2016 U.S. presidential election and has been a recurring problem for governments and publics across the globe ever since. Since the election of President Joe Biden, the proliferation of misinformation and conspiracy theories online has grown substantially. It is known that even during the 2020 U.S. presidential election, Vladimir Putin and the Russian government directed misinformation efforts against the Biden campaign.
Despite the real threat misinformation poses in the current environment, the Biden administration has been slow to combat it online. It was only in July 2021 that Biden called out social media and Big Tech over COVID-19 misinformation, saying platforms were “killing people,” before walking back his statement. Beginning in July as well, and continuing into October, the administration has been reviewing “whether to try to alter Sec. 230 in order to tackle COVID-19 vaccine misinformation on social media sites like Facebook and Twitter.”
The bulk of these actions, however, have been public statements; little in the way of legislation or the creation of different organizations has been done.
Misinformation on social media can have a profound effect on the public’s views of domestic and foreign policy. News agencies, both credible (such as CBS News, The Hill, or the AP) and discredited (such as InfoWars, The Free Thought Project, or GlobalResearch), now advertise on social media and distribute content through platforms like Facebook, Instagram, and Snapchat.
News agencies can now reach people in a way never before imagined. Brian Ott, a professor of communications and rhetoric at Texas Tech University, opined: “One of the things we know is that people increasingly are getting their news from social media. This is deeply problematic, regardless of what end of the political spectrum someone might be on, because we know that as they get their news in that way, what tends to happen is that they get news and information that already confirms biases they already have. So they’re only confirming news that reinforces existing opinions.” Professor Ott adds that the algorithms social media platforms use can narrow content even more selectively, serving users media that is heavily biased toward a single viewpoint.
The promulgation of fake news is a major problem as well, as these stories exert a growing influence on public views of domestic and foreign policy.
For example, the beliefs that Syria’s president, Bashar al-Assad, did not use chemical weapons against his own people, that the United Nations is spreading lies about North Korea’s abuses, or that the government was involved in 9/11 could, if read and believed by a majority of the population, force the government to abide by the public’s demands. This could translate into opening inquiry after inquiry into such matters, halting government processes and spending time discrediting baseless theories, all while further sowing domestic discord and allowing foreign powers to gain the upper hand abroad.
There are many potential solutions for combating misinformation campaigns online. In a 2001 paper written at the U.S. Army’s Command and General Staff College, Major Simon Hulme, a member of the United Kingdom’s Royal Engineers, set out his own thoughts on how best to counter disinformation and media inaccuracies on the internet, advising that an “independent regulatory body [must begin] the almost impossible task of monitoring and censoring information contained on the net” in order to ensure that information is accurate and correct.
To this day, there is still no body that regulates content online or vets the accuracy of information put out by social media platforms, news aggregators, or self-described news agencies. While some companies, like Facebook, have taken up the mantle to a certain degree, many of the pages it removed have since reappeared under new accounts. In theory, a regulatory body is a good idea and would surely help stop much of the bad information threatening democratic societies; however, it raises a number of difficult questions: Who would run such a group, the government or the titans of the internet? Would it adopt academic definitions of what constitutes fake news and disinformation, or create its own? How would it deal with repeat offenders?
I argue that a combination of a regulatory agency and a team of information warfare specialists and counterintelligence investigators would be an effective way to combat this. A regulatory body headed by cybersecurity experts and administrators from big business (Facebook, Reddit, Twitter, Google, etc.), supported by a small crew of former counterintelligence investigators, information warfare specialists from NATO’s StratCom or the U.S. Air Force, and others in the disinformation field, would be a good start. Government oversight (the Departments of Justice or Commerce seem the best oversight authorities) and academic standards on what constitutes fake news would provide ample accountability and ensure strict, concrete terms for defining which media organizations pose a threat to the public online and which groups are tied to foreign governments. In terms of more specific measures, the government could place disclaimers on sites known to be conspiratorial or prone to pseudoscience, summarizing the concerns with links to further evidence and requiring the reader to click through the disclaimer before proceeding. This is very similar to how the social aggregator Reddit handles quarantined communities: one must agree to view the content before entering. Such a measure would surely reduce the flow of visitors to these sites and, in some cases, cut into a group’s revenue enough that it must halt operations or reform into a more reputable outlet.
While some may find a unit like this unnecessary or unrealistic, it is important to note that Canada, Mexico, Spain, Turkey, Sweden, and others all have some form of task force aimed at stopping disinformation. Some of these, like Mexico’s, are merely government-sponsored fact-checking organizations, while Turkey’s has the power to criminally charge persons found engaging in misinformation. In 2018, the State of California also tried to create a task force devoted to stopping disinformation, but the bill was vetoed by Governor Jerry Brown as “unnecessary.”
I would argue, however, that stopping misinformation on the web is highly pressing. Given that blatant foreign intelligence interference in democratic processes is a known problem, it must be dealt with swiftly and expertly by the Biden administration before any more damage is done.
The views expressed in this article are those of the authors alone and do not necessarily reflect those of Geopoliticalmonitor.com