Recent progress in artificial intelligence has had a significant impact on society. The technology has gradually come to play a dual role in disinformation, simultaneously helping to create and to debunk the fabricated content in circulation.
Malicious actors have used AI to create and disseminate disinformation in ways that surpass older tactics and strategies. In July 2020, an investigative report revealed a network of fake experts, including journalists, analysts, and political consultants, who managed to publish articles and op-eds to influence the public; the photos of these non-existent experts were generated using AI. The network placed many articles on well-known media platforms such as the Washington Examiner, the Jerusalem Post, Al Arabiya, and the South China Morning Post.
The technology has also been used to create fake videos. One of the most recent cases is the viral video depicting the Ukrainian president ordering his soldiers to surrender; hackers placed the video on an official news website before it was removed. While not yet widely deployed in the real world, AI also has the potential to generate texts that can fool many into believing they are authentic. A recent study showed that AI-generated fake reports could deceive even experts, thanks to a recent breakthrough in the field known as large language models (LLMs), which aim to enable machines to understand human language. AI has likewise been used to spread disinformation to targeted audiences: online bots and simple software, commonly encountered on platforms such as Twitter, were used to disinform the public during elections and the Covid-19 pandemic.
In contrast, AI has also been beneficial in reducing the harm of disinformation. Over the last few years, the technology has been adopted widely in the fact-checking industry and across most stages of the fact-checking process, such as monitoring claims and deciding which ones are worth checking. However, verifying these claims and ruling on their veracity has yet to be fully automated. Full Fact, a leading UK fact-checking organization, used AI to detect 500,000 claims about the Covid-19 virus. Big tech companies have also started to tap the power of AI for fact-checking content shared by their users: Facebook applies it to user posts, and in 2020 YouTube used AI to remove millions of videos containing misleading content. Twitter likewise used AI to remove Covid-19 disinformation. AI is also currently used by companies to rate and assess the credibility of news sources, and another significant application in fighting disinformation is detecting manipulated images and videos.
This dual-use nature of AI in disinformation will make the technology hard for society to regulate. Most of the algorithms used to build these models are readily available, in some cases as open source. Mitigating the harm while harnessing AI to maintain a healthy information ecosystem will be a real challenge in the coming period.
As AI keeps improving, the disinformation landscape will continue to change, but old and new tactics and techniques will coexist and reinforce one another.