Curbing Fake News – Analysis

By Munish Sharma*

India is a burgeoning market for social media platforms and messaging services, with close to half a billion Internet users on mobile devices, the second largest online population in the world. The growth rates for media access on smartphones are astonishing. Equally disturbing, however, are the increasing instances of misleading and maliciously false online content. Social media and messaging services, which are essentially meant for sharing information, ideas and interests, and for enabling virtual communities to interact, participate and communicate, are increasingly being abused to incite communal riots and to spread false information.

In India, as of July 1, 2018, mob lynching induced by fake information had claimed 23 lives [1] across 18 reported incidents. [2] The government has taken a stern stand, issuing clear directions to messaging service providers such as WhatsApp, [3] which have reciprocated with assurances of incorporating suitable technology to curb the menace. [4] The phenomenon of fake news has wider implications for law and order, for the safety and security of citizens, and for the democratic credentials of the country. Fake news and fake social media campaigns have also been used to malign the reputation of organisations and to manipulate stock markets, as in the ‘Arctic Ready’ hoax targeting Shell in 2012 [5] and the hacking of The Associated Press Twitter account in 2013. [6]

Political parties also find these platforms and messaging applications attractive for election campaigns, using them to distribute videos, audio, images, articles, graphics and posts. Tools that draw on the demographics of voters, such as age and location, coupled with the plethora of available information on political and religious inclinations, interests, hobbies, preferences, lifestyle and the like, help parties target content at specific groups of voters. [7] Targeted content, or tailor-made messaging, finds acceptance among a group of voters because it is precisely relevant to their concerns, interests and preferences, and hence influences voter behaviour. The social media campaigns of political parties, executed over both official and unofficial channels, can easily transgress the boundaries of ethics if they include doctored content and fake news. They also run the risk of inciting religious or communal riots, hoaxes and rumours.

Security features like end-to-end encryption enhance privacy for the users of messaging services, but they also make these services susceptible to misuse. Falsified information, in the form of provocative and doctored content, can travel over these platforms unmonitored. Well-crafted content is potent enough for opinion engineering. The problem is not specific to developing countries. It is, if anything, more worrisome for mature economies, which, according to Gartner research, are likely to consume more convincing fake news than real information by 2022. [8] As the volume of fake news and other illicit content grows, its implications for society and the individual become grimmer. In the quest for an immediate solution, social media giants are experimenting with Artificial Intelligence (AI), which has been used for decades to curb spam email.

A simple AI solution, for instance, can cross-check a news story against a dynamic database of stories that demarcates legitimate and fake ones. A database of specific accounts, sources, geographical locations [9] or IP addresses known to originate fake news could also serve as a quick check. AI systems can evaluate the headline text against the content of a post, looking for consistency between the two, or sift through similar articles on other news media platforms for fact-checking. AI is also being used to spot manipulated or doctored images and videos, and can then alert users to the dubious content. Numerous fact-checking websites have sprung up, and a few have even partnered with big players like Google and Facebook to assess factual accuracy.
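
To make these checks concrete, the following sketch combines a source-blocklist lookup with a crude headline-body consistency score. It is a minimal illustration, not a production system: the domain names, the 0.1 overlap threshold and the word-overlap measure are all assumptions made for this example, and real systems rely on far richer semantic signals.

```python
# Minimal sketch of two basic fake-news checks: a domain blocklist and a
# headline-body consistency score. All names and thresholds are illustrative.
from urllib.parse import urlparse

# Hypothetical blocklist of domains known to push fabricated stories.
KNOWN_FAKE_DOMAINS = {"example-fakenews.com", "hoax-daily.example"}

def source_flagged(article_url: str) -> bool:
    """Quick lookup of the article's domain against the curated blocklist."""
    return urlparse(article_url).netloc.lower() in KNOWN_FAKE_DOMAINS

def headline_body_overlap(headline: str, body: str) -> float:
    """Crude consistency score: fraction of headline words found in the body.
    A real system would use semantic similarity, not bag-of-words overlap."""
    head = {w.strip(".,!?").lower() for w in headline.split()}
    text = {w.strip(".,!?").lower() for w in body.split()}
    return len(head & text) / max(len(head), 1)

def triage(article_url: str, headline: str, body: str) -> str:
    """Label an article 'flag' or 'pass' based on the two checks above."""
    if source_flagged(article_url):
        return "flag: known dubious source"
    if headline_body_overlap(headline, body) < 0.1:  # assumed threshold
        return "flag: headline inconsistent with body"
    return "pass: no obvious red flags"

print(triage("https://example-fakenews.com/story",
             "Shock claim rocks city", "Unrelated filler text here"))
```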

However, the efficacy of AI in curbing the menace remains contentious, particularly because AI is also being put to use to synthesise textual and media content that can be exceptionally convincing. In a collaborative project between the Technical University of Munich, Stanford University and a few other institutions, researchers demonstrated a face-swapping application that transfers facial expressions to impersonate someone else, using a depth-sensing camera to manipulate video footage. [10] As machine learning and deep learning advance, it is quite possible that AI-generated fake content will become indistinguishable from real information even for an AI-powered discriminator.

Although AI can run a meticulous fact-check with unparalleled speed and efficiency, it may fall short of understanding the nuances of human writing: adjectives, context and subtleties of tone. [11] Some fake content can confuse even human beings, as it runs along the edge of fact and fiction. While AI may find such content technically challenging to decipher, the majority of obvious cases of misinformation or falsified information can be identified. The race between the use and abuse of AI is gearing up, making it ever more challenging for AI systems to detect professionally made fake content that was itself built with AI tools. AI also has its limitations in eradicating the fake news problem: at best, it can filter out or label dubious content, much like the spam filters in email inboxes. AI is not an absolute answer to this problem. It is largely a human-versus-technology problem, and a purely technology-led solution would fall short of producing effective results.
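
The spam-filter analogy can be made tangible with a toy classifier. The sketch below trains a bag-of-words naive Bayes model, the same family of techniques long used in email spam filtering, on a handful of invented headlines. The training examples and their labels are fabricated for illustration; a real detector would need a large curated corpus and, as argued above, would still struggle with satire, context and tone.

```python
# Toy spam-filter-style classifier for headlines using scikit-learn.
# Training data and labels are invented for this sketch only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

headlines = [
    "Government announces new budget allocations",       # assumed real
    "Scientists publish peer-reviewed climate study",    # assumed real
    "Miracle cure doctors don't want you to know about", # assumed fake
    "Shocking secret video proves massive conspiracy",   # assumed fake
]
labels = ["real", "real", "fake", "fake"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(headlines)     # bag-of-words counts
model = MultinomialNB().fit(X, labels)      # same family as classic spam filters

test = ["Shocking miracle video reveals a secret cure"]
probs = model.predict_proba(vectorizer.transform(test))[0]
print(dict(zip(model.classes_, probs.round(3))))  # class probabilities
```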

Due diligence on the part of users, as the actual consumers and targets of fake information and online content, can contain the spread of fake news. Human judgement and wisdom are critical to solving this problem, but they need to be cultivated through extensive awareness and education campaigns. Users who are aware of basic fact-checking methods and of the societal fallout of the fake information they share are better positioned to contain its proliferation. Before sharing dubious content, users can exercise judgement to question the source and its credibility, or to check the credentials of the individual it came from. Such habits act as barriers to the otherwise uncontrolled flow of falsified information. A few experiments have leveraged the competence of human networks through crowd-sourcing, similar in concept to Wikipedia, where a network of volunteers keeps information updated. A network of volunteers, individuals as well as organisations, can maintain a fact-checking database and flag articles, posts and news content carrying falsified or fake information, as sketched below. AI-based content verification and labelling can then warn users that a piece of content is likely to be fake or that the authenticity of its source is not established.
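
A minimal sketch of such a crowd-sourced flagging scheme follows. Volunteers report a piece of content, identified here by a hash of its text so that the same forwarded message maps to one record, and content crossing a vote threshold is labelled for other users. The three-vote threshold and the in-memory store are assumptions for illustration; a real service would persist reports and weight them by reporter reputation.

```python
# Minimal crowd-sourced flagging sketch; threshold and storage are assumed.
import hashlib
from collections import Counter

FLAG_THRESHOLD = 3        # assumed; real services would weight voter reputation
flags: Counter = Counter()  # in-memory tally of reports per content id

def content_id(text: str) -> str:
    """Stable identifier so one message forwarded many times maps to one record."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()[:16]

def report(text: str) -> None:
    """Record one volunteer's report against this content."""
    flags[content_id(text)] += 1

def label(text: str) -> str:
    """Label shown to users alongside the content."""
    votes = flags[content_id(text)]
    return "disputed by fact-checkers" if votes >= FLAG_THRESHOLD else "unverified"

msg = "Forwarded: rumour about strangers in the area"
for _ in range(3):
    report(msg)
print(label(msg))  # -> disputed by fact-checkers
```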

By and large, eradicating the fake news problem calls for a collective effort by individuals, governments, social media and content platforms, and organisations producing innovative technology solutions. Standalone technology solutions cannot be effective unless they are integrated with social initiatives and mass awareness. This is especially so in the Indian context, where a large online community of users, quite prone to opinion engineering, sources its news from social media or messaging platforms without paying heed to the credibility or authenticity of the source. At the end of the day, human wisdom makes the real difference; technology solutions can at most augment the human ability to differentiate between fake and real at first glance.

Tackling fake content and news raises many questions: who decides whether a piece of fake content should be taken down, whether such decisions would in some way interfere with the freedom of speech and expression, and how technology should respond to parody and satire meant purely for entertainment. The larger question pertains to the very future of AI, with the technology for detecting fake content racing against the technology used to generate it.

While fake news is a problem for every country across the globe, countries with stringent content controls, like China, are apparently better positioned to tackle it, as they have both technical measures and legal regimes in place. The looming choice is whether there will always be a trade-off between freedom of expression and safety from fake content, or whether the Chinese way is the right approach. For India, the problem is more immediate, as general elections are around the corner and violence fuelled by fake content has already claimed 23 lives. In the absence of controls and regulations, human wisdom and technology together can put an end to the growing menace. The sooner that happens, the better for society and for the democratic ethos of developing countries like India.

Views expressed are those of the author and do not necessarily reflect the views of the IDSA or of the Government of India.

About the author:
*Munish Sharma is a Consultant at the Institute for Defence Studies and Analyses, New Delhi.

Source:
This article was published by IDSA.


