By Munish Sharma*
India is a burgeoning market for social media platforms and messaging services, with close to half a billion mobile Internet users, the second largest online population in the world. The growth rates for media access on smartphones are astonishing. Equally disturbing, however, are the increasing instances of misleading and maliciously false online content. Social media and messaging services, which are essentially meant to share information, ideas and interests or to enable virtual communities to interact, participate and communicate, are increasingly being abused to incite communal riots and spread false information.
In India, as of July 1, 2018, mob lynchings induced by fake information had claimed 23 lives1 in 18 such reported incidents.2 The government has taken a stern stand, with clear directions to messaging service providers such as WhatsApp,3 which have reciprocated with assurances of incorporating suitable technology to curb the menace.4 The phenomenon of fake news has wider implications for law and order, for the safety and security of citizens, and for the democratic credentials of the country. Fake news and fake social media messages and campaigns have also been used to malign the reputation of organisations and to manipulate stock markets, as in the ‘Arctic Ready’ hoax targeting Shell in 20125 and the hack of The Associated Press Twitter account in 2013.6
Political parties also find these platforms and messaging applications attractive for election campaigns, using them to distribute videos, audio, images, articles, graphics and posts. Tools that draw in details about the demographics of voters, such as their age and location, coupled with the plethora of available information on political and religious inclinations, interests, hobbies, preferences and lifestyle, help them target content at a specific group of voters.7 Targeted content, or tailor-made messaging, finds acceptance among a group of voters because it is precisely relevant to their concerns, interests and preferences, and hence influences voter behaviour. Social media campaigns of political parties, executed over both official and non-official channels, can easily transgress the boundaries of ethics if they include doctored content and fake news. They also run the risk of fuelling hoaxes and rumours, and even of inciting religious or communal riots.
Security features like end-to-end encryption in messaging services enhance privacy for users, but they also make these services susceptible to misuse. Falsified information, in the form of provocative and doctored content, can travel over these platforms unmonitored. Well-crafted content is potent enough for opinion engineering. The problem is not specific to developing countries; it is, if anything, more worrisome for mature economies, which, as per a Gartner prediction, are likely to consume more fake news than true information by 2022.8 As interest in fake news and other illicit content grows, its implications for society and the individual become grim. In the quest for an immediate solution, social media giants are experimenting with Artificial Intelligence (AI), which has been used for decades to curb spam emails.
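The spam-filter analogy can be made concrete with a toy example. The sketch below is purely illustrative: the training phrases, labels and the word-level naive Bayes model are assumptions for demonstration, not any platform's actual system. It classifies a short message by comparing word frequencies learned from a handful of labelled examples, the same basic statistical approach long used against email spam:

```python
from collections import Counter
import math

def train(examples):
    """Count word frequencies per label from (text, label) pairs."""
    counts = {"fake": Counter(), "real": Counter()}
    totals = Counter()
    for text, label in examples:
        words = text.lower().split()
        counts[label].update(words)
        totals[label] += len(words)
    return counts, totals

def classify(text, counts, totals, vocab_size):
    """Naive Bayes with add-one smoothing; returns the likelier label."""
    scores = {}
    for label in counts:
        score = 0.0
        for word in text.lower().split():
            p = (counts[label][word] + 1) / (totals[label] + vocab_size)
            score += math.log(p)
        scores[label] = score
    return max(scores, key=scores.get)

# Toy labelled data -- invented for illustration only.
examples = [
    ("shocking secret cure doctors hate", "fake"),
    ("forward this to everyone now", "fake"),
    ("minister addresses parliament on budget", "real"),
    ("court hears arguments in land case", "real"),
]
counts, totals = train(examples)
vocab = {w for t, _ in examples for w in t.lower().split()}
print(classify("shocking secret everyone must forward", counts, totals, len(vocab)))  # fake
```

Real classifiers use far richer features and vastly more data, but the principle, scoring content against statistical patterns learned from labelled examples, is the same.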
A simple AI solution, for instance, can cross-check a news story against a dynamic database of stories that demarcates legitimate and fake stories. A database of specific accounts, sources, geographical locations9 or IP addresses that are known sources of fake news could also be handy for a quick check. AI systems can evaluate the headline text against the content of the post, looking for consistency between the two, or sift through similar articles on other news media platforms for fact checking. AI is also being used to spot manipulated or doctored images and videos, which can further alert users to dubious content. Numerous fact-checking websites have sprung up, and a few of them have even partnered with big players like Google and Facebook to help verify factual accuracy.
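Two of the checks described above can be sketched in a few lines: a lookup of a story's source domain against a blocklist of known fake-news sites, and a crude headline-versus-body consistency score. The domain names below are hypothetical placeholders, and real systems would use semantic similarity rather than simple word overlap; this is only a sketch of the idea:

```python
from urllib.parse import urlparse

# Hypothetical blocklist of domains known to publish fabricated stories.
KNOWN_FAKE_DOMAINS = {"totally-real-news.example", "viral-truth.example"}

def source_is_suspect(url):
    """Flag a story whose source domain appears on the blocklist."""
    return urlparse(url).netloc.lower() in KNOWN_FAKE_DOMAINS

def headline_consistency(headline, body):
    """Fraction of headline words that also occur in the body (0.0 to 1.0).
    A very low score may indicate a sensational, unsupported headline."""
    head = set(headline.lower().split())
    text = set(body.lower().split())
    return len(head & text) / len(head) if head else 0.0

print(source_is_suspect("https://viral-truth.example/story/123"))  # True
print(headline_consistency("aliens land in delhi",
                           "the city council of delhi met on monday"))  # 0.25
```

A low consistency score or a blocklisted source would not prove a story false, but either signal could trigger the kind of labelling or down-ranking the platforms experiment with.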
However, the efficacy of AI in curbing the menace remains contentious, particularly as AI is also being put to use to synthesise textual and media content that can be exceptionally convincing. As part of a collaborative project between the Technical University of Munich, Stanford University and a few other institutions, researchers have demonstrated a face-swapping application that transfers facial expressions to impersonate someone else, using a depth-sensing camera to manipulate video footage.10 As machine and deep learning technology advances, it is quite possible that AI-generated fake content will become indistinguishable from real information even for an AI-powered discriminator.
Although AI can run meticulous fact-checks with unparalleled speed and efficiency, it may fall short of understanding the nuances of human writing, with its adjectives, contexts and subtleties of tone.11 Some fake content can confuse even human beings, as it runs along the edges of fact and fiction. While AI may find such content technically challenging to decipher, the majority of obvious cases of misinformation or falsified information can be identified. The race between the use and abuse of AI is gearing up, making it ever more challenging for AI systems to detect professionally made fake content that was itself produced with AI tools. AI also has its limitations in eradicating the fake news problem: it can, at best, filter out or label dubious content, similar to the spam filters in email inboxes. AI is not an absolute answer to this problem. This is largely a human-versus-technology problem, and a purely technology-led solution would fall short of producing effective results.
Due diligence on the part of users, as the actual consumers and targets of fake information and online content, can contain the spread of fake news. Human judgement and wisdom are critical to solving this problem, but they require extensive awareness and education campaigns. Users who are aware of basic fact-checking methods and of the societal fallout of the fake information they share are better positioned to contain its proliferation. Before sharing dubious content, users can exercise judgement to question the source and its credibility, or to check the credentials of the individual it has come from. Such checks could act as barriers to the uncontrolled flow of falsified information. Leveraging the competence of human networks through crowd-sourcing, a few experiments have targeted the fake news problem, similar in concept to Wikipedia, where a network of volunteers keeps information updated. A network of volunteers, individuals as well as organisations, can maintain databases for fact-checking and even flag articles, posts and news content carrying falsified or fake information. AI-based content verification and labelling can also warn users that content is likely to be fake or that the authenticity of its source has not been established.
By and large, eradicating the fake news problem calls for a collective effort by individuals, governments, social media and content platforms, and organisations producing innovative technology solutions. Standalone technology solutions cannot be effective unless they are integrated with social causes and awareness among the masses. This is especially so in the Indian context, where a large online community of users, quite prone to opinion engineering, sources its news from social media or messaging platforms without paying heed to the credibility or authenticity of the source. At the end of the day, human wisdom can make the real difference; technology solutions can at most augment the human ability to differentiate between fake and real at first glance.
Tackling fake content and news raises many questions. Who decides whether a piece of fake content should be taken down? Would such takedowns interfere, in one way or another, with the freedom of speech and expression? And how should technology respond to parody and satire meant purely for entertainment? The larger question pertains to the very future of AI, with the technology used to demarcate fake content racing against the technology used to generate it.
While fake news is a problem for every country across the globe, countries with stringent content controls, like China, are apparently in a better position to tackle it, as they have both technical measures and legal regimes in place. The looming choice is whether there will always be a trade-off between freedom of expression and safety from fake content, or whether the Chinese way is the right approach. For India, the problem is more immediate, as general elections are around the corner and violence due to fake content has already claimed 23 lives. In the absence of controls and regulations, human wisdom and technology together can put an end to the growing menace. The sooner that happens, the better it is for society and for the democratic ethos of developing countries like India.
Views expressed are of the author and do not necessarily reflect the views of the IDSA or of the Government of India.
About the author:
*Munish Sharma is Consultant at the Institute for Defence Studies and Analyses, New Delhi
This article was published by IDSA.
- 1. “Government’s Stern Warning Forces WhatsApp To Say, ‘Will Step Up Efforts’”, Boom, July 03, 2018, available at https://www.boomlive.in/governments-stern-warning-forces-whatsapp-to-say-will-step-up-efforts/, accessed on 08 July, 2018.
- 2. “Mob Lynching Triggered By Child Lifting Rumours”, Compiled by Boom, available at https://docs.google.com/spreadsheets/d/19a9r_oGH8RMgiP61coaIWNKApqPE40j_iSBurUF5Mt8/edit#gid=0, accessed on 08 July, 2018.
- 3. Press Information Bureau – Government of India, “WhatsApp warned for abuse of their platform”, July 03, 2018, available at http://pib.nic.in/newsite/PrintRelease.aspx?relid=180364, accessed on 09 July, 2018.
- 4. “WhatsApp’s response to MeitY letter on issue of misinformation”, The Indian Express, July 05, 2018, available at https://indianexpress.com/article/technology/social/whatsapps-response-to-meity-letter-on-issue-of-misinformation-here-is-the-full-text-5245614/, accessed on 09 July, 2018.
- 5. Timothy Stenovec, “Shell Arctic Ready Hoax Website by Greenpeace Takes Internet by Storm”, Huffington Post, July 19, 2012, available at https://www.huffingtonpost.in/entry/shell-arctic-ready-hoax-greenpeace_n_1684222, accessed on 09 July, 2018.
- 6. “Fake White House bomb report causes brief stock market panic”, CBC News, April 23, 2013, available at http://www.cbc.ca/news/business/fake-white-house-bomb-report-causes-brief-stock-market-panic-1.1352024, accessed on 09 July, 2018.
- 7. Katharine Dommett and Luke Temple, “Digital Campaigning: The Rise of Facebook and Satellite Campaigns”, Parliamentary Affairs, Volume 71, Issue suppl_1, 1 March 2018, Pages 189–202, https://doi.org/10.1093/pa/gsx056.
- 8. Kasey Panetta, “Gartner Top Strategic Predictions for 2018 and Beyond”, Gartner, October 03, 2017, available at https://www.gartner.com/smarterwithgartner/gartner-top-strategic-predictions-for-2018-and-beyond/, accessed 08 July, 2018.
- 9. Jackie Snow, “Can AI Win the War Against Fake News?”, MIT Technology Review, December 13, 2017, available at https://www.technologyreview.com/s/609717/can-ai-win-the-war-against-fake-news/, accessed 09 July, 2018.
- 10. “Face2Face: Real-time Face Capture and Reenactment of RGB Videos”, available at http://niessnerlab.org/projects/thies2016face.html, accessed 06 July, 2018.
- 11. James Vincent, “Why AI isn’t going to solve Facebook’s fake news problem”, The Verge, April 05, 2018, available at https://www.theverge.com/2018/4/5/17202886/facebook-fake-news-moderation-ai-challenges, accessed on 06 July, 2018.