In a constantly evolving cyber threat landscape, where antivirus software and firewalls are increasingly seen as legacy tools, organizations are looking for more technologically advanced means of safeguarding classified and sensitive information. Artificial intelligence (AI) is emerging as a frontline defense against digital threats across the globe. It has become popular not only in the military domain; security companies are also incorporating AI technologies, using deep learning to find similarities and differences within data sets. Companies such as Microsoft are investing heavily in AI firms, including Microsoft's 1 billion USD investment in OpenAI.
Only three countries, the US, Russia, and China, are reported to be seriously developing military AI technologies, because AI promises to enhance a nation's defensive and offensive military capabilities. Every new technology brings new threats; hence, cyber threats to AI-based systems cannot be overlooked.
AI can be merged with new, sophisticated but untested weaponry such as offensive cyber capabilities. This development is alarming, as offensive cyber weapons can destabilize the balance of military power among the leading nations. With the advent of AI and machine learning, cyberattacks have become a more common threat to critical infrastructure: airport flight tracking, banking systems, hospital records, and the programs that run nuclear reactors and other national critical infrastructure.
Failure by governments to take proactive measures to ensure the security of AI systems "is going to come back to bite us," warned Omar Al Olama, the United Arab Emirates' Minister of State for Artificial Intelligence. Studies suggest that one of the most significant problems lies in the destabilizing effect of cyber weaponry, amplified by AI technologies, on the regional balance of power.
Though there is no definitive proof that critical infrastructure command and control systems have been compromised by cyberattacks, the digitization of these systems means the vulnerability exists. The destabilizing impact of AI cyber weaponry remains a significant concern for every nation. Indeed, protecting against these weapons and safeguarding a nation's software, hardware, and confidential data against cyberattacks have become integral to national security.
AI has recently entered the game as cybersecurity experts and researchers try to harness its potential to develop solutions that can thwart hackers with minimal human input. Using machine learning and neural networks, developers are becoming better at anticipating cybercriminals' next moves and new attack vectors. The impact of these applications is estimated to double within the next few years. IT leaders at more than 25% of organizations cite security as the top reason they have adopted machine learning. They consider AI as good for business as for security: it can reduce the time and funds required for human-driven detection and intervention by automating the inspection process, and it is believed to be more accurate than humans in responding to insider threats and cyberattacks.
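The automated-detection idea described above can be sketched in miniature: learn a statistical baseline of "normal" activity, then flag deviations without human review. The following Python sketch (all names and figures are hypothetical, and a simple z-score test stands in for the neural networks real products use) illustrates the principle:

```python
from statistics import mean, stdev

def build_baseline(samples):
    """Learn a simple baseline (mean and standard deviation)
    from historical, presumed-benign activity measurements."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag a new observation whose z-score exceeds the threshold."""
    mu, sigma = baseline
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > threshold

# Hypothetical training data: bytes transferred per session on a quiet network.
history = [1200, 1350, 1100, 1280, 1190, 1420, 1310, 1250]
baseline = build_baseline(history)

print(is_anomalous(1300, baseline))    # in line with the baseline -> False
print(is_anomalous(250000, baseline))  # exfiltration-like spike   -> True
```

Production systems model many features at once (login times, destinations, process trees) rather than a single metric, but the principle is the same: the system, not an analyst, decides what merits attention.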
In 2017, the cybersecurity firm Darktrace reported a unique cyberattack in which malware used machine learning to observe and learn normal user behavior patterns within a network. The malicious software then mimicked that normal behavior, blending into the background and making itself difficult for security tools to identify. Many organizations are exploring the use of machine learning and AI to protect their systems against cyberattacks; given their self-learning nature, these systems have reached a level where they could be trained to go on the offensive and become a threat themselves.
Policymakers should work closely with technical experts to investigate, prevent, and counter potential malicious uses of AI. Studies suggest that zero-day vulnerabilities in AI systems are emerging that are not yet publicly known, making it difficult to develop patches until the flaws are first exploited.
Furthermore, conducting red-team exercises in the AI domain, like the DARPA Cyber Grand Challenge, will help in understanding the scale at which attacks can be carried out and in discovering defenses. Present research in the public domain is confined to white-hat hackers, who employ machine learning to find vulnerabilities and suggest fixes. At the speed AI is developing, it will not take long for attackers to use AI capabilities at mass scale. AI could also prove a cybersecurity threat in a subtler way: as AI-driven and machine-learning products become part of defense strategies, they could lull IT professionals and employees into a false sense of security.
Today's AI solutions are still experimental, and complete reliance on them could be a mistake. In the future, AI will require some form of high-tech monitoring to ensure that it performs the constructive tasks it is meant to perform and does not become a tool of destruction. AI systems should be developed so that they are resilient to cyberattacks. Hence, a comprehensive, multifaceted strategy should be the prime focus of every nation. Only time will tell how beneficial new technologies such as machine learning and AI prove in the long run, which also rests on the ability to harness their potential properly.
- Zaheema Iqbal is a senior cybersecurity policy researcher at the National Institute of Maritime Affairs, Bahria University Islamabad, and an advisory member of the Strategic Warfare Group. Her interests include cyber warfare and cyber defense planning. She can be reached at [email protected]
- Hammaad Salik is an entrepreneur and an advisory member of the Strategic Warfare Group. He aims to provide accurate and transparent cyber information to the general public. His expertise is in cyber warfare operations and kinetic warfare. He can be reached at [email protected]