AI and Cybersecurity
The talent shortage is a perennial feature of cybersecurity. The problem was already prevalent in the early years of the digital era and has only become more complicated with recent technological developments. Every year, millions of wearables, IoT devices, smart cars, and other interconnected devices are added to the digital network. If the talent shortage was a big issue in the pre-IoT era, one can only imagine the scale of talent required today. According to a Robert Walters report, only 10% of professionals in the UK have the requisite skill set. While these new AI-powered, data-driven technologies have enhanced the user experience and improved the quality of life for most people, the threat surface and the nodes of vulnerability have grown substantially.
Cybersecurity is undoubtedly a boardroom-level issue now. The cost of an attack on an organisation or state has grown. There are several data integrity and data security issues that organisations have to worry about, and on top of these, regulatory and compliance requirements are being written into legislation in various countries. As a result, cybersecurity insurance is one of the fastest-growing segments of the insurance industry: it was valued at $7.36B in 2020 and is expected to reach $27.83B by 2025, nearly a fourfold increase. Brands and organisations must also contend with the loss of reputation and business opportunity that follows a breach.
In this context, using AI to enhance cybersecurity becomes an attractive proposition. Manual cybersecurity practices are insufficient and inefficient for the security demands of modern organisations, and impossible to scale to large states and the present-day digital architecture. AI excels at automating repetitive tasks, which reduces the workload on cybersecurity professionals and frees them for more proactive analysis. Beyond automation, there are several use cases where AI, or more specifically Machine Learning, can improve cybersecurity practices. For example, ML models can be trained for intrusion detection by learning what intrusive behaviour looks like from features such as the number of login attempts, queries per minute, and data requested per query. Malware detection is another field that can employ machine learning: bad actors create the initial version of a piece of malware and then automate the creation of variants, and ML can enhance traditional signature-based detection systems to help counter those variants. ML can also be used to find code vulnerabilities, and areas such as fraud detection and threat intelligence are well served by AI and ML techniques.
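To make the intrusion-detection example concrete, here is a minimal sketch of the idea of learning "normal" behaviour from the features mentioned above. The session data, feature choices, and the simple z-score threshold are all illustrative assumptions, not a description of any production system:

```python
from statistics import mean, stdev

# Hypothetical per-session features: (login attempts, queries/min, MB per query)
NORMAL_SESSIONS = [
    (2, 30, 0.5), (1, 25, 0.4), (3, 35, 0.6), (2, 28, 0.5),
    (1, 32, 0.5), (2, 27, 0.4), (3, 31, 0.6), (2, 29, 0.5),
]

def fit_baseline(sessions):
    """Learn the mean and standard deviation of each feature from
    presumed-benign traffic."""
    columns = list(zip(*sessions))
    return [(mean(c), stdev(c)) for c in columns]

def is_intrusive(session, baseline, z_threshold=3.0):
    """Flag a session if any feature deviates from the baseline by more
    than z_threshold standard deviations."""
    return any(abs(x - m) / s > z_threshold
               for x, (m, s) in zip(session, baseline))

baseline = fit_baseline(NORMAL_SESSIONS)
print(is_intrusive((2, 30, 0.5), baseline))    # typical session -> False
print(is_intrusive((40, 500, 8.0), baseline))  # brute-force-like burst -> True
```

Real systems would use far richer features and proper anomaly-detection models, but the principle is the same: characterise normal behaviour statistically, then flag sessions that deviate from it.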
However, it would be unwise to dismiss the dangers of mixing AI and cybersecurity, particularly when AI is used in cyber offensive operations. AI, like any other technology, has vulnerabilities and limitations, and it is not impossible to outsmart. GANs (Generative Adversarial Networks), a type of artificial neural network, can be utilised to confuse the underlying machine learning model, and GANs have been demonstrated to fool facial and speech recognition systems. Lower-quality ML models further increase the chances of attack and spoofing. Thus, AI is not a fail-safe against cyber attacks. The AI and ML techniques available to organisations and state actors are also available to malevolent actors, who can find vulnerabilities in these models. Malicious actors already use AI in spear-phishing to make emails appear personalised and authentic, getting their targets to click unsafe links and divulge personal information.
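The general idea behind fooling a model can be illustrated with a toy adversarial-evasion sketch. This is not the GAN technique the paragraph mentions, but a simpler gradient-sign perturbation against a fixed linear classifier; the weights, input, and perturbation size are all made-up values chosen for illustration:

```python
# Hypothetical trained weights and bias of a linear "malicious vs benign"
# classifier; in reality these would come from a training procedure.
W = [2.0, -1.5, 0.5]
B = -0.5

def score(x):
    """Linear decision score: positive means 'malicious'."""
    return sum(w * xi for w, xi in zip(W, x)) + B

def classify(x):
    return "malicious" if score(x) > 0 else "benign"

def evade(x, eps=0.6):
    """Nudge each feature against the sign of its weight, which is the
    direction that most quickly lowers the decision score."""
    return [xi - eps * (1 if w > 0 else -1) for xi, w in zip(x, W)]

x = [1.0, 0.2, 0.8]
print(classify(x))         # -> malicious
print(classify(evade(x)))  # -> benign (same sample, slightly perturbed)
```

The point is that an attacker who knows, or can approximate, the model's decision boundary can craft small input changes that flip its verdict, which is exactly why a model is never a fail-safe on its own.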
The current digital environment is complex and grows more so with the addition of new interconnected devices every day. AI is not the ultimate solution to minimising cybersecurity risks, but it gives states and organisations an edge against some attacks. Cybersecurity in the present age is a never-ending game, and it comes down to whether organisations and states can utilise AI to prevent attacks against critical infrastructure and data theft, or whether malicious actors can leverage it better.