Cybersecurity: the double edge of artificial intelligence

Artificial intelligence is part of the problem, but also part of the solution. The dark side uses it in increasingly sophisticated attacks, but it also gives defenders unprecedented capabilities. That is why we must move towards a comprehensive vision of cybersecurity.

A senior executive of a British company receives a call from their CEO. The message urges them to transfer 220,000 euros to a supplier's account in Hungary within the hour. Although there is no written request, they comply simply because they recognise their boss's voice. And they fall into the trap. This case hit the headlines not because of the amount swindled, but because of the algorithms capable of imitating a specific human voice. It happened in 2019, but similar applications had already existed for at least two years before that.

The production of fake videos that tamper with voices and images (deepfakes) is listed in all the rankings of innovation trends for 2022, but its rise goes hand in hand with a leap forwards in the scale and sophistication of malware and ransomware. This dichotomy between technological progress and crime has been fuelled by the unstoppable advance of digitalisation, as well as by artificial intelligence (AI).

In 2020, the Spanish National Intelligence Centre recorded twice as many very dangerous incidents. By June 2021, Fortinet had detected ten times as many ransomware attempts overall as a year earlier. According to the ESET company, by the middle of last year, the number of phishing emails in Latin America had increased by 132% compared to 2019. And Allianz ranks cyber incidents as the top business risk, alongside business interruptions and the economic impact of COVID-19.

Why is AI so coveted by the dark side?

The answer is simple: it provides new opportunities for the cybercrime industry. Its ability to learn from both mistakes and successes, and to find meaningful patterns and relationships in chaotic data or cause-and-effect sequences in a user's behaviour, provides new ways to swindle.

A US cybersecurity company demonstrated some time ago that AI is just as skilled as humans at crafting tweets as phishing bait or impersonating identities to harvest passwords. This was a real milestone, because convincing someone to click here or there often requires subtlety, not just ingenuity. It could happen when someone performs an online banking, administrative or sales transaction and a disguised chatbot suddenly appears offering them some benefit.

The AI learns to select valuable information (passwords, billing data, private conversations, etc.) for which the owner would pay a ransom, or to solve a captcha (selecting images or identifying distorted letters) so that an algorithm can cross that access barrier just like a human.

Offensive vs. defensive

At the height of this eternal struggle between good and evil, so-called offensive artificial intelligence can sabotage the very essence of defensive artificial intelligence. Intelligent algorithms need to be trained by processing huge amounts of data. If an attacker manages to inject false information at that stage, all subsequent analysis is corrupted, whatever the application, from autonomous driving to voice identification or even telemedicine.
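This kind of data-poisoning attack can be illustrated with a toy example. The following Python sketch is purely hypothetical (the data, labels and the 1-nearest-neighbour classifier are all invented for illustration); it shows how a handful of mislabelled training points injected by an attacker can flip the model's verdict on a clearly malicious observation:

```python
# Toy illustration of training-data poisoning ("label flipping").
# Feature values and labels are invented; real attacks target far
# larger pipelines, but the principle is the same.

def classify_1nn(training, value):
    """1-nearest-neighbour: return the label of the closest training sample."""
    nearest = min(training, key=lambda sample: abs(sample[0] - value))
    return nearest[1]

# Clean training data: 'benign' activity clusters near 1.0, 'malicious' near 9.0.
clean = [(0.8, "benign"), (1.1, "benign"), (1.3, "benign"),
         (8.7, "malicious"), (9.0, "malicious"), (9.2, "malicious")]

# An attacker injects mislabelled points: malicious-looking values tagged benign.
poisoned = clean + [(8.8, "benign"), (8.9, "benign"), (9.1, "benign")]

query = 8.9  # an observation that clearly belongs to the malicious cluster
print("clean model says:   ", classify_1nn(clean, query))
print("poisoned model says:", classify_1nn(poisoned, query))
```

With the clean data the query lands next to the malicious samples and is flagged; after poisoning, the nearest neighbour is one of the attacker's mislabelled points, so the same observation sails through as benign.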

That ability to learn on its own is crucial to automation: for example, robotic malware that operates alone and at scale, mutates, perfects itself and looks for 'cat flaps' into poorly protected systems, or ways around sophisticated shields. The criminal does not even have to develop it; they can buy it from a third party under the Crime-as-a-Service model, another major trend according to Europol.

The conclusion is that artificial intelligence is a double-edged sword and calls for a security policy that matches its destructive potential. How can we achieve that protection? By investing, precisely, in defensive AI. In fact, this specific sector will grow by 23.3% a year until 2026, according to MarketsandMarkets. "It can dramatically speed up the identification of new threats and responses, help stop attacks before they spread, automate and orchestrate processes, and require less time for analysis by specialists," explains Deloitte.

According to the consultancy firm, the paradigm should be refocused from reaction to prevention: "Orchestrating detection processes and technologies provides proactivity in the face of advanced threats." If a malware AI can learn from attacks and become smarter, a machine-learning system can detect that modified malware and block it, because it has analysed previous incidents.
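To make that contrast concrete, here is a minimal hypothetical sketch in Python (the feature values, hashes and distance threshold are all invented for illustration) of why a detector that learns from past incidents can flag a mutated sample that a pure signature blocklist misses:

```python
# Contrast between signature matching and similarity-based detection.
# Behavioural features here are invented placeholders (e.g. files touched
# per second, network calls, registry writes).

known_malware = [
    (120, 45, 30),
    (115, 50, 28),
]
known_hashes = {"a1b2c3", "d4e5f6"}  # signatures of those exact binaries

def signature_match(sample_hash):
    """Classic blocklist: catches only byte-identical binaries."""
    return sample_hash in known_hashes

def similarity_match(features, threshold=15.0):
    """Flag a sample whose behaviour is close to any previously analysed incident."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return any(distance(features, known) < threshold for known in known_malware)

# A mutated variant: new hash (so a new signature), slightly altered behaviour.
mutant_hash, mutant_features = "ffff99", (118, 47, 31)

print(signature_match(mutant_hash))       # the signature no longer matches
print(similarity_match(mutant_features))  # but the behaviour still resembles past attacks
```

The mutation defeats the exact-match blocklist, while the distance-based check, which generalises from previous incidents, still recognises the family resemblance.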

And, of course, it can also be applied after the fact. For example, the pirating of paid sporting events, especially football, is an open wound for clubs, platforms and broadcasters. The trend is beginning to change with greater judicial agility, but also thanks to machine-learning tracking and blocking solutions. In 2020, Spain's LaLiga reduced pirated matches by almost 20% and is already exporting the system to other competitions.

What should an AI cybersecurity policy be like?

In an article, José Luis Laguna, a cybersecurity director at several multinationals, advocates an integration that takes advantage of the unprecedented resources of today's companies and overcomes the fragmented security approach that lets attacks in. This distributed, group model involves deeply integrating all of a company's systems, devices and data sources. Interoperability is critical so that machine-learning solutions can access, share and protect all of the information behind a seamless shield. The integration would be not only technological but also human: from the commitment of top management to training (a bottleneck due to a lack of technological talent) and the creation of specific AI cybersecurity roles.

What's more, trend rankings indicate that this integration must go beyond companies and reach all the players involved in a public-private partnership model: from potential victims and solution developers to the educational system, administrations and security companies.

There's no time to waste. The explosion of the Internet of Things driven by 5G networks is imminent. 6G is on the horizon, and its huge processing capacity will enable another generation of AI services. Both leaps would mean greater vulnerability if, as has been the case to date, many connected objects, from household appliances to clothing, are not designed with an obsession for cybersecurity. According to Europol and private reports, such as the one by CrowdStrike, an added risk is attackers receiving funding from states engaged in an increasingly open cyberwar.
