Why Artificial Intelligence Is the Future of Cyber Security
Cyber security covers all the measures that can be implemented to protect systems, networks, architectures, programs, and so on from cyber attacks.
That is a very broad definition, but it can be simplified: cyber security gathers all possible means of protecting your company's (or your personal) data.
Protecting your data means that it remains unaltered (integrity), remains yours (confidentiality), and remains accessible to you under the conditions you have chosen (availability).
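One of these properties, integrity, is easy to make concrete: record a cryptographic hash of the data while it is known to be good, and any later modification changes the hash. A minimal Python sketch (the data and names here are illustrative, not from any real system):

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Return the SHA-256 hex digest of the given bytes."""
    return hashlib.sha256(data).hexdigest()

# Record a checksum while the data is known-good.
original = b"customer_records_v1"
baseline = sha256_of(original)

# Later, re-hash and compare: any silent modification changes the digest.
tampered = b"customer_records_v1 (modified)"
print(sha256_of(original) == baseline)  # True  -> integrity holds
print(sha256_of(tampered) == baseline)  # False -> integrity violated
```

The same idea, scaled up (signed hashes, file-integrity monitoring), is how real systems detect tampering.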
The term "Big Data" appeared in 1997, and its opportunities became a revolution in the IT world only a few years ago. Yet big data has always existed: simplified, it just means "a lot of data," and we have always had a lot of data in our world. Your name is a data point, your birthday is a data point, your car model is a data point, your licence plate is a data point... everything is data, so of course it adds up to a lot of it. We simply could not do much more with all that data than the technology of the time allowed.
The reason big data suddenly drew such a spotlight in the IT world is that it arrived alongside artificial intelligence (AI), which lets us build systems that predict future data from enormous volumes of existing data, build complex systems able to solve problems, and deploy those systems at much larger scales.
Now, back to cyber security. The measures implemented to protect the confidentiality, integrity, and availability of data remain roughly 80% "reactive" rather than "proactive" (an average from the cyber security structures I have observed in companies over the past 10 years). In other words, technology evolves so fast that hackers also take advantage of AI-based systems to become ever more creative in their attacks.
A cyber security organization's proactivity shows mainly in the adequacy and readiness of its prevention mechanisms and its users' best practices.
As hackers gain growing access to AI, these prevention mechanisms and best practices need to be reinforced. But they will not be sufficient in the medium term: cyber security will have to make use of more and more AI-based systems too, in order to mount a stronger defense in the long run.
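To make "proactive, AI-based defense" a little more concrete, here is a deliberately tiny statistical baseline (not a real AI system) that flags unusual spikes before anyone files an incident report. The numbers are invented for illustration:

```python
from statistics import mean, stdev

def anomalies(counts: list[int], threshold: float = 2.5) -> list[int]:
    """Flag values more than `threshold` standard deviations above the mean."""
    mu, sigma = mean(counts), stdev(counts)
    return [c for c in counts if sigma and (c - mu) / sigma > threshold]

# Hourly counts of failed logins (made-up data); one hour spikes abnormally.
hourly_failed_logins = [3, 5, 4, 6, 2, 5, 4, 3, 250, 4]
print(anomalies(hourly_failed_logins))  # -> [250]
```

Production systems replace the z-score with trained models over many signals, but the principle is the same: learn what "normal" looks like, then alert on deviations before the damage is done.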
Before AI-based cyber security systems can be implemented in companies, however, a challenge needs to be overcome.
That challenge is the underestimation of the risks of a cyber breach. Gartner research shows that 90% of employees know when their actions put the company at risk, but they do them anyway (for reasons of speed or convenience). So even when best practices are well communicated and understood, that does not mean they will be applied.
This underestimation of risk also has another cause; let's be fair: when a company gets attacked, it does not want everyone to know how much it lost in the attack (especially if that could undermine its customers' trust). So companies keep their losses to themselves unless forced to disclose them publicly.
I believe this underestimation also stems from how difficult it is to quote the exact financial impact of a cyber attack in advance, as it can range from $500 to billions of dollars.
And if some of the people who underestimate those risks are involved in decisions about the company's budget allocation, they will tend not to allocate the necessary budget to AI-based cyber security solutions.
Convincing decision-makers to invest in an AI-based solution with a very tangible price is therefore a challenge that ethical cyber security specialists will have to overcome in order to do their work with due diligence.