Today’s most damaging threats, from phishing to ransomware, will use machine learning to analyze vast amounts of information about their victims and design more sophisticated attacks. Artificial intelligence could become the basis for more personalized and effective malware. However, these algorithms could also be the foundation for the future fight against cybercrime: predicting attacks before they occur based on suspicious behavior.
Here is a real-world example of how machine learning could be used in a malware attack. A few weeks before your company's Christmas dinner, you receive an email from the colleague organizing it. The message uses their usual language and tone and includes a couple of files, supposedly the menus of the two restaurants you could go to, so that you can vote for your favorite. When you open the first one, hidden malware is downloaded to your computer.
In reality, that email, received at a plausible time and in a plausible way, was written by a malicious tool that imitated the behavior of the person who supposedly sent it, tricking you into installing a virus on your computer.
Although this scenario still sounds a bit like science fiction, experts point out that it is unfortunately the future of cybercrime: hostile programs that are able to learn to inflict as much damage as possible thanks to the data collected. “We are already beginning to see intelligent ‘malware’ that uses advanced techniques, including artificial intelligence, to carry out slow, quiet attacks,” Emily Orton, product manager for Darktrace, a cyber security company that seeks to use artificial intelligence to address computer threats, told HojaDeRouter.com.
Over time, the tools used by criminals will be able to store and analyze information about user behavior and, as a result, imitate users in order to access the network at different times without raising suspicion. “They will also be able to create highly personalized attacks against specific individuals, because they will understand their interests, their habits and their social groups,” she adds.
In fact, several artificial intelligence programs can already be taught to imitate writing styles: from experiments by The Guardian's editors, which produced admittedly thin content, to automatically generated news articles built from structured data, such as stock market figures or the results of a sporting event.
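As a toy illustration of how a program can learn and then imitate a writing style, here is a minimal Markov-chain text generator in Python. It is a deliberately simple stand-in for the far more capable models the article alludes to; the function names and parameters are hypothetical, not taken from any real tool mentioned here.

```python
import random
from collections import defaultdict


def build_model(text, order=2):
    """Map each sequence of `order` words to the words observed after it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        model[key].append(words[i + order])
    return model


def imitate(model, length=10, seed=0):
    """Generate text that statistically resembles the training sample."""
    rng = random.Random(seed)
    key = rng.choice(list(model.keys()))
    out = list(key)
    for _ in range(length):
        followers = model.get(tuple(out[-len(key):]))
        if not followers:
            break  # dead end: no observed continuation
        out.append(rng.choice(followers))
    return " ".join(out)
```

Trained on enough of a person's emails, even this crude approach reproduces recognizable turns of phrase; modern neural models do the same at a level that can fool a casual reader.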
As these systems evolve and are able to fine-tune their literary output and understand users’ tastes and movements at a deeper level, intelligent malware will be used to develop more elaborate versions of existing scams, such as the well-known boss email. In this scam, the impersonator pretends to be a company executive using an almost identical email address. In the message, they request a money transfer, a request that is not unusual coming from a senior manager. According to FBI estimates, this type of attack has already cost the affected businesses a whopping $23 billion (about 20 billion euros), and there is still a human hand behind it, without any artificial intelligence to streamline the process.
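One simple defensive counterpart to the boss-email scam is checking whether a sender's domain is a near miss of a trusted one, since impersonators rely on almost identical addresses. Here is a minimal sketch using edit distance; the function names and the threshold are illustrative assumptions, not a description of any production filter.

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]


def looks_spoofed(sender_domain, trusted_domains, max_distance=2):
    """Flag domains that are close to, but not exactly, a trusted domain."""
    for domain in trusted_domains:
        distance = levenshtein(sender_domain, domain)
        if 0 < distance <= max_distance:
            return True
    return False
```

For example, `looks_spoofed("examp1e.com", ["example.com"])` flags the one-character swap, while the exact domain passes. Real mail filters combine many such signals rather than relying on a single distance check.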
This will not be the only type of attack to grow more effective. The sadly popular ransomware (which hijacks some of the information on the infected system and demands a ransom in exchange for lifting the restriction) could also take advantage of advances in artificial intelligence to fine-tune its targets, choosing more precisely the data it must capture to force payment.
“Cybercriminals will be able to take advantage of artificial intelligence and machine learning to make their malware smarter, which means cybersecurity will always have to stay one step ahead,” Nitesh Chawla, a professor and member of the research team that created AI2, a system that predicts cyberattacks through machine learning, told HojaDeRouter.com.
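AI2 itself combines machine learning with analyst feedback, which is well beyond a short example. As a minimal sketch of the general idea of predicting attacks from suspicious behavior, flagging readings that deviate strongly from a learned baseline, here is a z-score anomaly check; the threshold and the choice of feature are hypothetical.

```python
from statistics import mean, stdev


def is_anomalous(history, new_value, threshold=3.0):
    """Flag a reading that deviates strongly from a user's past behavior.

    `history` is a list of past measurements for one user, e.g. bytes
    transferred per session or number of login attempts per day.
    """
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_value != mu  # any change from a constant baseline is odd
    return abs(new_value - mu) / sigma > threshold
```

A session transferring ten times the usual volume would be flagged, while ordinary fluctuation would not. Systems like the one Chawla describes layer far richer models, and human review, on top of this basic intuition.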
While artificial intelligence could become the basis for more personalized and effective malware, these algorithms may also be the foundation for the future fight against cybercrime.