Here’s a great, futurist look at cybercrime: This is how artificial intelligence will become weaponized in future cyberattacks …
Among other things, it points out that AI will create new techniques for cybercrime. The most important of these is “gain[ing] an understanding of what communication is dominant in the target’s network and blend[ing] in.”
Who hasn’t heard the story of the faithful employee who fell victim to a clever phishing attack because of domain treachery or murky vendor relationships? They clicked on what looked like an official company domain, or maybe one belonging to a partner of the company (like its insurer or 401(k) provider). They filled out the form. They provided the requested info. Just like they’ve done many times before.
In other words, AI will employ the adaptive behavior usually attributed to human hackers. Scary stuff, especially at the scale that AI could potentially achieve.
Exploiting trust to confuse victims during phishing attacks is not a new technique. That’s why so many companies insist on hosting customer- and partner-facing services under their own domain.
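To make the “domain treachery” point concrete, here is a minimal sketch of one common defensive check: flagging lookalike domains that sit within a small edit distance of a company’s real domains. The trusted-domain list, the threshold of 2, and the function names are hypothetical examples, not anything from the article.

```python
# Hypothetical sketch: flag typosquatted lookalikes of trusted domains.
# TRUSTED and the edit-distance threshold are illustrative assumptions.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

TRUSTED = {"example.com", "benefits.example.com"}  # hypothetical company domains

def classify_domain(domain: str) -> str:
    """Return 'trusted', 'lookalike', or 'unknown' for a link's domain."""
    d = domain.lower().rstrip(".")
    if d in TRUSTED:
        return "trusted"
    # Close to a trusted domain but not equal: likely a typosquat.
    if any(levenshtein(d, t) <= 2 for t in TRUSTED):
        return "lookalike"
    return "unknown"
```

So `classify_domain("examp1e.com")` (digit one instead of the letter l) comes back as a lookalike, while an unrelated domain is merely unknown. Real-world tooling also has to handle Unicode homographs and subdomain tricks, which a pure edit-distance check misses.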
Quite honestly, these points just underscore the incredible opportunity and threat that AI poses, and make it all the more important to maintain control over your data – especially once it leaves your organization.
Because even if you can defend your own employees against future AI cybercriminals – or cyber-augmented criminals – can you protect your partners, vendors, and customers?