The core problem with email security
Email reaches half the world’s population and is the leading attack vector for cybercrime. Everyone from petty criminal gangs to state-sponsored espionage groups uses email as a primary delivery vehicle for their attacks.
To cite just one among many sources: the 2019 Verizon Data Breach Investigations Report (DBIR) names email phishing attacks as the number one cause of data breaches.
And costs are rising. The Federal Bureau of Investigation estimates that business email compromise (BEC) attacks (the fastest growing type of spear phishing) cost companies $12.5 billion between October 2013 and May 2018. It gets worse, though: The cost of BEC more than doubled to $26 billion for a similar three-year period from June 2016 to July 2019.
The security industry has responded to this threat by investing billions of dollars into new anti-phishing products and technologies. Unfortunately, most of them are woefully inadequate to stop phishing attacks.
For too long, the industry has relied on employee security awareness training and tools for content scanning, spam blacklisting, and artificial intelligence to protect against phishing.
Meanwhile, criminals continue to adapt and change their attack methods. One reason for the surge in BEC: Clever impersonation attacks slip through most current defenses. That helps explain why Barracuda recently found that almost 90% of email attacks use impersonation, of either a brand (83%) or a person (6%).
Employee Security Training
Nearly 50% of cybersecurity incidents in 2017 were attributed to human error, so there’s clearly a need for training that covers a variety of best practices for physical and data security, including how to avoid email phishing, hoaxes, and malware.
But if you think employees will be able to detect all email fraud if you just train them well enough, dream on. Human brains are wired to understand and interpret patterns, and you can probably easily read even jumbled text (below left). Attackers make use of this strength and turn it into a weakness, sending their phishing attacks from domains that have “jumbled” text but which your brain interprets as a legitimate sender (below right).
Even the best-trained employees will have a hard time identifying deceptive domains like these, particularly if they’re moving through their inboxes quickly, distracted by other work, or reading on their mobile phones.
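To make the deception concrete, a defender can estimate how closely a suspicious domain resembles a legitimate one. The sketch below is a minimal, hypothetical illustration, not a production detector: the substitution table is an assumption (real systems use much larger confusable-character sets), and it relies only on Python’s standard library.

```python
import difflib

# A few character swaps attackers commonly use in lookalike domains.
# Illustrative only; real detectors use far larger homoglyph tables.
SWAPS = {"rn": "m", "vv": "w", "0": "o", "1": "l", "3": "e", "5": "s"}

def lookalike_score(candidate: str, legitimate: str) -> float:
    """Return a 0.0-1.0 similarity between a candidate domain and a real one."""
    normalized = candidate.lower()
    for fake, real in SWAPS.items():
        normalized = normalized.replace(fake, real)
    return difflib.SequenceMatcher(None, normalized, legitimate.lower()).ratio()

# "rnicrosoft.com" normalizes to "microsoft.com", a perfect 1.0 match,
# even though the raw strings differ -- the same trick your brain plays on you.
```

A score near 1.0 for a domain that isn’t actually the legitimate one is a strong impersonation signal. Human readers perform this normalization unconsciously while skimming, which is exactly the strength attackers turn into a weakness.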
Content Scanning and Filtering
Commercial secure email gateways (SEGs) are quite adept at scanning email content for malware, viruses, and bad URLs, and sandboxing technologies can quarantine email attachments to determine whether they’re safe. Unfortunately, attackers have moved on from malware-centric email attacks. FireEye reports that 90 percent of email attacks are malware-less, which means they often pass right through SEGs and other defenses focused on scanning for malicious content.
Artificial Intelligence and Machine Learning
Artificial intelligence (AI) and machine learning (ML) techniques can be useful for analyzing large volumes of content and monitoring network activity. Some email security solutions have used these technologies to map relationships among senders and recipients, learn and model expected behaviors, and classify messages with scores meant to indicate their relative level of risk.
These solutions tend to work in one of two ways. The first builds an understanding of the individual’s and/or organization’s “emailing network” and flags unusual activity based on contextual cues, relationships in the network, and so on; IT administrators can spend weeks to months on manual configuration before these systems reach full functionality. The second analyzes content, classifying emails as phishing based on how similar they are to known-good messages or how much they have in common with known-bad messages. This method can detect some phishing right away, but it still takes time to build accuracy for individual users. Both methods demand significant IT resources and time for training and tuning, and they share other weaknesses, such as a high rate of false positives (good messages mistakenly labeled as bad). And well-crafted social engineering attacks, which are often nearly indistinguishable from legitimate messages, slip through these filters.
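The content-similarity method described above can be sketched in a few lines. This is a toy illustration under stated assumptions (whole-word tokens, Jaccard overlap against a small known-bad corpus), not how any commercial product actually works:

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: size of intersection over size of union."""
    return len(a & b) / len(a | b) if a | b else 0.0

def risk_score(message: str, known_bad: list) -> float:
    """Score a message by its maximum token overlap with known-bad messages.

    Higher scores mean the message looks more like past phishing attempts.
    """
    tokens = set(message.lower().split())
    return max(
        (jaccard(tokens, set(bad.lower().split())) for bad in known_bad),
        default=0.0,
    )
```

The weaknesses follow directly from the design: a legitimate message that happens to share wording with past phish scores high (a false positive), while a carefully written BEC message with novel wording scores low and sails through.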
The missing piece
Employee training, content-scanning and filtering solutions like SEGs, and AI/ML techniques all have their place in a complete, layered approach to defending against phishing.
But what they miss is a robust approach to validating sender identity. The high rate of impersonation among email attacks mentioned above proves that attackers have recognized this weakness. Impersonation enables attackers to slip through these defenses with malware-less messages that don’t trigger any alarms, delivering deceptive social-engineering attacks right into users’ inboxes. Those emails are aimed at getting users to do something other than click on a link or download an attachment: they direct the recipient to update a payroll direct deposit, send payment to a new bank account, or email the codes for iTunes gift cards to the “boss” who sent the message.
Without a strong approach to sender identity, impersonation attacks like these will continue to wreak havoc and cost companies billions of dollars.
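While this post doesn’t prescribe a specific mechanism, the most widely deployed standard for domain-level sender identity is DMARC, which builds on SPF and DKIM and is published as a DNS TXT record. As a simple illustration of what such a policy looks like, here’s a minimal sketch that parses a DMARC record string into its tags (the example record and domain are hypothetical):

```python
def parse_dmarc(record: str) -> dict:
    """Parse a DMARC TXT record like 'v=DMARC1; p=reject' into a tag dict.

    A minimal sketch: real validators also check tag validity, defaults,
    and the record's placement at _dmarc.<domain>.
    """
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

# Hypothetical record a domain owner might publish at _dmarc.example.com:
policy = parse_dmarc("v=DMARC1; p=reject; rua=mailto:reports@example.com")
# policy["p"] == "reject" means receivers should refuse mail that fails
# authentication -- exact-domain impersonation simply stops being delivered.
```

A `p=reject` policy closes the exact-domain impersonation route; lookalike domains, as discussed earlier, require additional defenses.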
In our next blog post we’ll look at the three main types of impersonation used by email attackers.
Want to know more? Download our free white paper, Put an end to phishing.