A psychology study exploring what makes workers vulnerable to phishing attacks has found that stressed employees are more likely than their colleagues to fall victim to a phishing attack. The implication is that strong workplace welfare schemes, including elements that help employees manage stress, can reduce the success rate of phishing attacks.
The study was carried out at the US Department of Energy’s Pacific Northwest National Laboratory (PNNL) and was published recently in the Journal of Information Warfare.
The researchers noted that the relationship between stress and response to a simulated phishing email was statistically significant.
“The first step to defend ourselves is understanding the complex constellation of variables that make a person susceptible to phishing,” says PNNL psychologist Corey Fallon, a corresponding author of the study. “We need to tease out those factors that make people more or less likely to click on a dubious message.”
In their study, Fallon and colleagues found that people who reported a high level of work-related distress were significantly more likely to follow a phishing email’s link. Every one-point increase in self-reported distress increased the likelihood of responding to the simulated phishing email by 15 percent.
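To make the 15-percent figure concrete, here is a minimal sketch of how it compounds across distress points, assuming a multiplicative interpretation of the reported increase (the paper's exact statistical model is not given in this article, so this is an illustration, not the authors' method):

```python
# Hedged sketch: the study reports that each one-point increase in
# self-reported distress raised the likelihood of responding to the
# simulated phishing email by 15 percent. Treating that as multiplicative
# (an assumption), the relative likelihood after k extra points is 1.15**k.

def relative_click_likelihood(extra_distress_points: int,
                              per_point_increase: float = 0.15) -> float:
    """Relative likelihood of clicking, versus a baseline respondent."""
    return (1 + per_point_increase) ** extra_distress_points

# A worker reporting 3 points more distress than a baseline colleague:
print(round(relative_click_likelihood(3), 3))  # ≈ 1.521, i.e. ~52% more likely
```

Under this reading, even modest differences in self-reported distress translate into substantially higher odds of clicking.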
The scientists describe distress as a feeling of tension that arises when someone on the job feels they are in a difficult situation and unable to tackle the task at hand. Distress might stem from feeling that the workload is too high, or from doubting whether they have adequate training or time to accomplish their work.
The 153 participants had agreed to take part in human factors research, but they were unaware that a phishing email sent a few weeks later was part of the planned study.
Each participant received one of four different versions of a message about an alleged new dress code to be implemented at their organization. The team tested three common phishing tactics separately and together.
Here’s what they found:
- 49 percent of recipients clicked when the message applied time pressure. Sample text: “This policy will go into effect 3 days from the receipt of this notice...acknowledge the changes immediately.”
- 47 percent clicked when the message threatened consequences. Sample text: “…comply with this change in dress code or you may be subject to disciplinary action.”
- 38 percent clicked when the message invoked authority. Sample text: “Per the Office of General Counsel…”
- 31 percent clicked when all three tactics were combined.
While the team had expected that more tactics used together would result in more people clicking on the message, that wasn’t the case.
“It’s possible that the more tactics that were used, the more obvious it was a phishing message,” said author Dustin Arendt, a data scientist. “The tactics must be compelling, but there’s a middle ground. If too many tactics are used, it may be obvious that you’re being manipulated.”
Human-machine teaming to reduce cybersecurity risk
How can companies and employees use this data to reduce the risk?
“One option is to help people recognize when they are feeling distressed,” said Fallon, “so they can be extra aware and cautious when they’re especially vulnerable.”
In the future, one option might be human-machine teaming. If an algorithm notes a change in a work pattern that might indicate fatigue or inattention, a smart machine assistant could suggest a break from email.
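The idea above can be sketched in a few lines. This is a hypothetical illustration (not the researchers' system): a machine assistant flags a possible fatigue signal when a worker's recent email-handling times deviate sharply from their own baseline, and only then suggests a break:

```python
# Hypothetical sketch of a machine assistant that watches for a change in
# work pattern. All thresholds and the z-score heuristic are assumptions
# for illustration, not part of the published study.
from statistics import mean, stdev

def should_suggest_break(baseline_secs: list[float],
                         recent_secs: list[float],
                         z_threshold: float = 2.0) -> bool:
    """Suggest a break if the recent average email-handling time is an
    outlier relative to the worker's own baseline distribution."""
    mu, sigma = mean(baseline_secs), stdev(baseline_secs)
    if sigma == 0:
        return False  # no variability to compare against
    z = abs(mean(recent_secs) - mu) / sigma
    return z > z_threshold

# A worker who usually takes ~30 s per email has slowed to ~90 s:
print(should_suggest_break([28, 31, 30, 29, 32], [85, 92, 95]))  # True
```

Comparing a worker only against their own baseline, rather than a global norm, is one way such a system could limit false alarms, though as the researchers note, any monitoring of this kind raises privacy questions.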
Automated alerts of this kind are already becoming common; for instance, some cars issue a fatigue warning when the driver drifts unexpectedly. The researchers noted that the potential benefits of input from a machine assistant would need to be weighed against employee privacy concerns.
The work was funded by the Cybersecurity and Infrastructure Security Agency, part of the Department of Homeland Security.