Open Access
Synthetic Social Engineering Scenario Generation using LLMs for Awareness-Based Attack Resilience
Author(s) -
Jade Webb,
Faranak Abri,
Sayma Akther
Publication year - 2025
Publication title -
IEEE Access
Language(s) - English
Resource type - Magazines
SCImago Journal Rank - 0.587
H-Index - 127
eISSN - 2169-3536
DOI - 10.1109/access.2025.3614550
Subject(s) - aerospace , bioengineering , communication, networking and broadcast technologies , components, circuits, devices and systems , computing and processing , engineered materials, dielectrics and plasmas , engineering profession , fields, waves and electromagnetics , general topics for engineers , geoscience , nuclear engineering , photonics and electrooptics , power, energy and industry applications , robotics and control systems , signal processing and analysis , transportation
Social engineering is found in a strong majority of cyberattacks today, as it is a powerful manipulation tactic that does not require the technical skill of hacking. Calculated social engineers use simple communication to deceive and exploit their victims, capitalizing on two vulnerabilities of human nature: trust and fear. When successful, this inconspicuous technique can lead to millions of dollars in losses. Social engineering is not a one-dimensional technique; criminals often combine strategies to craft a robust yet subtle attack, and they continually evolve their methods to surpass preventive measures. A common defense against social engineering attacks is detection-based software. Security awareness, however, is a valuable approach that is often eclipsed by automated technical solutions, and it establishes a strong first line of defense against these ever-changing attacks. This study uses four data-supplemented large language models to generate custom social engineering scenarios, with the goal of supporting strong, example-driven security awareness programs. The performance of BERT, T5, GPT-3.5, and Llama 3.1 is comparatively analyzed, with Llama 3.1 producing the highest-quality scenarios across multiple metrics, including LLM-as-a-judge evaluation.
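The abstract mentions scoring generated scenarios with LLM-as-a-judge. As a minimal sketch of how such a judging step might be wired up, assuming a simple numeric rubric: the criteria names, prompt wording, and helper functions below are illustrative assumptions, not the paper's actual evaluation protocol.

```python
# Hypothetical LLM-as-a-judge scaffolding: build a rubric prompt for a judge
# model and parse its per-criterion scores. The rubric criteria and reply
# format are assumptions for illustration only.
import re

CRITERIA = ["realism", "coherence", "instructional_value"]  # assumed rubric

def build_judge_prompt(scenario: str) -> str:
    """Wrap a generated scenario in a rubric prompt for a judge model."""
    rubric = "\n".join(f"- {c}: 1-5" for c in CRITERIA)
    return (
        "Rate the following social engineering awareness scenario on each "
        f"criterion (1 = poor, 5 = excellent):\n{rubric}\n\n"
        f"Scenario:\n{scenario}\n\n"
        "Answer with one line per criterion, e.g. 'realism: 4'."
    )

def parse_judge_reply(reply: str) -> dict:
    """Extract integer scores like 'realism: 4' from a judge model's reply."""
    scores = {}
    for c in CRITERIA:
        match = re.search(rf"{c}\s*:\s*([1-5])", reply, re.IGNORECASE)
        if match:
            scores[c] = int(match.group(1))
    return scores

def mean_score(scores: dict) -> float:
    """Average the per-criterion scores into a single quality number."""
    return sum(scores.values()) / len(scores) if scores else 0.0
```

In practice the prompt would be sent to a judge model (e.g. via an API call, omitted here), and the parsed scores averaged across many generated scenarios to compare the candidate models.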