
On the Validity of Traditional Vulnerability Scoring Systems for Adversarial Attacks against LLMs
Author(s) -
Atmane Ayoub Mansour Bahar,
Ahmad Samer Wazan
Publication year - 2025
Publication title -
IEEE Access
Language(s) - English
Resource type - Magazines
SCImago Journal Rank - 0.587
H-Index - 127
eISSN - 2169-3536
DOI - 10.1109/ACCESS.2025.3574108
Subject(s) - aerospace, bioengineering, communication, networking and broadcast technologies, components, circuits, devices and systems, computing and processing, engineered materials, dielectrics and plasmas, engineering profession, fields, waves and electromagnetics, general topics for engineers, geoscience, nuclear engineering, photonics and electrooptics, power, energy and industry applications, robotics and control systems, signal processing and analysis, transportation
Purpose - This research investigates the effectiveness of established vulnerability metrics, such as the Common Vulnerability Scoring System (CVSS), in evaluating attacks on Large Language Models (LLMs), with a focus on Adversarial Attacks (AAs). The study examines how individual metric factors influence the resulting vulnerability scores, offering new perspectives on potential enhancements to these metrics.
Approach - The study adopts a quantitative approach, calculating and comparing the coefficient of variation of vulnerability scores across 56 adversarial attacks on LLMs. The attacks, sourced from research papers and obtained through online databases, were evaluated using multiple vulnerability metrics. Each score was determined by averaging the values assessed by three distinct LLMs.
Findings - The results indicate that existing scoring systems yield vulnerability scores with minimal variation across different attacks. This supports the hypothesis that current vulnerability metrics are of limited use for evaluating AAs on LLMs and highlights the need for more flexible, generalized metrics tailored to such attacks.
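The core of the methodology described above can be sketched in a few lines: average each attack's score across the three LLM assessors, then compute the coefficient of variation (CV = standard deviation / mean) over all attacks. The scores below are hypothetical illustrations, not values from the paper.

```python
from statistics import mean, pstdev

def average_llm_scores(per_llm_scores):
    # Each attack's final score is the mean of the values
    # assigned by three distinct LLM assessors.
    return mean(per_llm_scores)

def coefficient_of_variation(scores):
    # CV = population standard deviation / mean.
    # A low CV across attacks suggests the metric barely
    # discriminates between different attacks.
    return pstdev(scores) / mean(scores)

# Hypothetical CVSS-like scores (0-10 scale) for a handful of
# adversarial attacks, each already averaged across three LLMs.
cvss_scores = [7.1, 6.8, 7.3, 7.0, 6.9]
cv = coefficient_of_variation(cvss_scores)
```

A clustered set of scores like the one above yields a CV of only a few percent, which is the kind of minimal variation the study interprets as a limitation of the metric.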