this post was submitted on 01 Apr 2025
13 points (100.0% liked)

Pulse of Truth

849 readers
18 users here now

Cyber Security news and links to cyber security stories that could make you go hmmm. The content is exactly as it is consumed through RSS feeds and won't be edited (except for the occasional encoding error).

This community is automagically fed by an instance of Dittybopper.

founded 1 year ago
MODERATORS
 

arXiv:2503.23175v1 Announce Type: new Abstract: Several recent works have argued that Large Language Models (LLMs) can be used to tame the data deluge in the cybersecurity field by improving the automation of Cyber Threat Intelligence (CTI) tasks. This work presents an evaluation methodology that, beyond testing LLMs on CTI tasks under zero-shot learning, few-shot learning, and fine-tuning, also quantifies their consistency and confidence levels. We run experiments with three state-of-the-art LLMs and a dataset of 350 threat intelligence reports, and present new evidence of the security risks of relying on LLMs for CTI. We show that LLMs cannot guarantee sufficient performance on real-size reports while also being inconsistent and overconfident. Few-shot learning and fine-tuning only partially improve the results, casting doubt on the feasibility of using LLMs in CTI scenarios, where labelled datasets are scarce and confidence is a fundamental factor.
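The consistency measurement the abstract describes can be approximated by querying the same model repeatedly on an identical report and checking how often the answers agree. Below is a minimal sketch of that idea; `query_llm` is a hypothetical stand-in for any chat-completion call (here simulated with canned answers), and the attribution task, labels, and scoring are illustrative, not the paper's actual protocol.

```python
import random
from collections import Counter

def query_llm(report: str, seed: int) -> str:
    """Hypothetical stand-in for a real chat-completion call.

    Simulates a model that sometimes changes its answer on identical
    input, which is exactly the failure mode being measured.
    """
    rng = random.Random(seed)
    return rng.choice(["APT29", "APT29", "APT28"])  # simulated answer drift

def consistency(report: str, n_runs: int = 10) -> float:
    """Fraction of runs agreeing with the modal answer.

    1.0 means the model returned the same label every time; values
    approaching 1/len(distinct answers) mean it is effectively guessing.
    """
    answers = [query_llm(report, seed=i) for i in range(n_runs)]
    _, modal_count = Counter(answers).most_common(1)[0]
    return modal_count / n_runs

if __name__ == "__main__":
    report = "Spear-phishing campaign attributed to a state-sponsored actor..."
    print(f"consistency over 10 runs: {consistency(report):.2f}")
```

A score well below 1.0 on unchanged input is what the authors flag as problematic for CTI, where an analyst needs the same report to yield the same assessment every time.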

top 2 comments
[–] caffinatedone@lemmy.world 5 points 1 day ago (1 children)

Large Language Models are Unreliable ~~for Cyber Threat Intelligence~~

There, fixed it.

[–] donuts@lemmy.world 3 points 1 day ago

yOu'Re JuSt PrOmPtInG iT wRoNg