A Comprehensive Overview of Large Language Models (LLMs) for Cyber Defences: Opportunities and Directions
The recent progression of Large Language Models (LLMs) has seen great success in data-centric applications. LLMs trained on massive textual datasets have shown the ability not only to encode context but also to provide powerful comprehension for downstream tasks. Notably, Generative Pre-trained Transformers have utilised this ability to bring AI a step closer to replacing humans, at least in data-centric applications. Such power can be leveraged to identify anomalies arising from cyber threats, enhance incident response, and automate routine security operations. We provide an overview of recent LLM activity across cyber defence areas, organised into categories such as threat intelligence, vulnerability assessment, network security, privacy preservation, awareness and training, automation, and ethical guidelines. Fundamental concepts of the progression of LLMs, from Transformers to pre-trained Transformers and GPT, are presented. Recent work in each area is then surveyed, along with its strengths and weaknesses. A dedicated section addresses the challenges and directions of LLMs in cyber security. Finally, possible future research directions for benefiting from LLMs in cyber security are discussed.
Further reading
- Access the paper on arXiv.org