CTIBench: Evaluating LLMs in Cyber Threat Intelligence with Nidhi Rastogi - #729

30/04/2025

Today, we're joined by Nidhi Rastogi, assistant professor at Rochester Institute of Technology, to discuss Cyber Threat Intelligence (CTI), focusing on her recent project CTIBench—a benchmark for evaluating LLMs on real-world CTI tasks. Nidhi explains the evolution of AI in cybersecurity, from rule-based systems to LLMs that accelerate analysis by providing critical context for threat detection and defense. We dig into the advantages and challenges of using LLMs in CTI, how techniques like Retrieval-Augmented Generation (RAG) are essential for keeping LLMs up to date with emerging threats, and how CTIBench measures LLMs' ability to perform a set of real-world tasks of the cybersecurity analyst. We unpack the process of building the benchmark, the tasks it covers, and key findings from benchmarking various LLMs. Finally, Nidhi shares the importance of benchmarks in exposing model limitations and blind spots, the challenges of large-scale benchmarking, and the future directions of her AI4Sec Research Lab, including developing reliable mitigation techniques, monitoring "concept drift" in threat detection models, improving explainability in cybersecurity, and more.
The complete show notes for this episode can be found at https://twimlai.com/go/729.