Pentesting GenAI LLM models: Securing Large Language Models
What you’ll learn
- Understand the unique vulnerabilities of large language models (LLMs) in real-world applications.
- Explore key penetration testing concepts and how they apply to generative AI systems.
- Master the red teaming process for LLMs using hands-on techniques and real attack simulations.
- Analyze why traditional benchmarks fall short in GenAI security and learn better evaluation methods.
- Dive into core vulnerabilities such as prompt injection, hallucinations, biased responses, and more.
- Use the MITRE ATT&CK framework to map out adversarial tactics targeting LLMs.
- Identify and mitigate model-specific threats like excessive agency, model theft, and insecure output handling.
- Conduct and report on exploitation findings for LLM-based applications.