OWASP - Top 10 for LLM version 1.0 (Prompt Injection, Training Data Poisoning, ...)

 

 

https://owasp.org/www-project-top-10-for-large-language-model-applications/ 

 

OWASP Top 10 for LLM version 1.0

LLM01: Prompt Injection

This manipulates a large language model (LLM) through crafty inputs, causing
unintended actions by the LLM. Direct injections overwrite system prompts,
while indirect ones manipulate inputs from external sources.

LLM02: Insecure Output Handling

This vulnerability occurs when an LLM output is accepted without scrutiny,
exposing backend systems. Misuse may lead to severe consequences
like XSS, CSRF, SSRF, privilege escalation, or remote code execution.

LLM03: Training Data Poisoning

This occurs when LLM training data is tampered, introducing vulnerabilities
or biases that compromise security, effectiveness, or ethical behavior. Sources include
Common Crawl, WebText, OpenWebText, & books.

LLM04: Model Denial of Service

Attackers cause resource-heavy operations on LLMs, leading to
service degradation or high costs. The vulnerability is magnified due to
the resource-intensive nature of LLMs and unpredictability of user inputs.

LLM05: Supply Chain Vulnerabilities

LLM application lifecycle can be compromised by vulnerable components or services,
leading to security attacks. Using third-party datasets, pre-trained models, and plugins can add vulnerabilities.

LLM06: Sensitive Information Disclosure

LLM’s may inadvertently reveal confidential data in its responses,
leading to unauthorized data access, privacy violations, and security breaches.
It’s crucial to implement data sanitization and strict user policies to mitigate this.

LLM07: Insecure Plugin Design

LLM plugins can have insecure inputs and insufficient access control.
This lack of application control makes them easier to exploit and can result in consequences like remote code execution.

LLM08: Excessive Agency

LLM-based systems may undertake actions leading to unintended consequences.
The issue arises from excessive functionality, permissions, or autonomy granted to the LLM-based systems.

LLM09: Overreliance

Systems or people overly depending on LLMs without oversight may face misinformation,
miscommunication, legal issues, and security vulnerabilities due to incorrect or inappropriate content generated by LLMs.

LLM10: Model Theft

This involves unauthorized access, copying, or exfiltration of proprietary LLM models.
The impact includes economic losses, compromised competitive advantage, and potential access to sensitive information.

 

 

Educational Resources (AI Threat Mind Map, ...)

https://github.com/OWASP/www-project-top-10-for-large-language-model-applications/wiki/Educational-Resources

Die OWASP® Foundation arbeitet an der Sicherheitsverbesserung von Software durch Open-Source-Projekte
mit Hunderten Ortsgruppen weltweit, Zehntausende von Mitgliedern.

 

Publication Author Date Title and Link
PDF Sandy Dunn 14-July-23 AI Threat Mind Map
Medium Ken Huang 11-Jun-23 LLM-Powered Applications’ Architecture Patterns and Security Controls
Medium Avinash Sinha 02-Feb-23 AI-ChatGPT-Decision Making Ability- An Over Friendly Conversation with ChatGPT
Medium Avinash Sinha 06-Feb-23 AI-ChatGPT-Decision Making Ability- Hacking the Psychology of ChatGPT- ChatGPT Vs Siri
Wired Matt Burgess 13-Apr-23 The Hacking of ChatGPT Is Just Getting Started
The Math Company Arjun Menon 23-Jan-23 Data Poisoning and Its Impact on the AI Ecosystem
IEEE Spectrum Payal Dhar 24-Mar-23 Protecting AI Models from “Data Poisoning”
AMB Crypto Suzuki Shillsalot 30-Apr-23 Here’s how anyone can Jailbreak ChatGPT with these top 4 methods
Techopedia Kaushik Pal 22-Apr-23 What is Jailbreaking in AI models like ChatGPT?
The Register Thomas Claburn 26-Apr-23 How prompt injection attacks hijack today's top-end AI – and it's tough to fix
Itemis Rafael Tappe Maestro 14-Feb-23 The Rise of Large Language Models ~ Part 2: Model Attacks, Exploits, and Vulnerabilities
Hidden Layer Eoin Wickens, Marta Janus 23-Mar-23 The Dark Side of Large Language Models: Part 1
Hidden Layer Eoin Wickens, Marta Janus 24-Mar-23 The Dark Side of Large Language Models: Part 2
Embrace the Red Johann Rehberger (wunderwuzzi) 29-Mar-23 AI Injections: Direct and Indirect Prompt Injections and Their Implications
Embrace the Red Johann Rehberger (wunderwuzzi) 15-Apr-23 Don't blindly trust LLM responses. Threats to chatbots
MufeedDVH Mufeed 9-Dec-22 Security in the age of LLMs
danielmiessler.com Daniel Miessler 15-May-23 The AI Attack Surface Map v1.0
Dark Reading Gary McGraw 20-Apr-23 Expert Insight: Dangers of Using Large Language Models Before They Are Baked
Honeycomb.io Phillip Carter 25-May-23 All the Hard Stuff Nobody Talks About when Building Products with LLMs
Wired Matt Burgess 25-May-23 The Security Hole at the Heart of ChatGPT and Bing
BizPacReview Terresa Monroe-Hamilton 30-May-23 ‘I was unaware’: NY attorney faces sanctions after using ChatGPT to write brief filled with ‘bogus’ citations
Washington Post Pranshu Verma 18-May-23 A professor accused his class of using ChatGPT, putting diplomas in jeopardy
Kudelski Security Research Nathan Hamiel 25-May-23 Reducing The Impact of Prompt Injection Attacks Through Design
AI Village GTKlondike 7-June-23 Threat Modeling LLM Applications
Embrace the Red Johann Rehberger 28-Mar-23 ChatGPT Plugin Exploit Explained
NVIDIA Developer Will Pearce, Joseph Lucas 14-Jun-23 NVIDIA AI Red Team: An Introduction

 

 

08/2023