Skip to content
#

prompt-injection-defense

Here are 11 public repositories matching this topic...

PromptMe is an educational project that showcases security vulnerabilities in large language models (LLMs) and their web integrations. It includes 10 hands-on challenges inspired by the OWASP LLM Top 10, demonstrating how these vulnerabilities can be discovered and exploited in real-world scenarios.

  • Updated Jun 29, 2025
  • Python

A comprehensive reference for securing Large Language Models (LLMs). Covers OWASP GenAI Top-10 risks, prompt injection, adversarial attacks, real-world incidents, and practical defenses. Includes catalogs of red-teaming tools, guardrails, and mitigation strategies to help developers, researchers, and security teams deploy AI responsibly.

  • Updated Oct 8, 2025

Bidirectional Security Framework for Human/LLM Interfaces - RC9-FPR4 baseline frozen (ASR 2.76%, Wilson Upper 3.59% GATE PASS, FPR stratified: doc_with_codefence 0.79% Upper GATE PASS, pure_doc 4.69% Upper). RC10.3c development integrated (semantic veto, experimental). Tests: 833/853 (97.7%), MyPy clean, CI GREEN. Shadow deployment ready.

  • Updated Nov 2, 2025
  • Python

🛠️ Explore large language models through hands-on projects and tutorials to enhance your understanding and practical skills in natural language processing.

  • Updated Nov 2, 2025
  • Jupyter Notebook

🛡️ Explore tools for securing Large Language Models, uncovering their strengths and weaknesses in the realm of offensive and defensive security.

  • Updated Nov 2, 2025

Improve this page

Add a description, image, and links to the prompt-injection-defense topic page so that developers can more easily learn about it.

Curate this topic

Add this topic to your repo

To associate your repository with the prompt-injection-defense topic, visit your repo's landing page and select "manage topics."

Learn more