
# 🧠 AgentNull: AI System Security Threat Catalog + Proof-of-Concepts

This repository is a red team-oriented catalog of attack vectors targeting AI systems, including autonomous agents (MCP, LangGraph, AutoGPT), RAG pipelines, vector databases, and embedding-based retrieval systems, along with an individual proof-of-concept (PoC) for each vector.

## 📘 Structure

- `catalog/AgentNull_Catalog.md` — Human-readable threat catalog
- `catalog/AgentNull_Catalog.json` — Structured version for SOC/SIEM ingestion
- `pocs/` — One directory per attack vector, each with its own README, code, and sample input/output
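The JSON catalog can be pulled into a SOC pipeline with a short loader. The schema handling below is an assumption (a JSON array of entries, or an object wrapping one); `load_catalog_entries` is an illustrative helper, not part of the repo.

```python
import json

def load_catalog_entries(path: str) -> list:
    """Load AgentNull_Catalog.json and return a flat list of entries.

    Assumes (hypothetically) that the file holds either a JSON array of
    entries or an object whose first list-valued field contains them.
    """
    with open(path, encoding="utf-8") as f:
        data = json.load(f)
    if isinstance(data, list):
        return data
    if isinstance(data, dict):
        for value in data.values():
            if isinstance(value, list):
                return value
    # Fall back to treating the whole document as a single entry
    return [data]

# Example (paths assume the repo layout described above):
# entries = load_catalog_entries("catalog/AgentNull_Catalog.json")
# print(len(entries))
```

Adjust the field handling once you have the real file in front of you; the fallback branches exist only so the loader degrades gracefully on an unexpected shape.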

## ⚠️ Disclaimer

This repository is for educational and internal security research purposes only. Do not deploy any techniques or code herein in production, or against systems you do not own or are not explicitly authorized to test.

## 🔧 Usage

Navigate into each `pocs/<attack_name>/` folder and follow its README to replicate the attack scenario.

## 🤖 Testing with Local LLMs (Recommended)

For enhanced PoC demonstrations without API costs, use Ollama with local models:

### Install Ollama

```bash
# Linux/macOS
curl -fsSL https://ollama.ai/install.sh | sh

# Or download from https://ollama.ai/download
```

### Set Up a Local Model

```bash
# Pull a lightweight model (recommended for testing)
ollama pull gemma3

# Or use a more capable model
ollama pull deepseek-r1
ollama pull qwen3
```

### Run PoCs with a Local LLM

```bash
# Advanced Tool Poisoning with a real LLM
cd pocs/AdvancedToolPoisoning
python3 advanced_tool_poisoning_agent.py local

# Other PoCs work in simulation mode
cd pocs/ContextPackingAttacks
python3 context_packing_agent.py
```

### Ollama Configuration

- Default endpoint: `http://localhost:11434`
- Model selection: edit the model name in the PoC files if needed
- Performance: memory use scales with model size; a lightweight model such as gemma3 runs in a few GB of RAM, while deepseek-r1 and qwen3 need more
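Under the hood the PoCs talk to this endpoint over Ollama's REST API. A minimal standalone sketch (the model name is a placeholder, and `ollama serve` must be running with that model pulled):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint

def build_request(model: str, prompt: str) -> dict:
    """Build a non-streaming request body for Ollama's /api/generate."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """POST the prompt to a local Ollama server and return the completion."""
    payload = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        # With stream=False, Ollama returns one JSON object whose
        # "response" field holds the full completion text.
        return json.loads(resp.read())["response"]

# Example (requires a running server, e.g. after `ollama pull gemma3`):
# print(generate("gemma3", "Say hello in one word."))
```

Swapping the model name here mirrors the "edit the model name in the PoC files" step above.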

## 🧩 Attack Vectors Covered

### 🤖 MCP & Agent Systems

### 🧠 Memory & Context Systems

### 🔍 RAG & Vector Systems

### 💻 Code & File Systems

### ⚡ Resource & Performance

### 🌐 Agentic Browser & Assistant Attacks

- EchoLeak (LLM Scope Violation) - Zero-click indirect prompt injection exfiltrating data via M365 Copilot (Aim Labs / Varonis, CVE-2025-32711, June 2025)
- Gemini Trifecta - Three-vector indirect prompt injection across Google Gemini surfaces (Tenable Research, October 2025)
- CometJacking - URL-based prompt injection hijacking an agentic browser's connected services (LayerX / Brave Security, August 2025)
- Tainted Memories - CSRF-based persistent memory injection via AI browser auth (LayerX, October 2025)
- DECEPTICON - Dark patterns manipulate web agents more effectively than humans; larger models are *more* susceptible (arXiv:2512.22894, December 2025)
- Parallel Poisoned Web - Agent-specific web cloaking serves different content to AI agents vs. humans (JFrog, arXiv:2509.00124, September 2025)

### 🤖 Multi-Agent System Attacks

- Multi-Agent Control-Flow Hijacking (MAS-CFH) - Fake error messages hijack orchestrator re-planning (COLM 2025, arXiv:2503.12188)
- Inter-Agent Trust Exploitation - Malicious payloads laundered through peer agents bypass direct injection defenses (arXiv:2507.06850, July 2025)
- A2A Protocol Exploitation - Agent Card spoofing, token abuse, cascading delegation in Google A2A (CSA, Semgrep, arXiv:2505.12490)
- Prompt Infection - Self-replicating LLM-to-LLM worm propagation across multi-agent systems (arXiv:2410.07283, conferences 2025)

### 🧠 Memory & Persistence Attacks

- MemoryGraft - Poisoned experience retrieval in agent memory systems (arXiv:2512.16962, December 2025)
- Delayed Tool Invocation - Conditional deferred injection triggered by natural user responses (Johann Rehberger / Embrace The Red, February 2025)
- Zombie Agents - Self-reinforcing persistent injection survives across sessions via memory self-replication (arXiv:2602.15654, February 2026)
- MINJA - Memory injection via query-only interaction, 98.2% success rate (arXiv:2503.03704, NeurIPS 2025)

### 🔗 Supply Chain & Marketplace Attacks

- Slopsquatting - Register hallucinated package names as malicious supply chain packages (USENIX Security 2025)
- Marketplace Skill Poisoning (OpenClaw / ClawHavoc) - Unvetted skill registries exploited for credential theft and RCE (SecurityScorecard, Sangfor, January–February 2026)
- MCP Supply Chain Backdoor - Compromised npm package silently BCCs emails to attacker (Authzed, 2025)
- s1ngularity - AI CLI weaponization via supply chain for automated credential theft (Snyk, GitGuardian, August 2025)
- LangGrinch - Serialization injection in LangChain enables a prompt injection → RCE chain (CVE-2025-68664, Cyata, December 2025)

### 🔓 Protocol & Sandbox Attacks

- MCP Sampling Exploitation - Covert tool invocation, conversation hijacking via the MCP sampling feature (Unit 42, December 2025)
- Reasoning-Assisted Sandbox Escape - Agent autonomously reasons past sandbox controls (Ona Security, 2025)
- Semantic Privilege Escalation - Agent takes unauthorized actions while passing every access control check (Acuvity, late 2025)

### 🎯 Prompt Injection Advances

- Phantom - Structural template injection creates fabricated conversation history via chat delimiters (arXiv:2602.16958, February 2026)
- STAC - Sequential tool-chain attack composes benign tool calls into dangerous sequences, 90%+ ASR (arXiv:2509.25624, September 2025)
- Policy Puppetry - Universal jailbreak via config/policy file formatting, all frontier models vulnerable (HiddenLayer, April 2025)
- Promptware Kill Chain - Seven-stage kill chain for prompt-injection malware, 21 real-world incidents documented (arXiv:2601.09625, January 2026)
- Chain-of-Thought Hijacking - Primes reasoning models with harmless puzzles before harmful requests, 99% ASR (arXiv:2510.26418, October 2025)

πŸ‘οΈ Multimodal & Vision Agent Attacks

  • Visual Prompt Injection - Visually embedded instructions in UIs hijack computer-use agents via screenshots (VPI-Bench, arXiv:2506.02456, June 2025)
  • CrossInject - Cross-modal adversarial perturbations across vision + language simultaneously (arXiv:2504.14348, ACM MM 2025)
  • Flashboom - Blinds LLM code auditors via high-attention distraction snippets, 96.3% success (IEEE S&P 2025)

πŸ› οΈ Tool Selection & Hijacking

  • ToolHijacker - Inject malicious tool documents to compel agent tool selection, 96.43% ASR (NDSS 2026, arXiv:2504.19793)
  • UDora - Reasoning trace hijacking via automated injection point discovery (arXiv:2503.01908, February 2025)

### 💻 Coding Agent Attacks

- Rule File Injection ("Your AI, My Shell") - Prompt injection via `.cursorrules` and `copilot-instructions.md` (arXiv:2509.22040, September 2025)
- ZombAI - Self-propagating worm turns coding agents into malware/C2 endpoints (CVE-2025-53773, Embrace The Red, 2025)
- AgentFlayer - Zero-click enterprise agent exploit via hidden document instructions (Zenity, Black Hat USA 2025)
- Email Agent Hijacking - Remote control of email agents via malicious email content, 1,404/1,404 hijacked (arXiv:2507.02699, July 2025)
- Gemini Calendar Worm - Calendar invite prompt injection with worm-like self-propagation (SafeBreach, Black Hat USA 2025)

### 🔬 RAG Poisoning Advances

- CorruptRAG - Single-document RAG poisoning with no triggers needed (arXiv:2504.03957, April 2025)

## 📚 Related Research & Attribution

### Novel Attack Vectors (⭐)

The attack vectors marked with ⭐ represent novel concepts primarily developed within the AgentNull project, extending beyond existing documented attack patterns.

### Known Attack Patterns with Research Links

### 2025–2026 Attack Research Links

### arXiv Papers & Conference Publications (2025–2026)

### Industry Research & CVE Sources

Sponsored by ThirdKey