A local sandbox for your AI agents
-
Updated
Mar 15, 2026 - Python
A local sandbox for your AI agents
Security scanner for local LLMs scanning LLM vulnerabilities including jailbreaks, prompt injection, training data leakage, and adversarial abuse
🔍 Enhance local LLM security by testing for vulnerabilities like prompt injection, model inversion, and data leakage with this robust toolkit.
AI Diet Assistant: React + Flask + local LLM for personalized meal plans and nutrition insights.
Add a description, image, and links to the llmstudio topic page so that developers can more easily learn about it.
To associate your repository with the llmstudio topic, visit your repo's landing page and select "manage topics."