Getting Started
An open-source framework for LLM security testing and red teaming
Prerequisites
- Python 3.11+
Installation
Quick Start Guide
Results Visualization
The success rates visualization shows the effectiveness of different attack vectors against the target LLM model. Higher percentages indicate more successful prompt injection attempts, helping identify potential vulnerabilities in the model’s response patterns.
The success rates visualization shows the effectiveness of different attack vectors against the target LLM model. Higher percentages indicate more successful prompt injection attempts, helping identify potential vulnerabilities in the model’s response patterns.
Token usage metrics display the computational resources consumed during testing. This visualization helps track and optimize the token consumption across different attack scenarios, providing insights into the efficiency of various testing approaches.
Cost analysis provides a detailed breakdown of expenses associated with different testing strategies. This visualization helps monitor and manage testing costs across various attack vectors, enabling better budget planning and resource allocation for security assessments.
Contributing
Fork Repository
Start by forking the repository
Make Changes
Implement your features or fixes
Submit PR
Create a pull request for review
Support & Community
Discord Community
Join our Discord server for discussions and support
GitHub Issues
Report bugs and request features on GitHub