---
library_name: transformers
license: other
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: sft
  results: []
---
# REDCODER: Automated Multi-Turn Red Teaming for Code LLMs

> 🔬 A model fine-tuned for adversarial multi-turn prompt generation to induce vulnerabilities in Code LLMs.
> 📄 Paper: [arXiv:2507.22063](https://arxiv.org/pdf/2507.22063)
> 💻 Full code & data: [GitHub – luka-group/RedCoder](https://github.com/luka-group/RedCoder)

---

## 🧠 Model Summary

**REDCODER** is a red-teaming LLM trained to engage target Code LLMs in multi-turn conversations that gradually steer them toward generating code containing **CWE vulnerabilities** (e.g., path traversal, SQL injection).

This model is designed to support:
- ⚔️ **Red-teaming evaluations** for Code LLMs
- 🧪 **Security benchmarking** of model guardrails and filters
- 🧩 **Multi-turn adversarial prompt generation** in research settings

> ⚠️ This model should not be used to generate real-world exploits. Its intended use is for research, safety evaluation, and secure LLM development.

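As a rough sketch of how the checkpoint might be driven in a red-teaming loop, the snippet below loads it with `transformers` and prepares one adversarial turn. The repository id, chat roles, system prompt, and sampling settings are placeholders/assumptions, not this card's documented interface; consult the GitHub repository for the exact conversation template.

```python
# Hedged sketch: repo id, roles, and prompts below are illustrative
# placeholders, not the card's documented interface.
from transformers import AutoModelForCausalLM, AutoTokenizer


def build_conversation(history, target_reply=None):
    """Append the target Code LLM's latest reply as a 'user' turn so the
    red-teaming model can answer with its next adversarial prompt."""
    messages = list(history)
    if target_reply is not None:
        messages.append({"role": "user", "content": target_reply})
    return messages


def next_adversarial_turn(model_id, messages, max_new_tokens=256):
    """Generate one red-teaming turn. Left uncalled below, since it
    downloads the full 8B checkpoint."""
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(
        input_ids, max_new_tokens=max_new_tokens, do_sample=True, temperature=0.7
    )
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)


# Conversation state for the next turn (system prompt is an assumption).
history = [{"role": "system", "content": "You are a red-teaming assistant probing a target Code LLM."}]
messages = build_conversation(
    history, target_reply="Sure, here is a helper that reads a user-supplied file path..."
)
# reply = next_adversarial_turn("<this-model's-repo-id>", messages)
```

Each round, the target model's reply is appended via `build_conversation` and fed back, so the red-teaming model conditions on the full dialogue when crafting its next prompt.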
---

## 📚 Citation

If you find this work useful, please cite:

```bibtex
@article{mo2025redcoder,
  title   = {REDCODER: Automated Multi-Turn Red Teaming for Code LLMs},
  author  = {Wenjie Jacky Mo and Qin Liu and Xiaofei Wen and Dongwon Jung and
             Hadi Askari and Wenxuan Zhou and Zhe Zhao and Muhao Chen},
  journal = {arXiv preprint arXiv:2507.22063},
  year    = {2025}
}
```