**A collection of 8 code models (3B–20B) trained to behave like a security reviewer.**
## The Problem
Code assistants frequently recommend patterns that pass tests but fail security review—string-built SQL, brittle auth logic, unsafe parsing, insecure defaults, and more. I built SecureCode to address this gap.
You are a senior application security engineer. Review the code below.
Output:
(1) findings with severity,
(2) likely exploit scenarios (high level),
(3) secure rewrite,
(4) defense-in-depth recommendations,
(5) regression tests/checks.
Code: `...`
## Dataset Coverage
SecureCode covers both traditional and emerging security domains: - **Traditional web security** (OWASP Top 10 2021) - **AI/ML security** (OWASP LLM Top 10 2025): prompt injection, RAG poisoning, model extraction, agentic AI patterns
## We Want Your Feedback
We're looking for real-world contributions:
- **Real snippets**: Share code that "slipped through review once" (sanitized is fine) - **False positives/negatives**: What didn't work as expected? - **CVE-grounded examples**: New vulnerability patterns you've encountered
**Please include**: language/framework + what the correct remediation looks like in your environment.
---
**Have contributions or suggestions?** I'd be happy to hear them. Thanks for your support!
10 days? Rookie numbers 😊. so many side-quests. no idea what to do with myself lol. i'd love to hear about your ideas, and happy to give some feedback (for what it's worth)
Note : some things like persistent models/storage/custom LoRAs might not be fully working out of the box. If you need those, you might have to dig into the Wan2GP codebase, see how to tweak the storage folder. Happy hacking!
I just finished AI Engineering by Chip Huyen. Probably the best resource I’ve seen that covers the full AI stack. People wondering how to shift their careers toward AI might find this very useful.
🧠 We just implemented Andrej Karpathy's "third paradigm" for LLM learning!
System Prompt Learning (SPL) enables LLMs to automatically learn problem-solving strategies from experience, rather than relying on static prompts.
🚀 How it works: Your LLM builds a database of effective strategies, selects the best ones for each problem, and refines them over time based on success rates.
The best part? All strategies are human-readable and the system gets progressively better at problem types you use frequently.
✨ Key benefits: 🔄 Cumulative learning over time 📖 Transparent, inspectable strategies 🔌 Works with any OpenAI-compatible API ⚡ Simple integration: just add "spl-" prefix to your model
Built as an open-source plugin in optillm. After 500 queries, our system developed 129 strategies and refined 97 of them!
This feels like a genuine step toward AI that learns from experience while staying completely interpretable.
🧬 Hey everyone! Just released **OpenEvolve** - an open-source implementation of Google DeepMind's AlphaEvolve system.
It's an evolutionary coding agent that uses LLMs to discover and optimize algorithms. I successfully replicated DeepMind's results on circle packing (99.97% match!) and evolved a random search into a simulated annealing algorithm.
✨ Key features: - Evolves entire codebases (not just single functions) - Works with any OpenAI-compatible API - LLM ensemble approach for better results - Multi-objective optimization