# Model Card for aegisnode_student
This model is a fine-tuned version of unsloth/Qwen2.5-Coder-7B-Instruct-bnb-4bit. It has been trained using TRL.
## AegisNode (7B)
**Forged by KHALM Labs | The Deterministic AI Foundry**

AegisNode is a highly specialized, deterministic AI model designed exclusively to generate compiler-verified, production-ready AWS Terraform infrastructure. Fine-tuned from the Qwen2.5-Coder-7B-Instruct base, AegisNode has been stripped of conversational bloat and trained for hallucination-free output in mission-critical cloud environments.
## The Critic Loop Architecture
AegisNode was not trained on scraped, deprecated GitHub repositories. It was trained using KHALM's proprietary Critic Loop: every trajectory in the training dataset was autonomously generated, executed, and verified against an isolated HashiCorp toolchain (`terraform validate` and `terraform plan`). Any generated architecture containing circular dependencies, duplicate resources, or missing IAM policies was rejected. As a result, AegisNode was trained only on HCL that validated cleanly (exit code 0, `Success`).
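The Critic Loop itself is proprietary, but the validation step it describes can be sketched in a few lines. The function names below (`terraform_validates`, `filter_trajectories`) are illustrative, not part of KHALM's pipeline; the validator is injectable so the filter can be tested without a Terraform install.

```python
import subprocess
import tempfile
from pathlib import Path


def terraform_validates(hcl: str) -> bool:
    """Run `terraform init -backend=false` then `terraform validate` on a
    candidate configuration inside an isolated temp directory."""
    with tempfile.TemporaryDirectory() as tmp:
        Path(tmp, "main.tf").write_text(hcl)
        init = subprocess.run(
            ["terraform", "init", "-backend=false"],
            cwd=tmp, capture_output=True,
        )
        if init.returncode != 0:
            return False
        check = subprocess.run(
            ["terraform", "validate"], cwd=tmp, capture_output=True,
        )
        # Exit code 0 corresponds to Terraform's "Success" result.
        return check.returncode == 0


def filter_trajectories(candidates, validate=terraform_validates):
    """Keep only trajectories whose generated HCL passes validation."""
    return [hcl for hcl in candidates if validate(hcl)]
```

A real pipeline would also run `terraform plan` and record the critic's rejection reason for analysis.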
## Quick start
You can run AegisNode using standard `transformers` or natively through `unsloth` for 2x faster inference.
```python
from unsloth import FastLanguageModel

# 1. Load the AegisNode model
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="KHALM-Labs/aegisnode",
    max_seq_length=4096,
    dtype=None,  # auto-detect
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)

# 2. The infrastructure prompt
messages = [
    {"role": "system", "content": "You are a Principal AWS Cloud Architect. Write flawless, enterprise-grade Terraform code."},
    {"role": "user", "content": "Deploy an AWS S3 bucket named 'khalm-secure-data' with versioning enabled, a private ACL, and enforce strict least-privilege IAM roles. Provider ~> 5.0."},
]

# 3. Generate Terraform
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt",
).to("cuda")

outputs = model.generate(input_ids=inputs["input_ids"], max_new_tokens=2048, use_cache=True)
response = tokenizer.batch_decode(outputs[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True)[0]
print(response)
```
## Model Capabilities & Rules
During training, AegisNode was conditioned on strict deterministic rules:

- **Zero Placeholders:** outputs complete, ready-to-deploy files.
- **Unique Resources:** hard-conditioned to never declare duplicate resource names.
- **Offline Integrity:** conditioned to use hardcoded architectural placeholders (e.g., `ami-12345678`) rather than dynamic data sources, so `terraform plan` succeeds offline.
- **Pure Output:** emits raw HCL inside standard `` ```terraform `` blocks without conversational apologies or explanations.
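Because the output is raw HCL inside a fenced block, downstream tooling only needs to strip the fence before writing the file. A minimal extraction helper (hypothetical, not part of this release) might look like:

```python
import re

# Match the first ```terraform fenced block in a model response.
_TERRAFORM_BLOCK = re.compile(r"```terraform\n(.*?)```", re.DOTALL)


def extract_hcl(response: str) -> str:
    """Return the contents of the first ```terraform fence, or the raw
    response if the model emitted bare HCL with no fence."""
    match = _TERRAFORM_BLOCK.search(response)
    return match.group(1).strip() if match else response.strip()
```

The result can be written straight to `main.tf` and handed to `terraform validate`.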
## Intended Use & Limitations
**Intended Use:** Automated CI/CD pipelines, cloud architecture drafting, and enterprise Managed Service Providers (MSPs).

**Limitations:** AegisNode is heavily specialized for AWS (Amazon Web Services). Performance on Azure or GCP providers is out of scope for this release.
## Training procedure
This model was trained with Supervised Fine-Tuning (SFT) using packed sequences for maximum VRAM efficiency.
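Packing concatenates several short tokenized examples into one `max_seq_length` row so the GPU never trains on padding. A toy illustration of the idea (the real packing is handled internally by the training framework when packing is enabled):

```python
def pack_sequences(token_ids_list, max_seq_length):
    """Greedily concatenate tokenized examples into fixed-length packs.
    Illustrative only: a real implementation also inserts EOS tokens
    between examples and truncates over-long sequences."""
    packs, current = [], []
    for ids in token_ids_list:
        # Start a new pack when this example would overflow the current one.
        if current and len(current) + len(ids) > max_seq_length:
            packs.append(current)
            current = []
        current.extend(ids)
    if current:
        packs.append(current)
    return packs
```

With `max_seq_length=5`, the examples `[1,2,3]`, `[4,5]`, `[6,7,8,9]` become two packed rows instead of three padded ones.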
### Framework versions
- PEFT: 0.18.1
- TRL: 0.24.0
- Transformers: 5.2.0
- PyTorch: 2.10.0
- Datasets: 4.3.0
- Tokenizers: 0.22.2
## About KHALM Labs
General AI is for brainstorming. Foundry AI is for infrastructure. Visit khalm.ai to learn more about our Deterministic Validation pipelines.
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```