Massimo Roberto Scamarcia
@maxxafits00 if you are on a budget, I suggest starting small. I unfortunately don't have enough compute to scale right now. To evaluate a pretraining or distillation framework (such as arcee-ai's DistillKit) or a new model architecture, you can start from datasets such as TinyStories and move to FineWeb-EDU, Cosmopedia, etc. later.
Wait for the training setup and architecture to be stable and validated before moving to a bigger dataset/model. Also, 7-8B parameters is probably too big for small-scale pretraining experiments.
You should target 0.5B, 3B at most, especially if you use consumer-grade hardware or a single rented GPU.
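For reference, here is a minimal sketch of how one might stream those corpora with the Hugging Face `datasets` library before committing to a full download. The dataset IDs are the public Hub ones, and the tokenizer is an arbitrary illustrative choice, not a recommendation tied to any specific setup:

```python
# Hedged sketch: stream small pretraining corpora before scaling up.
# Dataset IDs are the public Hub ones; the tokenizer is an arbitrary choice
# for illustration only.
from datasets import load_dataset
from transformers import AutoTokenizer

# Start small: TinyStories for quick framework/architecture validation.
tiny = load_dataset("roneneldan/TinyStories", split="train", streaming=True)

# Later: FineWeb-EDU (sample-10BT subset) once training is stable.
fineweb = load_dataset(
    "HuggingFaceFW/fineweb-edu", name="sample-10BT", split="train", streaming=True
)

tok = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM2-360M")  # illustrative

def tokenize(example):
    # A short context is usually enough for small-scale validation runs.
    return tok(example["text"], truncation=True, max_length=1024)

tiny_tokens = tiny.map(tokenize)
print(next(iter(tiny_tokens)).keys())
```

Streaming keeps the disk footprint near zero, which matters on a single consumer-grade box.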
Interesting. Yes, as you noticed as well, a few billion tokens aren't enough. SmolLM2 360M was trained on 4 trillion tokens.
But I am not sure how to explain these results on PIQA and SciQ:
`uv run lm_eval --model hf --model_args pretrained=models/Echo-DSRN-Small-Instruct-Kurtis,trust_remote_code=True,device_map="auto" --tasks hellaswag,winogrande,piqa,sciq --output_path ./results_final`
| Tasks | Version | Filter | n-shot | Metric | | Value | | Stderr |
|---|---|---|---|---|---|---|---|---|
| hellaswag | 1 | none | 0 | acc | ↑ | 0.2927 | ± | 0.0045 |
| | | none | 0 | acc_norm | ↑ | 0.3199 | ± | 0.0047 |
| piqa | 1 | none | 0 | acc | ↑ | 0.6230 | ± | 0.0113 |
| | | none | 0 | acc_norm | ↑ | 0.6202 | ± | 0.0113 |
| sciq | 1 | none | 0 | acc | ↑ | 0.7380 | ± | 0.0139 |
| | | none | 0 | acc_norm | ↑ | 0.6480 | ± | 0.0151 |
| winogrande | 1 | none | 0 | acc | ↑ | 0.5020 | ± | 0.0141 |
I can share more details in this convo, but this is probably uncharted territory for a hybrid RNN with 4 attention heads.
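If it helps others reproduce the numbers, the same run should also be scriptable through the harness's Python entry point. This is a hedged sketch assuming lm-evaluation-harness >= 0.4 (which exposes `lm_eval.simple_evaluate`), pointing at the same local checkpoint as the CLI call above:

```python
# Hedged sketch: programmatic equivalent of the lm_eval CLI call above,
# assuming lm-evaluation-harness >= 0.4.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args=(
        "pretrained=models/Echo-DSRN-Small-Instruct-Kurtis,"
        "trust_remote_code=True,device_map=auto"
    ),
    tasks=["hellaswag", "winogrande", "piqa", "sciq"],
    num_fewshot=0,
)

for task, metrics in results["results"].items():
    print(task, metrics)
```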
Now available at https://huggingface.co/spaces/ethicalabs/Echo-DSRN-Small-Next-Word-Prediction ... on the shared CPU HF resources it runs slowly, but on my MacBook M4 and AMD Strix Halo it is blazing fast. The memory footprint is low. I am now expanding to 1B using Net2Net, and today I tested an SFT run (QLoRA, 4-bit, bf16) on consumer hardware with trl, with apparently no catastrophic forgetting.
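For anyone unfamiliar with Net2Net, here is a hedged, minimal sketch of the Net2WiderNet idea on a single pair of linear layers in plain PyTorch. It illustrates the function-preserving widening trick, not the actual Echo-DSRN expansion code:

```python
# Hedged sketch of Net2WiderNet (Chen et al., 2016) on one pair of linear layers.
# Illustration only -- not the actual Echo-DSRN / Kurtis expansion code.
import torch
import torch.nn as nn

def net2wider(fc1: nn.Linear, fc2: nn.Linear, new_width: int):
    """Widen fc1's output (and fc2's input) from h to new_width,
    preserving the function computed by fc2(act(fc1(x)))."""
    h = fc1.out_features
    assert new_width > h
    # Map each new unit to an existing unit; extra units copy random ones.
    mapping = torch.cat([torch.arange(h), torch.randint(0, h, (new_width - h,))])
    counts = torch.bincount(mapping, minlength=h).float()  # copies per original unit

    wide_fc1 = nn.Linear(fc1.in_features, new_width, bias=fc1.bias is not None)
    wide_fc2 = nn.Linear(new_width, fc2.out_features, bias=fc2.bias is not None)
    with torch.no_grad():
        wide_fc1.weight.copy_(fc1.weight[mapping])
        if fc1.bias is not None:
            wide_fc1.bias.copy_(fc1.bias[mapping])
        # Divide the copied incoming columns so the downstream sum is unchanged.
        wide_fc2.weight.copy_(fc2.weight[:, mapping] / counts[mapping])
        if fc2.bias is not None:
            wide_fc2.bias.copy_(fc2.bias)
    return wide_fc1, wide_fc2

# Functional check with a ReLU in between: the output must be preserved.
fc1, fc2 = nn.Linear(16, 32), nn.Linear(32, 8)
w1, w2 = net2wider(fc1, fc2, 48)
x = torch.randn(4, 16)
print(torch.allclose(fc2(torch.relu(fc1(x))), w2(torch.relu(w1(x))), atol=1e-5))
```

The print should report True: the widened pair computes exactly the same function before any further training, which is the whole point of a warm start.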
10 years ago, getting an LSTM to output coherent English was a struggle.
10 years later, after a "cure" based on FineWeb-EDU and a custom synthetic mix for casual conversation, the results are fascinating.
We trained this on ~10B tokens on a single AMD GPU (ROCm). It is not a Transformer: Echo-DSRN (400M) is a novel recurrent architecture inspired by Hymba, RWKV, and xLSTM, designed to challenge the "Attention is All You Need" monopoly on the Edge.
The ambitious goal is to build a small instruct model with RAG and tool usage capabilities (ethicalabs/Kurtis-EON1).
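Echo-DSRN itself is not public yet, so to make "hybrid recurrent block with a few attention heads" concrete, here is a hedged toy sketch of the general pattern (a recurrent path plus a small multi-head attention path fused in one residual block). It illustrates the idea only and is not the Echo-DSRN architecture:

```python
# Hedged toy sketch of a hybrid recurrent/attention block -- the general
# "RNN path + a few attention heads" pattern, NOT Echo-DSRN itself.
import torch
import torch.nn as nn

class HybridBlock(nn.Module):
    def __init__(self, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        # Recurrent path: a GRU stands in for the gated recurrent state.
        self.rnn = nn.GRU(d_model, d_model, batch_first=True)
        # Attention path: a small number of heads (4 here).
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mix = nn.Linear(2 * d_model, d_model)  # fuse the two paths

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.norm(x)
        rnn_out, _ = self.rnn(h)
        # Causal mask so attention only looks at past positions.
        T = x.size(1)
        mask = torch.triu(torch.ones(T, T, dtype=torch.bool, device=x.device), 1)
        attn_out, _ = self.attn(h, h, h, attn_mask=mask)
        return x + self.mix(torch.cat([rnn_out, attn_out], dim=-1))

x = torch.randn(2, 10, 256)    # (batch, time, d_model)
print(HybridBlock()(x).shape)  # torch.Size([2, 10, 256])
```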
The Benchmarks (Size: 400M)
For a model this size (trained on <10B tokens), the specialized performance is surprising:
- *SciQ*: 73.8% (This rivals billion-parameter models in pure fact retrieval).
- *PIQA*: 62.3% (Solid physical intuition for a sub-1B model).
The Reality Check:
HellaSwag (29.3%) and Winogrande (50.2%) show the limits of 400M parameters and ~10B training tokens.
We are hitting the "Reasoning Wall", which confirms we need to scale to (hopefully) unlock deeper common sense. As you can see in the visualization (to be released soon on HF), the FineWeb-EDU bias is strong. The model is convinced it is in a classroom ("In this course, we explore...").
The instruct model is not ready yet, and we are currently using curriculum learning to test model plasticity.
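For context, "curriculum learning" here just means ordering the training data rather than shuffling it uniformly. Below is a hedged sketch of one simple way to build such an ordering with `datasets`; the dataset and the length-based difficulty proxy are placeholders, not the actual Kurtis recipe:

```python
# Hedged sketch: a simple SFT curriculum that orders examples by a length-based
# difficulty proxy instead of shuffling. Dataset and proxy are placeholders.
from datasets import load_dataset

ds = load_dataset("databricks/databricks-dolly-15k", split="train")

def difficulty(example):
    # Proxy: longer instruction + response pairs are treated as harder, so they come later.
    return {"difficulty": len(example["instruction"]) + len(example["response"])}

curriculum = ds.map(difficulty).sort("difficulty")  # easy -> hard ordering
print(curriculum[0]["difficulty"], curriculum[len(curriculum) - 1]["difficulty"])
```

The trainer would then consume `curriculum` (or stage-wise slices of it) without re-shuffling across stages.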
Source code and weights are not released yet. This is not a fork or a fine-tune: the base model is built in-house at https://www.ethicalabs.ai/, with novel components that do not exist in current open libraries.
Call for Collaboration: I am looking for Peer Reviewers interested in recurrent/hybrid architectures. If you want to explore what lies beyond Transformers, let's connect!
Training diary: ethicalabs/Kurtis-EON1
