AI & ML interests

Local LLMs

Recent Activity

Aurelien-Morgan posted an update 10 days ago
@retrain-pipelines v0.2.0 is out!
I'm at Station F with my booth at GOSIM Paris 2026, today & tomorrow.
Come meet me for a live in-person demo and a chat!
Ujjwal-Tyagi posted an update 12 days ago
6 Open-Source Libraries to Fine-Tune LLMs
1. Unsloth
GitHub: https://github.com/unslothai/unsloth
→ Fastest way to fine-tune LLMs locally
→ Optimized for low VRAM (even laptops)
→ Plug-and-play with Hugging Face models

2. Axolotl
GitHub: https://github.com/OpenAccess-AI-Collective/axolotl
→ Flexible LLM fine-tuning configs
→ Supports LoRA, QLoRA, multi-GPU
→ Great for custom training pipelines

3. TRL (Transformer Reinforcement Learning)
GitHub: https://github.com/huggingface/trl
→ RLHF, DPO, PPO for LLM alignment
→ Built on Hugging Face ecosystem
→ Essential for post-training optimization

4. DeepSpeed
GitHub: https://github.com/microsoft/DeepSpeed
→ Train massive models efficiently
→ Memory + speed optimization
→ Industry standard for scaling

5. LLaMA-Factory
GitHub: https://github.com/hiyouga/LLaMA-Factory
→ All-in-one fine-tuning UI + CLI
→ Supports multiple models (LLaMA, Qwen, etc.)
→ Beginner-friendly + powerful

6. PEFT
GitHub: https://github.com/huggingface/peft
→ Fine-tune with minimal compute
→ LoRA, adapters, prefix tuning
→ Best for cost-efficient training
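The LoRA idea that several of these libraries (Unsloth, Axolotl, PEFT) build on can be sketched in a few lines of plain Python. This is a toy illustration of the math, not any library's API; the matrices, rank, and `alpha` value below are invented for the example:

```python
# Minimal pure-Python sketch of LoRA: the frozen weight W is augmented with a
# low-rank update (alpha/r) * B @ A, so only A and B (r*(d_in+d_out) values)
# are trained instead of the full d_out*d_in matrix.

def matvec(m, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(w * x for w, x in zip(row, v)) for row in m]

def lora_forward(W, A, B, x, alpha=16, r=2):
    """y = W x + (alpha / r) * B (A x); W stays frozen, A and B are trainable."""
    base = matvec(W, x)
    update = matvec(B, matvec(A, x))  # low-rank path through an r-dim bottleneck
    scale = alpha / r
    return [b + scale * u for b, u in zip(base, update)]

# Toy shapes: d_out=3, d_in=4, rank r=2.
W = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]]  # frozen base weight
A = [[0.1, 0, 0, 0], [0, 0.1, 0, 0]]            # r x d_in (trainable)
B = [[0.5, 0], [0, 0.5], [0, 0]]                # d_out x r (trainable)
x = [1.0, 2.0, 3.0, 4.0]

y = lora_forward(W, A, B, x)
print(y)
```

With rank 2, the adapter here trains 14 values instead of the 12-value base matrix; at realistic hidden sizes (e.g. 4096×4096 vs. rank 8) the savings are what make low-VRAM fine-tuning possible.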
Sri-Vigneshwar-DJ posted an update 13 days ago
![Feather DB LongMemEval Results](Hawky-ai/longmemeval-results)

We ran Feather DB v0.8.0 on LongMemEval (ICLR 2025): 500 questions across real multi-session conversations, up to 115K tokens each.

**Score: 0.693** · GPT-4o full-context baseline: 0.640
Full 500-question run with Gemini-Flash: **$2.40**

Per-axis breakdown:
→ Info-extraction: **0.942**
→ Knowledge-update: **0.714**
→ Multi-session: **0.606**
→ Temporal: **0.477** ← the hard one; Phase 9 addresses this

Architecture: Hybrid BM25+dense · adaptive temporal decay · embedded (no server) · p50 = 0.19ms · MIT

pip install feather-db

Raw results + audit JSONs: Hawky-ai/longmemeval-results
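The hybrid BM25+dense scoring with temporal decay described in the architecture line can be sketched in miniature. This is an illustrative toy, not Feather DB's actual API; the overlap scorer stands in for BM25, and the weights and half-life are invented for the example:

```python
# Toy sketch of hybrid lexical+dense retrieval damped by recency decay.
import math

def lexical_overlap(query, doc):
    """Stand-in for BM25: fraction of query terms present in the doc."""
    q = set(query.lower().split())
    d = set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0

def cosine(u, v):
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def score(query, q_emb, doc, d_emb, age_days,
          w_lex=0.4, w_dense=0.6, half_life=30.0):
    """Hybrid score multiplied by an exponential recency decay."""
    decay = 0.5 ** (age_days / half_life)
    return (w_lex * lexical_overlap(query, doc)
            + w_dense * cosine(q_emb, d_emb)) * decay

fresh = score("project deadline", [1, 0],
              "the project deadline is friday", [1, 0], age_days=0)
stale = score("project deadline", [1, 0],
              "the project deadline is friday", [1, 0], age_days=60)
print(fresh, stale)  # identical content, but the older memory scores lower
```

The decay term is one simple way to bias retrieval toward recent sessions, which is exactly the axis (temporal reasoning) the benchmark shows is hardest.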
prithivMLmods posted an update 14 days ago
Multimodal-Edge Demo, a node-based inference canvas demo, is now live on Spaces. It features node-based Transformers for fast inference across 10+ edge-device multimodal models on the Hub, all within a single space. The series includes models from Qwen3.5, Qwen3-VL, Gemma 4, and the LFM 2.5 VL model series, with support for reasoning and grounding tasks.

🤗 Demo: prithivMLmods/Multimodal-Edge-Node
🔗 GitHub: https://github.com/PRITHIVSAKTHIUR/Multimodal-Edge-Node
✅ Multimodal Apps Collections: https://huggingface.co/collections/prithivMLmods/hall-of-multimodal-apps

🤗 To learn more, visit the app page or the respective model pages.
Ujjwal-Tyagi posted an update 22 days ago
A curated set of AI and ML books and a full guide to learning machine learning from the ground up. This is the study material I used, so I thought it would be helpful to share it with others. Like, share, and add it to your collection at Ujjwal-Tyagi/ai-ml-foundations-book-collection.
prithivMLmods posted an update 22 days ago
A collection of compression schemes for Qwen3.6, along with abliterated v1 dense models, is now available on the Hub. Check it out via the links below. 👇

🔗 Qwen3.6-MoE: https://huggingface.co/collections/prithivMLmods/qwen36-35b-a3b-compressions
🔗 Qwen3.6-27B Compressions: https://huggingface.co/collections/prithivMLmods/qwen36-27b-compressions

🤗 To learn more, visit the respective model pages.
Ujjwal-Tyagi posted an update 24 days ago
We are hiring at Shirova AI: we need AI researchers and engineers for our research lab. Shirova AI is based in India; we can help researchers relocate to nearby workspaces, or they can work fully from home without ever coming to the lab. We're building our founding team, so the pay will be good and there is plenty to learn. Don't hesitate to email us at careers@shirova.com
prithivMLmods posted an update 27 days ago
HY-World-2.0, a multi-modal world model for reconstructing, generating, and simulating 3D worlds, is now available on Spaces, and it works both as native Gradio components and in Gradio server mode.

> HY-World-2.0-Demo: prithivMLmods/HY-World-2.0-Demo
> HY-World-2.0 [Server Mode]: prithivMLmods/HY-World-2.0-Demo
> Featuring 3D reconstruction and Gaussian splats with the Rerun viewer, along with camera poses, depth maps, and surface normals.
> In Server Mode, Gradio is served via FastAPI, with FastAPI remaining the top-level server.
> Model: tencent/HY-World-2.0
> GitHub: https://github.com/PRITHIVSAKTHIUR/HY-World-2.0-Demo

🤗 To learn more, visit the app page or the respective model pages.
Parveshiiii posted an update about 1 month ago
🚀 Sonic: a lightweight Python audio processing library with tempo matching, BPM detection, time-stretching, resampling & track blending, now with GPU (CUDA) acceleration for 10x speed!

Perfect for quick remixes, batch edits or syncing tracks.

👉 https://github.com/Parveshiiii/Sonic

#Python #AudioProcessing #OpenSource #PyTorch
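As a rough illustration of one of the operations listed (resampling), a linear-interpolation resampler can be sketched in plain Python. This is not Sonic's implementation, just the underlying idea; real libraries use band-limited filters to avoid aliasing:

```python
# Toy mono resampler: for each output sample, find its fractional position in
# the source signal and linearly interpolate between the two nearest samples.

def resample(samples, src_rate, dst_rate):
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        pos = i * src_rate / dst_rate           # position in the source signal
        j = int(pos)
        frac = pos - j
        right = samples[min(j + 1, len(samples) - 1)]
        out.append(samples[j] * (1 - frac) + right * frac)
    return out

ramp = [0.0, 1.0, 2.0, 3.0]                     # 4 samples at the source rate
up = resample(ramp, src_rate=4, dst_rate=8)     # 2x upsampling -> 8 samples
print(up)
```

Time-stretching without pitch change needs more machinery (e.g. phase vocoders); plain resampling like this shifts pitch along with tempo.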
Aurelien-Morgan posted an update about 1 month ago
Launching a workweek of @retrain-pipelines wheels.

Day #1: Compose
prithivMLmods posted an update about 1 month ago
A new comparator on Spaces showcases the Standard FLUX.2 Decoder vs. the FLUX.2 Small Decoder. The Small Decoder is ~1.4× faster, uses ~1.4× less VRAM, and maintains near-identical image quality. It has ~28M parameters with narrower channels ([96, 192, 384, 384] vs. [128, 256, 512, 512]), and the demo supports sequence generation by running both decoders simultaneously and comparing the results side by side.

🤗 Comparator: https://huggingface.co/spaces/prithivMLmods/Flux.2-4B-Decoder-Comparator
🔗 FLUX.2-small-decoder: black-forest-labs/FLUX.2-small-decoder
🔗 GitHub: https://github.com/PRITHIVSAKTHIUR/Flux.2-4B-Encoder-Comparator
🚁 Collection: https://huggingface.co/collections/prithivMLmods/image-generation-apps-collection

🤗 App built on the Gradio SDK. To learn more, visit the app page or the respective model pages.
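As a back-of-envelope check on the reported figures: activation memory in a convolutional decoder scales roughly linearly with channel width at a fixed resolution, so the per-stage width ratio is a crude proxy for the VRAM saving. This is an assumption for illustration, not a measurement from the demo:

```python
# Compare per-stage channel widths of the two decoder configs from the post.
standard = [128, 256, 512, 512]
small = [96, 192, 384, 384]

ratios = [s / m for s, m in zip(standard, small)]
print(ratios)  # every stage is narrowed by the same factor, 4/3
```

Each stage is narrowed by 4/3 ≈ 1.33, which is broadly consistent with the reported ~1.4× VRAM and speed figures once other costs are included.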
prithivMLmods posted an update about 1 month ago
A collection of compression schemes for Gemma 4, along with abliterated v1 dense models, is now available on the Hub. Check it out via the links below. 👇

🔗 Gemma 4 Compression(s): https://huggingface.co/collections/prithivMLmods/gemma-4-compressions
🔗 Gemma 4 Uncensored [MAX] + Compression(s) [β]: https://huggingface.co/collections/prithivMLmods/gemma-4-uncensored-max-compressions
🔗 Gemma 4 Compression(s), MoE: https://huggingface.co/collections/prithivMLmods/gemma-4-compressions-moe
🔗 Gemma-4 F32 GGUF: https://huggingface.co/collections/prithivMLmods/gemma-4-f32-gguf

🤗 To learn more, visit the respective model pages.
prithivMLmods posted an update about 1 month ago
Now the demo for image detection based on SAM3 and Gemma-4 (*Filter) is available on Spaces, using full-fledged Transformers inference with multimodal reasoning for processed images. It also supports video segmentation (mask), video segmentation (annotation), and image click segmentation.

🤗 Demo Space: prithivMLmods/SAM3-Gemma4-CUDA
🥽 SAM3: facebook/sam3
🔗 gemma-4-E2B-it: google/gemma-4-E2B-it

To learn more, visit the app page or the respective model pages.
Parveshiiii posted an update about 1 month ago
Excited to announce my latest open-source release on Hugging Face: Parveshiiii/breast-cancer-detector.

This model has been trained and validated on external datasets to support medical research workflows. It is designed to provide reproducible benchmarks and serve as a foundation for further exploration in healthcare AI.

Key highlights:
- Built for medical research and diagnostic study contexts
- Validated against external datasets for reliability
- Openly available to empower the community in building stronger, more effective solutions

This release is part of my ongoing effort to make impactful AI research accessible through **Modotte**. A detailed blog post explaining the methodology, dataset handling, and validation process will be published soon.

You can explore the model here: Parveshiiii/breast-cancer-detector

#AI #MedicalResearch #DeepLearning #Healthcare #OpenSource #HuggingFace

prithivMLmods posted an update about 1 month ago
The demo for Image Detection (*Filter) based on SAM3 and Qwen-3.5 is now available on Hugging Face Spaces, using Transformers inference with multimodal reasoning for processed images. It also supports video segmentation (mask), video segmentation (annotation), and image click segmentation.

🤗 Demo Space: prithivMLmods/SAM3-Plus-Qwen3.5
🥽 SAM3: facebook/sam3
🔗 Qwen-3.5: Qwen/Qwen3.5-2B

To learn more, visit the app page or the respective model pages.
MaziyarPanahi posted an update about 2 months ago
Training mRNA Language Models Across 25 Species for $165

We built an end-to-end protein AI pipeline covering structure prediction, sequence design, and codon optimization. After comparing multiple transformer architectures for codon-level language modeling, CodonRoBERTa-large-v2 emerged as the clear winner with a perplexity of 4.10 and a Spearman CAI correlation of 0.40, significantly outperforming ModernBERT. We then scaled to 25 species, trained 4 production models in 55 GPU-hours, and built a species-conditioned system that no other open-source project offers. Complete results, architectural decisions, and runnable code below.

https://huggingface.co/blog/OpenMed/training-mrna-models-25-species
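Two of the ideas above (codon-level tokenization and the reported perplexity) can be sketched in plain Python. The mRNA sequence below is invented for illustration, and the perplexity helper just shows the standard relationship between mean negative log-likelihood and the 4.10 figure:

```python
# Sketch of codon-level language modeling basics: tokenize mRNA into
# non-overlapping 3-nucleotide codons, and convert a mean cross-entropy
# loss into a perplexity number.
import math

def codon_tokenize(mrna):
    """Split an mRNA sequence into codons; length must be a multiple of 3."""
    assert len(mrna) % 3 == 0
    return [mrna[i:i + 3] for i in range(0, len(mrna), 3)]

def perplexity(mean_nll):
    """Perplexity is exp of the mean negative log-likelihood per token."""
    return math.exp(mean_nll)

tokens = codon_tokenize("AUGGCUUUUUAA")
print(tokens)                                  # ['AUG', 'GCU', 'UUU', 'UAA']
print(round(perplexity(math.log(4.10)), 2))    # a mean NLL of ln(4.10) -> 4.1
```

With 64 possible codons, a perplexity of 4.10 means the model is, on average, as uncertain as a uniform choice among ~4 codons rather than 64.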
OzTianlu posted an update about 2 months ago
https://github.com/lizixi-0x2F/March
I just released March, an open-source high-performance KV cache sharing library for LLM inference that uses Trie-based prefix deduplication.
When you run LLM services, you often see thousands of requests sharing the same system prompt and conversation history. But traditional KV cache systems store each sequence separately, duplicating the exact same data over and over again. Pure waste.
March uses a Trie structure to automatically detect and reuse identical token prefixes. Instead of storing [system_prompt + history] 1000 times, it's stored once. Everyone shares it.
- 80-97% memory reduction in prefix-heavy workloads (tested on SmolLM2-135M with 500 multi-turn conversations)
- Zero-copy queries: returns direct pointers into the memory pool, no expensive memcpy on the hot path
- Predictable memory usage: fixed-size page pool with O(L) complexity
- Trade-off: slightly slower than dict O(1) lookup, but the memory savings are worth it in production
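The prefix-sharing idea can be sketched with a toy trie. This is illustrative, not March's actual implementation (which manages real KV tensors in a page pool); it only counts how many token positions need storage when prefixes are shared versus duplicated:

```python
# Toy trie: shared token prefixes are stored once, and each request's entries
# hang off the node where its tokens diverge.

class TrieNode:
    def __init__(self):
        self.children = {}  # token -> TrieNode
        self.kv = None      # stand-in for the cached KV block at this token

class PrefixCache:
    def __init__(self):
        self.root = TrieNode()
        self.nodes = 0      # number of stored token positions

    def insert(self, tokens):
        """Walk/extend the trie; only unseen suffix tokens allocate new nodes."""
        node, reused = self.root, 0
        for tok in tokens:
            if tok in node.children:
                reused += 1                  # prefix hit: share the existing KV
            else:
                node.children[tok] = TrieNode()
                self.nodes += 1              # only divergent tokens cost memory
            node = node.children[tok]
        return reused

cache = PrefixCache()
system_prompt = list(range(100))                  # 100 shared prompt tokens
for user in range(50):
    cache.insert(system_prompt + [1000 + user])   # 50 requests, unique last token

naive = 50 * 101                                  # per-sequence storage: 5050 slots
print(cache.nodes, naive)                         # trie stores 150 vs 5050
```

Here 50 requests sharing a 100-token prompt need 150 stored positions instead of 5050, a ~97% reduction, matching the upper end of the range the post reports for prefix-heavy workloads.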