All HF Hub posts

SeaWolf-AI posted an update about 7 hours ago
ALL Bench Leaderboard: Structural Problems in AI Benchmarking and the Case for Unified Evaluation

FINAL-Bench/all-bench-leaderboard

The AI benchmark ecosystem has three structural problems. Major benchmarks like MMLU have surpassed 90%, losing discriminative power. Most leaderboards publish unverified self-reported scores: our cross-verification found Claude Opus 4.6's ARC-AGI-2 listed as 37.6% (actual: 68.8%) and Gemini 3.1 Pro's as 88.1% (actual: 77.1%). OpenAI's own audit confirmed 59.4% of SWE-bench Verified tasks are defective, yet the benchmark remains widely used.

ALL Bench addresses this by comparing 91 models across 6 modalities (LLM · VLM · Agent · Image · Video · Music) with 3-tier confidence badges (✓✓ cross-verified · ✓ single-source · ~ self-reported). Composite scoring uses a 5-Axis Framework and replaces SWE-bench Verified with the contamination-resistant LiveCodeBench.
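As a sketch of how the 3-tier badges could be assigned (the source labels and the rule itself are illustrative assumptions, not ALL Bench's published logic):

```python
def badge(sources):
    """Map the set of score origins for one (model, benchmark) cell to a badge.

    `sources` holds labels for where each score came from, e.g. {"self"} for a
    vendor-reported number, or {"self", "lab_a", "lab_b"} when outside
    reruns exist ("lab_a"/"lab_b" are placeholder names).
    """
    independent = sources - {"self"}
    if len(independent) >= 2:
        return "✓✓ cross-verified"  # two or more independent confirmations
    if len(independent) == 1:
        return "✓ single-source"    # exactly one independent measurement
    return "~ self-reported"        # only the vendor's own number
```

Under this rule, a cell backed only by a vendor blog post stays at `~` no matter how many times the same number is republished.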

Key finding: metacognition is the largest blind spot. FINAL Bench shows Error Recovery explains 94.8% of self-correction variance, yet only 9 of 42 models are even measured on it. The 9.2-point spread (Kimi K2.5 at 68.71 down to 59.5 at rank 9) is 3× the GPQA top-model spread, suggesting metacognition may be the single biggest differentiator among frontier models today.

VLM cross-verification revealed rank reversals: Claude Opus 4.6 leads MMMU-Pro (85.1%) while Gemini 3 Flash leads MMMU (87.6%), producing contradictory rankings between the two benchmarks.

📊 Article: https://huggingface.co/blog/FINAL-Bench/all-bench
📦 Dataset: FINAL-Bench/ALL-Bench-Leaderboard
⚡ GitHub: https://github.com/final-bench/ALL-Bench-Leaderboard
🏆 Leaderboard: FINAL-Bench/all-bench-leaderboard
🧬 FINAL Bench: FINAL-Bench/Metacognitive
prithivMLmods posted an update 1 day ago
The Qwen3.5 Multimodal Understanding Demo, powered by Qwen3.5-2B, is now available on HF Spaces! It is a lightweight model designed for fast image and video reasoning. Built with Gradio, the demo showcases Image QA, Video QA, object detection, and 2D point tracking, along with real-time token streaming.

🤗 Demo: prithivMLmods/Qwen-3.5-HF-Demo
✅ Collection: https://huggingface.co/collections/prithivMLmods/multimodal-implementations
🔗 Qwen3.5-2B: Qwen/Qwen3.5-2B

To learn more, visit the app page or the respective model pages.
perfecXion posted an update 1 day ago
# IntentGuard: Open-Source Vertical Intent Classifiers for LLM Guardrails

Three models published to the Hub:

- [perfecXion/intentguard-finance](perfecXion/intentguard-finance)
- [perfecXion/intentguard-healthcare](perfecXion/intentguard-healthcare)
- [perfecXion/intentguard-legal](perfecXion/intentguard-legal)

DeBERTa-v3-xsmall fine-tuned for three-way classification: **allow**, **deny**, or **abstain**. ONNX + INT8 quantized, under 80 MB, p99 < 30 ms on CPU. Margin-based thresholds (not argmax): uncertain queries route to clarification instead of forcing a guess.
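A minimal sketch of the margin-based routing described above; the `margin` value and the `clarify` route name are illustrative, not the released models' calibrated thresholds:

```python
def route(probs, labels=("allow", "deny", "abstain"), margin=0.15):
    """Route a query from class probabilities using a top-1/top-2 margin.

    If the gap between the two most likely classes is below `margin`,
    the query is sent to clarification rather than forced into a label.
    """
    ranked = sorted(zip(probs, labels), reverse=True)
    (p1, top), (p2, _runner_up) = ranked[0], ranked[1]
    if p1 - p2 < margin:
        return "clarify"  # uncertain: ask a follow-up instead of guessing
    return top
```

With `margin=0.15`, `route([0.45, 0.40, 0.15])` returns `"clarify"` even though argmax alone would say allow.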

**Eval results (adversarial test sets, ~470-480 examples per vertical):**

| Vertical | Accuracy | Legit-Block Rate | Off-Topic-Pass Rate |
|----------|----------|------------------|---------------------|
| Finance | 99.6% | 0.00% | 0.00% |
| Healthcare | 98.9% | 0.00% | 0.98% |
| Legal | 97.9% | 0.00% | 0.50% |

```shell
docker run -p 8080:8080 ghcr.io/perfecxion/intentguard:finance-latest

curl -X POST http://localhost:8080/v1/classify \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "What are current mortgage rates?"}]}'
```

Apache 2.0. Full pipeline + Docker configs on [GitHub](https://github.com/perfecxion-ai/intentguard).

Feedback welcome on domain coverage, adversarial robustness, and multilingual demand.

OzTianlu posted an update 3 days ago
We Deleted the Embedding Layer: Introducing Collins-Embedding-3M
NoesisLab/Collins-Embedding-3M
Most "small" models are just giant vocab tables in a trench coat. Collins-3M changes that. By using 2-Universal Hashing and Chernoff-bound noise suppression, we've collapsed the embedding space into a fixed O(1) hash-map.
* STSB: 0.7114 (Beating many 100M+ models)
* Size: 3M (Edge-ready, IoT-ready)
* Tech: Randomized Sign-Hashing + RoPE positional injection.
Built by NoesisLab
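The post doesn't spell out the exact hashing scheme, but the general idea of replacing a vocab-indexed embedding matrix with randomized sign-hashing into a fixed table can be sketched as follows (the MD5-based hash, `dim`, and `k` are illustrative assumptions, not Collins-3M's implementation):

```python
import hashlib
import math

def hashed_embedding(token, dim=256, k=8):
    """O(1) embedding lookup with no vocabulary table.

    Each token deterministically activates up to k buckets with random
    signs, so the "embedding layer" is just a hash function: memory cost
    is fixed at `dim` regardless of vocabulary size.
    """
    vec = [0.0] * dim
    for i in range(k):
        digest = hashlib.md5(f"{token}|{i}".encode()).digest()
        bucket = int.from_bytes(digest[:4], "little") % dim
        sign = 1.0 if digest[4] % 2 == 0 else -1.0
        vec[bucket] += sign / math.sqrt(k)  # scale for unit norm in expectation
    return vec
```

Unseen tokens get a vector for free, and averaging over k independent hashes is what keeps collision noise bounded.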
AbstractPhil posted an update 2 days ago
I've... done it. With experts, this achieves near-100% R1 retrieval accuracy on an adjacent dataset, unseen by the fusion transformer, after around 40k steps on the seen dataset. This means the models' languages are actually tested fused within the constraints, not just projected or estimated.
AbstractPhil/geolip-procrustes

I encourage EVERYONE who is curious to check my work. Check it, double check it, and triple check it.

These were aligned using COCO and then validated with Flickr. Entirely different datasets. The experts arbitrated and the alignment yielded the correct answers. Preliminary tests show that with almost no alignment requirement, the models can reach 100% R1 retrieval accuracy.

Not to be confused with validation accuracy for a classification model or a text encoder's text response, this allows multispectral communication between entirely different models for direct downstream consumption with almost no training for the chosen models.

I have a working Procrustes experiment that learns adjacent manifolds within a reasonable spectrum, and the speed is... well, one epoch on COCO with Bert-Large and DinoV2 lets the models align nearly perfectly. For some scales in the experiment, the set three epochs aren't quite enough to push R1 to its highest, while many scales align nearly immediately.
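For anyone checking the mechanism, here is a toy sketch of orthogonal Procrustes alignment and R1 (top-1 retrieval) on synthetic paired embeddings; the dimensions, noise level, and random data are stand-ins, not the actual Bert-Large/DinoV2 setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic paired embeddings: the same latent geometry seen through two
# different orientations, a stand-in for a text/vision encoder pair.
latent = rng.normal(size=(500, 64))
rotation = np.linalg.qr(rng.normal(size=(64, 64)))[0]  # random orthogonal matrix
text_emb = latent
image_emb = latent @ rotation + 0.01 * rng.normal(size=(500, 64))

# Orthogonal Procrustes: w = argmin ||text_emb @ w - image_emb||_F over
# orthogonal w, solved in closed form via SVD of the cross-covariance.
u, _, vt = np.linalg.svd(text_emb.T @ image_emb)
w = u @ vt

# R1 retrieval: each aligned text vector should rank its own image first.
sims = (text_emb @ w) @ image_emb.T
r_at_1 = (sims.argmax(axis=1) == np.arange(len(sims))).mean()
```

Because w is constrained to be orthogonal, the map can only rotate/reflect the text space, which is exactly the "geometric boundaries" flavor of alignment: distances and angles within each space are preserved.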

These two were an obvious pair to pick, 60% similarity and >90% spectral similarity.

The trainer transfers layers, learns embeddings, and more - all by sticking strictly to geometric boundaries and procrustes informational accumulation within a modulation model's constraints.

I have many experiments to run.
Reubencf posted an update 1 day ago
🚀 I am thrilled to announce the release of a new Konkani LLM!

We've seen some fantastic results for both translation and transliteration tasks, and I'm excited to share this progress with the community.

📖 Read the launch article and see the results: https://huggingface.co/blog/Reubencf/konkani-llm
🤖 Explore the model and collection:
konkani


I would love to hear your feedback or see what you build with it! #Konkani #LLM #NLP #HuggingFace #IndicNLP
TravisMuhlestein posted an update 1 day ago
Moving AI from experiments to production systems (GoDaddy + AWS case study)

A recurring pattern across many organizations right now is that AI experimentation is easy; operationalizing it is much harder.

This case study from AWS describes how GoDaddy has been deploying AI systems in production environments using AWS infrastructure.

One example is Lighthouse, a generative AI system built using Amazon Bedrock that analyzes large volumes of customer support interactions to identify patterns, insights, and opportunities for improvement.

The interesting part isn't just the model usage; it's the system design around it:

- large-scale interaction data ingestion
- LLM-driven analysis pipelines
- recursive learning platforms where real-world signals improve systems over time
- infrastructure designed for continuous iteration

We're starting to see a shift where organizations move from AI prototypes toward AI platforms and production systems.

Would be interested to hear how others in the community are thinking about:

- production AI architectures
- LLM evaluation pipelines
- feedback loops in real-world systems
- infrastructure for scaling AI workloads

Case study:
https://aws.amazon.com/partners/success/godaddy-agenticai/

umarbutler posted an update 2 days ago
This awesome visualization by @abdurrahmanbutler tracks how reliant the High Court of Australia has been on UK precedents over time.

Back in the early 1900s, up to 70% of citations in High Court decisions were from the UK. Today, that number sits around 20%.

This change seems to have happened gradually as Australia gained more and more independence from the UK, culminating in the Australia Acts of 1986, where we see a nice bump in the proportion of Australian cases cited.

These insights would not be possible without our latest legal AI model, Kanon 2 Enricher, which we used to extract dates and citations from High Court decisions in isaacus/open-australian-legal-corpus and categorize citations by jurisdiction. You can learn about Kanon 2 Enricher here: https://isaacus.com/blog/kanon-2-enricher.
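The post doesn't show the enriched data's schema; assuming each extracted citation reduces to a (decision year, jurisdiction) pair, the trend line is a simple per-decade aggregation:

```python
from collections import defaultdict

def uk_citation_share(citations):
    """Fraction of citations pointing to UK authorities, grouped by decade.

    `citations`: iterable of (decision_year, jurisdiction) pairs, e.g.
    (1905, "UK"). The pair format is an assumption for illustration.
    """
    totals, uk = defaultdict(int), defaultdict(int)
    for year, jurisdiction in citations:
        decade = (year // 10) * 10
        totals[decade] += 1
        if jurisdiction == "UK":
            uk[decade] += 1
    return {decade: uk[decade] / totals[decade] for decade in sorted(totals)}
```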
ronantakizawa posted an update 2 days ago
Introducing the github-codereview dataset: A compilation of 200k+ human-written code reviews from top OSS projects (React, Tensorflow, VSCode...).

I fine-tuned a Qwen2.5-Coder-32B-Instruct model on this dataset and saw significant improvements in generating code fixes and review comments (4× better BLEU-4, ROUGE-L, and SBERT scores than the base model).
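For reference on the first metric: BLEU-4 can be computed from scratch. This is a plain sentence-level version with uniform weights and no smoothing, an illustration rather than the evaluation script behind the numbers above:

```python
from collections import Counter
import math

def bleu4(hypothesis, reference):
    """Sentence-level BLEU-4: geometric mean of 1..4-gram precisions
    times a brevity penalty. Whitespace tokenization, no smoothing."""
    hyp, ref = hypothesis.split(), reference.split()
    log_precision = 0.0
    for n in range(1, 5):
        hyp_ngrams = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        overlap = sum((hyp_ngrams & ref_ngrams).values())  # clipped counts
        if overlap == 0:
            return 0.0  # unsmoothed: any empty n-gram order zeroes the score
        log_precision += math.log(overlap / sum(hyp_ngrams.values())) / 4
    # Brevity penalty: punish hypotheses shorter than the reference.
    bp = 1.0 if len(hyp) >= len(ref) else math.exp(1 - len(ref) / len(hyp))
    return bp * math.exp(log_precision)
```

The unsmoothed form is harsh on short code-review comments (one missing 4-gram zeroes the score), which is why corpus-level or smoothed BLEU is usually reported.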

#codereview #code #datasets

ronantakizawa/github-codereview