BERT-as-a-Judge: A Robust Alternative to Lexical Methods for Efficient Reference-Based LLM Evaluation
Paper • 2604.09497 • Published • 26
artefactory/BERTJudge
Text Classification • 0.2B • Updated • 60 • 3
artefactory/BERTJudge-Free-CR
Text Classification • 0.2B • Updated • 4 • 1
artefactory/BERTJudge-Formatted-QCR
Text Classification • 0.2B • Updated • 61 • 1
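The checkpoints above are tagged for text classification, so they can in principle be loaded through the standard Hugging Face pipeline API. A minimal sketch follows; note that the exact input template the judge expects (how the candidate and reference answers are concatenated) is defined by the paper and model card, and the `[SEP]` join below is only an illustration, not the documented format.

```python
# Hedged sketch: loading the BERTJudge checkpoint listed above as a
# Hugging Face text-classification pipeline. The "[SEP]" concatenation is
# illustrative only -- consult the model card for the real input template.

MODEL_ID = "artefactory/BERTJudge"  # ~0.2B-parameter classifier listed above


def judge_answer(candidate: str, reference: str, model_id: str = MODEL_ID):
    """Score a candidate answer against a reference with the judge model.

    The transformers import is deferred so this module loads even when the
    heavy, optional dependency is not installed.
    """
    from transformers import pipeline

    clf = pipeline("text-classification", model=model_id)
    # Illustrative concatenation only -- check the model card for the format.
    return clf(f"{candidate} [SEP] {reference}")


# Example (downloads the checkpoint on first use):
#   judge_answer("Paris is the capital of France.",
#                "The capital of France is Paris.")
```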
AI & ML interests
NLP, Information Retrieval, Computer Vision, Uncertainty Estimation, Trustworthy AI, Bias Estimation, Unbalanced ML, Choice Modeling, Time Series
Papers
BERT-as-a-Judge: A Robust Alternative to Lexical Methods for Efficient Reference-Based LLM Evaluation
Learned Hallucination Detection in Black-Box LLMs using Token-level Entropy Production Rate
Related paper: "Should We Still Pretrain Encoders with Masked Language Modeling?"
Should We Still Pretrain Encoders with Masked Language Modeling?
Paper • 2507.00994 • Published • 81
MLMvsCLM/610m-mlm30-42k
Feature Extraction • Updated • 11
MLMvsCLM/610m-mlm40-42k-2000
Feature Extraction • Updated • 13
MLMvsCLM/610m-clm-17k-mlm40-22k
Feature Extraction • Updated • 12
Related paper: "Towards Trustworthy Reranking: A Simple yet Effective Abstention Mechanism" (accepted at TMLR 2024)
EuroBERT: a suite of encoder models
EuroBERT: Scaling Multilingual Encoders for European Languages
Paper • 2503.05500 • Published • 81
EuroBERT/EuroBERT-210m
Fill-Mask • 0.3B • Updated • 8.5k • 83
EuroBERT/EuroBERT-610m
Fill-Mask • 0.8B • Updated • 2.21k • 34
EuroBERT/EuroBERT-2.1B
Fill-Mask • 2B • Updated • 1.14k • 67
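The EuroBERT checkpoints are tagged Fill-Mask, so a masked-token prediction sketch is the natural usage example. Assumptions here: `trust_remote_code=True` is a guess based on the suite shipping a custom architecture (drop it if the models load with stock transformers classes), and the mask token is taken from the tokenizer itself rather than hard-coded, since its spelling varies across tokenizers.

```python
# Hedged sketch: masked-token prediction with the smallest EuroBERT
# checkpoint listed above. trust_remote_code=True is an assumption, not
# confirmed by this listing.

MODEL_ID = "EuroBERT/EuroBERT-210m"  # 0.3B-parameter Fill-Mask model listed above


def fill_masked(text_template: str, model_id: str = MODEL_ID, top_k: int = 3):
    """Predict the masked token in `text_template`.

    `text_template` must contain one "{mask}" placeholder, which is filled
    with the tokenizer's own mask token. The transformers import is deferred
    so this module loads even when the heavy dependency is not installed.
    """
    from transformers import pipeline

    fill = pipeline("fill-mask", model=model_id, trust_remote_code=True)
    masked = text_template.format(mask=fill.tokenizer.mask_token)
    return fill(masked, top_k=top_k)


# Example (downloads the checkpoint on first use):
#   fill_masked("The capital of France is {mask}.")
```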
Related paper: "Is Preference Alignment Always the Best Option to Enhance LLM-Based Translation? An Empirical Analysis" (accepted at WMT 2024)
Is Preference Alignment Always the Best Option to Enhance LLM-Based Translation? An Empirical Analysis
Paper • 2409.20059 • Published • 16
artefactory/ALMA-13B-LoRA
Text Generation • 13B • Updated • 5
artefactory/ALMA-13B-LoRA-SFT-xCOMET-QE-Multi
Text Generation • 13B • Updated • 3
artefactory/ALMA-13B-LoRA-SFT-xCOMET-QE-Multi-No-Base
Text Generation • 13B • Updated • 1