Dataset Viewer
Auto-converted to Parquet.
Columns:
  reference   string (6 distinct values)
  prediction  string (6 distinct values)
  wer         float64 (0.13 to 0.27)
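As a minimal sketch, the rows can be read with the Hugging Face datasets library. The repository id below is a placeholder, since the actual repo path is not shown on this page, and the split is assumed to be the default train split of the auto-converted Parquet data.

```python
from datasets import load_dataset

# Placeholder repo id -- substitute the actual <user>/<dataset> path for this dataset.
ds = load_dataset("user/asr-wer-eval", split="train")

# Each row holds a reference transcript, the model's prediction, and the row-level WER.
for row in ds:
    print(row["wer"], "|", row["reference"], "|", row["prediction"])
```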
Rows:

reference:  mmlu is an immediate means of testing performance claude 2 0 is the latest claude model from anthropic notux 8x7b v1 is a fine tune of mixtral wizardlm 70b is a fine tune of llama 70b
prediction: mmlu is a means of testing performance claude 2 0 is the latest claude model from anthropic notixit by 7bv1 is a fine tune of mixtral wizardlm 70b is a fine tune of lama 70b
wer:        0.166667

reference:  it's actually wizardlm 70b v1 0 spin self play fine tuning that improves llms decilm 7b is a fast model with 7 billion parameters arena elo is a means of measuring performance the fastest openai model is gpt 4 turbo
prediction: it's actually wizard lm70b v1 0 spin self play fine tuning that improves llms the clm7b is a fast model with 7 billion parameters arena elo is a means of measuring performance the fastest openai model is gpt 4 terminal
wer:        0.125

reference:  openchat is a fine tune then basically it's a fine tune of the mistral 7b model tricksy is an approach for fast inference using sparsity microsoft have launched phi 2 mt bench is a metric for performance eval
prediction: openchat is a fine tuned then basically it's a fine tuned off the mistral 7b model trixie is an approach for fast inference use xpercity microsoft have launched fi2 mt bench is a metric for performance evaluation
wer:        0.236842

reference:  mistral medium is a larger mixture of experts claude 1 is an earlier version of claude from anthropic mixtral 8x7b instruct v0 1 is the mixture of experts with 7b models tulu 2 dpo 70b is a fine tune of the 70b model gemini pro is google's best model
prediction: mixtral medium is a larger mixture of experts cloud 1 is an earlier version of cloud from anthropic mixtral 8x7b instruct v 0 1 is the mixture of experts with 7b models tulu 2dp0 is its fine tuned 7b model gemini pro is google's best model
wer:        0.265306

reference:  solar 10 7b is the fastest rather sorry it's the it's a strong it's a version of mistral 7b with extra layers claude 2 1 is the latest model from anthropic mixtral 8x7b is a mixture of experts lightning attention is another version of attention that improves inference speed
prediction: solar 10 7b is the fastest rudder sorry it's a version of miestral 7b with extra layers claude 2 1 is the latest model from anthropic miestral 8x7b is a mixture of experts lightning attention is another version of attention that improves inference speed
wer:        0.163265

reference:  and yi 34b chat is a fine tune of actually i'm not sure what that is but it's i think it's a trained a pre trained model of llama style
prediction: and ye34b chat is a fine tune of actually i'm not sure what that is but i think it's a pre trained model of lama
wer:        0.233333
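The wer column is consistent with the standard word error rate, (substitutions + deletions + insertions) divided by the number of reference words: in the first row the prediction differs from the 36-word reference by 6 word-level edits, giving 6/36 ≈ 0.166667. The page does not say which tool produced the column, so the sketch below simply recomputes it with a plain word-level Levenshtein distance under that assumption.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)

# First row of the dataset.
reference = ("mmlu is an immediate means of testing performance claude 2 0 is the latest "
             "claude model from anthropic notux 8x7b v1 is a fine tune of mixtral "
             "wizardlm 70b is a fine tune of llama 70b")
prediction = ("mmlu is a means of testing performance claude 2 0 is the latest claude "
              "model from anthropic notixit by 7bv1 is a fine tune of mixtral "
              "wizardlm 70b is a fine tune of lama 70b")

print(round(wer(reference, prediction), 6))  # 0.166667
```

On text that is already lowercased and stripped of punctuation, as here, the jiwer library's wer function should return the same value if a packaged implementation is preferred.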