
Dataset Card for Bashkir News Binary Classification Dataset

This dataset contains Bashkir news and analytical articles labeled for binary classification: news vs analytics. It provides a balanced dataset for training classifiers to distinguish between news articles and analytical/investigative pieces.

Dataset Details

Dataset Description

The dataset consists of 16,994 Bashkir-language texts (articles and news pieces) collected from various online sources. Each article is labeled either as news (label=1) or analytics (label=0). The dataset is perfectly balanced with 8,497 examples in each class.

  • Curated by: Arabov Mullosharaf Kurbonovich, Khaybullina Svetlana Sergeevna (BashkirNLPWorld)
  • Funded by: Not applicable
  • Shared by: BashkirNLPWorld
  • Language(s): Bashkir (Cyrillic script)
  • License: MIT License

Dataset Sources

Uses

Direct Use

This dataset is suitable for:

  • Binary text classification (news vs analytics)
  • Training binary classifiers (logistic regression, SVM, transformers)
  • Baseline evaluation for text classification tasks in Bashkir
  • Fine-tuning multilingual models for news detection

Out-of-Scope Use

  • The dataset should not be used for multi-class classification tasks (use the multiclass version instead).
  • It is not intended for multi-label classification (use the multilabel version).
  • Not suitable for tasks requiring fine-grained genre analysis (e.g., opinion detection, satire identification).

Loading the Dataset

Using Hugging Face Datasets Library

from datasets import load_dataset

# Load the full dataset
dataset = load_dataset("BashkirNLPWorld/bashkir-news-binary")

# Access the training split
train_data = dataset["train"]

# View first example
print(train_data[0])

# Check class distribution
labels = train_data["label"]
print(f"News (label=1): {sum(labels)}")
print(f"Analytics (label=0): {len(labels) - sum(labels)}")

Convert to Pandas DataFrame

# Convert to pandas for easy analysis
df = train_data.to_pandas()
print(df.head())

# Check class balance
print(df['label_text'].value_counts())

Filter by Class

# Get only news articles
news_articles = train_data.filter(lambda x: x["label"] == 1)
print(f"News articles: {len(news_articles)}")

# Get only analytics articles
analytics_articles = train_data.filter(lambda x: x["label"] == 0)
print(f"Analytics articles: {len(analytics_articles)}")

Streaming Mode (for memory-efficient processing)

# Stream the dataset without downloading all at once
dataset = load_dataset(
    "BashkirNLPWorld/bashkir-news-binary",
    split="train",
    streaming=True
)

# Iterate through examples
for example in dataset:
    print(f"Title: {example['title']}")
    print(f"Label: {example['label_text']}")
    print(f"Content length: {example['content_length']}")
    break

Training a Binary Classifier with Transformers

from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from transformers import Trainer, TrainingArguments
import torch

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased",
    num_labels=2  # binary classification
)

# Load dataset
dataset = load_dataset("BashkirNLPWorld/bashkir-news-binary")

# Tokenize function
def tokenize_function(examples):
    return tokenizer(
        examples["content"],
        padding="max_length",
        truncation=True,
        max_length=512
    )

# Apply tokenization
tokenized_dataset = dataset.map(tokenize_function, batched=True)

# Create train/test split (80/20)
split_dataset = tokenized_dataset["train"].train_test_split(test_size=0.2, seed=42)

# Training arguments
training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    evaluation_strategy="epoch",  # renamed to "eval_strategy" in recent transformers releases
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="accuracy",
)

# Define accuracy metric
def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    predictions = predictions.argmax(axis=-1)
    accuracy = (predictions == labels).mean()
    return {"accuracy": accuracy}

# Create trainer
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=split_dataset["train"],
    eval_dataset=split_dataset["test"],
    compute_metrics=compute_metrics,
)

# Start training (uncomment to run)
# trainer.train()

Simple Baseline with Scikit-learn

from datasets import load_dataset
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, classification_report

# Load dataset
dataset = load_dataset("BashkirNLPWorld/bashkir-news-binary")
df = dataset["train"].to_pandas()

# Create train/test split
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
    df["content"], df["label"], test_size=0.2, random_state=42
)

# Vectorize text
vectorizer = TfidfVectorizer(max_features=10000)
X_train_vec = vectorizer.fit_transform(X_train)
X_test_vec = vectorizer.transform(X_test)

# Train classifier
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train_vec, y_train)

# Evaluate
y_pred = clf.predict(X_test_vec)
print(f"Accuracy: {accuracy_score(y_test, y_pred):.4f}")
print(classification_report(y_test, y_pred, target_names=['analytics', 'news']))

Dataset Structure

Data Fields

  • content (string): Full article text.
  • title (string): Article title.
  • label (int64): Binary label (1 = news, 0 = analytics).
  • label_text (string): Human-readable label ("news" or "analytics").
  • category (string): Original normalized category (e.g., "Яңылыҡтар", "Йәмғиәт").
  • content_length (int64): Length of the text in characters.
  • resource (string): Original URL or resource identifier (if available).
  • date (string): Publication date (when available).

Label Definitions

Label      Value  Description
News       1      Articles reporting current events, news updates, and immediate information
Analytics  0      Investigative pieces, opinion articles, analyses, and feature stories

Data Splits

The dataset contains a single split (train) with all 16,994 examples. The split is perfectly balanced:

  • News articles (label=1): 8,497 (50%)
  • Analytics articles (label=0): 8,497 (50%)

Users are encouraged to create their own train/validation/test splits based on their specific needs.
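Since the card ships a single split, a held-out set can be carved out with scikit-learn's stratified splitting; the sketch below uses a toy DataFrame in place of the real `dataset["train"].to_pandas()` frame.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Toy stand-in for df = dataset["train"].to_pandas()
df = pd.DataFrame({
    "content": [f"text {i}" for i in range(100)],
    "label": [i % 2 for i in range(100)],  # balanced, like the real data
})

# Stratify on the label so both splits stay 50/50
train_df, test_df = train_test_split(
    df, test_size=0.2, stratify=df["label"], random_state=42
)

print(len(train_df), len(test_df))  # 80 20
```

Stratification matters less here than on imbalanced data, but it guarantees the 50/50 ratio survives even in small validation sets.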

Dataset Creation

Class Definition

  • News articles include all texts with normalized category Яңылыҡтар (all news variants like яңалыклар, Яңылыҡтар таҫмаһы, Новости were unified).
  • Analytics articles include all other categories (society, culture, education, religion, etc.).

Curation Rationale

The goal was to create a clean, balanced dataset for binary classification that distinguishes between time-sensitive news reporting and more analytical/investigative content. This distinction is fundamental in many NLP applications, such as summarization, information retrieval, and content filtering.

Balancing Strategy

The original data had class imbalance:

  • News: 8,497 articles
  • Analytics: 15,931 articles

To create a balanced dataset, we applied undersampling on the analytics class, randomly selecting 8,497 articles to match the news class size. This ensures that the model doesn't develop bias towards the majority class.
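The undersampling step can be sketched with pandas (toy data here; the real frame would have 8,497 news and 15,931 analytics rows):

```python
import pandas as pd

# Toy imbalanced frame: 4 news (label=1) vs 8 analytics (label=0)
df = pd.DataFrame({
    "content": [f"text {i}" for i in range(12)],
    "label": [1] * 4 + [0] * 8,
})

# Downsample every class to the size of the smallest one
n_min = df["label"].value_counts().min()
balanced = df.groupby("label").sample(n=n_min, random_state=42).reset_index(drop=True)

print(balanced["label"].value_counts().to_dict())  # {0: 4, 1: 4}
```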

Source Data

Data Collection and Processing

Articles were collected from 14 Bashkir online sources (see cluster dataset for full list).

Processing steps:

  1. Extracted JSONL files from raw HTML.
  2. Removed texts shorter than 50 characters or longer than 10,000 characters.
  3. Removed exact duplicates.
  4. Normalized category names (e.g., яңалыклар, Яңылыҡтар таҫмаһы, Новости → Яңылыҡтар).
  5. Created binary labels based on normalized categories.
  6. Balanced classes using undersampling.
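Steps 2 and 3 above (length filtering and exact deduplication) can be sketched in plain Python:

```python
def filter_articles(texts, min_len=50, max_len=10_000):
    """Keep texts within the length bounds and drop exact duplicates,
    preserving first-seen order."""
    seen = set()
    kept = []
    for t in texts:
        if not (min_len <= len(t) <= max_len):
            continue  # too short or too long
        if t in seen:
            continue  # exact duplicate
        seen.add(t)
        kept.append(t)
    return kept

texts = ["short", "x" * 60, "x" * 60, "y" * 20_000]
print(len(filter_articles(texts)))  # 1 (too-short, duplicate, and too-long removed)
```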

Who are the source data producers?

The articles were originally written by journalists, authors, and contributors of the respective online publications. The BashkirNLP team does not claim ownership of the content; it is used for non‑commercial research purposes under fair use.

Annotations

No manual annotations were added. Labels were derived automatically from normalized categories:

  • All news variants → label=1
  • All other categories → label=0
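The automatic labeling rule amounts to a category lookup; the variant set below is illustrative, taken from the normalization examples in this card:

```python
# Normalized news-category variants (illustrative subset from this card)
NEWS_CATEGORIES = {"Яңылыҡтар", "яңалыклар", "Яңылыҡтар таҫмаһы", "Новости"}

def derive_label(category: str) -> int:
    """Return 1 for news, 0 for analytics (everything else)."""
    return 1 if category in NEWS_CATEGORIES else 0

print(derive_label("Яңылыҡтар"))  # 1
print(derive_label("Йәмғиәт"))    # 0
```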

Personal and Sensitive Information

The texts are public news articles and do not contain personally identifiable information (PII) beyond what is already published. No additional personal data was collected.

Bias, Risks, and Limitations

  • Undersampling: While balancing the dataset, we removed 7,434 analytics articles, which may lead to loss of diversity in the analytics class.
  • Definition of analytics: The binary split is based on category labels, which may not perfectly align with the intuitive distinction between news and analysis.
  • Source bias: The dataset is dominated by certain sources (e.g., azatliqorg accounts for 28% of data).
  • Genre bias: All texts come from news sources and may not represent other text domains.
  • Date incompleteness: Many articles lack publication dates.

Recommendations

  • Users should consider the undersampling trade-off and experiment with alternative balancing methods (e.g., weighted loss, oversampling).
  • For domain-specific applications, consider filtering articles by source or category.
  • For tasks requiring genre diversity, consider supplementing with additional sources.
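As an alternative to undersampling, class weighting keeps all analytics articles in play; a minimal scikit-learn sketch on toy data (`class_weight="balanced"` rescales each class inversely to its frequency):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy imbalanced corpus: 2 news vs 4 analytics examples
texts = ["breaking update"] * 2 + ["long-form analysis"] * 4
labels = [1] * 2 + [0] * 4

X = TfidfVectorizer().fit_transform(texts)

# "balanced" weights each class by n_samples / (n_classes * class_count),
# so the minority news class is not drowned out during fitting
clf = LogisticRegression(class_weight="balanced", max_iter=1000)
clf.fit(X, labels)
print(clf.predict(X))
```

With the full dataset, the same `class_weight="balanced"` flag would let you train on all 24,428 original articles instead of the undersampled 16,994.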

Evaluation Results

Baseline Results

Simple models can achieve strong performance on this dataset:

Model                           Accuracy  Notes
TF-IDF + Logistic Regression    ~0.92     Strong baseline
XLM-RoBERTa (fine-tuned)        ~0.96     Requires GPU
BERT-multilingual (fine-tuned)  ~0.95     Good performance

These results demonstrate that the news vs analytics distinction is well-defined in the dataset.

Citation

If you use this dataset in your research, please cite it as:

@dataset{arabov2025bashkirbinary,
  author       = {Arabov, Mullosharaf Kurbonovich and Khaybullina, Svetlana Sergeevna},
  title        = {Bashkir News Binary Classification Dataset},
  year         = {2025},
  publisher    = {Hugging Face},
  url          = {https://huggingface.co/datasets/BashkirNLPWorld/bashkir-news-binary}
}

Dataset Card Authors

  • Arabov Mullosharaf Kurbonovich
  • Khaybullina Svetlana Sergeevna
  • BashkirNLPWorld

Dataset Card Contact
