---
dataset_info:
  features:
    - name: audio
      dtype: audio
    - name: transcript
      dtype: string
    - name: duration
      dtype: float64
    - name: quality
      dtype: string
    - name: segment_id
      dtype: string
    - name: recording_id
      dtype: string
    - name: show_title
      dtype: string
    - name: broadcast_date
      dtype: string
    - name: genre
      dtype: string
    - name: service
      dtype: string
  splits:
    - name: train
      num_bytes: 124665079855.555
      num_examples: 376011
    - name: validation
      num_bytes: 977060717.75
      num_examples: 5002
    - name: test
      num_bytes: 1105445222.375
      num_examples: 5365
  download_size: 131640805329
  dataset_size: 126747585795.68
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
      - split: test
        path: data/test-*
task_categories:
  - automatic-speech-recognition
language:
  - ar
pretty_name: MGB2 Arabic
---

# MGB-2: Arabic Multi-Dialect Broadcast Media Recognition

## Dataset Description

### Dataset Summary

The Arabic Multi-Genre Broadcast (MGB-2) dataset is a large-scale speech recognition corpus containing 1,200 hours of Arabic broadcast audio from the Aljazeera Arabic TV channel. The recordings span March 2005 to December 2015 and cover 19 distinct programme series. The dataset was originally created for the MGB-2 Challenge at SLT-2016, which focused on handling dialect diversity in Arabic speech recognition.

The dataset includes lightly supervised (non-verbatim) transcriptions and covers Modern Standard Arabic (MSA) alongside several Dialectal Arabic varieties. The audio comes from three main programme categories: conversations (63%), interviews (19%), and reports (18%).

### Supported Tasks and Leaderboards

**Primary tasks:**

- Automatic Speech Recognition (ASR): speech-to-text transcription of multi-dialect broadcast audio

**Challenge baseline:** The original challenge baseline achieved 34% WER on the development set, while the best submitted system achieved 14.7% WER.

### Languages

The dataset primarily contains Arabic, with the following distribution:

- Modern Standard Arabic (MSA): approximately 70% of the speech
- Dialectal Arabic (DA): approximately 30%, including:
  - Egyptian Arabic (EGY)
  - Gulf Arabic (GLF)
  - Levantine Arabic (LEV)
  - North African Arabic (NOR)
- Other languages: small portions of English and French (typically translated and dubbed into Arabic)

## Dataset Structure

### Data Instances

Each instance represents a speech segment from an Aljazeera broadcast programme. Here's an example of what a data instance looks like:

```python
{
  'audio': {'path': 'path/to/audio.wav', 'array': [...], 'sampling_rate': 16000},
  'duration': 12.5,
  'transcript': 'في هذا البرنامج نناقش القضايا السياسية...',
  'quality': 'clean',
  'segment_id': '00C710F1_5611_4A7C_8A25_801BAEA5A5AD_utt_1_align',
  'recording_id': '00C710F1-5611-4A7C-8A25-801BAEA5A5AD',
  'show_title': "lqA' Alywm",
  'broadcast_date': '2015/04/22',
  'genre': 'conversation',
  'service': 'Al Jazeera'
}
```
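
Instances like the one above can be pulled with the 🤗 `datasets` library. A minimal loading sketch (the repo id is assumed from this card's location on the Hub; streaming avoids downloading the full ~127 GB corpus up front):

```python
from datasets import load_dataset

# Repo id assumed from this card's location on the Hugging Face Hub.
REPO_ID = "MohamedRashad/mgb2-arabic"

# streaming=True iterates over examples without a full download.
ds = load_dataset(REPO_ID, split="validation", streaming=True)

for example in ds.take(2):
    print(example["transcript"], example["duration"], example["genre"])
```

Requires network access and the `datasets` package; drop `streaming=True` to materialize a split locally.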

### Data Fields

- `audio` (Audio): the audio file containing the speech segment
  - Sampling rate: 16,000 Hz
  - Format: audio array with path information
- `duration` (float): duration of the audio segment in seconds
  - Range: 0.26 to 141 seconds
  - Most segments are between 5 and 30 seconds, as described in the paper
- `transcript` (string): the transcribed text for the audio segment
  - Length: 4 to 1,830 characters
  - Note: these are lightly supervised, non-verbatim transcriptions that may include rephrasing or summarization
- `quality` (string): quality/type of transcription
  - Indicates the transcription methodology (lightly supervised alignment)
- `segment_id` (string): unique identifier for each audio segment
  - Length: 48 to 50 characters
- `recording_id` (string): identifier for the source recording/episode
  - 93 unique recordings in the dataset
- `show_title` (string): name of the TV programme series
  - 12 unique programme titles
- `broadcast_date` (string): original broadcast date of the programme
  - Range: September 15, 2012 to October 30, 2015
- `genre` (string): programme category
  - Categories: conversation, interview, and report
- `service` (string): broadcasting service/channel
  - Source: Aljazeera Arabic channel
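
Because `broadcast_date` is stored as a plain `YYYY/MM/DD` string rather than a date type, filtering by date needs a small conversion step. An illustrative helper (not part of any official tooling):

```python
from datetime import date, datetime


def parse_broadcast_date(value: str) -> date:
    """Convert the card's 'YYYY/MM/DD' strings (e.g. '2015/04/22') to a date."""
    return datetime.strptime(value, "%Y/%m/%d").date()


def in_date_range(value: str, start: date, end: date) -> bool:
    """True if a broadcast_date string falls within [start, end] inclusive."""
    return start <= parse_broadcast_date(value) <= end
```

Such a predicate can be passed to `Dataset.filter` to select, say, only broadcasts from a given year.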

### Data Splits

The dataset contains:

- Training set: ~1,200 hours from roughly 3,000 episodes
- Development set: 10 hours with verbatim transcription (8.5 hours of non-overlapping speech, 1.5 hours of overlapping speech)
- Test set: details available in the original challenge

## Dataset Creation

### Curation Rationale

The MGB-2 Challenge was created to advance Arabic speech recognition technology, particularly for handling:

- multi-dialect Arabic speech in realistic broadcast conditions
- varying acoustic environments and recording quality
- overlapping speech in conversational programmes
- domain diversity across news, politics, culture, and other topics

### Source Data

#### Initial Data Collection and Normalization

Audio was collected from the Aljazeera Arabic TV channel using the QCRI Advanced Transcription System (QATS); only programmes with existing transcriptions on Aljazeera.net were included. The transcriptions are non-verbatim and may include rephrasing, removal of repetitions, or summarization, particularly where speech overlaps. The word error rate (WER) between the original transcriptions and verbatim versions is approximately 5% on the development set.
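
For reference, WER is the word-level edit distance between a hypothesis and a reference, normalized by reference length. A minimal sketch (illustrative only, not the challenge's official scoring tool):

```python
def wer(reference: list[str], hypothesis: list[str]) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    m, n = len(reference), len(hypothesis)
    # dp[i][j] = edit distance between reference[:i] and hypothesis[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution / match
    return dp[m][n] / m
```

So a 5% WER means roughly one word in twenty was rephrased, dropped, or inserted relative to the verbatim speech.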

#### Who are the source language producers?

The source content comes from Aljazeera Arabic TV channel, featuring professional broadcasters, journalists, presenters, political figures, experts, and guests from across the Arabic-speaking world, representing diverse dialects and speaking styles.

### Annotations

#### Annotation process

The dataset uses lightly supervised transcriptions, aligned using:

- the QCRI Arabic LVCSR system with LSTM acoustic models
- grapheme-based modeling with a vocabulary of approximately one million words
- the Smith-Waterman algorithm for local sequence alignment
- linear interpolation for force-aligning non-matching segments
- an anchor rate to assess alignment quality: `AnchorRate = MatchedWords / TranscriptionWords`
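
The anchor-rate check can be sketched as follows, with `difflib`'s matching blocks standing in for the Smith-Waterman alignment used in the original pipeline:

```python
from difflib import SequenceMatcher


def anchor_rate(transcript_words: list[str], asr_words: list[str]) -> float:
    """AnchorRate = MatchedWords / TranscriptionWords.

    Counts transcript words matched by the ASR hypothesis; difflib's
    block matching is an illustrative stand-in for Smith-Waterman.
    """
    if not transcript_words:
        raise ValueError("empty transcript")
    matcher = SequenceMatcher(a=transcript_words, b=asr_words, autojunk=False)
    matched = sum(block.size for block in matcher.get_matching_blocks())
    return matched / len(transcript_words)
```

Segments with a high anchor rate were kept as reliable alignments; low-anchor-rate segments fall into the approximate- or no-match categories noted under "Other Known Limitations".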

#### Who are the annotators?

Original transcriptions were created by professional transcribers at Aljazeera Media Network. Automatic alignment was performed using QCRI's ASR system.

### Personal and Sensitive Information

The dataset contains broadcast media content featuring public figures, journalists, and guests who appeared on Aljazeera programmes. All content was originally publicly broadcast.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset enables research and development of Arabic speech recognition systems that can handle:

- multiple Arabic dialects in realistic settings
- broadcast media with varying audio quality
- domain-specific vocabulary across news, politics, and culture

### Discussion of Biases

The dataset has several important characteristics to consider:

- Dialect distribution: MSA is overrepresented (~70%) compared to dialectal varieties (~30%)
- Domain bias: political content is the most frequent category
- Geographic representation: coverage of different Arabic-speaking regions varies
- Transcription quality: non-verbatim transcriptions may not capture all spoken content, particularly hesitations, disfluencies, and overlapping speech

### Other Known Limitations

- Transcriptions are lightly supervised and non-verbatim, diverging from the actual speech by roughly 5% WER
- Alignment quality varies significantly: 48% exact match, 15% approximate match, 37% no match
- Overlapping speech (particularly in conversational programmes) presents significant challenges
- Some segments contain translated/dubbed content from English or French
- Timing-information accuracy depends on the quality of the ASR system used for the initial alignment

## Additional Information

### Dataset Curators

The dataset was created through a collaboration between:

- Qatar Computing Research Institute (QCRI), HBKU, Doha, Qatar
- Centre for Speech Technology Research, University of Edinburgh, UK
- MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), Cambridge, MA, USA
- Aljazeera Media Network, Doha, Qatar

### Licensing Information

Please refer to the original MGB-2 Challenge terms and Aljazeera Media Network's usage policies for licensing information.

### Citation Information

```bibtex
@inproceedings{ali2016mgb2,
  title={The MGB-2 Challenge: Arabic Multi-Dialect Broadcast Media Recognition},
  author={Ali, Ahmed and Bell, Peter and Glass, James and Messaoui, Yacine and Mubarak, Hamdy and Renals, Steve and Zhang, Yifan},
  booktitle={Proceedings of the Spoken Language Technology Workshop (SLT)},
  year={2016},
  organization={IEEE}
}
```