| column | dtype | min | max |
|---|---|---|---|
| id | int64 | 599M | 3.48B |
| number | int64 | 1 | 7.8k |
| title | string (length) | 1 | 290 |
| state | string (2 values) | | |
| comments | list (length) | 0 | 30 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 | 2025-10-05 06:37:50 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 | 2025-10-05 10:32:43 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 | 2025-10-01 13:56:03 |
| body | string (length) | 0 | 228k |
| user | string (length) | 3 | 26 |
| html_url | string (length) | 46 | 51 |
| pull_request | dict | | |
| is_pull_request | bool (2 classes) | | |
759,654,174
1,329
Add yoruba ner corpus
closed
[]
2020-12-08T17:54:00
2020-12-08T23:11:12
2020-12-08T23:11:12
dadelani
https://github.com/huggingface/datasets/pull/1329
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1329", "html_url": "https://github.com/huggingface/datasets/pull/1329", "diff_url": "https://github.com/huggingface/datasets/pull/1329.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1329.patch", "merged_at": null }
true
759,634,907
1,328
Added the NewsPH Raw dataset and corresponding dataset card
closed
[]
2020-12-08T17:25:45
2020-12-10T11:04:34
2020-12-10T11:04:34
This PR adds the original NewsPH dataset, which is used to auto-generate the NewsPH-NLI dataset. Opened a new PR because the previous one had problems.
Paper: https://arxiv.org/abs/2010.11574
Repo: https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks
jcblaisecruz02
https://github.com/huggingface/datasets/pull/1328
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1328", "html_url": "https://github.com/huggingface/datasets/pull/1328", "diff_url": "https://github.com/huggingface/datasets/pull/1328.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1328.patch", "merged_at": "2020-12-10T11:04:34" }
true
759,629,321
1,327
Add msr_genomics_kbcomp dataset
closed
[]
2020-12-08T17:18:20
2020-12-08T18:18:32
2020-12-08T18:18:06
manandey
https://github.com/huggingface/datasets/pull/1327
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1327", "html_url": "https://github.com/huggingface/datasets/pull/1327", "diff_url": "https://github.com/huggingface/datasets/pull/1327.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1327.patch", "merged_at": "2020-12-08T18:18:06" }
true
759,611,784
1,326
TEP: Tehran English-Persian parallel corpus
closed
[]
2020-12-08T16:56:53
2020-12-19T14:55:03
2020-12-10T11:25:17
TEP: Tehran English-Persian parallel corpus. More info: http://opus.nlpl.eu/TEP.php
spatil6
https://github.com/huggingface/datasets/pull/1326
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1326", "html_url": "https://github.com/huggingface/datasets/pull/1326", "diff_url": "https://github.com/huggingface/datasets/pull/1326.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1326.patch", "merged_at": "2020-12-10T11:25:17" }
true
759,595,556
1,325
Add humicroedit dataset
closed
[]
2020-12-08T16:35:46
2020-12-17T17:59:09
2020-12-17T17:59:09
Pull request for adding humicroedit dataset
saradhix
https://github.com/huggingface/datasets/pull/1325
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1325", "html_url": "https://github.com/huggingface/datasets/pull/1325", "diff_url": "https://github.com/huggingface/datasets/pull/1325.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1325.patch", "merged_at": "2020-12-17T17:59:09" }
true
759,587,864
1,324
❓ Sharing ElasticSearch indexed dataset
open
[]
2020-12-08T16:25:58
2020-12-22T07:50:56
null
Hi there,

First of all, thank you very much for this amazing library. Datasets have become my preferred data structure for basically everything I am currently doing.

**Question:** I'm working with a dataset and I have an elasticsearch container running at localhost:9200. I added an elasticsearch index and I was wondering:
- how can I know where it has been saved?
- how can I share the indexed dataset with others?

I tried to dig into the docs, but could not find anything about that. Thank you very much for your help.

Best,
Pietro

Edit: apologies for the wrong label
pietrolesci
https://github.com/huggingface/datasets/issues/1324
null
false
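For context, a minimal sketch of the indexing workflow issue #1324 asks about, using the `datasets` ElasticSearch helpers; the dataset, column, and index names here are illustrative assumptions:

```python
from datasets import load_dataset

ds = load_dataset("crime_and_punish", split="train")

# Build an ES index over the "line" column in the cluster at localhost:9200.
# The index is stored inside Elasticsearch, not in the dataset's cache files.
ds.add_elasticsearch_index("line", host="localhost", port="9200", es_index_name="hf_lines")

# "Sharing" the indexed dataset therefore means sharing access to the same ES
# instance (or re-indexing); an existing ES index can be re-attached with:
ds.load_elasticsearch_index("line", es_index_name="hf_lines", host="localhost", port="9200")
scores, examples = ds.get_nearest_examples("line", "a fine evening", k=5)
```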
759,581,919
1,323
Add CC-News dataset of English language articles
closed
[]
2020-12-08T16:18:15
2021-02-01T16:55:49
2021-02-01T16:55:49
Adds the [CC-News](https://commoncrawl.org/2016/10/news-dataset-available/) dataset. It contains 708,241 English-language news articles. Although each article has a language field, these tags are not reliable. I've used the Spacy language detection [pipeline](https://spacy.io/universe/project/spacy-langdetect) to confirm that the article language is indeed English. The prepared dataset is temporarily hosted on my private Google Storage [bucket](https://storage.googleapis.com/hf_datasets/cc_news.tar.gz). We can move it to HF storage and update this PR before merging.
vblagoje
https://github.com/huggingface/datasets/pull/1323
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1323", "html_url": "https://github.com/huggingface/datasets/pull/1323", "diff_url": "https://github.com/huggingface/datasets/pull/1323.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1323.patch", "merged_at": "2021-02-01T16:55:49" }
true
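A sketch of the language-verification step PR #1323 describes, assuming the spaCy 2.x `spacy-langdetect` API from the linked project page; the sample text is a placeholder:

```python
import spacy
from spacy_langdetect import LanguageDetector

nlp = spacy.load("en_core_web_sm")
nlp.add_pipe(LanguageDetector(), name="language_detector", last=True)

doc = nlp("Some article text pulled from CC-News.")
# doc._.language is e.g. {'language': 'en', 'score': 0.99}; keep English-only articles
is_english = doc._.language["language"] == "en"
```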
759,576,003
1,322
add indonlu benchmark datasets
closed
[]
2020-12-08T16:10:58
2020-12-13T02:11:27
2020-12-13T01:54:28
The IndoNLU benchmark is a collection of resources for training, evaluating, and analyzing natural language understanding systems for the Indonesian language. There are 12 datasets in IndoNLU.
yasirabd
https://github.com/huggingface/datasets/pull/1322
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1322", "html_url": "https://github.com/huggingface/datasets/pull/1322", "diff_url": "https://github.com/huggingface/datasets/pull/1322.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1322.patch", "merged_at": null }
true
759,573,610
1,321
added dutch_social
closed
[]
2020-12-08T16:07:54
2020-12-16T10:14:17
2020-12-16T10:14:17
The Dutch social media tweets dataset, with a total of more than 210k tweets in the Dutch language. The tweets have been machine-annotated with sentiment scores (`label` feature) as well as `industry` and `hisco_codes`. It can be used for sentiment analysis, multi-label classification, and entity tagging.
skyprince999
https://github.com/huggingface/datasets/pull/1321
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1321", "html_url": "https://github.com/huggingface/datasets/pull/1321", "diff_url": "https://github.com/huggingface/datasets/pull/1321.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1321.patch", "merged_at": "2020-12-16T10:14:17" }
true
759,566,148
1,320
Added the WikiText-TL39 dataset and corresponding card
closed
[]
2020-12-08T16:00:26
2020-12-10T11:24:53
2020-12-10T11:24:53
This PR adds the WikiText-TL-39 Filipino Language Modeling dataset. Restarted a new pull request since there were problems with the earlier one. Paper: https://arxiv.org/abs/1907.00409 Repo: https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks
jcblaisecruz02
https://github.com/huggingface/datasets/pull/1320
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1320", "html_url": "https://github.com/huggingface/datasets/pull/1320", "diff_url": "https://github.com/huggingface/datasets/pull/1320.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1320.patch", "merged_at": "2020-12-10T11:24:52" }
true
759,565,923
1,319
adding wili-2018 language identification dataset
closed
[]
2020-12-08T16:00:09
2020-12-14T21:20:32
2020-12-14T21:20:32
Shubhambindal2017
https://github.com/huggingface/datasets/pull/1319
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1319", "html_url": "https://github.com/huggingface/datasets/pull/1319", "diff_url": "https://github.com/huggingface/datasets/pull/1319.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1319.patch", "merged_at": "2020-12-14T21:20:32" }
true
759,565,629
1,318
ethos first commit
closed
[]
2020-12-08T15:59:47
2020-12-10T14:45:57
2020-12-10T14:45:57
Ethos passed all the tests except this one:
`RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_<your-dataset-name>`
which fails with this error:
`E OSError: Cannot find data file. E Original error: E [Errno 2] No such file or directory:`
iamollas
https://github.com/huggingface/datasets/pull/1318
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1318", "html_url": "https://github.com/huggingface/datasets/pull/1318", "diff_url": "https://github.com/huggingface/datasets/pull/1318.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1318.patch", "merged_at": null }
true
759,553,495
1,317
add 10k German News Article Dataset
closed
[]
2020-12-08T15:44:25
2021-09-17T16:55:51
2020-12-16T16:50:43
stevhliu
https://github.com/huggingface/datasets/pull/1317
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1317", "html_url": "https://github.com/huggingface/datasets/pull/1317", "diff_url": "https://github.com/huggingface/datasets/pull/1317.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1317.patch", "merged_at": null }
true
759,549,601
1,316
Allow GitHub releases as dataset source
closed
[]
2020-12-08T15:39:35
2020-12-10T10:12:00
2020-12-10T10:12:00
# Summary

Providing a GitHub release URL to `DownloadManager.download()` currently throws a `ConnectionError: Couldn't reach [DOWNLOAD_URL]`. This PR fixes the problem by adding an exception for GitHub releases in `datasets.utils.file_utils.get_from_cache()`.

# Reproduce

```
import datasets

url = 'http://github.com/benjaminvdb/DBRD/releases/download/v3.0/DBRD_v3.tgz'
result = datasets.utils.file_utils.get_from_cache(url)
# Returns: ConnectionError: Couldn't reach http://github.com/benjaminvdb/DBRD/releases/download/v3.0/DBRD_v3.tgz
```

# Cause

GitHub releases return an HTTP status 403 (Forbidden), indicating that the request is being redirected (to AWS S3, in this case). `get_from_cache()` checks whether the status is 200 (OK) or whether the URL falls under one of two exceptions (Google Drive or Firebase); otherwise the mentioned error is thrown.

# Solution

Just like the exceptions for Google Drive and Firebase, add a condition for GitHub release URLs that return HTTP status 403. If this is the case, continue normally.
benjaminvdb
https://github.com/huggingface/datasets/pull/1316
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1316", "html_url": "https://github.com/huggingface/datasets/pull/1316", "diff_url": "https://github.com/huggingface/datasets/pull/1316.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1316.patch", "merged_at": "2020-12-10T10:12:00" }
true
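A runnable sketch of the allow-condition PR #1316 describes — illustrative only, not the actual upstream diff; the helper name is made up:

```python
def is_github_release_403(url: str, status_code: int) -> bool:
    """Treat a 403 from a GitHub release download URL as reachable, mirroring
    the existing Google Drive / Firebase special cases described above."""
    return status_code == 403 and "github.com" in url and "/releases/download/" in url

assert is_github_release_403(
    "http://github.com/benjaminvdb/DBRD/releases/download/v3.0/DBRD_v3.tgz", 403
)
```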
759,548,706
1,315
add yelp_review_full
closed
[]
2020-12-08T15:38:27
2020-12-09T15:55:49
2020-12-09T15:55:49
This corresponds to the Yelp-5 dataset requested in https://github.com/huggingface/datasets/issues/353. I included the dataset card.
hfawaz
https://github.com/huggingface/datasets/pull/1315
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1315", "html_url": "https://github.com/huggingface/datasets/pull/1315", "diff_url": "https://github.com/huggingface/datasets/pull/1315.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1315.patch", "merged_at": "2020-12-09T15:55:48" }
true
759,541,937
1,314
Add snips built in intents 2016 12
closed
[]
2020-12-08T15:30:19
2020-12-14T09:59:07
2020-12-14T09:59:07
This PR proposes to add the Snips.ai built in intents dataset. The first configuration added is for the intent labels only, but the dataset includes entity slots that may in future be added as alternate configurations.
bduvenhage
https://github.com/huggingface/datasets/pull/1314
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1314", "html_url": "https://github.com/huggingface/datasets/pull/1314", "diff_url": "https://github.com/huggingface/datasets/pull/1314.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1314.patch", "merged_at": "2020-12-14T09:59:06" }
true
759,536,512
1,313
Add HateSpeech Corpus for Polish
closed
[]
2020-12-08T15:23:53
2020-12-16T16:48:45
2020-12-16T16:48:45
This PR adds a HateSpeech Corpus for Polish, containing offensive language examples. - **Homepage:** http://zil.ipipan.waw.pl/HateSpeech - **Paper:** http://www.qualitativesociologyreview.org/PL/Volume38/PSJ_13_2_Troszynski_Wawer.pdf
kacperlukawski
https://github.com/huggingface/datasets/pull/1313
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1313", "html_url": "https://github.com/huggingface/datasets/pull/1313", "diff_url": "https://github.com/huggingface/datasets/pull/1313.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1313.patch", "merged_at": "2020-12-16T16:48:45" }
true
759,532,626
1,312
Jigsaw toxicity pred
closed
[]
2020-12-08T15:19:14
2020-12-11T12:11:32
2020-12-11T12:11:32
Requires manually downloading data from Kaggle.
taihim
https://github.com/huggingface/datasets/pull/1312
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1312", "html_url": "https://github.com/huggingface/datasets/pull/1312", "diff_url": "https://github.com/huggingface/datasets/pull/1312.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1312.patch", "merged_at": null }
true
759,514,819
1,311
Add OPUS Bible Corpus (102 Languages)
closed
[]
2020-12-08T14:57:08
2020-12-09T15:30:57
2020-12-09T15:30:56
abhishekkrthakur
https://github.com/huggingface/datasets/pull/1311
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1311", "html_url": "https://github.com/huggingface/datasets/pull/1311", "diff_url": "https://github.com/huggingface/datasets/pull/1311.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1311.patch", "merged_at": "2020-12-09T15:30:56" }
true
759,508,921
1,310
Add OffensEval-TR 2020 Dataset
closed
[]
2020-12-08T14:49:51
2020-12-12T14:15:42
2020-12-09T16:02:06
This PR adds the OffensEval-TR 2020 dataset, a Turkish offensive language corpus by me and @basakbuluz. The corpus consists of randomly sampled tweets annotated in a similar way to [OffensEval](https://sites.google.com/site/offensevalsharedtask/) and [GermEval](https://projects.fzai.h-da.de/iggsa/).
- **Homepage:** [offensive-turkish](https://coltekin.github.io/offensive-turkish/)
- **Paper:** [A Corpus of Turkish Offensive Language on Social Media](https://coltekin.github.io/offensive-turkish/troff.pdf)
- **Point of Contact:** [Çağrı Çöltekin](ccoltekin@sfs.uni-tuebingen.de)
yavuzKomecoglu
https://github.com/huggingface/datasets/pull/1310
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1310", "html_url": "https://github.com/huggingface/datasets/pull/1310", "diff_url": "https://github.com/huggingface/datasets/pull/1310.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1310.patch", "merged_at": "2020-12-09T16:02:06" }
true
759,501,370
1,309
Add SAMSum Corpus dataset
closed
[]
2020-12-08T14:40:56
2020-12-14T12:32:33
2020-12-14T10:20:55
Did not spend much time writing the README; might update it later. Copied the description and some other details from tensorflow_datasets: https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/summarization/samsum.py
changjonathanc
https://github.com/huggingface/datasets/pull/1309
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1309", "html_url": "https://github.com/huggingface/datasets/pull/1309", "diff_url": "https://github.com/huggingface/datasets/pull/1309.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1309.patch", "merged_at": "2020-12-14T10:20:55" }
true
759,492,953
1,308
Add Wiki Lingua Dataset
closed
[]
2020-12-08T14:30:13
2020-12-14T10:39:52
2020-12-14T10:39:52
Hello,

This is my first PR. I have added the Wiki Lingua dataset along with a dataset card to the best of my knowledge. There was one hiccup, though: I was unable to create dummy data because the data is in pkl format. From the documentation, I see that:

```
At the moment it supports data files in the following format: txt, csv, tsv, jsonl, json, xml
```
katnoria
https://github.com/huggingface/datasets/pull/1308
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1308", "html_url": "https://github.com/huggingface/datasets/pull/1308", "diff_url": "https://github.com/huggingface/datasets/pull/1308.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1308.patch", "merged_at": null }
true
759,458,835
1,307
adding capes
closed
[]
2020-12-08T13:46:13
2020-12-09T15:40:09
2020-12-09T15:27:45
Adding a parallel corpus of theses and dissertation abstracts in Portuguese and English from CAPES: https://sites.google.com/view/felipe-soares/datasets#h.p_kxOR6EhHm2a6
patil-suraj
https://github.com/huggingface/datasets/pull/1307
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1307", "html_url": "https://github.com/huggingface/datasets/pull/1307", "diff_url": "https://github.com/huggingface/datasets/pull/1307.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1307.patch", "merged_at": "2020-12-09T15:27:45" }
true
759,448,427
1,306
add W&I + LOCNESS dataset (BEA-2019 workshop shared task on GEC)
closed
[]
2020-12-08T13:31:34
2020-12-10T09:53:54
2020-12-10T09:53:28
- **Name:** W&I + LOCNESS dataset (from the BEA-2019 workshop shared task on GEC)
- **Description:** https://www.cl.cam.ac.uk/research/nl/bea2019st/#data
- **Paper:** https://www.aclweb.org/anthology/W19-4406/
- **Motivation:** This is a recent dataset (actually two in one) for grammatical error correction and is used for benchmarking in this field of NLP.

### Checkbox

- [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template
- [x] Fill the `_DESCRIPTION` and `_CITATION` variables
- [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()`
- [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class.
- [x] Generate the metadata file `dataset_infos.json` for all configurations
- [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB)
- [x] Add the dataset card `README.md` using the template: fill the tags and the various paragraphs
- [x] Both tests for the real data and the dummy data pass.
aseifert
https://github.com/huggingface/datasets/pull/1306
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1306", "html_url": "https://github.com/huggingface/datasets/pull/1306", "diff_url": "https://github.com/huggingface/datasets/pull/1306.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1306.patch", "merged_at": null }
true
759,446,665
1,305
[README] Added Windows command to enable slow tests
closed
[]
2020-12-08T13:29:04
2020-12-08T13:56:33
2020-12-08T13:56:32
The Windows command to run slow tests has caused issues, so this adds a functional Windows command.
TevenLeScao
https://github.com/huggingface/datasets/pull/1305
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1305", "html_url": "https://github.com/huggingface/datasets/pull/1305", "diff_url": "https://github.com/huggingface/datasets/pull/1305.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1305.patch", "merged_at": "2020-12-08T13:56:32" }
true
759,440,841
1,304
adding eitb_parcc
closed
[]
2020-12-08T13:20:54
2020-12-09T18:02:54
2020-12-09T18:02:03
Adding EiTB-ParCC: Parallel Corpus of Comparable News http://opus.nlpl.eu/EiTB-ParCC.php
patil-suraj
https://github.com/huggingface/datasets/pull/1304
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1304", "html_url": "https://github.com/huggingface/datasets/pull/1304", "diff_url": "https://github.com/huggingface/datasets/pull/1304.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1304.patch", "merged_at": "2020-12-09T18:02:03" }
true
759,440,484
1,303
adding opus_openoffice
closed
[]
2020-12-08T13:20:21
2020-12-10T09:37:10
2020-12-10T09:37:10
Adding Opus OpenOffice: http://opus.nlpl.eu/OpenOffice.php 8 languages, 28 bitexts
patil-suraj
https://github.com/huggingface/datasets/pull/1303
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1303", "html_url": "https://github.com/huggingface/datasets/pull/1303", "diff_url": "https://github.com/huggingface/datasets/pull/1303.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1303.patch", "merged_at": "2020-12-10T09:37:10" }
true
759,435,740
1,302
Add Danish NER dataset
closed
[]
2020-12-08T13:13:54
2020-12-10T09:35:26
2020-12-10T09:35:26
ophelielacroix
https://github.com/huggingface/datasets/pull/1302
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1302", "html_url": "https://github.com/huggingface/datasets/pull/1302", "diff_url": "https://github.com/huggingface/datasets/pull/1302.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1302.patch", "merged_at": "2020-12-10T09:35:26" }
true
759,419,945
1,301
arxiv dataset added
closed
[]
2020-12-08T12:50:51
2020-12-09T18:05:16
2020-12-09T18:05:16
**Adding arXiv dataset**: arXiv metadata of 1.7M+ scholarly papers across STEM. Dataset link: https://www.kaggle.com/Cornell-University/arxiv
tanmoyio
https://github.com/huggingface/datasets/pull/1301
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1301", "html_url": "https://github.com/huggingface/datasets/pull/1301", "diff_url": "https://github.com/huggingface/datasets/pull/1301.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1301.patch", "merged_at": "2020-12-09T18:05:16" }
true
759,418,122
1,300
added dutch_social
closed
[]
2020-12-08T12:47:50
2020-12-08T16:09:05
2020-12-08T16:09:05
WIP, as some tests did not pass! 👎🏼
skyprince999
https://github.com/huggingface/datasets/pull/1300
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1300", "html_url": "https://github.com/huggingface/datasets/pull/1300", "diff_url": "https://github.com/huggingface/datasets/pull/1300.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1300.patch", "merged_at": null }
true
759,414,566
1,299
can't load "german_legal_entity_recognition" dataset
closed
[]
2020-12-08T12:42:01
2020-12-16T16:03:13
2020-12-16T16:03:13
FileNotFoundError: Couldn't find file locally at german_legal_entity_recognition/german_legal_entity_recognition.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/german_legal_entity_recognition/german_legal_entity_recognition.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/german_legal_entity_recognition/german_legal_entity_recognition.py
nataly-obr
https://github.com/huggingface/datasets/issues/1299
null
false
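The URLs in the error from issue #1299 point at the datasets 1.1.3 script tree, so a plausible cause (an assumption, not confirmed in the report) is that the loader post-dates the installed release; upgrading and retrying would look like:

```python
# pip install --upgrade datasets   # the loader may simply be missing from release 1.1.3
import datasets

ds = datasets.load_dataset("german_legal_entity_recognition")
```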
759,412,451
1,298
Add OPUS Ted Talks 2013
closed
[]
2020-12-08T12:38:38
2020-12-16T16:57:50
2020-12-16T16:57:49
abhishekkrthakur
https://github.com/huggingface/datasets/pull/1298
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1298", "html_url": "https://github.com/huggingface/datasets/pull/1298", "diff_url": "https://github.com/huggingface/datasets/pull/1298.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1298.patch", "merged_at": "2020-12-16T16:57:49" }
true
759,404,103
1,297
OPUS Ted Talks 2013
closed
[]
2020-12-08T12:25:39
2023-09-24T09:51:49
2020-12-08T12:35:50
abhishekkrthakur
https://github.com/huggingface/datasets/pull/1297
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1297", "html_url": "https://github.com/huggingface/datasets/pull/1297", "diff_url": "https://github.com/huggingface/datasets/pull/1297.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1297.patch", "merged_at": null }
true
759,375,292
1,296
The Snips Built In Intents 2016 dataset.
closed
[]
2020-12-08T11:40:10
2020-12-08T15:27:52
2020-12-08T15:27:52
This PR proposes to add the Snips.ai built in intents dataset. The first configuration added is for the intent labels only, but the dataset includes entity slots that may in future be added as alternate configurations.
bduvenhage
https://github.com/huggingface/datasets/pull/1296
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1296", "html_url": "https://github.com/huggingface/datasets/pull/1296", "diff_url": "https://github.com/huggingface/datasets/pull/1296.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1296.patch", "merged_at": null }
true
759,375,251
1,295
add hrenwac_para
closed
[]
2020-12-08T11:40:06
2020-12-11T17:42:20
2020-12-11T17:42:20
IvanZidov
https://github.com/huggingface/datasets/pull/1295
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1295", "html_url": "https://github.com/huggingface/datasets/pull/1295", "diff_url": "https://github.com/huggingface/datasets/pull/1295.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1295.patch", "merged_at": "2020-12-11T17:42:20" }
true
759,365,246
1,294
adding opus_euconst
closed
[]
2020-12-08T11:24:16
2020-12-08T18:44:20
2020-12-08T18:41:23
Adding EUconst, a parallel corpus collected from the European Constitution. 21 languages, 210 bitexts
patil-suraj
https://github.com/huggingface/datasets/pull/1294
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1294", "html_url": "https://github.com/huggingface/datasets/pull/1294", "diff_url": "https://github.com/huggingface/datasets/pull/1294.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1294.patch", "merged_at": "2020-12-08T18:41:22" }
true
759,360,113
1,293
add hrenwac_para
closed
[]
2020-12-08T11:16:41
2020-12-08T11:34:47
2020-12-08T11:34:38
ivan-zidov
https://github.com/huggingface/datasets/pull/1293
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1293", "html_url": "https://github.com/huggingface/datasets/pull/1293", "diff_url": "https://github.com/huggingface/datasets/pull/1293.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1293.patch", "merged_at": null }
true
759,354,627
1,292
arXiv dataset added
closed
[]
2020-12-08T11:08:28
2020-12-08T14:02:13
2020-12-08T14:02:13
tanmoyio
https://github.com/huggingface/datasets/pull/1292
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1292", "html_url": "https://github.com/huggingface/datasets/pull/1292", "diff_url": "https://github.com/huggingface/datasets/pull/1292.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1292.patch", "merged_at": null }
true
759,352,810
1,291
adding pubmed_qa dataset
closed
[]
2020-12-08T11:05:44
2020-12-09T08:54:50
2020-12-09T08:54:50
PubMed QA dataset:
- PQA-L(abeled): 1k
- PQA-U(nlabeled): 61.2k
- PQA-A(rtificially labeled): 211.3k
tuner007
https://github.com/huggingface/datasets/pull/1291
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1291", "html_url": "https://github.com/huggingface/datasets/pull/1291", "diff_url": "https://github.com/huggingface/datasets/pull/1291.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1291.patch", "merged_at": "2020-12-09T08:54:50" }
true
759,339,989
1,290
imdb dataset cannot be downloaded
closed
[]
2020-12-08T10:47:36
2020-12-24T17:38:09
2020-12-24T17:38:09
Hi, please find the error below when getting the imdb train split. Thanks!

`>>> datasets.load_dataset("imdb", split="train")`

errors:

```
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets
Downloading and preparing dataset imdb/plain_text (download: 80.23 MiB, generated: 127.06 MiB, post-processed: Unknown size, total: 207.28 MiB) to /idiap/temp/rkarimi/cache_home_1/datasets/imdb/plain_text/1.0.0/90099cb476936b753383ba2ae6ab2eae419b2e87f71cd5189cb9c8e5814d12a3...
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets/downloads
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset
    ignore_verifications=ignore_verifications,
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare
    dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 558, in _download_and_prepare
    verify_splits(self.info.splits, split_dict)
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 73, in verify_splits
    raise NonMatchingSplitsSizesError(str(bad_splits))
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='unsupervised', num_bytes=67125548, num_examples=50000, dataset_name='imdb'), 'recorded': SplitInfo(name='unsupervised', num_bytes=7486451, num_examples=5628, dataset_name='imdb')}]
```
rabeehk
https://github.com/huggingface/datasets/issues/1290
null
false
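In issue #1290 the recorded `unsupervised` split (5628 examples) is far smaller than the expected 50000, which usually indicates a truncated or corrupted cached download; a common remedy (an assumption about this report, not a confirmed fix) is forcing a fresh download:

```python
import datasets

# Re-fetch the archive instead of reusing the (likely truncated) cached copy.
# In datasets 1.x this enum was called GenerateMode; newer versions use DownloadMode.
ds = datasets.load_dataset(
    "imdb", split="train", download_mode=datasets.GenerateMode.FORCE_REDOWNLOAD
)
```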
759,333,684
1,289
Jigsaw toxicity classification dataset added
closed
[]
2020-12-08T10:38:51
2020-12-08T15:17:48
2020-12-08T15:17:48
The dataset requires manually downloading data from Kaggle.
taihim
https://github.com/huggingface/datasets/pull/1289
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1289", "html_url": "https://github.com/huggingface/datasets/pull/1289", "diff_url": "https://github.com/huggingface/datasets/pull/1289.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1289.patch", "merged_at": null }
true
759,309,457
1,288
Add CodeSearchNet corpus dataset
closed
[]
2020-12-08T10:07:50
2020-12-09T17:05:28
2020-12-09T17:05:28
This PR adds the CodeSearchNet corpus proxy dataset for semantic code search: https://github.com/github/CodeSearchNet

I have had a few issues, mentioned below. Would appreciate some help on how to solve them.

## Issues generating dataset card

Is there something wrong with my declaration of the dataset features?

```
features=datasets.Features(
    {
        "repository_name": datasets.Value("string"),
        "func_path_in_repository": datasets.Value("string"),
        "func_name": datasets.Value("string"),
        "whole_func_string": datasets.Value("string"),
        "language": datasets.Value("string"),
        "func_code_string": datasets.Value("string"),
        "func_code_tokens": datasets.Sequence(datasets.Value("string")),
        "func_documentation_string": datasets.Value("string"),
        "func_documentation_tokens": datasets.Sequence(datasets.Value("string")),
        "split_name": datasets.Value("string"),
        "func_code_url": datasets.Value("string"),
        # TODO - add licensing info in the examples
    }
),
```

When running the Streamlit app for tagging the dataset on my machine, I get the following error:

![image](https://user-images.githubusercontent.com/33657802/101469132-9ed12c80-3944-11eb-94ff-2d9c1d0ea080.png)

## Issues with dummy data

Due to the unusual structure of the data, I have been unable to generate dummy data automatically. I tried to generate it manually, but pytest fails when using the manually-generated dummy data! The tests work fine when using the real data.

```
=========================== test session starts ===========================
platform linux -- Python 3.7.9, pytest-6.1.2, py-1.9.0, pluggy-0.13.1
plugins: xdist-2.1.0, forked-1.3.0
collected 1 item

tests/test_dataset_common.py F                                       [100%]

================================ FAILURES =================================
_____ LocalDatasetTest.test_load_dataset_all_configs_code_search_net ______

self = <tests.test_dataset_common.LocalDatasetTest testMethod=test_load_dataset_all_configs_code_search_net>, dataset_name = 'code_search_net'

    @slow
    def test_load_dataset_all_configs(self, dataset_name):
        configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)
>       self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True)

tests/test_dataset_common.py:237:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/test_dataset_common.py:198: in check_load_dataset
    self.parent.assertTrue(len(dataset[split]) > 0)
E   AssertionError: False is not true
-------------------------- Captured stdout call ---------------------------
Downloading and preparing dataset code_search_net/all (download: 1.00 MiB, generated: 1.00 MiB, post-processed: Unknown size, total: 2.00 MiB) to /tmp/tmppx78sj24/code_search_net/all/1.0.0...
Dataset code_search_net downloaded and prepared to /tmp/tmppx78sj24/code_search_net/all/1.0.0. Subsequent calls will reuse this data.
-------------------------- Captured stderr call ---------------------------
... (irrelevant info - Deprecation warnings)
========================= short test summary info =========================
FAILED tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_code_search_net - AssertionError: False is not true
==================== 1 failed, 4 warnings in 3.00s ========================
```

## Note: Data structure in S3

The data is stored on S3 and organized by programming language, in the following repository structure:

```
.
├── <language_name>                      # e.g. python
│   └── final
│       └── jsonl
│           ├── test
│           │   └── <language_name>_test_0.jsonl.gz
│           ├── train
│           │   ├── <language_name>_train_0.jsonl.gz
│           │   ├── <language_name>_train_1.jsonl.gz
│           │   ├── ...
│           │   └── <language_name>_train_n.jsonl.gz
│           └── valid
│               └── <language_name>_valid_0.jsonl.gz
├── <language_name>_dedupe_definitions_v2.pkl
└── <language_name>_licenses.pkl
```
SBrandeis
https://github.com/huggingface/datasets/pull/1288
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1288", "html_url": "https://github.com/huggingface/datasets/pull/1288", "diff_url": "https://github.com/huggingface/datasets/pull/1288.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1288.patch", "merged_at": "2020-12-09T17:05:27" }
true
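For reference, a hedged sketch of how the finished loader from PR #1288 would be used; the "all" config appears in the test logs above, while the exact field access is an assumption based on the features dict:

```python
from datasets import load_dataset

ds = load_dataset("code_search_net", "all", split="train")
print(ds[0]["func_name"], ds[0]["language"])  # fields declared in the features dict above
```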
759,300,992
1,287
'iwslt2017-ro-nl', cannot be downloaded
closed
[]
2020-12-08T09:56:55
2022-06-13T10:41:33
2022-06-13T10:41:33
Hi, I am trying:

`>>> datasets.load_dataset("iwslt2017", 'iwslt2017-ro-nl', split="train")`

and getting this error. Thank you for your help!

```
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets
Downloading and preparing dataset iwsl_t217/iwslt2017-ro-nl (download: 314.07 MiB, generated: 39.92 MiB, post-processed: Unknown size, total: 354.00 MiB) to /idiap/temp/rkarimi/cache_home_1/datasets/iwsl_t217/iwslt2017-ro-nl/1.0.0/cca6935a0851a8ceac1202a62c958738bdfa23c57a51bc52ac1c5ebd2aa172cd...
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets/downloads
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset
    ignore_verifications=ignore_verifications,
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare
    dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare
    split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
  File "/idiap/home/rkarimi/.cache/huggingface/modules/datasets_modules/datasets/iwslt2017/cca6935a0851a8ceac1202a62c958738bdfa23c57a51bc52ac1c5ebd2aa172cd/iwslt2017.py", line 118, in _split_generators
    dl_dir = dl_manager.download_and_extract(MULTI_URL)
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract
    return self.extract(self.download(url_or_urls))
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 179, in download
    num_proc=download_config.num_proc,
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 216, in map_nested
    return function(data_struct)
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path
    use_etag=download_config.use_etag,
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 477, in get_from_cache
    raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://wit3.fbk.eu/archive/2017-01-trnmted//texts/DeEnItNlRo/DeEnItNlRo/DeEnItNlRo-DeEnItNlRo.tgz
```
rabeehk
https://github.com/huggingface/datasets/issues/1287
null
false
759,291,509
1,286
[libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1505] CHECK failed: (index) >= (0): terminate called after throwing an instance of 'google::protobuf::FatalException' what(): CHECK failed: (index) >= (0): Aborted
closed
[]
2020-12-08T09:44:15
2020-12-12T19:36:22
2020-12-12T16:22:36
Hi, I am getting this error when evaluating on wmt16-ro-en using finetune_trainer.py of the huggingface repo. Thanks for your help!

{'epoch': 20.0}
100%|███████████████████████████████████████| 20/20 [00:16<00:00, 1.22it/s]
12/08/2020 10:41:19 - INFO - seq2seq.trainers.trainer - Saving model checkpoint to outputs/experiment/joint/finetune/lr-2e-5
12/08/2020 10:41:24 - INFO - __main__ - {'wmt16-en-ro': Dataset(features: {'src_texts': Value(dtype='string', id=None), 'task': Value(dtype='string', id=None), 'tgt_texts': Value(dtype='string', id=None)}, num_rows: 1998), 'qnli': Dataset(features: {'src_texts': Value(dtype='string', id=None), 'task': Value(dtype='string', id=None), 'tgt_texts': Value(dtype='string', id=None)}, num_rows: 5462), 'scitail': Dataset(features: {'src_texts': Value(dtype='string', id=None), 'task': Value(dtype='string', id=None), 'tgt_texts': Value(dtype='string', id=None)}, num_rows: 1303)}
12/08/2020 10:41:24 - INFO - __main__ - *** Evaluate ***
12/08/2020 10:41:24 - INFO - seq2seq.utils.utils - using task specific params for wmt16-en-ro: {'max_length': 300, 'num_beams': 4}
12/08/2020 10:41:24 - INFO - seq2seq.trainers.trainer - ***** Running Evaluation *****
12/08/2020 10:41:24 - INFO - seq2seq.trainers.trainer - Num examples = 1998
12/08/2020 10:41:24 - INFO - seq2seq.trainers.trainer - Batch size = 64
100%|███████████████████████████████████████| 32/32 [00:37<00:00, 1.19s/it]
[libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1505] CHECK failed: (index) >= (0):
terminate called after throwing an instance of 'google::protobuf::FatalException'
  what(): CHECK failed: (index) >= (0):
Aborted
rabeehk
https://github.com/huggingface/datasets/issues/1286
null
false
759,278,758
1,285
boolq does not work
closed
[]
2020-12-08T09:28:47
2020-12-08T09:47:10
2020-12-08T09:47:10
Hi, I am getting this error when trying to load boolq. Thanks for your help!

ts_boolq_default_0.1.0_2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11.lock
Traceback (most recent call last):
  File "finetune_t5_trainer.py", line 274, in <module>
    main()
  File "finetune_t5_trainer.py", line 147, in main
    for task in data_args.tasks]
  File "finetune_t5_trainer.py", line 147, in <listcomp>
    for task in data_args.tasks]
  File "/remote/idiap.svm/user.active/rkarimi/dev/ruse/seq2seq/tasks/tasks.py", line 58, in get_dataset
    dataset = self.load_dataset(split=split)
  File "/remote/idiap.svm/user.active/rkarimi/dev/ruse/seq2seq/tasks/tasks.py", line 54, in load_dataset
    return datasets.load_dataset(self.task.name, split=split)
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset
    ignore_verifications=ignore_verifications,
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare
    dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare
    split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
  File "/idiap/home/rkarimi/.cache/huggingface/modules/datasets_modules/datasets/boolq/2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11/boolq.py", line 74, in _split_generators
    downloaded_files = dl_manager.download_custom(urls_to_download, tf.io.gfile.copy)
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 149, in download_custom
    custom_download(url, path)
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/tensorflow/python/lib/io/file_io.py", line 516, in copy_v2
    compat.path_to_bytes(src), compat.path_to_bytes(dst), overwrite)
tensorflow.python.framework.errors_impl.AlreadyExistsError: file already exists
rabeehk
https://github.com/huggingface/datasets/issues/1285
null
false
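The traceback in issue #1285 ends in `dl_manager.download_custom(urls_to_download, tf.io.gfile.copy)`, and `tf.io.gfile.copy` refuses to overwrite an existing file by default; a sketch of an overwrite-tolerant callback (a workaround guess, not the upstream fix — deleting the stale cached file would also work):

```python
import tensorflow as tf

def copy_overwrite(src, dst):
    # Same as the callback in the traceback, but tolerant of a pre-existing file
    tf.io.gfile.copy(src, dst, overwrite=True)

# downloaded_files = dl_manager.download_custom(urls_to_download, copy_overwrite)
```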
759,269,920
1,284
Update coqa dataset url
closed
[]
2020-12-08T09:16:38
2020-12-08T18:19:09
2020-12-08T18:19:09
`datasets.stanford.edu` is invalid.
ojasaar
https://github.com/huggingface/datasets/pull/1284
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1284", "html_url": "https://github.com/huggingface/datasets/pull/1284", "diff_url": "https://github.com/huggingface/datasets/pull/1284.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1284.patch", "merged_at": "2020-12-08T18:19:09" }
true
759,251,457
1,283
Add dutch book review dataset
closed
[]
2020-12-08T08:50:48
2020-12-09T20:21:58
2020-12-09T17:25:25
- Name: Dutch Book Review Dataset (DBRD)
- Description: The DBRD (pronounced dee-bird) dataset contains over 110k book reviews along with associated binary sentiment polarity labels and is intended as a benchmark for sentiment classification in Dutch.
- Paper: https://arxiv.org/abs/1910.00896
- Data: https://github.com/benjaminvdb/DBRD
- Motivation: A large (real-life) dataset of Dutch book reviews and sentiment polarity (positive/negative), based on the associated rating.

Checks
- [x] Create the dataset script /datasets/dbrd/dbrd.py using the template
- [x] Fill the _DESCRIPTION and _CITATION variables
- [x] Implement _info(), _split_generators() and _generate_examples()
- [x] Make sure that the BUILDER_CONFIGS class attribute is filled with the different configurations of the dataset and that the BUILDER_CONFIG_CLASS is specified if there is a custom config class.
- [x] Generate the metadata file dataset_infos.json for all configurations
- [x] Generate the dummy data dummy_data.zip files to have the dataset script tested and that they don't weigh too much (<50KB)
- [x] Add the dataset card README.md using the template: fill the tags and the various paragraphs
- [x] Both tests for the real data and the dummy data pass.
benjaminvdb
https://github.com/huggingface/datasets/pull/1283
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1283", "html_url": "https://github.com/huggingface/datasets/pull/1283", "diff_url": "https://github.com/huggingface/datasets/pull/1283.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1283.patch", "merged_at": "2020-12-09T17:25:25" }
true
759,208,335
1,282
add thaiqa_squad
closed
[]
2020-12-08T08:14:38
2020-12-08T18:36:18
2020-12-08T18:36:18
The example format is a little different from SQuAD, since `thaiqa` always has one answer per question, so I added a check that converts answers to lists if they are not already, to future-proof questions that might have multiple answers. `thaiqa_squad` is an open-domain, extractive question answering dataset (4,000 questions in `train` and 74 questions in `dev`) in [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) format, originally created by [NECTEC](https://www.nectec.or.th/en/) from Wikipedia articles and adapted to [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) format by [PyThaiNLP](https://github.com/PyThaiNLP/).
cstorm125
https://github.com/huggingface/datasets/pull/1282
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1282", "html_url": "https://github.com/huggingface/datasets/pull/1282", "diff_url": "https://github.com/huggingface/datasets/pull/1282.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1282.patch", "merged_at": "2020-12-08T18:36:18" }
true
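A sketch of the answers-to-list check described in PR #1282; the function name is illustrative, not the actual loader code:

```python
def ensure_answer_list(answer):
    # thaiqa currently has exactly one answer per question; wrapping scalars in a
    # list future-proofs the SQuAD-style schema for multi-answer questions.
    return answer if isinstance(answer, list) else [answer]

assert ensure_answer_list("answer") == ["answer"]
assert ensure_answer_list(["a", "b"]) == ["a", "b"]
```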
759,203,317
1,281
adding hybrid_qa
closed
[]
2020-12-08T08:10:19
2020-12-08T18:09:28
2020-12-08T18:07:00
Adding HybridQA: A Dataset of Multi-Hop Question Answering over Tabular and Textual Data https://github.com/wenhuchen/HybridQA
patil-suraj
https://github.com/huggingface/datasets/pull/1281
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1281", "html_url": "https://github.com/huggingface/datasets/pull/1281", "diff_url": "https://github.com/huggingface/datasets/pull/1281.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1281.patch", "merged_at": "2020-12-08T18:07:00" }
true
759,151,028
1,280
disaster response messages dataset
closed
[]
2020-12-08T07:27:16
2020-12-09T16:21:57
2020-12-09T16:21:57
darshan-gandhi
https://github.com/huggingface/datasets/pull/1280
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1280", "html_url": "https://github.com/huggingface/datasets/pull/1280", "diff_url": "https://github.com/huggingface/datasets/pull/1280.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1280.patch", "merged_at": "2020-12-09T16:21:57" }
true
759,108,726
1,279
added para_pat
closed
[]
2020-12-08T06:28:47
2020-12-14T13:41:17
2020-12-14T13:41:17
Dataset link: https://figshare.com/articles/ParaPat_The_Multi-Million_Sentences_Parallel_Corpus_of_Patents_Abstracts/12627632. Currently working on the README.md.
bhavitvyamalik
https://github.com/huggingface/datasets/pull/1279
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1279", "html_url": "https://github.com/huggingface/datasets/pull/1279", "diff_url": "https://github.com/huggingface/datasets/pull/1279.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1279.patch", "merged_at": "2020-12-14T13:41:17" }
true
758,988,465
1,278
Craigslist bargains
closed
[]
2020-12-08T01:45:55
2020-12-09T00:46:15
2020-12-09T00:46:15
`craigslist_bargains` dataset from [here](https://worksheets.codalab.org/worksheets/0x453913e76b65495d8b9730d41c7e0a0c/)
ZacharySBrown
https://github.com/huggingface/datasets/pull/1278
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1278", "html_url": "https://github.com/huggingface/datasets/pull/1278", "diff_url": "https://github.com/huggingface/datasets/pull/1278.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1278.patch", "merged_at": null }
true
758,965,936
1,276
add One Million Posts Corpus
closed
[]
2020-12-08T00:50:08
2020-12-11T18:28:18
2020-12-11T18:28:18
- **Name:** One Million Posts Corpus
- **Description:** The “One Million Posts” corpus is an annotated data set consisting of user comments posted to an Austrian newspaper website (in German language).
- **Paper:** https://dl.acm.org/doi/10.1145/3077136.3080711
- **Data:** https://github.com/OFAI/million-post-corpus
- **Motivation:** Big German (real-life) dataset containing different annotations around forum moderation with expert annotations.

### Checkbox

- [X] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template
- [X] Fill the `_DESCRIPTION` and `_CITATION` variables
- [X] Implement `_infos()`, `_split_generators()` and `_generate_examples()`
- [X] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class.
- [X] Generate the metadata file `dataset_infos.json` for all configurations
- [X] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB)
- [X] Add the dataset card `README.md` using the template: fill the tags and the various paragraphs
- [X] Both tests for the real data and the dummy data pass.
aseifert
https://github.com/huggingface/datasets/pull/1276
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1276", "html_url": "https://github.com/huggingface/datasets/pull/1276", "diff_url": "https://github.com/huggingface/datasets/pull/1276.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1276.patch", "merged_at": "2020-12-11T18:28:18" }
true
758,958,066
1,275
Yoruba GV NER added
closed
[]
2020-12-08T00:31:38
2020-12-08T23:25:28
2020-12-08T23:25:28
I just added the Yoruba GV NER dataset from this paper: https://www.aclweb.org/anthology/2020.lrec-1.335/
dadelani
https://github.com/huggingface/datasets/pull/1275
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1275", "html_url": "https://github.com/huggingface/datasets/pull/1275", "diff_url": "https://github.com/huggingface/datasets/pull/1275.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1275.patch", "merged_at": null }
true
758,943,174
1,274
oclar-dataset
closed
[]
2020-12-07T23:56:45
2020-12-09T15:36:08
2020-12-09T15:36:08
The Opinion Corpus for Lebanese Arabic Reviews (OCLAR) is usable for Arabic sentiment classification on reviews, including hotels, restaurants, shops, and others: [homepage](http://archive.ics.uci.edu/ml/datasets/Opinion+Corpus+for+Lebanese+Arabic+Reviews+%28OCLAR%29#)
alaameloh
https://github.com/huggingface/datasets/pull/1274
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1274", "html_url": "https://github.com/huggingface/datasets/pull/1274", "diff_url": "https://github.com/huggingface/datasets/pull/1274.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1274.patch", "merged_at": "2020-12-09T15:36:08" }
true
758,935,768
1,273
Created wiki_movies dataset.
closed
[]
2020-12-07T23:38:54
2020-12-14T13:56:49
2020-12-14T13:56:49
First PR (ever). Hopefully this movies dataset is useful to others!
aclifton314
https://github.com/huggingface/datasets/pull/1273
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1273", "html_url": "https://github.com/huggingface/datasets/pull/1273", "diff_url": "https://github.com/huggingface/datasets/pull/1273.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1273.patch", "merged_at": null }
true
758,924,960
1,272
Psc
closed
[]
2020-12-07T23:19:36
2020-12-07T23:48:05
2020-12-07T23:47:48
abecadel
https://github.com/huggingface/datasets/pull/1272
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1272", "html_url": "https://github.com/huggingface/datasets/pull/1272", "diff_url": "https://github.com/huggingface/datasets/pull/1272.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1272.patch", "merged_at": null }
true
758,924,203
1,271
SMS Spam Dataset
closed
[]
2020-12-07T23:18:06
2020-12-08T17:42:19
2020-12-08T17:42:19
Hi :) I added this [SMS Spam Dataset](http://archive.ics.uci.edu/ml/datasets/SMS+Spam+Collection)
czabo
https://github.com/huggingface/datasets/pull/1271
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1271", "html_url": "https://github.com/huggingface/datasets/pull/1271", "diff_url": "https://github.com/huggingface/datasets/pull/1271.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1271.patch", "merged_at": "2020-12-08T17:42:19" }
true
758,917,216
1,270
add DFKI SmartData Corpus
closed
[]
2020-12-07T23:03:48
2020-12-08T17:41:23
2020-12-08T17:41:23
- **Name:** DFKI SmartData Corpus
- **Description:** DFKI SmartData Corpus is a dataset of 2598 German-language documents which has been annotated with fine-grained geo-entities, such as streets, stops and routes, as well as standard named entity types.
- **Paper:** https://www.dfki.de/fileadmin/user_upload/import/9427_lrec_smartdata_corpus.pdf
- **Data:** https://github.com/DFKI-NLP/smartdata-corpus
- **Motivation:** Contains fine-grained NER labels for German.

### Checkbox

- [X] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template
- [X] Fill the `_DESCRIPTION` and `_CITATION` variables
- [X] Implement `_infos()`, `_split_generators()` and `_generate_examples()`
- [X] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class.
- [X] Generate the metadata file `dataset_infos.json` for all configurations
- [X] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB)
- [X] Add the dataset card `README.md` using the template: fill the tags and the various paragraphs
- [X] Both tests for the real data and the dummy data pass.
aseifert
https://github.com/huggingface/datasets/pull/1270
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1270", "html_url": "https://github.com/huggingface/datasets/pull/1270", "diff_url": "https://github.com/huggingface/datasets/pull/1270.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1270.patch", "merged_at": "2020-12-08T17:41:23" }
true
758,886,174
1,269
Adding OneStopEnglish corpus dataset
closed
[]
2020-12-07T22:05:11
2020-12-09T18:43:38
2020-12-09T15:33:53
This PR adds the OneStopEnglish Corpus, containing texts classified into reading levels (elementary, intermediate, advanced) for automatic readability assessment and text simplification. Link to the paper: https://www.aclweb.org/anthology/W18-0535.pdf
purvimisal
https://github.com/huggingface/datasets/pull/1269
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1269", "html_url": "https://github.com/huggingface/datasets/pull/1269", "diff_url": "https://github.com/huggingface/datasets/pull/1269.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1269.patch", "merged_at": "2020-12-09T15:33:53" }
true
758,871,252
1,268
new pr for Turkish NER
closed
[]
2020-12-07T21:40:26
2020-12-09T13:45:05
2020-12-09T13:45:05
merveenoyan
https://github.com/huggingface/datasets/pull/1268
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1268", "html_url": "https://github.com/huggingface/datasets/pull/1268", "diff_url": "https://github.com/huggingface/datasets/pull/1268.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1268.patch", "merged_at": "2020-12-09T13:45:05" }
true
758,826,568
1,267
Has part
closed
[]
2020-12-07T20:32:03
2020-12-11T18:25:42
2020-12-11T18:25:42
jeromeku
https://github.com/huggingface/datasets/pull/1267
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1267", "html_url": "https://github.com/huggingface/datasets/pull/1267", "diff_url": "https://github.com/huggingface/datasets/pull/1267.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1267.patch", "merged_at": "2020-12-11T18:25:42" }
true
758,704,178
1,266
removing unzipped hansards dummy data
closed
[]
2020-12-07T17:31:16
2020-12-07T17:32:29
2020-12-07T17:32:29
which were added by mistake
yjernite
https://github.com/huggingface/datasets/pull/1266
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1266", "html_url": "https://github.com/huggingface/datasets/pull/1266", "diff_url": "https://github.com/huggingface/datasets/pull/1266.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1266.patch", "merged_at": "2020-12-07T17:32:28" }
true
758,687,223
1,265
Add CovidQA dataset
closed
[]
2020-12-07T17:06:51
2020-12-08T17:02:26
2020-12-08T17:02:26
This PR adds CovidQA, a question answering dataset specifically designed for COVID-19, built by hand from knowledge gathered from Kaggle’s COVID-19 Open Research Dataset Challenge. Link to the paper: https://arxiv.org/pdf/2004.11339.pdf Link to the homepage: https://covidqa.ai
olinguyen
https://github.com/huggingface/datasets/pull/1265
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1265", "html_url": "https://github.com/huggingface/datasets/pull/1265", "diff_url": "https://github.com/huggingface/datasets/pull/1265.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1265.patch", "merged_at": "2020-12-08T17:02:26" }
true
758,686,474
1,264
enriched webnlg dataset rebase
closed
[]
2020-12-07T17:05:45
2020-12-09T17:00:29
2020-12-09T17:00:27
Rebase of #1206!
TevenLeScao
https://github.com/huggingface/datasets/pull/1264
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1264", "html_url": "https://github.com/huggingface/datasets/pull/1264", "diff_url": "https://github.com/huggingface/datasets/pull/1264.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1264.patch", "merged_at": "2020-12-09T17:00:27" }
true
758,663,787
1,263
Added kannada news headlines classification dataset.
closed
[]
2020-12-07T16:35:37
2020-12-10T14:30:55
2020-12-09T18:01:31
Manual download of a Kaggle dataset. Mostly followed the same process as ms_terms.
vrindaprabhu
https://github.com/huggingface/datasets/pull/1263
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1263", "html_url": "https://github.com/huggingface/datasets/pull/1263", "diff_url": "https://github.com/huggingface/datasets/pull/1263.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1263.patch", "merged_at": "2020-12-09T18:01:31" }
true
758,637,124
1,262
Adding msr_genomics_kbcomp dataset
closed
[]
2020-12-07T16:01:30
2020-12-08T18:08:55
2020-12-08T18:08:47
manandey
https://github.com/huggingface/datasets/pull/1262
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1262", "html_url": "https://github.com/huggingface/datasets/pull/1262", "diff_url": "https://github.com/huggingface/datasets/pull/1262.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1262.patch", "merged_at": null }
true
758,626,112
1,261
Add Google Sentence Compression dataset
closed
[]
2020-12-07T15:47:43
2020-12-08T17:01:59
2020-12-08T17:01:59
For more information: https://www.aclweb.org/anthology/D13-1155.pdf
mattbui
https://github.com/huggingface/datasets/pull/1261
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1261", "html_url": "https://github.com/huggingface/datasets/pull/1261", "diff_url": "https://github.com/huggingface/datasets/pull/1261.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1261.patch", "merged_at": "2020-12-08T17:01:59" }
true
758,601,828
1,260
Added NewsPH Raw Dataset
closed
[]
2020-12-07T15:17:53
2020-12-08T16:27:15
2020-12-08T16:27:15
Added the raw version of the NewsPH dataset, which was used to automatically generate the NewsPH-NLI corpus. It is a dataset of news articles in Filipino from mainstream Philippine news sites, usable as a language modeling corpus or to reproduce the NewsPH-NLI dataset. Paper: https://arxiv.org/abs/2010.11574 Repo: https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks
jcblaisecruz02
https://github.com/huggingface/datasets/pull/1260
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1260", "html_url": "https://github.com/huggingface/datasets/pull/1260", "diff_url": "https://github.com/huggingface/datasets/pull/1260.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1260.patch", "merged_at": null }
true
758,565,320
1,259
Add KorQPair dataset
closed
[]
2020-12-07T14:33:57
2021-12-29T00:49:40
2020-12-08T15:11:41
This PR adds a [Korean paired question dataset](https://github.com/songys/Question_pair) containing labels indicating whether the two questions in a given pair are semantically identical. This dataset was used to evaluate the performance of [KoGPT2](https://github.com/SKT-AI/KoGPT2#subtask-evaluations) on a paraphrase detection downstream task.
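A hedged usage sketch; the script name `kor_qpair` and the field names are assumptions based on the upstream repo, not guarantees:

```python
from datasets import load_dataset

# Script and field names are assumptions for illustration.
ds = load_dataset("kor_qpair", split="train")

pair = ds[0]
# is_duplicate marks whether the two questions are semantically identical.
print(pair["question1"], "|", pair["question2"], "->", pair["is_duplicate"])
```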
jaketae
https://github.com/huggingface/datasets/pull/1259
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1259", "html_url": "https://github.com/huggingface/datasets/pull/1259", "diff_url": "https://github.com/huggingface/datasets/pull/1259.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1259.patch", "merged_at": "2020-12-08T15:11:41" }
true
758,557,169
1,258
arXiv dataset added
closed
[]
2020-12-07T14:23:33
2020-12-08T14:07:15
2020-12-08T14:07:15
tanmoyio
https://github.com/huggingface/datasets/pull/1258
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1258", "html_url": "https://github.com/huggingface/datasets/pull/1258", "diff_url": "https://github.com/huggingface/datasets/pull/1258.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1258.patch", "merged_at": null }
true
758,550,490
1,257
Add Swahili news classification dataset
closed
[]
2020-12-07T14:15:13
2020-12-08T14:44:19
2020-12-08T14:44:19
Add Swahili news classification dataset
yvonnegitau
https://github.com/huggingface/datasets/pull/1257
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1257", "html_url": "https://github.com/huggingface/datasets/pull/1257", "diff_url": "https://github.com/huggingface/datasets/pull/1257.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1257.patch", "merged_at": "2020-12-08T14:44:19" }
true
758,531,980
1,256
adding LiMiT dataset
closed
[]
2020-12-07T14:00:41
2020-12-08T14:58:28
2020-12-08T14:42:51
Adding LiMiT: The Literal Motion in Text Dataset https://github.com/ilmgut/limit_dataset
patil-suraj
https://github.com/huggingface/datasets/pull/1256
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1256", "html_url": "https://github.com/huggingface/datasets/pull/1256", "diff_url": "https://github.com/huggingface/datasets/pull/1256.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1256.patch", "merged_at": "2020-12-08T14:42:51" }
true
758,530,243
1,255
[doc] nlp/viewer ➡️datasets/viewer
closed
[]
2020-12-07T13:58:41
2020-12-08T17:17:54
2020-12-08T17:17:53
cc @srush
julien-c
https://github.com/huggingface/datasets/pull/1255
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1255", "html_url": "https://github.com/huggingface/datasets/pull/1255", "diff_url": "https://github.com/huggingface/datasets/pull/1255.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1255.patch", "merged_at": "2020-12-08T17:17:53" }
true
758,518,774
1,254
Added WikiText-TL-39
closed
[]
2020-12-07T13:43:48
2020-12-08T16:00:58
2020-12-08T16:00:58
This PR adds the WikiText-TL-39 Filipino Language Modeling dataset. Paper: https://arxiv.org/abs/1907.00409 Repo: https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks
jcblaisecruz02
https://github.com/huggingface/datasets/pull/1254
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1254", "html_url": "https://github.com/huggingface/datasets/pull/1254", "diff_url": "https://github.com/huggingface/datasets/pull/1254.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1254.patch", "merged_at": null }
true
758,517,391
1,253
add thainer
closed
[]
2020-12-07T13:41:54
2020-12-08T14:44:49
2020-12-08T14:44:49
ThaiNER (v1.3) is a 6,456-sentence named entity recognition dataset created by expanding the 2,258-sentence [unnamed dataset](http://pioneer.chula.ac.th/~awirote/Data-Nutcha.zip) by [Tirasaroj and Aroonmanakun (2012)](http://pioneer.chula.ac.th/~awirote/publications/). It is used to train NER taggers in [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp). The NER tags were annotated by [Tirasaroj and Aroonmanakun (2012)](http://pioneer.chula.ac.th/~awirote/publications/) for 2,258 sentences and by [@wannaphong](https://github.com/wannaphong/) for the rest. The POS tags were produced by [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp)'s `perceptron` engine trained on `orchid_ud`. [@wannaphong](https://github.com/wannaphong/) is now the sole maintainer of this dataset.
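A minimal loading sketch, assuming the script is registered as `thainer` (per the PR title) and exposes token/tag sequences; the feature names are illustrative:

```python
from datasets import load_dataset

# "thainer" follows the PR title; feature names are assumed.
ds = load_dataset("thainer", split="train")

example = ds[0]
# Print each token with its NER tag; POS tags would be read analogously.
for token, tag in zip(example["tokens"], example["ner_tags"]):
    print(token, tag)
```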
cstorm125
https://github.com/huggingface/datasets/pull/1253
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1253", "html_url": "https://github.com/huggingface/datasets/pull/1253", "diff_url": "https://github.com/huggingface/datasets/pull/1253.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1253.patch", "merged_at": "2020-12-08T14:44:49" }
true
758,511,388
1,252
Add Naver sentiment movie corpus
closed
[]
2020-12-07T13:33:45
2020-12-08T14:32:33
2020-12-08T14:21:37
Supersedes #1168 > This PR adds the [Naver sentiment movie corpus](https://github.com/e9t/nsmc), a dataset containing Korean movie reviews from Naver, the most commonly used search engine in Korea. This dataset is often used to benchmark models on Korean NLP tasks, as seen in [this paper](https://www.aclweb.org/anthology/2020.lrec-1.199.pdf).
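A loading sketch under the assumption that the script keeps the upstream name `nsmc` and its column names (`document`, `label`):

```python
from datasets import load_dataset

# Names follow the upstream NSMC release; treated here as assumptions.
ds = load_dataset("nsmc", split="train")

review = ds[0]
# In the upstream corpus, label 0 is negative and 1 is positive.
print(review["label"], review["document"])
```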
jaketae
https://github.com/huggingface/datasets/pull/1252
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1252", "html_url": "https://github.com/huggingface/datasets/pull/1252", "diff_url": "https://github.com/huggingface/datasets/pull/1252.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1252.patch", "merged_at": "2020-12-08T14:21:37" }
true
758,503,689
1,251
Add Wiki Atomic Edits Dataset (43M edits)
closed
[]
2020-12-07T13:23:08
2020-12-14T10:05:01
2020-12-14T10:05:00
abhishekkrthakur
https://github.com/huggingface/datasets/pull/1251
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1251", "html_url": "https://github.com/huggingface/datasets/pull/1251", "diff_url": "https://github.com/huggingface/datasets/pull/1251.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1251.patch", "merged_at": "2020-12-14T10:05:00" }
true
758,491,704
1,250
added Nergrit dataset
closed
[]
2020-12-07T13:06:12
2020-12-08T14:33:29
2020-12-08T14:33:29
Nergrit Corpus is a dataset collection for Indonesian Named Entity Recognition, Statement Extraction, and Sentiment Analysis. This PR covers only the Named Entity Recognition part.
cahya-wirawan
https://github.com/huggingface/datasets/pull/1250
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1250", "html_url": "https://github.com/huggingface/datasets/pull/1250", "diff_url": "https://github.com/huggingface/datasets/pull/1250.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1250.patch", "merged_at": "2020-12-08T14:33:29" }
true
758,472,863
1,249
Add doc2dial dataset
closed
[]
2020-12-07T12:39:09
2020-12-14T16:17:14
2020-12-14T16:17:14
### Doc2dial: A Goal-Oriented Document-Grounded Dialogue Dataset v0.9 Once complete, this will add the [Doc2dial](https://doc2dial.github.io/data.html) dataset from the generic datasets list.
KMFODA
https://github.com/huggingface/datasets/pull/1249
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1249", "html_url": "https://github.com/huggingface/datasets/pull/1249", "diff_url": "https://github.com/huggingface/datasets/pull/1249.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1249.patch", "merged_at": "2020-12-14T16:17:14" }
true
758,454,438
1,248
Update step-by-step guide about the dataset cards
closed
[]
2020-12-07T12:12:12
2020-12-07T13:19:24
2020-12-07T13:19:23
Small update to the step-by-step guide about the dataset cards to indicate that a card can be created and completed while exploring the dataset.
thomwolf
https://github.com/huggingface/datasets/pull/1248
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1248", "html_url": "https://github.com/huggingface/datasets/pull/1248", "diff_url": "https://github.com/huggingface/datasets/pull/1248.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1248.patch", "merged_at": "2020-12-07T13:19:23" }
true
758,431,640
1,247
Adding indonlu dataset
closed
[]
2020-12-07T11:38:45
2020-12-08T14:11:50
2020-12-08T14:11:50
The IndoNLU benchmark is a collection of resources for training, evaluating, and analyzing natural language understanding systems for Bahasa Indonesia. It contains 12 datasets.
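Since IndoNLU bundles 12 datasets, each would be selected through a config name; a sketch assuming a config such as `smsa` (sentiment analysis) from the upstream benchmark:

```python
from datasets import load_dataset

# "indonlu" and the "smsa" config name are assumptions based on the upstream benchmark.
ds = load_dataset("indonlu", "smsa", split="train")

# Inspect the declared schema of the chosen subset.
print(ds.features)
```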
yasirabd
https://github.com/huggingface/datasets/pull/1247
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1247", "html_url": "https://github.com/huggingface/datasets/pull/1247", "diff_url": "https://github.com/huggingface/datasets/pull/1247.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1247.patch", "merged_at": null }
true
758,418,652
1,246
arXiv dataset added
closed
[]
2020-12-07T11:20:23
2020-12-07T14:22:58
2020-12-07T14:22:58
tanmoyio
https://github.com/huggingface/datasets/pull/1246
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1246", "html_url": "https://github.com/huggingface/datasets/pull/1246", "diff_url": "https://github.com/huggingface/datasets/pull/1246.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1246.patch", "merged_at": null }
true
758,411,233
1,245
Add Google Turkish Treebank Dataset
closed
[]
2020-12-07T11:09:17
2023-09-24T09:40:49
2022-10-03T09:39:32
null
abhishekkrthakur
https://github.com/huggingface/datasets/pull/1245
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1245", "html_url": "https://github.com/huggingface/datasets/pull/1245", "diff_url": "https://github.com/huggingface/datasets/pull/1245.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1245.patch", "merged_at": null }
true
758,384,417
1,244
arxiv dataset added
closed
[]
2020-12-07T10:32:54
2020-12-07T11:04:23
2020-12-07T11:04:23
tanmoyio
https://github.com/huggingface/datasets/pull/1244
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1244", "html_url": "https://github.com/huggingface/datasets/pull/1244", "diff_url": "https://github.com/huggingface/datasets/pull/1244.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1244.patch", "merged_at": null }
true
758,378,904
1,243
Add Google Noun Verb Dataset
closed
[]
2020-12-07T10:26:05
2023-09-24T09:40:54
2022-10-03T09:39:37
null
abhishekkrthakur
https://github.com/huggingface/datasets/pull/1243
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1243", "html_url": "https://github.com/huggingface/datasets/pull/1243", "diff_url": "https://github.com/huggingface/datasets/pull/1243.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1243.patch", "merged_at": null }
true
758,370,579
1,242
adding bprec
closed
[]
2020-12-07T10:15:49
2020-12-08T14:33:49
2020-12-08T14:33:48
kldarek
https://github.com/huggingface/datasets/pull/1242
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1242", "html_url": "https://github.com/huggingface/datasets/pull/1242", "diff_url": "https://github.com/huggingface/datasets/pull/1242.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1242.patch", "merged_at": null }
true
758,360,643
1,241
Opus elhuyar dataset for MT task having languages pair in Spanish to Basque
closed
[]
2020-12-07T10:03:34
2020-12-19T14:55:12
2020-12-09T15:12:48
Opus Elhuyar dataset for the MT task, covering the Spanish-Basque language pair. More info: http://opus.nlpl.eu/Elhuyar.php
spatil6
https://github.com/huggingface/datasets/pull/1241
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1241", "html_url": "https://github.com/huggingface/datasets/pull/1241", "diff_url": "https://github.com/huggingface/datasets/pull/1241.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1241.patch", "merged_at": "2020-12-09T15:12:48" }
true
758,355,523
1,240
Multi Domain Sentiment Analysis Dataset (MDSA)
closed
[]
2020-12-07T09:57:15
2023-09-24T09:40:59
2022-10-03T09:39:43
null
abhishekkrthakur
https://github.com/huggingface/datasets/pull/1240
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1240", "html_url": "https://github.com/huggingface/datasets/pull/1240", "diff_url": "https://github.com/huggingface/datasets/pull/1240.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1240.patch", "merged_at": null }
true
758,339,593
1,239
add yelp_review_full dataset
closed
[]
2020-12-07T09:35:36
2020-12-08T15:43:24
2020-12-08T15:00:50
This corresponds to the Yelp-5 requested in https://github.com/huggingface/datasets/issues/353
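A quick sketch of the resulting Yelp-5 setup, assuming the script name `yelp_review_full` and the usual text/label columns:

```python
from datasets import load_dataset

# Yelp-5: star ratings mapped to class labels 0-4; names assumed for illustration.
ds = load_dataset("yelp_review_full", split="train")

review = ds[0]
print(review["label"], review["text"][:120])
```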
hfawaz
https://github.com/huggingface/datasets/pull/1239
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1239", "html_url": "https://github.com/huggingface/datasets/pull/1239", "diff_url": "https://github.com/huggingface/datasets/pull/1239.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1239.patch", "merged_at": null }
true
758,321,688
1,238
adding poem_sentiment
closed
[]
2020-12-07T09:11:52
2020-12-09T16:36:10
2020-12-09T16:02:45
Adding poem_sentiment dataset. https://github.com/google-research-datasets/poem-sentiment
patil-suraj
https://github.com/huggingface/datasets/pull/1238
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1238", "html_url": "https://github.com/huggingface/datasets/pull/1238", "diff_url": "https://github.com/huggingface/datasets/pull/1238.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1238.patch", "merged_at": "2020-12-09T16:02:45" }
true
758,318,353
1,237
Add AmbigQA dataset
closed
[]
2020-12-07T09:07:19
2020-12-08T13:38:52
2020-12-08T13:38:52
# AmbigQA: Answering Ambiguous Open-domain Questions Dataset

Adding the [AmbigQA](https://nlp.cs.washington.edu/ambigqa/) dataset as part of the sprint 🎉 (from the open dataset list for the dataset sprint). Added both the light and full versions (as seen on the dataset homepage).

The JSON format changes based on the value of one 'type' field, so I set the unavailable field to an empty list. This is explained in the README -> Data Fields.

```py
from datasets import load_dataset

train_light_dataset = load_dataset('./datasets/ambig_qa', "light", split="train")
val_light_dataset = load_dataset('./datasets/ambig_qa', "light", split="validation")
train_full_dataset = load_dataset('./datasets/ambig_qa', "full", split="train")
val_full_dataset = load_dataset('./datasets/ambig_qa', "full", split="validation")

for example in train_light_dataset:
    for i, t in enumerate(example['annotations']['type']):
        if t == 'singleAnswer':
            # use example['annotations']['answer'][i]
            # example['annotations']['qaPairs'][i] is []
            print(example['annotations']['answer'][i])
        else:
            # use example['annotations']['qaPairs'][i]
            # example['annotations']['answer'][i] is []
            print(example['annotations']['qaPairs'][i])
```

- [x] All tests passed
- [x] Added dummy data
- [x] Added data card (as much as I could)
cceyda
https://github.com/huggingface/datasets/pull/1237
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1237", "html_url": "https://github.com/huggingface/datasets/pull/1237", "diff_url": "https://github.com/huggingface/datasets/pull/1237.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1237.patch", "merged_at": "2020-12-08T13:38:52" }
true
758,263,012
1,236
Opus finlex dataset of language pair Finnish and Swedish
closed
[]
2020-12-07T07:53:57
2020-12-08T13:30:33
2020-12-08T13:30:33
Added the Opus_finlex dataset, covering the Finnish-Swedish language pair. More info: http://opus.nlpl.eu/Finlex.php
spatil6
https://github.com/huggingface/datasets/pull/1236
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1236", "html_url": "https://github.com/huggingface/datasets/pull/1236", "diff_url": "https://github.com/huggingface/datasets/pull/1236.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1236.patch", "merged_at": "2020-12-08T13:30:33" }
true
758,234,511
1,235
Wino bias
closed
[]
2020-12-07T07:12:42
2020-12-10T20:48:12
2020-12-10T20:48:01
This PR will fail the CircleCI tests because the dataset requires manual data loading. Fresh PR because the previous one had a messed-up history.
akshayb7
https://github.com/huggingface/datasets/pull/1235
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1235", "html_url": "https://github.com/huggingface/datasets/pull/1235", "diff_url": "https://github.com/huggingface/datasets/pull/1235.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1235.patch", "merged_at": null }
true
758,229,304
1,234
Added ade_corpus_v2, with 3 configs for relation extraction and classification task
closed
[]
2020-12-07T07:05:14
2020-12-14T17:49:14
2020-12-14T17:49:14
Adverse Drug Reaction Data: added the ADE-Corpus-V2 dataset, with configs for the different tasks covered by the data.
Nilanshrajput
https://github.com/huggingface/datasets/pull/1234
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1234", "html_url": "https://github.com/huggingface/datasets/pull/1234", "diff_url": "https://github.com/huggingface/datasets/pull/1234.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1234.patch", "merged_at": "2020-12-14T17:49:14" }
true
758,188,699
1,233
Add Curiosity Dialogs Dataset
closed
[]
2020-12-07T06:01:00
2020-12-20T13:34:09
2020-12-09T14:50:29
Add Facebook [Curiosity Dialogs](https://github.com/facebookresearch/curiosity) Dataset.
vineeths96
https://github.com/huggingface/datasets/pull/1233
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1233", "html_url": "https://github.com/huggingface/datasets/pull/1233", "diff_url": "https://github.com/huggingface/datasets/pull/1233.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1233.patch", "merged_at": "2020-12-09T14:50:29" }
true
758,180,669
1,232
Add Grail QA dataset
closed
[]
2020-12-07T05:46:45
2020-12-08T13:03:19
2020-12-08T13:03:19
For more information: https://dki-lab.github.io/GrailQA/
mattbui
https://github.com/huggingface/datasets/pull/1232
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1232", "html_url": "https://github.com/huggingface/datasets/pull/1232", "diff_url": "https://github.com/huggingface/datasets/pull/1232.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1232.patch", "merged_at": "2020-12-08T13:03:19" }
true
758,121,398
1,231
Add Urdu Sentiment Corpus (USC)
closed
[]
2020-12-07T03:25:20
2020-12-07T18:05:16
2020-12-07T16:43:23
@lhoestq I opened a clean PR containing only the relevant files. Old PR: #1140
chaitnayabasava
https://github.com/huggingface/datasets/pull/1231
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1231", "html_url": "https://github.com/huggingface/datasets/pull/1231", "diff_url": "https://github.com/huggingface/datasets/pull/1231.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1231.patch", "merged_at": "2020-12-07T16:43:23" }
true
758,119,342
1,230
Add Urdu fake news dataset
closed
[]
2020-12-07T03:19:50
2020-12-07T18:04:55
2020-12-07T16:57:54
@lhoestq I opened a clean PR containing only the relevant files. Old PR: #1125
chaitnayabasava
https://github.com/huggingface/datasets/pull/1230
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1230", "html_url": "https://github.com/huggingface/datasets/pull/1230", "diff_url": "https://github.com/huggingface/datasets/pull/1230.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1230.patch", "merged_at": "2020-12-07T16:57:54" }
true
758,100,707
1,229
Muchocine - Spanish movie reviews dataset
closed
[]
2020-12-07T02:23:29
2020-12-21T10:09:09
2020-12-21T10:09:09
mapmeld
https://github.com/huggingface/datasets/pull/1229
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1229", "html_url": "https://github.com/huggingface/datasets/pull/1229", "diff_url": "https://github.com/huggingface/datasets/pull/1229.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1229.patch", "merged_at": "2020-12-21T10:09:09" }
true