| column | type | stats |
| --- | --- | --- |
| id | int64 | 599M to 3.48B |
| number | int64 | 1 to 7.8k |
| title | string | lengths 1 to 290 |
| state | string | 2 classes |
| comments | list | lengths 0 to 30 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 to 2025-10-05 06:37:50 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 to 2025-10-05 10:32:43 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 to 2025-10-01 13:56:03 |
| body | string | lengths 0 to 228k |
| user | string | lengths 3 to 26 |
| html_url | string | lengths 46 to 51 |
| pull_request | dict | |
| is_pull_request | bool | 2 classes |
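The schema distinguishes pull requests from plain issues only through the `pull_request` field: for PRs it is a dict whose `merged_at` is null when the PR was closed without merging, and for plain issues the field itself is null. A minimal sketch of that distinction, using three records copied from the rows below (the helper function and its name are illustrative, not part of any library API):

```python
# Three records copied from the dump below; only the fields needed here are kept.
records = [
    {
        "number": 1028,
        "title": "Add ASSET dataset for text simplification evaluation",
        "state": "closed",
        "is_pull_request": True,
        "pull_request": {"merged_at": "2020-12-03T16:34:37"},
    },
    {
        "number": 1025,
        "title": "Add Sesotho Ner",
        "state": "closed",
        "is_pull_request": True,
        "pull_request": {"merged_at": None},  # PR closed without merging
    },
    {
        "number": 1004,
        "title": "how large datasets are handled under the hood",
        "state": "closed",
        "is_pull_request": False,
        "pull_request": None,  # plain issue: no pull_request dict at all
    },
]

def merged_pr_numbers(rows):
    """A closed PR counts as merged only when pull_request["merged_at"] is set."""
    return [
        r["number"]
        for r in rows
        if r["is_pull_request"] and (r["pull_request"] or {}).get("merged_at")
    ]

print(merged_pr_numbers(records))  # [1028]
```

Note that `state == "closed"` alone is not enough to tell merged from rejected PRs: #1025 below is closed but its `merged_at` is null.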
755,712,854
1,028
Add ASSET dataset for text simplification evaluation
closed
[]
2020-12-03T00:28:29
2020-12-17T10:03:06
2020-12-03T16:34:37
Adding the ASSET dataset from https://github.com/facebookresearch/asset
One config for the simplification data, one for the human ratings of quality.
The README.md borrows from that written by @juand-r
yjernite
https://github.com/huggingface/datasets/pull/1028
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1028", "html_url": "https://github.com/huggingface/datasets/pull/1028", "diff_url": "https://github.com/huggingface/datasets/pull/1028.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1028.patch", "merged_at": "2020-12-03T16:34:37" }
true
755,695,420
1,027
Hi
closed
[]
2020-12-02T23:47:14
2020-12-03T16:42:41
2020-12-03T16:42:41
## Adding a Dataset

- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons to have this dataset*

Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
suemori87
https://github.com/huggingface/datasets/issues/1027
null
false
755,689,195
1,026
Lío o
closed
[]
2020-12-02T23:32:25
2020-12-03T16:42:47
2020-12-03T16:42:47
````l````````` ``` O ``` ````` Ño ``` ```` ```
ghost
https://github.com/huggingface/datasets/issues/1026
null
false
755,673,371
1,025
Add Sesotho Ner
closed
[]
2020-12-02T23:00:15
2020-12-16T16:27:03
2020-12-16T16:27:02
yvonnegitau
https://github.com/huggingface/datasets/pull/1025
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1025", "html_url": "https://github.com/huggingface/datasets/pull/1025", "diff_url": "https://github.com/huggingface/datasets/pull/1025.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1025.patch", "merged_at": null }
true
755,664,113
1,024
Add ZEST: ZEroShot learning from Task descriptions
closed
[]
2020-12-02T22:41:20
2020-12-03T19:21:00
2020-12-03T16:09:15
Adds the ZEST dataset on zero-shot learning from task descriptions from AI2.

- Webpage: https://allenai.org/data/zest
- Paper: https://arxiv.org/abs/2011.08115

The nature of this dataset made the supported task tags tricky, so I wouldn't mind any feedback @yjernite. Also let me know if you think we should have a `other-task-generalization` or something like that...
joeddav
https://github.com/huggingface/datasets/pull/1024
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1024", "html_url": "https://github.com/huggingface/datasets/pull/1024", "diff_url": "https://github.com/huggingface/datasets/pull/1024.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1024.patch", "merged_at": "2020-12-03T16:09:14" }
true
755,655,752
1,023
Add Schema Guided Dialogue dataset
closed
[]
2020-12-02T22:26:01
2020-12-03T01:18:01
2020-12-03T01:18:01
This PR adds the Schema Guided Dialogue dataset created for the DSTC8 challenge - https://github.com/google-research-datasets/dstc8-schema-guided-dialogue A bit simpler than MultiWOZ, the only tricky thing is the sequence of dictionaries that had to be linearized. There is a config for the data proper, and a config for the schemas.
yjernite
https://github.com/huggingface/datasets/pull/1023
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1023", "html_url": "https://github.com/huggingface/datasets/pull/1023", "diff_url": "https://github.com/huggingface/datasets/pull/1023.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1023.patch", "merged_at": "2020-12-03T01:18:01" }
true
755,651,377
1,022
add MRQA
closed
[]
2020-12-02T22:17:56
2020-12-04T00:34:26
2020-12-04T00:34:25
MRQA (shared task 2019)
Out of distribution generalization
Framed as extractive question answering
Dataset is the concatenation (of subsets) of existing QA datasets processed to match the SQuAD format
VictorSanh
https://github.com/huggingface/datasets/pull/1022
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1022", "html_url": "https://github.com/huggingface/datasets/pull/1022", "diff_url": "https://github.com/huggingface/datasets/pull/1022.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1022.patch", "merged_at": "2020-12-04T00:34:24" }
true
755,644,559
1,021
Add Gutenberg time references dataset
closed
[]
2020-12-02T22:05:26
2020-12-03T10:33:39
2020-12-03T10:33:38
This PR adds the gutenberg_time dataset: https://arxiv.org/abs/2011.04124
TevenLeScao
https://github.com/huggingface/datasets/pull/1021
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1021", "html_url": "https://github.com/huggingface/datasets/pull/1021", "diff_url": "https://github.com/huggingface/datasets/pull/1021.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1021.patch", "merged_at": "2020-12-03T10:33:38" }
true
755,601,450
1,020
Add Setswana NER
closed
[]
2020-12-02T20:52:07
2020-12-03T14:56:14
2020-12-03T14:56:14
yvonnegitau
https://github.com/huggingface/datasets/pull/1020
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1020", "html_url": "https://github.com/huggingface/datasets/pull/1020", "diff_url": "https://github.com/huggingface/datasets/pull/1020.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1020.patch", "merged_at": "2020-12-03T14:56:14" }
true
755,582,090
1,019
Add caWaC dataset
closed
[]
2020-12-02T20:18:55
2020-12-03T14:47:09
2020-12-03T14:47:09
Add dataset.
albertvillanova
https://github.com/huggingface/datasets/pull/1019
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1019", "html_url": "https://github.com/huggingface/datasets/pull/1019", "diff_url": "https://github.com/huggingface/datasets/pull/1019.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1019.patch", "merged_at": "2020-12-03T14:47:09" }
true
755,570,882
1,018
Add Sepedi NER
closed
[]
2020-12-02T20:01:05
2020-12-03T21:47:03
2020-12-03T21:46:38
This is a new branch created for this dataset
yvonnegitau
https://github.com/huggingface/datasets/pull/1018
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1018", "html_url": "https://github.com/huggingface/datasets/pull/1018", "diff_url": "https://github.com/huggingface/datasets/pull/1018.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1018.patch", "merged_at": null }
true
755,558,175
1,017
Specify file encoding
closed
[]
2020-12-02T19:40:45
2020-12-03T00:44:25
2020-12-03T00:44:25
If not specified, Python uses system default, which for Windows is not "utf-8".
albertvillanova
https://github.com/huggingface/datasets/pull/1017
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1017", "html_url": "https://github.com/huggingface/datasets/pull/1017", "diff_url": "https://github.com/huggingface/datasets/pull/1017.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1017.patch", "merged_at": "2020-12-03T00:44:25" }
true
755,521,862
1,016
Add CLINC150 dataset
closed
[]
2020-12-02T18:44:30
2020-12-03T10:32:04
2020-12-03T10:32:04
Added CLINC150 Dataset. The link to the dataset can be found [here](https://github.com/clinc/oos-eval) and the paper can be found [here](https://www.aclweb.org/anthology/D19-1131.pdf)

- [x] Followed the instructions in CONTRIBUTING.md
- [x] Ran the tests successfully
- [x] Created the dummy data
sumanthd17
https://github.com/huggingface/datasets/pull/1016
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1016", "html_url": "https://github.com/huggingface/datasets/pull/1016", "diff_url": "https://github.com/huggingface/datasets/pull/1016.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1016.patch", "merged_at": "2020-12-03T10:32:04" }
true
755,508,841
1,015
add hard dataset
closed
[]
2020-12-02T18:27:36
2020-12-03T15:03:54
2020-12-03T15:03:54
Hotel Reviews in Arabic language.
zaidalyafeai
https://github.com/huggingface/datasets/pull/1015
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1015", "html_url": "https://github.com/huggingface/datasets/pull/1015", "diff_url": "https://github.com/huggingface/datasets/pull/1015.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1015.patch", "merged_at": "2020-12-03T15:03:54" }
true
755,505,851
1,014
Add SciTLDR Dataset (Take 2)
closed
[]
2020-12-02T18:22:50
2020-12-02T18:55:10
2020-12-02T18:37:58
Adds the SciTLDR Dataset by AI2
Added the `README.md` card with tags to the best of my knowledge
Multi-target summaries or TLDRs of Scientific Documents
Continued from #986
bharatr21
https://github.com/huggingface/datasets/pull/1014
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1014", "html_url": "https://github.com/huggingface/datasets/pull/1014", "diff_url": "https://github.com/huggingface/datasets/pull/1014.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1014.patch", "merged_at": "2020-12-02T18:37:58" }
true
755,493,075
1,013
Adding CS restaurants dataset
closed
[]
2020-12-02T18:02:30
2020-12-02T18:25:20
2020-12-02T18:25:19
This PR adds the CS restaurants dataset; this is a re-opening of a previous PR with a chaotic commit history.
TevenLeScao
https://github.com/huggingface/datasets/pull/1013
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1013", "html_url": "https://github.com/huggingface/datasets/pull/1013", "diff_url": "https://github.com/huggingface/datasets/pull/1013.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1013.patch", "merged_at": "2020-12-02T18:25:19" }
true
755,485,658
1,012
Adding Evidence Inference Data:
closed
[]
2020-12-02T17:51:35
2020-12-03T15:04:46
2020-12-03T15:04:46
http://evidence-inference.ebm-nlp.com/download/ https://arxiv.org/pdf/2005.04177.pdf
Narsil
https://github.com/huggingface/datasets/pull/1012
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1012", "html_url": "https://github.com/huggingface/datasets/pull/1012", "diff_url": "https://github.com/huggingface/datasets/pull/1012.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1012.patch", "merged_at": "2020-12-03T15:04:46" }
true
755,463,726
1,011
Add Bilingual Corpus of Arabic-English Parallel Tweets
closed
[]
2020-12-02T17:20:02
2020-12-04T14:45:10
2020-12-04T14:44:33
Added Bilingual Corpus of Arabic-English Parallel Tweets. The link to the dataset can be found [here](https://alt.qcri.org/wp-content/uploads/2020/08/Bilingual-Corpus-of-Arabic-English-Parallel-Tweets.zip) and the paper can be found [here](https://www.aclweb.org/anthology/2020.bucc-1.3.pdf)

- [x] Followed the instructions in CONTRIBUTING.md
- [x] Ran the tests successfully
- [x] Created the dummy data
sumanthd17
https://github.com/huggingface/datasets/pull/1011
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1011", "html_url": "https://github.com/huggingface/datasets/pull/1011", "diff_url": "https://github.com/huggingface/datasets/pull/1011.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1011.patch", "merged_at": "2020-12-04T14:44:33" }
true
755,432,143
1,010
Add NoReC: Norwegian Review Corpus
closed
[]
2020-12-02T16:38:29
2021-02-18T14:47:29
2021-02-18T14:47:28
abhishekkrthakur
https://github.com/huggingface/datasets/pull/1010
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1010", "html_url": "https://github.com/huggingface/datasets/pull/1010", "diff_url": "https://github.com/huggingface/datasets/pull/1010.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1010.patch", "merged_at": "2021-02-18T14:47:28" }
true
755,384,433
1,009
Adding C3 dataset: the first free-form multiple-Choice Chinese machine reading Comprehension dataset.
closed
[]
2020-12-02T15:40:36
2020-12-03T13:16:30
2020-12-03T13:16:29
https://github.com/nlpdata/c3 https://arxiv.org/abs/1904.09679
Narsil
https://github.com/huggingface/datasets/pull/1009
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1009", "html_url": "https://github.com/huggingface/datasets/pull/1009", "diff_url": "https://github.com/huggingface/datasets/pull/1009.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1009.patch", "merged_at": "2020-12-03T13:16:29" }
true
755,372,798
1,008
Adding C3 dataset: the first free-form multiple-Choice Chinese machine reading Comprehension dataset. https://github.com/nlpdata/c3 https://arxiv.org/abs/1904.09679
closed
[]
2020-12-02T15:28:05
2020-12-02T15:40:55
2020-12-02T15:40:55
null
Narsil
https://github.com/huggingface/datasets/pull/1008
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1008", "html_url": "https://github.com/huggingface/datasets/pull/1008", "diff_url": "https://github.com/huggingface/datasets/pull/1008.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1008.patch", "merged_at": null }
true
755,364,078
1,007
Include license file in source distribution
closed
[]
2020-12-02T15:17:43
2020-12-02T17:58:05
2020-12-02T17:58:05
It would be helpful to include the license file in the source distribution.
synapticarbors
https://github.com/huggingface/datasets/pull/1007
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1007", "html_url": "https://github.com/huggingface/datasets/pull/1007", "diff_url": "https://github.com/huggingface/datasets/pull/1007.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1007.patch", "merged_at": "2020-12-02T17:58:05" }
true
755,362,766
1,006
add yahoo_answers_topics
closed
[]
2020-12-02T15:16:13
2020-12-03T16:44:38
2020-12-02T18:01:32
This PR adds yahoo answers topic classification dataset. More info: https://github.com/LC-John/Yahoo-Answers-Topic-Classification-Dataset cc @joeddav, @yjernite
patil-suraj
https://github.com/huggingface/datasets/pull/1006
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1006", "html_url": "https://github.com/huggingface/datasets/pull/1006", "diff_url": "https://github.com/huggingface/datasets/pull/1006.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1006.patch", "merged_at": "2020-12-02T18:01:32" }
true
755,337,255
1,005
Adding Autshumato South african langages:
closed
[]
2020-12-02T14:47:33
2020-12-03T13:13:30
2020-12-03T13:13:30
https://repo.sadilar.org/handle/20.500.12185/7/discover?filtertype=database&filter_relational_operator=equals&filter=Multilingual+Text+Corpora%3A+Aligned
Narsil
https://github.com/huggingface/datasets/pull/1005
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1005", "html_url": "https://github.com/huggingface/datasets/pull/1005", "diff_url": "https://github.com/huggingface/datasets/pull/1005.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1005.patch", "merged_at": "2020-12-03T13:13:30" }
true
755,325,368
1,004
how large datasets are handled under the hood
closed
[]
2020-12-02T14:32:40
2022-10-05T12:13:29
2022-10-05T12:13:29
Hi, I want to use multiple large datasets with a map-style dataloader, where they cannot fit into memory. Could you tell me how the datasets are handled under the hood? Do you bring everything into memory in the case of map-style datasets, or is there some sharding under the hood that brings data into memory only when necessary? Thanks
rabeehkarimimahabadi
https://github.com/huggingface/datasets/issues/1004
null
false
755,310,318
1,003
Add multi_x_science_sum
closed
[]
2020-12-02T14:14:01
2020-12-02T17:39:05
2020-12-02T17:39:05
Add Multi-XScience Dataset.

- github repo: https://github.com/yaolu/Multi-XScience
- paper: [Multi-XScience: A Large-scale Dataset for Extreme Multi-document Summarization of Scientific Articles](https://arxiv.org/abs/2010.14235)
moussaKam
https://github.com/huggingface/datasets/pull/1003
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1003", "html_url": "https://github.com/huggingface/datasets/pull/1003", "diff_url": "https://github.com/huggingface/datasets/pull/1003.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1003.patch", "merged_at": "2020-12-02T17:39:05" }
true
755,309,758
1,002
Adding Medal: MeDAL: Medical Abbreviation Disambiguation Dataset for Natural Language Understanding Pretraining
closed
[]
2020-12-02T14:13:17
2020-12-07T16:58:03
2020-12-03T13:14:33
null
Narsil
https://github.com/huggingface/datasets/pull/1002
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1002", "html_url": "https://github.com/huggingface/datasets/pull/1002", "diff_url": "https://github.com/huggingface/datasets/pull/1002.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1002.patch", "merged_at": "2020-12-03T13:14:33" }
true
755,309,071
1,001
Adding Medal: MeDAL: Medical Abbreviation Disambiguation Dataset for Natural Language Understanding Pretraining
closed
[]
2020-12-02T14:12:30
2020-12-02T14:13:12
2020-12-02T14:13:12
null
Narsil
https://github.com/huggingface/datasets/pull/1001
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1001", "html_url": "https://github.com/huggingface/datasets/pull/1001", "diff_url": "https://github.com/huggingface/datasets/pull/1001.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1001.patch", "merged_at": null }
true
755,292,066
1,000
UM005: Urdu <> English Translation Dataset
closed
[]
2020-12-02T13:51:35
2020-12-04T15:34:30
2020-12-04T15:34:29
Adds Urdu-English dataset for machine translation: http://ufal.ms.mff.cuni.cz/umc/005-en-ur/
abhishekkrthakur
https://github.com/huggingface/datasets/pull/1000
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1000", "html_url": "https://github.com/huggingface/datasets/pull/1000", "diff_url": "https://github.com/huggingface/datasets/pull/1000.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1000.patch", "merged_at": "2020-12-04T15:34:29" }
true
755,246,786
999
add generated_reviews_enth
closed
[]
2020-12-02T12:50:43
2020-12-03T11:17:28
2020-12-03T11:17:28
`generated_reviews_enth` is created as part of [scb-mt-en-th-2020](https://arxiv.org/pdf/2007.03541.pdf) for the machine translation task. This dataset (referred to as `generated_reviews_yn` in [scb-mt-en-th-2020](https://arxiv.org/pdf/2007.03541.pdf)) consists of English product reviews generated by [CTRL](https://arxiv.org/abs/1909.05858), translated by the Google Translate API, and annotated as accepted or rejected (`correct`) based on fluency and adequacy of the translation by human annotators. This allows it to be used for English-to-Thai translation quality estimation (binary label), machine translation, and sentiment analysis.
cstorm125
https://github.com/huggingface/datasets/pull/999
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/999", "html_url": "https://github.com/huggingface/datasets/pull/999", "diff_url": "https://github.com/huggingface/datasets/pull/999.diff", "patch_url": "https://github.com/huggingface/datasets/pull/999.patch", "merged_at": "2020-12-03T11:17:28" }
true
755,235,356
998
adding yahoo_answers_qa
closed
[]
2020-12-02T12:33:54
2020-12-02T13:45:40
2020-12-02T13:26:06
Adding Yahoo Answers QA dataset. More info: https://ciir.cs.umass.edu/downloads/nfL6/
patil-suraj
https://github.com/huggingface/datasets/pull/998
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/998", "html_url": "https://github.com/huggingface/datasets/pull/998", "diff_url": "https://github.com/huggingface/datasets/pull/998.diff", "patch_url": "https://github.com/huggingface/datasets/pull/998.patch", "merged_at": "2020-12-02T13:26:06" }
true
755,185,517
997
Microsoft CodeXGlue
closed
[]
2020-12-02T11:21:18
2021-06-08T13:42:25
2021-06-08T13:42:24
Datasets from https://github.com/microsoft/CodeXGLUE

This contains 13 datasets:

- code_x_glue_cc_clone_detection_big_clone_bench
- code_x_glue_cc_clone_detection_poj_104
- code_x_glue_cc_cloze_testing_all
- code_x_glue_cc_cloze_testing_maxmin
- code_x_glue_cc_code_completion_line
- code_x_glue_cc_code_completion_token
- code_x_glue_cc_code_refinement
- code_x_glue_cc_code_to_code_trans
- code_x_glue_cc_defect_detection
- code_x_glue_ct_code_to_text
- code_x_glue_tc_nl_code_search_adv
- code_x_glue_tc_text_to_code
- code_x_glue_tt_text_to_text
madlag
https://github.com/huggingface/datasets/pull/997
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/997", "html_url": "https://github.com/huggingface/datasets/pull/997", "diff_url": "https://github.com/huggingface/datasets/pull/997.diff", "patch_url": "https://github.com/huggingface/datasets/pull/997.patch", "merged_at": null }
true
755,176,084
996
NotADirectoryError while loading the CNN/Dailymail dataset
closed
[]
2020-12-02T11:07:56
2022-02-17T14:13:39
2022-02-17T14:13:39
```
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602...
---------------------------------------------------------------------------
NotADirectoryError                        Traceback (most recent call last)
<ipython-input-9-cd4bf8bea840> in <module>()
     22
     23
---> 24 train = load_dataset('cnn_dailymail', '3.0.0', split='train')
     25 validation = load_dataset('cnn_dailymail', '3.0.0', split='validation')
     26 test = load_dataset('cnn_dailymail', '3.0.0', split='test')

5 frames
/root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict)
    132     else:
    133         logging.fatal("Unsupported publisher: %s", publisher)
--> 134     files = sorted(os.listdir(top_dir))
    135
    136     ret_files = []

NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'
```
arc-bu
https://github.com/huggingface/datasets/issues/996
null
false
755,175,199
995
added dataset circa
closed
[]
2020-12-02T11:06:39
2020-12-04T10:58:16
2020-12-03T09:39:37
Dataset Circa added. Only README.md and dataset card left
bhavitvyamalik
https://github.com/huggingface/datasets/pull/995
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/995", "html_url": "https://github.com/huggingface/datasets/pull/995", "diff_url": "https://github.com/huggingface/datasets/pull/995.diff", "patch_url": "https://github.com/huggingface/datasets/pull/995.patch", "merged_at": "2020-12-03T09:39:37" }
true
755,146,834
994
Add Sepedi ner corpus
closed
[]
2020-12-02T10:30:07
2020-12-03T10:19:14
2020-12-02T18:20:08
yvonnegitau
https://github.com/huggingface/datasets/pull/994
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/994", "html_url": "https://github.com/huggingface/datasets/pull/994", "diff_url": "https://github.com/huggingface/datasets/pull/994.diff", "patch_url": "https://github.com/huggingface/datasets/pull/994.patch", "merged_at": null }
true
755,135,768
993
Problem downloading amazon_reviews_multi
closed
[]
2020-12-02T10:15:57
2022-10-05T12:21:34
2022-10-05T12:21:34
Thanks for adding the dataset. After trying to load the dataset, I am getting the following error:

`ConnectionError: Couldn't reach https://amazon-reviews-ml.s3-us-west-2.amazonaws.com/json/train/dataset_fr_train.json`

I used the following code to load the dataset:

`load_dataset(dataset_name, "all_languages", cache_dir=".data")`

I am using version 1.1.3 of `datasets`. Note that I can perform a successful `wget https://amazon-reviews-ml.s3-us-west-2.amazonaws.com/json/train/dataset_fr_train.json`
hfawaz
https://github.com/huggingface/datasets/issues/993
null
false
755,124,963
992
Add CAIL 2018 dataset
closed
[]
2020-12-02T10:01:40
2020-12-02T16:49:02
2020-12-02T16:49:01
JetRunner
https://github.com/huggingface/datasets/pull/992
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/992", "html_url": "https://github.com/huggingface/datasets/pull/992", "diff_url": "https://github.com/huggingface/datasets/pull/992.diff", "patch_url": "https://github.com/huggingface/datasets/pull/992.patch", "merged_at": "2020-12-02T16:49:01" }
true
755,117,902
991
Adding farsi_news dataset (https://github.com/sci2lab/Farsi-datasets)
closed
[]
2020-12-02T09:52:19
2020-12-03T11:01:26
2020-12-03T11:01:26
null
Narsil
https://github.com/huggingface/datasets/pull/991
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/991", "html_url": "https://github.com/huggingface/datasets/pull/991", "diff_url": "https://github.com/huggingface/datasets/pull/991.diff", "patch_url": "https://github.com/huggingface/datasets/pull/991.patch", "merged_at": "2020-12-03T11:01:26" }
true
755,097,798
990
Add E2E NLG
closed
[]
2020-12-02T09:25:12
2020-12-03T13:08:05
2020-12-03T13:08:04
Adding the E2E NLG dataset. More info here: http://www.macs.hw.ac.uk/InteractionLab/E2E/

### Checkbox

- [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template
- [x] Fill the `_DESCRIPTION` and `_CITATION` variables
- [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()`
- [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class.
- [x] Generate the metadata file `dataset_infos.json` for all configurations
- [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB)
- [x] Add the dataset card `README.md` using the template and at least fill the tags
- [x] Both tests for the real data and the dummy data pass.
lhoestq
https://github.com/huggingface/datasets/pull/990
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/990", "html_url": "https://github.com/huggingface/datasets/pull/990", "diff_url": "https://github.com/huggingface/datasets/pull/990.diff", "patch_url": "https://github.com/huggingface/datasets/pull/990.patch", "merged_at": "2020-12-03T13:08:04" }
true
755,079,394
989
Fix SV -> NO
closed
[]
2020-12-02T08:59:59
2020-12-02T09:18:21
2020-12-02T09:18:14
This PR fixes the small typo as seen in #956
jplu
https://github.com/huggingface/datasets/pull/989
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/989", "html_url": "https://github.com/huggingface/datasets/pull/989", "diff_url": "https://github.com/huggingface/datasets/pull/989.diff", "patch_url": "https://github.com/huggingface/datasets/pull/989.patch", "merged_at": "2020-12-02T09:18:14" }
true
755,069,159
988
making sure datasets are not loaded in memory and distributed training of them
closed
[]
2020-12-02T08:45:15
2022-10-05T13:00:42
2022-10-05T13:00:42
Hi, I am dealing with large-scale datasets which I need to train on in a distributed way. I used the shard function to divide the dataset across the cores, but without any sampler this does not work for distributed training and is no faster than 1 TPU core. 1) How can I make sure the data is not loaded in memory? 2) In the case of distributed training with iterative datasets, which measures need to be taken? Is it all just sharding the data? I was wondering if there would be a possibility for me to discuss distributed training with iterative datasets using the datasets library with someone. Thanks
rabeehk
https://github.com/huggingface/datasets/issues/988
null
false
755,059,469
987
Add OPUS DOGC dataset
closed
[]
2020-12-02T08:30:32
2020-12-04T13:27:41
2020-12-04T13:27:41
albertvillanova
https://github.com/huggingface/datasets/pull/987
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/987", "html_url": "https://github.com/huggingface/datasets/pull/987", "diff_url": "https://github.com/huggingface/datasets/pull/987.diff", "patch_url": "https://github.com/huggingface/datasets/pull/987.patch", "merged_at": "2020-12-04T13:27:41" }
true
755,047,470
986
Add SciTLDR Dataset
closed
[]
2020-12-02T08:11:16
2020-12-02T18:37:22
2020-12-02T18:02:59
Adds the SciTLDR Dataset by AI2
Added README card with tags to the best of my knowledge
Multi-target summaries or TLDRs of Scientific Documents
bharatr21
https://github.com/huggingface/datasets/pull/986
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/986", "html_url": "https://github.com/huggingface/datasets/pull/986", "diff_url": "https://github.com/huggingface/datasets/pull/986.diff", "patch_url": "https://github.com/huggingface/datasets/pull/986.patch", "merged_at": null }
true
755,020,564
985
Add GAP dataset
closed
[]
2020-12-02T07:25:11
2022-10-06T14:11:52
2020-12-02T16:16:32
GAP dataset Gender bias coreference resolution
VictorSanh
https://github.com/huggingface/datasets/pull/985
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/985", "html_url": "https://github.com/huggingface/datasets/pull/985", "diff_url": "https://github.com/huggingface/datasets/pull/985.diff", "patch_url": "https://github.com/huggingface/datasets/pull/985.patch", "merged_at": null }
true
755,009,916
984
committing Whoa file
closed
[]
2020-12-02T07:07:46
2020-12-02T16:15:29
2020-12-02T15:40:58
StulosDunamos
https://github.com/huggingface/datasets/pull/984
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/984", "html_url": "https://github.com/huggingface/datasets/pull/984", "diff_url": "https://github.com/huggingface/datasets/pull/984.diff", "patch_url": "https://github.com/huggingface/datasets/pull/984.patch", "merged_at": null }
true
754,966,620
983
add mc taco
closed
[]
2020-12-02T05:54:55
2020-12-02T15:37:47
2020-12-02T15:37:46
MC-TACO Temporal commonsense knowledge
VictorSanh
https://github.com/huggingface/datasets/pull/983
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/983", "html_url": "https://github.com/huggingface/datasets/pull/983", "diff_url": "https://github.com/huggingface/datasets/pull/983.diff", "patch_url": "https://github.com/huggingface/datasets/pull/983.patch", "merged_at": "2020-12-02T15:37:46" }
true
754,946,337
982
add prachathai67k take2
closed
[]
2020-12-02T05:12:01
2020-12-02T10:18:11
2020-12-02T10:18:11
I decided it will be faster to create a new pull request instead of fixing the rebase issues. continuing from https://github.com/huggingface/datasets/pull/954
cstorm125
https://github.com/huggingface/datasets/pull/982
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/982", "html_url": "https://github.com/huggingface/datasets/pull/982", "diff_url": "https://github.com/huggingface/datasets/pull/982.diff", "patch_url": "https://github.com/huggingface/datasets/pull/982.patch", "merged_at": "2020-12-02T10:18:11" }
true
754,937,612
981
add wisesight_sentiment take2
closed
[]
2020-12-02T04:50:59
2020-12-02T10:37:13
2020-12-02T10:37:13
Take 2 since last time the rebase issues were taking me too much time to fix as opposed to just open a new one.
cstorm125
https://github.com/huggingface/datasets/pull/981
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/981", "html_url": "https://github.com/huggingface/datasets/pull/981", "diff_url": "https://github.com/huggingface/datasets/pull/981.diff", "patch_url": "https://github.com/huggingface/datasets/pull/981.patch", "merged_at": "2020-12-02T10:37:13" }
true
754,899,301
980
Wongnai - Thai reviews dataset
closed
[]
2020-12-02T03:20:08
2020-12-02T15:34:41
2020-12-02T15:30:05
40,000 reviews, previously released on GitHub ( https://github.com/wongnai/wongnai-corpus ) with an LGPL license, and on a closed Kaggle competition ( https://www.kaggle.com/c/wongnai-challenge-review-rating-prediction/ )
mapmeld
https://github.com/huggingface/datasets/pull/980
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/980", "html_url": "https://github.com/huggingface/datasets/pull/980", "diff_url": "https://github.com/huggingface/datasets/pull/980.diff", "patch_url": "https://github.com/huggingface/datasets/pull/980.patch", "merged_at": "2020-12-02T15:30:04" }
true
754,893,337
979
[WIP] Add multi woz
closed
[]
2020-12-02T03:05:42
2020-12-02T16:07:16
2020-12-02T16:07:16
This PR adds version 2.2 of the Multi-domain Wizard of OZ dataset: https://github.com/budzianowski/multiwoz/tree/master/data/MultiWOZ_2.2 It was a pretty big chunk of work to figure out the structure, so I still have to add the description to the README.md. On the plus side, the structure is broadly similar to that of the Google Schema Guided dialogue [dataset](https://github.com/google-research-datasets/dstc8-schema-guided-dialogue), so I will take care of that one next.
yjernite
https://github.com/huggingface/datasets/pull/979
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/979", "html_url": "https://github.com/huggingface/datasets/pull/979", "diff_url": "https://github.com/huggingface/datasets/pull/979.diff", "patch_url": "https://github.com/huggingface/datasets/pull/979.patch", "merged_at": "2020-12-02T16:07:16" }
true
754,854,478
978
Add code refinement
closed
[]
2020-12-02T01:29:58
2020-12-07T01:52:58
2020-12-07T01:52:58
### OVERVIEW Millions of open-source projects with numerous bug fixes are available in code repositories. This proliferation of software development histories can be leveraged to learn how to fix common programming bugs. Code refinement aims to automatically fix bugs in the code, which can contribute to reducing the cost of bug-fixes for developers. Given a piece of Java code with bugs, the task is to remove the bugs to output the refined code.
reshinthadithyan
https://github.com/huggingface/datasets/pull/978
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/978", "html_url": "https://github.com/huggingface/datasets/pull/978", "diff_url": "https://github.com/huggingface/datasets/pull/978.diff", "patch_url": "https://github.com/huggingface/datasets/pull/978.patch", "merged_at": null }
true
754,839,594
977
Add ROPES dataset
closed
[]
2020-12-02T00:52:10
2020-12-02T10:58:36
2020-12-02T10:58:35
ROPES dataset: Reasoning over paragraph effects in situations - testing a system's ability to apply knowledge from a passage of text to a new situation. The task is framed as a reading comprehension task following SQuAD-style extractive QA. One thing to note: labels of the test set are hidden (leaderboard submission), so I encoded them as an empty list (ropes.py:L125)
VictorSanh
https://github.com/huggingface/datasets/pull/977
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/977", "html_url": "https://github.com/huggingface/datasets/pull/977", "diff_url": "https://github.com/huggingface/datasets/pull/977.diff", "patch_url": "https://github.com/huggingface/datasets/pull/977.patch", "merged_at": "2020-12-02T10:58:35" }
true
754,826,146
976
Arabic pos dialect
closed
[]
2020-12-02T00:21:13
2020-12-09T17:30:32
2020-12-09T17:30:32
A README.md and loading script for the Arabic POS Dialect dataset. The README is missing the sections on personal information, biases, and limitations, as it would probably be better for those to be filled by someone who can read the contents of the dataset and is familiar with Arabic NLP.
mcmillanmajora
https://github.com/huggingface/datasets/pull/976
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/976", "html_url": "https://github.com/huggingface/datasets/pull/976", "diff_url": "https://github.com/huggingface/datasets/pull/976.diff", "patch_url": "https://github.com/huggingface/datasets/pull/976.patch", "merged_at": null }
true
754,823,701
975
add MeTooMA dataset
closed
[]
2020-12-02T00:15:55
2020-12-02T10:58:56
2020-12-02T10:58:55
This PR adds the #MeToo MA dataset. It presents multi-label data points for tweets mined in the backdrop of the #MeToo movement. The dataset includes data points in the form of Tweet ids and appropriate labels. Please refer to the accompanying paper for detailed information regarding annotation, collection, and guidelines. Paper: https://ojs.aaai.org/index.php/ICWSM/article/view/7292 Dataset Link: https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/JN4EYU --- annotations_creators: - expert-generated language_creators: - found languages: - en multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - text-classification - text-retrieval task_ids: - multi-class-classification - multi-label-classification --- # Dataset Card for #MeTooMA dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/JN4EYU - **Paper:** https://ojs.aaai.org//index.php/ICWSM/article/view/7292 - **Point of 
Contact:** https://github.com/midas-research/MeTooMA ### Dataset Summary - The dataset consists of tweets belonging to #MeToo movement on Twitter, labeled into different categories. - This dataset includes more data points and has more labels than any of the previous datasets that contain social media posts about sexual abuse disclosures. Please refer to the Related Datasets of the publication for detailed information about this. - Due to Twitter's development policies, the authors provide only the tweet IDs and corresponding labels, other data can be fetched via Twitter API. - The data has been labeled by experts, with the majority taken into the account for deciding the final label. - The authors provide these labels for each of the tweets. - Relevance - Directed Hate - Generalized Hate - Sarcasm - Allegation - Justification - Refutation - Support - Oppose - The definitions for each task/label are in the main publication. - Please refer to the accompanying paper https://aaai.org/ojs/index.php/ICWSM/article/view/7292 for statistical analysis on the textual data extracted from this dataset. - The language of all the tweets in this dataset is English - Time period: October 2018 - December 2018 - Suggested Use Cases of this dataset: - Evaluating usage of linguistic acts such as hate-speech and sarcasm in the context of public sexual abuse disclosures. - Extracting actionable insights and virtual dynamics of gender roles in sexual abuse revelations. - Identifying how influential people were portrayed on the public platform in the events of mass social movements. - Polarization analysis based on graph simulations of social nodes of users involved in the #MeToo movement. ### Supported Tasks and Leaderboards Multi-Label and Multi-Class Classification ### Languages English ## Dataset Structure - The dataset is structured into CSV format with TweetID and accompanying labels. - Train and Test sets are split into respective files. 
### Data Instances Tweet ID and the appropriate labels ### Data Fields Tweet ID and appropriate labels (binary label applicable for a data point) and multiple labels for each Tweet ID ### Data Splits - Train: 7979 - Test: 1996 ## Dataset Creation ### Curation Rationale - Twitter was the major source of all the public disclosures of sexual abuse incidents during the #MeToo movement. - People expressed their opinions over issues that were previously missing from the social media space. - This provides an option to study the linguistic behaviors of social media users in an informal setting, therefore the authors decide to curate this annotated dataset. - The authors expect this dataset would be of great interest and use to both computational and socio-linguists. - For computational linguists, it provides an opportunity to model three new complex dialogue acts (allegation, refutation, and justification) and also to study how these acts interact with some of the other linguistic components like stance, hate, and sarcasm. For socio-linguists, it provides an opportunity to explore how a movement manifests in social media. ### Source Data - Source of all the data points in this dataset is a Twitter social media platform. #### Initial Data Collection and Normalization - All the tweets are mined from Twitter with initial search parameters identified using keywords from the #MeToo movement. - Redundant keywords were removed based on manual inspection. - Public streaming APIs of Twitter was used for querying with the selected keywords. - Based on text de-duplication and cosine similarity score, the set of tweets were pruned. - Non-English tweets were removed. - The final set was labeled by experts with the majority label taken into the account for deciding the final label. - Please refer to this paper for detailed information: https://ojs.aaai.org//index.php/ICWSM/article/view/7292 #### Who are the source language producers? 
Please refer to this paper for detailed information: https://ojs.aaai.org//index.php/ICWSM/article/view/7292 ### Annotations #### Annotation process - The authors chose against crowdsourcing for labeling this dataset due to its highly sensitive nature. - The annotators are domain experts having degrees in advanced clinical psychology and gender studies. - They were provided a guidelines document with instructions about each task and its definitions, labels, and examples. - They studied the document, worked on a few examples to get used to this annotation task. - They also provided feedback for improving the class definitions. - The annotation process is not mutually exclusive, implying that the presence of one label does not mean the absence of the other one. #### Who are the annotators? - The annotators are domain experts having a degree in clinical psychology and gender studies. - Please refer to the accompanying paper for a detailed annotation process. ### Personal and Sensitive Information - Considering Twitter's policy for distribution of data, only Tweet ID and applicable labels are shared for public use. - It is highly encouraged to use this dataset for scientific purposes only. - This dataset collection completely follows the Twitter mandated guidelines for distribution and usage. ## Considerations for Using the Data ### Social Impact of Dataset - The authors of this dataset do not intend to conduct a population-centric analysis of the #MeToo movement on Twitter. - The authors acknowledge that findings from this dataset cannot be used as-is for any direct social intervention; these should be used to assist already existing human intervention tools and therapies. - Enough care has been taken to ensure that this work does not come off as trying to target any specific person for their personal stance on issues pertaining to the #MeToo movement. - The authors of this work do not aim to vilify anyone accused in the #MeToo movement in any manner. 
- Please refer to the ethics and discussion section of the mentioned publication for appropriate sharing of this dataset and the social impact of this work. ### Discussion of Biases - The #MeToo movement acted as a catalyst for implementing social policy changes to benefit the members of the community affected by sexual abuse. - Any work undertaken on this dataset should aim to minimize the bias against minority groups which might amplify in cases of a sudden outburst of public reactions over sensitive social media discussions. ### Other Known Limitations - Considering privacy concerns, social media practitioners should be aware of making automated interventions to aid the victims of sexual abuse as some people might not prefer to disclose their notions. - Concerned social media users might also repeal their social information if they found out that their information is being used for computational purposes, hence it is important to seek subtle individual consent before trying to profile authors involved in online discussions to uphold personal privacy. ## Additional Information Please refer to this link: https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/JN4EYU ### Dataset Curators - If you use the corpus in a product or application, then please credit the authors and [Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi] (http://midas.iiitd.edu.in) appropriately. Also, if you send us an email, we will be thrilled to know about how you have used the corpus. - If interested in the commercial use of the corpus, send an email to midas@iiitd.ac.in. - Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi, India disclaims any responsibility for the use of the corpus and does not provide technical support. 
However, the contact listed above will be happy to respond to queries and clarifications - Please feel free to send us an email: - with feedback regarding the corpus. - with information on how you have used the corpus. - if interested in having us analyze your social media data. - if interested in a collaborative research project. ### Licensing Information [More Information Needed] ### Citation Information Please cite the following publication if you make use of the dataset: https://ojs.aaai.org/index.php/ICWSM/article/view/7292 ``` @article{Gautam_Mathur_Gosangi_Mahata_Sawhney_Shah_2020, title={#MeTooMA: Multi-Aspect Annotations of Tweets Related to the MeToo Movement}, volume={14}, url={https://aaai.org/ojs/index.php/ICWSM/article/view/7292}, abstractNote={In this paper, we present a dataset containing 9,973 tweets related to the MeToo movement that were manually annotated for five different linguistic aspects: relevance, stance, hate speech, sarcasm, and dialogue acts. We present a detailed account of the data collection and annotation processes. The annotations have a very high inter-annotator agreement (0.79 to 0.93 k-alpha) due to the domain expertise of the annotators and clear annotation instructions. We analyze the data in terms of geographical distribution, label correlations, and keywords. Lastly, we present some potential use cases of this dataset. We expect this dataset would be of great interest to psycholinguists, socio-linguists, and computational linguists to study the discursive space of digitally mobilized social movements on sensitive issues like sexual harassment.}, number={1}, journal={Proceedings of the International AAAI Conference on Web and Social Media}, author={Gautam, Akash and Mathur, Puneet and Gosangi, Rakesh and Mahata, Debanjan and Sawhney, Ramit and Shah, Rajiv Ratn}, year={2020}, month={May}, pages={209-216} } ```
akash418
https://github.com/huggingface/datasets/pull/975
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/975", "html_url": "https://github.com/huggingface/datasets/pull/975", "diff_url": "https://github.com/huggingface/datasets/pull/975.diff", "patch_url": "https://github.com/huggingface/datasets/pull/975.patch", "merged_at": "2020-12-02T10:58:55" }
true
754,811,185
974
Add MeTooMA Dataset
closed
[]
2020-12-01T23:44:01
2020-12-01T23:57:58
2020-12-01T23:57:58
akash418
https://github.com/huggingface/datasets/pull/974
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/974", "html_url": "https://github.com/huggingface/datasets/pull/974", "diff_url": "https://github.com/huggingface/datasets/pull/974.diff", "patch_url": "https://github.com/huggingface/datasets/pull/974.patch", "merged_at": null }
true
754,807,963
973
Adding The Microsoft Terminology Collection dataset.
closed
[]
2020-12-01T23:36:23
2020-12-04T15:25:44
2020-12-04T15:12:46
leoxzhao
https://github.com/huggingface/datasets/pull/973
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/973", "html_url": "https://github.com/huggingface/datasets/pull/973", "diff_url": "https://github.com/huggingface/datasets/pull/973.diff", "patch_url": "https://github.com/huggingface/datasets/pull/973.patch", "merged_at": "2020-12-04T15:12:46" }
true
754,787,314
972
Add Children's Book Test (CBT) dataset
closed
[]
2020-12-01T22:53:26
2021-03-19T11:30:03
2021-03-19T11:30:03
Add the Children's Book Test (CBT) from Facebook (Hill et al. 2016). Sentence completion given a few sentences as context from a children's book.
thomwolf
https://github.com/huggingface/datasets/pull/972
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/972", "html_url": "https://github.com/huggingface/datasets/pull/972", "diff_url": "https://github.com/huggingface/datasets/pull/972.diff", "patch_url": "https://github.com/huggingface/datasets/pull/972.patch", "merged_at": null }
true
754,784,041
971
add piqa
closed
[]
2020-12-01T22:47:04
2020-12-02T09:58:02
2020-12-02T09:58:01
Physical Interaction: Question Answering (commonsense) https://yonatanbisk.com/piqa/
VictorSanh
https://github.com/huggingface/datasets/pull/971
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/971", "html_url": "https://github.com/huggingface/datasets/pull/971", "diff_url": "https://github.com/huggingface/datasets/pull/971.diff", "patch_url": "https://github.com/huggingface/datasets/pull/971.patch", "merged_at": "2020-12-02T09:58:01" }
true
754,697,489
970
Add SWAG
closed
[]
2020-12-01T20:21:05
2020-12-02T09:55:16
2020-12-02T09:55:15
Commonsense NLI -> https://rowanzellers.com/swag/
VictorSanh
https://github.com/huggingface/datasets/pull/970
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/970", "html_url": "https://github.com/huggingface/datasets/pull/970", "diff_url": "https://github.com/huggingface/datasets/pull/970.diff", "patch_url": "https://github.com/huggingface/datasets/pull/970.patch", "merged_at": "2020-12-02T09:55:15" }
true
754,681,940
969
Add wiki auto dataset
closed
[]
2020-12-01T19:58:11
2020-12-02T16:19:14
2020-12-02T16:19:14
This PR adds the WikiAuto sentence simplification dataset https://github.com/chaojiang06/wiki-auto This is also a prospective GEM task, hence the README.md
yjernite
https://github.com/huggingface/datasets/pull/969
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/969", "html_url": "https://github.com/huggingface/datasets/pull/969", "diff_url": "https://github.com/huggingface/datasets/pull/969.diff", "patch_url": "https://github.com/huggingface/datasets/pull/969.patch", "merged_at": "2020-12-02T16:19:14" }
true
754,659,015
968
ADD Afrikaans NER
closed
[]
2020-12-01T19:23:03
2020-12-02T09:41:28
2020-12-02T09:41:28
Afrikaans NER corpus
yvonnegitau
https://github.com/huggingface/datasets/pull/968
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/968", "html_url": "https://github.com/huggingface/datasets/pull/968", "diff_url": "https://github.com/huggingface/datasets/pull/968.diff", "patch_url": "https://github.com/huggingface/datasets/pull/968.patch", "merged_at": "2020-12-02T09:41:28" }
true
754,578,988
967
Add CS Restaurants dataset
closed
[]
2020-12-01T17:17:37
2020-12-02T17:57:44
2020-12-02T17:57:25
This PR adds the Czech restaurants dataset for Czech NLG.
TevenLeScao
https://github.com/huggingface/datasets/pull/967
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/967", "html_url": "https://github.com/huggingface/datasets/pull/967", "diff_url": "https://github.com/huggingface/datasets/pull/967.diff", "patch_url": "https://github.com/huggingface/datasets/pull/967.patch", "merged_at": null }
true
754,558,686
966
Add CLINC150 Dataset
closed
[]
2020-12-01T16:50:13
2020-12-02T18:45:43
2020-12-02T18:45:30
Added CLINC150 Dataset. The link to the dataset can be found [here](https://github.com/clinc/oos-eval) and the paper can be found [here](https://www.aclweb.org/anthology/D19-1131.pdf) - [x] Followed the instructions in CONTRIBUTING.md - [x] Ran the tests successfully - [x] Created the dummy data
sumanthd17
https://github.com/huggingface/datasets/pull/966
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/966", "html_url": "https://github.com/huggingface/datasets/pull/966", "diff_url": "https://github.com/huggingface/datasets/pull/966.diff", "patch_url": "https://github.com/huggingface/datasets/pull/966.patch", "merged_at": null }
true
754,553,169
965
Add CLINC150 Dataset
closed
[]
2020-12-01T16:43:00
2020-12-01T16:51:16
2020-12-01T16:49:15
Added CLINC150 Dataset. The link to the dataset can be found [here](https://github.com/clinc/oos-eval) and the paper can be found [here](https://www.aclweb.org/anthology/D19-1131.pdf) - [x] Followed the instructions in CONTRIBUTING.md - [x] Ran the tests successfully - [x] Created the dummy data
sumanthd17
https://github.com/huggingface/datasets/pull/965
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/965", "html_url": "https://github.com/huggingface/datasets/pull/965", "diff_url": "https://github.com/huggingface/datasets/pull/965.diff", "patch_url": "https://github.com/huggingface/datasets/pull/965.patch", "merged_at": null }
true
754,474,660
964
Adding the WebNLG dataset
closed
[]
2020-12-01T15:05:23
2020-12-02T17:34:05
2020-12-02T17:34:05
This PR adds data from the WebNLG challenge, with one configuration per release and challenge iteration. More information can be found [here](https://webnlg-challenge.loria.fr/) Unfortunately, the data itself comes from a pretty large number of small XML files, so the dummy data ends up being quite large (8.4 MB even keeping only one example per file).
yjernite
https://github.com/huggingface/datasets/pull/964
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/964", "html_url": "https://github.com/huggingface/datasets/pull/964", "diff_url": "https://github.com/huggingface/datasets/pull/964.diff", "patch_url": "https://github.com/huggingface/datasets/pull/964.patch", "merged_at": "2020-12-02T17:34:05" }
true
754,451,234
963
add CODAH dataset
closed
[]
2020-12-01T14:37:05
2020-12-02T13:45:58
2020-12-02T13:21:25
Adding CODAH dataset. More info: https://github.com/Websail-NU/CODAH
patil-suraj
https://github.com/huggingface/datasets/pull/963
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/963", "html_url": "https://github.com/huggingface/datasets/pull/963", "diff_url": "https://github.com/huggingface/datasets/pull/963.diff", "patch_url": "https://github.com/huggingface/datasets/pull/963.patch", "merged_at": "2020-12-02T13:21:25" }
true
754,441,428
962
Add Danish Political Comments Dataset
closed
[]
2020-12-01T14:28:32
2020-12-03T10:31:55
2020-12-03T10:31:54
abhishekkrthakur
https://github.com/huggingface/datasets/pull/962
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/962", "html_url": "https://github.com/huggingface/datasets/pull/962", "diff_url": "https://github.com/huggingface/datasets/pull/962.diff", "patch_url": "https://github.com/huggingface/datasets/pull/962.patch", "merged_at": "2020-12-03T10:31:54" }
true
754,434,398
961
sample multiple datasets
closed
[]
2020-12-01T14:20:02
2024-06-17T08:23:20
2023-07-20T14:08:57
Hi, I am dealing with multiple datasets and need a dataloader over them, with the condition that in each batch the samples come from only one of the datasets. My main question is: - I need a way to sample the datasets with some weights, let's say 2x dataset1 and 1x dataset2; could you point me to how I can do this? Sub-questions: - I want to concat the sampled datasets and define one dataloader on them; I then need a way to make sure the batches come from a single dataset in each iteration. Could you assist me with how I can do this? - I use the iterative type of datasets, but I still need a method for shuffling, since skipping it causes accuracy issues. Thanks for the help.
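A minimal, library-agnostic sketch of one way to approach the weighted-sampling question above (an illustration, not an official `datasets` API): pick which dataset each batch comes from with weighted random choice, so every batch is drawn from a single dataset. The function name `batch_schedule` and its signature are made up for this sketch.

```python
import random

def batch_schedule(dataset_sizes, batch_size, weights, seed=0):
    """Yield (dataset_index, example_indices) pairs. Each batch draws
    all of its examples from a single dataset, and the dataset for
    each batch is picked with the given weights (e.g. 2x dataset1,
    1x dataset2)."""
    rng = random.Random(seed)
    # Independently shuffled index pools, one per dataset.
    pools = [rng.sample(range(n), n) for n in dataset_sizes]
    choices = list(range(len(dataset_sizes)))
    while any(len(p) >= batch_size for p in pools):
        i = rng.choices(choices, weights=weights, k=1)[0]
        if len(pools[i]) < batch_size:
            continue  # this dataset is exhausted; resample another one
        batch, pools[i] = pools[i][:batch_size], pools[i][batch_size:]
        yield i, batch

# 2x dataset1 (100 examples), 1x dataset2 (50 examples), batches of 8
for ds_idx, idxs in batch_schedule([100, 50], 8, weights=[2, 1]):
    pass  # fetch `idxs` from the chosen dataset here
```

Shuffling the per-dataset index pools up front also addresses the shuffling concern for map-style datasets; iterable datasets would need a buffer-based shuffle instead.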
rabeehk
https://github.com/huggingface/datasets/issues/961
null
false
754,422,710
960
Add code to automate parts of the dataset card
closed
[]
2020-12-01T14:04:51
2023-09-24T09:50:38
2021-04-26T07:56:01
Most parts of the "Dataset Structure" section can be generated automatically. This PR adds some code to do so.
patrickvonplaten
https://github.com/huggingface/datasets/pull/960
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/960", "html_url": "https://github.com/huggingface/datasets/pull/960", "diff_url": "https://github.com/huggingface/datasets/pull/960.diff", "patch_url": "https://github.com/huggingface/datasets/pull/960.patch", "merged_at": null }
true
754,418,610
959
Add Tunizi Dataset
closed
[]
2020-12-01T13:59:39
2020-12-03T14:21:41
2020-12-03T14:21:40
abhishekkrthakur
https://github.com/huggingface/datasets/pull/959
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/959", "html_url": "https://github.com/huggingface/datasets/pull/959", "diff_url": "https://github.com/huggingface/datasets/pull/959.diff", "patch_url": "https://github.com/huggingface/datasets/pull/959.patch", "merged_at": "2020-12-03T14:21:40" }
true
754,404,095
958
dataset(ncslgr): add initial loading script
closed
[]
2020-12-01T13:41:17
2020-12-07T16:35:39
2020-12-07T16:35:39
clean #789
AmitMY
https://github.com/huggingface/datasets/pull/958
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/958", "html_url": "https://github.com/huggingface/datasets/pull/958", "diff_url": "https://github.com/huggingface/datasets/pull/958.diff", "patch_url": "https://github.com/huggingface/datasets/pull/958.patch", "merged_at": "2020-12-07T16:35:39" }
true
754,380,073
957
Isixhosa ner corpus
closed
[]
2020-12-01T13:08:36
2020-12-01T18:14:58
2020-12-01T18:14:58
yvonnegitau
https://github.com/huggingface/datasets/pull/957
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/957", "html_url": "https://github.com/huggingface/datasets/pull/957", "diff_url": "https://github.com/huggingface/datasets/pull/957.diff", "patch_url": "https://github.com/huggingface/datasets/pull/957.patch", "merged_at": "2020-12-01T18:14:58" }
true
754,368,378
956
Add Norwegian NER
closed
[]
2020-12-01T12:51:02
2020-12-02T08:53:11
2020-12-01T18:09:21
This PR adds the [Norwegian NER](https://github.com/ljos/navnkjenner) dataset. I have added the `conllu` package as a test dependency. This is required to properly parse the `.conllu` files.
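The `conllu` package mentioned above handles the full CoNLL-U specification; purely as an illustration of what the format looks like, a simplified stdlib-only parser might read the ten tab-separated fields per token like this (a sketch that ignores multiword tokens and empty nodes; the `sample` string is invented):

```python
# Minimal CoNLL-U parsing sketch: one token per line, ten
# tab-separated fields, blank lines separating sentences.
FIELDS = ["id", "form", "lemma", "upos", "xpos",
          "feats", "head", "deprel", "deps", "misc"]

def parse_conllu(text):
    sentences, current = [], []
    for line in text.splitlines():
        if line.startswith("#"):
            continue  # sentence-level metadata comments
        if not line.strip():
            if current:  # blank line closes the current sentence
                sentences.append(current)
                current = []
            continue
        current.append(dict(zip(FIELDS, line.split("\t"))))
    if current:
        sentences.append(current)
    return sentences

sample = (
    "# text = Hei !\n"
    "1\tHei\thei\tINTJ\t_\t_\t0\troot\t_\t_\n"
    "2\t!\t!\tPUNCT\t_\t_\t1\tpunct\t_\t_\n"
)
print(parse_conllu(sample)[0][0]["form"])  # Hei
```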
jplu
https://github.com/huggingface/datasets/pull/956
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/956", "html_url": "https://github.com/huggingface/datasets/pull/956", "diff_url": "https://github.com/huggingface/datasets/pull/956.diff", "patch_url": "https://github.com/huggingface/datasets/pull/956.patch", "merged_at": "2020-12-01T18:09:21" }
true
754,367,291
955
Added PragmEval benchmark
closed
[]
2020-12-01T12:49:15
2020-12-04T10:43:32
2020-12-03T09:36:47
sileod
https://github.com/huggingface/datasets/pull/955
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/955", "html_url": "https://github.com/huggingface/datasets/pull/955", "diff_url": "https://github.com/huggingface/datasets/pull/955.diff", "patch_url": "https://github.com/huggingface/datasets/pull/955.patch", "merged_at": "2020-12-03T09:36:47" }
true
754,362,012
954
add prachathai67k
closed
[]
2020-12-01T12:40:55
2020-12-02T05:12:11
2020-12-02T04:43:52
`prachathai-67k`: News Article Corpus and Multi-label Text Classification from Prachathai.com The prachathai-67k dataset was scraped from the news site Prachathai. We filtered out those articles with less than 500 characters of body text, mostly images and cartoons. It contains 67,889 articles with 12 curated tags from August 24, 2004 to November 15, 2018. The dataset was originally scraped by @lukkiddd and cleaned by @cstorm125. You can also see preliminary exploration at https://github.com/PyThaiNLP/prachathai-67k/blob/master/exploration.ipynb
cstorm125
https://github.com/huggingface/datasets/pull/954
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/954", "html_url": "https://github.com/huggingface/datasets/pull/954", "diff_url": "https://github.com/huggingface/datasets/pull/954.diff", "patch_url": "https://github.com/huggingface/datasets/pull/954.patch", "merged_at": null }
true
754,359,942
953
added health_fact dataset
closed
[]
2020-12-01T12:37:44
2020-12-01T23:11:33
2020-12-01T23:11:33
Added dataset Explainable Fact-Checking for Public Health Claims (dataset_id: health_fact)
bhavitvyamalik
https://github.com/huggingface/datasets/pull/953
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/953", "html_url": "https://github.com/huggingface/datasets/pull/953", "diff_url": "https://github.com/huggingface/datasets/pull/953.diff", "patch_url": "https://github.com/huggingface/datasets/pull/953.patch", "merged_at": "2020-12-01T23:11:33" }
true
754,357,270
952
Add orange sum
closed
[]
2020-12-01T12:33:34
2020-12-01T15:44:00
2020-12-01T15:44:00
Add OrangeSum, a French abstractive summarization dataset. Paper: [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321)
moussaKam
https://github.com/huggingface/datasets/pull/952
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/952", "html_url": "https://github.com/huggingface/datasets/pull/952", "diff_url": "https://github.com/huggingface/datasets/pull/952.diff", "patch_url": "https://github.com/huggingface/datasets/pull/952.patch", "merged_at": "2020-12-01T15:44:00" }
true
754,349,979
951
Prachathai67k
closed
[]
2020-12-01T12:21:52
2020-12-01T12:29:53
2020-12-01T12:28:26
Add `prachathai-67k`: News Article Corpus and Multi-label Text Classification from Prachathai.com The `prachathai-67k` dataset was scraped from the news site [Prachathai](prachathai.com). We filtered out those articles with less than 500 characters of body text, mostly images and cartoons. It contains 67,889 articles with 12 curated tags from August 24, 2004 to November 15, 2018. The dataset was originally scraped by [@lukkiddd](https://github.com/lukkiddd) and cleaned by [@cstorm125](https://github.com/cstorm125). Download the dataset [here](https://www.dropbox.com/s/fsxepdka4l2pr45/prachathai-67k.zip?dl=1). You can also see preliminary exploration in [exploration.ipynb](https://github.com/PyThaiNLP/prachathai-67k/blob/master/exploration.ipynb). This dataset is a part of [pyThaiNLP](https://github.com/PyThaiNLP/) Thai text [classification-benchmarks](https://github.com/PyThaiNLP/classification-benchmarks). For the benchmark, we selected the following tags with substantial volume that resemble **classifying types of articles**: * `การเมือง` - politics * `สิทธิมนุษยชน` - human_rights * `คุณภาพชีวิต` - quality_of_life * `ต่างประเทศ` - international * `สังคม` - social * `สิ่งแวดล้อม` - environment * `เศรษฐกิจ` - economics * `วัฒนธรรม` - culture * `แรงงาน` - labor * `ความมั่นคง` - national_security * `ไอซีที` - ict * `การศึกษา` - education
cstorm125
https://github.com/huggingface/datasets/pull/951
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/951", "html_url": "https://github.com/huggingface/datasets/pull/951", "diff_url": "https://github.com/huggingface/datasets/pull/951.diff", "patch_url": "https://github.com/huggingface/datasets/pull/951.patch", "merged_at": null }
true
754,318,686
950
Support .xz file format
closed
[]
2020-12-01T11:34:48
2020-12-01T13:39:18
2020-12-01T13:39:18
Add support to extract/uncompress files in .xz format.
albertvillanova
https://github.com/huggingface/datasets/pull/950
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/950", "html_url": "https://github.com/huggingface/datasets/pull/950", "diff_url": "https://github.com/huggingface/datasets/pull/950.diff", "patch_url": "https://github.com/huggingface/datasets/pull/950.patch", "merged_at": "2020-12-01T13:39:18" }
true
754,317,777
949
Add GermaNER Dataset
closed
[]
2020-12-01T11:33:31
2020-12-03T14:06:41
2020-12-03T14:06:40
abhishekkrthakur
https://github.com/huggingface/datasets/pull/949
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/949", "html_url": "https://github.com/huggingface/datasets/pull/949", "diff_url": "https://github.com/huggingface/datasets/pull/949.diff", "patch_url": "https://github.com/huggingface/datasets/pull/949.patch", "merged_at": "2020-12-03T14:06:40" }
true
754,306,260
948
docs(ADD_NEW_DATASET): correct indentation for script
closed
[]
2020-12-01T11:17:38
2020-12-01T11:25:18
2020-12-01T11:25:18
AmitMY
https://github.com/huggingface/datasets/pull/948
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/948", "html_url": "https://github.com/huggingface/datasets/pull/948", "diff_url": "https://github.com/huggingface/datasets/pull/948.diff", "patch_url": "https://github.com/huggingface/datasets/pull/948.patch", "merged_at": "2020-12-01T11:25:18" }
true
754,286,658
947
Add europeana newspapers
closed
[]
2020-12-01T10:52:18
2020-12-02T09:42:35
2020-12-02T09:42:09
This PR adds the [Europeana newspapers](https://github.com/EuropeanaNewspapers/ner-corpora) dataset.
jplu
https://github.com/huggingface/datasets/pull/947
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/947", "html_url": "https://github.com/huggingface/datasets/pull/947", "diff_url": "https://github.com/huggingface/datasets/pull/947.diff", "patch_url": "https://github.com/huggingface/datasets/pull/947.patch", "merged_at": "2020-12-02T09:42:09" }
true
754,278,632
946
add PEC dataset
closed
[]
2020-12-01T10:41:41
2020-12-03T02:47:14
2020-12-03T02:47:14
A persona-based empathetic conversation dataset published at EMNLP 2020.
zhongpeixiang
https://github.com/huggingface/datasets/pull/946
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/946", "html_url": "https://github.com/huggingface/datasets/pull/946", "diff_url": "https://github.com/huggingface/datasets/pull/946.diff", "patch_url": "https://github.com/huggingface/datasets/pull/946.patch", "merged_at": null }
true
754,273,920
945
Adding Babi dataset - English version
closed
[]
2020-12-01T10:35:36
2020-12-04T15:43:05
2020-12-04T15:42:54
Adding the English version of bAbI. Samples are taken from ParlAI for consistency with the main users at the moment.
thomwolf
https://github.com/huggingface/datasets/pull/945
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/945", "html_url": "https://github.com/huggingface/datasets/pull/945", "diff_url": "https://github.com/huggingface/datasets/pull/945.diff", "patch_url": "https://github.com/huggingface/datasets/pull/945.patch", "merged_at": null }
true
754,228,947
944
Add German Legal Entity Recognition Dataset
closed
[]
2020-12-01T09:38:22
2020-12-03T13:06:56
2020-12-03T13:06:55
abhishekkrthakur
https://github.com/huggingface/datasets/pull/944
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/944", "html_url": "https://github.com/huggingface/datasets/pull/944", "diff_url": "https://github.com/huggingface/datasets/pull/944.diff", "patch_url": "https://github.com/huggingface/datasets/pull/944.patch", "merged_at": "2020-12-03T13:06:54" }
true
754,192,491
943
The FLUE Benchmark
closed
[]
2020-12-01T09:00:50
2020-12-01T15:24:38
2020-12-01T15:24:30
This PR adds the [FLUE](https://github.com/getalp/Flaubert/tree/master/flue) benchmark which is a set of different datasets to evaluate models for French content. Two datasets are missing, the French Treebank that we can use only for research purpose and we are not allowed to distribute, and the Word Sense disambiguation for Nouns that will be added later.
jplu
https://github.com/huggingface/datasets/pull/943
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/943", "html_url": "https://github.com/huggingface/datasets/pull/943", "diff_url": "https://github.com/huggingface/datasets/pull/943.diff", "patch_url": "https://github.com/huggingface/datasets/pull/943.patch", "merged_at": "2020-12-01T15:24:30" }
true
754,162,318
942
D
closed
[]
2020-12-01T08:17:10
2020-12-03T16:42:53
2020-12-03T16:42:53
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
CryptoMiKKi
https://github.com/huggingface/datasets/issues/942
null
false
754,141,321
941
Add People's Daily NER dataset
closed
[]
2020-12-01T07:48:53
2020-12-02T18:42:43
2020-12-02T18:42:41
JetRunner
https://github.com/huggingface/datasets/pull/941
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/941", "html_url": "https://github.com/huggingface/datasets/pull/941", "diff_url": "https://github.com/huggingface/datasets/pull/941.diff", "patch_url": "https://github.com/huggingface/datasets/pull/941.patch", "merged_at": "2020-12-02T18:42:41" }
true
754,010,753
940
Add MSRA NER dataset
closed
[]
2020-12-01T05:02:11
2020-12-04T09:29:40
2020-12-01T07:25:53
JetRunner
https://github.com/huggingface/datasets/pull/940
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/940", "html_url": "https://github.com/huggingface/datasets/pull/940", "diff_url": "https://github.com/huggingface/datasets/pull/940.diff", "patch_url": "https://github.com/huggingface/datasets/pull/940.patch", "merged_at": "2020-12-01T07:25:53" }
true
753,965,405
939
add wisesight_sentiment
closed
[]
2020-12-01T03:06:39
2020-12-02T04:52:38
2020-12-02T04:35:51
Add `wisesight_sentiment` Social media messages in Thai language with sentiment label (positive, neutral, negative, question) Model Card: --- YAML tags: annotations_creators: - expert-generated language_creators: - found languages: - th licenses: - cc0-1.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-classification task_ids: - sentiment-classification --- # Dataset Card for wisesight_sentiment ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://github.com/PyThaiNLP/wisesight-sentiment - **Repository:** https://github.com/PyThaiNLP/wisesight-sentiment - **Paper:** - **Leaderboard:** https://www.kaggle.com/c/wisesight-sentiment/ - **Point of Contact:** https://github.com/PyThaiNLP/ ### Dataset Summary Wisesight Sentiment Corpus: Social media messages in Thai language with sentiment label (positive, neutral, negative, question) - Released to public domain under Creative Commons Zero v1.0 Universal license. 
- Labels: {"pos": 0, "neu": 1, "neg": 2, "q": 3} - Size: 26,737 messages - Language: Central Thai - Style: Informal and conversational. With some news headlines and advertisement. - Time period: Around 2016 to early 2019. With small amount from other period. - Domains: Mixed. Majority are consumer products and services (restaurants, cosmetics, drinks, car, hotels), with some current affairs. - Privacy: - Only messages that made available to the public on the internet (websites, blogs, social network sites). - For Facebook, this means the public comments (everyone can see) that made on a public page. - Private/protected messages and messages in groups, chat, and inbox are not included. - Alternations and modifications: - Keep in mind that this corpus does not statistically represent anything in the language register. - Large amount of messages are not in their original form. Personal data are removed or masked. - Duplicated, leading, and trailing whitespaces are removed. Other punctuations, symbols, and emojis are kept intact. (Mis)spellings are kept intact. - Messages longer than 2,000 characters are removed. - Long non-Thai messages are removed. Duplicated message (exact match) are removed. 
- More characteristics of the data can be explore [this notebook](https://github.com/PyThaiNLP/wisesight-sentiment/blob/master/exploration.ipynb) ### Supported Tasks and Leaderboards Sentiment analysis / [Kaggle Leaderboard](https://www.kaggle.com/c/wisesight-sentiment/) ### Languages Thai ## Dataset Structure ### Data Instances ``` {'category': 'pos', 'texts': 'น่าสนนน'} {'category': 'neu', 'texts': 'ครับ #phithanbkk'} {'category': 'neg', 'texts': 'ซื้อแต่ผ้าอนามัยแบบเย็นมาค่ะ แบบว่าอีห่ากูนอนไม่ได้'} {'category': 'q', 'texts': 'มีแอลกอฮอลมั้ยคะ'} ``` ### Data Fields - `texts`: texts - `category`: sentiment of texts ranging from `pos` (positive; 0), `neu` (neutral; 1), `neg` (negative; 2) and `q` (question; 3) ### Data Splits | | train | valid | test | |-----------|-------|-------|-------| | # samples | 21628 | 2404 | 2671 | | # neu | 11795 | 1291 | 1453 | | # neg | 5491 | 637 | 683 | | # pos | 3866 | 434 | 478 | | # q | 476 | 42 | 57 | | avg words | 27.21 | 27.18 | 27.12 | | avg chars | 89.82 | 89.50 | 90.36 | ## Dataset Creation ### Curation Rationale Originally, the dataset was conceived for the [In-class Kaggle Competition](https://www.kaggle.com/c/wisesight-sentiment/) at Chulalongkorn university by [Ekapol Chuangsuwanich](https://www.cp.eng.chula.ac.th/en/about/faculty/ekapolc/) (Faculty of Engineering, Chulalongkorn University). It has since become one of the benchmarks for sentiment analysis in Thai. ### Source Data #### Initial Data Collection and Normalization - Style: Informal and conversational. With some news headlines and advertisement. - Time period: Around 2016 to early 2019. With small amount from other period. - Domains: Mixed. Majority are consumer products and services (restaurants, cosmetics, drinks, car, hotels), with some current affairs. - Privacy: - Only messages that made available to the public on the internet (websites, blogs, social network sites). 
- For Facebook, this means the public comments (everyone can see) that made on a public page. - Private/protected messages and messages in groups, chat, and inbox are not included. - Usernames and non-public figure names are removed - Phone numbers are masked (e.g. 088-888-8888, 09-9999-9999, 0-2222-2222) - If you see any personal data still remain in the set, please tell us - so we can remove them. - Alternations and modifications: - Keep in mind that this corpus does not statistically represent anything in the language register. - Large amount of messages are not in their original form. Personal data are removed or masked. - Duplicated, leading, and trailing whitespaces are removed. Other punctuations, symbols, and emojis are kept intact. - (Mis)spellings are kept intact. - Messages longer than 2,000 characters are removed. - Long non-Thai messages are removed. Duplicated message (exact match) are removed. #### Who are the source language producers? Social media users in Thailand ### Annotations #### Annotation process - Sentiment values are assigned by human annotators. - A human annotator put his/her best effort to assign just one label, out of four, to a message. - Agreement, enjoyment, and satisfaction are positive. Disagreement, sadness, and disappointment are negative. - Showing interest in a topic or in a product is counted as positive. In this sense, a question about a particular product could has a positive sentiment value, if it shows the interest in the product. - Saying that other product or service is better is counted as negative. - General information or news title tend to be counted as neutral. #### Who are the annotators? Outsourced annotators hired by [Wisesight (Thailand) Co., Ltd.](https://github.com/wisesight/) ### Personal and Sensitive Information - We trying to exclude any known personally identifiable information from this data set. - Usernames and non-public figure names are removed - Phone numbers are masked (e.g. 
088-888-8888, 09-9999-9999, 0-2222-2222) - If you see any personal data still remain in the set, please tell us - so we can remove them. ## Considerations for Using the Data ### Social Impact of Dataset - `wisesight_sentiment` is the first and one of the few open datasets for sentiment analysis of social media data in Thai - There are risks of personal information that escape the anonymization process ### Discussion of Biases - A message can be ambiguous. When possible, the judgement will be based solely on the text itself. - In some situation, like when the context is missing, the annotator may have to rely on his/her own world knowledge and just guess. - In some cases, the human annotator may have an access to the message's context, like an image. These additional information are not included as part of this corpus. ### Other Known Limitations - The labels are imbalanced; over half of the texts are `neu` (neutral) whereas there are very few `q` (question). - Misspellings in social media texts make word tokenization process for Thai difficult, thus impacting the model performance ## Additional Information ### Dataset Curators Thanks [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp) community, [Kitsuchart Pasupa](http://www.it.kmitl.ac.th/~kitsuchart/) (Faculty of Information Technology, King Mongkut's Institute of Technology Ladkrabang), and [Ekapol Chuangsuwanich](https://www.cp.eng.chula.ac.th/en/about/faculty/ekapolc/) (Faculty of Engineering, Chulalongkorn University) for advice. The original Kaggle competition, using the first version of this corpus, can be found at https://www.kaggle.com/c/wisesight-sentiment/ ### Licensing Information - If applicable, copyright of each message content belongs to the original poster. - **Annotation data (labels) are released to public domain.** - [Wisesight (Thailand) Co., Ltd.](https://github.com/wisesight/) helps facilitate the annotation, but does not necessarily agree upon the labels made by the human annotators. 
This annotation is for research purpose and does not reflect the professional work that Wisesight has been done for its customers. - The human annotator does not necessarily agree or disagree with the message. Likewise, the label he/she made to the message does not necessarily reflect his/her personal view towards the message. ### Citation Information Please cite the following if you make use of the dataset: Arthit Suriyawongkul, Ekapol Chuangsuwanich, Pattarawat Chormai, and Charin Polpanumas. 2019. **PyThaiNLP/wisesight-sentiment: First release.** September. BibTeX: ``` @software{bact_2019_3457447, author = {Suriyawongkul, Arthit and Chuangsuwanich, Ekapol and Chormai, Pattarawat and Polpanumas, Charin}, title = {PyThaiNLP/wisesight-sentiment: First release}, month = sep, year = 2019, publisher = {Zenodo}, version = {v1.0}, doi = {10.5281/zenodo.3457447}, url = {https://doi.org/10.5281/zenodo.3457447} } ```
cstorm125
https://github.com/huggingface/datasets/pull/939
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/939", "html_url": "https://github.com/huggingface/datasets/pull/939", "diff_url": "https://github.com/huggingface/datasets/pull/939.diff", "patch_url": "https://github.com/huggingface/datasets/pull/939.patch", "merged_at": null }
true
753,940,979
938
V-1.0.0 of isizulu_ner_corpus
closed
[]
2020-12-01T02:04:32
2020-12-01T23:34:36
2020-12-01T23:34:36
yvonnegitau
https://github.com/huggingface/datasets/pull/938
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/938", "html_url": "https://github.com/huggingface/datasets/pull/938", "diff_url": "https://github.com/huggingface/datasets/pull/938.diff", "patch_url": "https://github.com/huggingface/datasets/pull/938.patch", "merged_at": null }
true
753,921,078
937
Local machine/cluster Beam Datasets example/tutorial
closed
[]
2020-12-01T01:11:43
2024-03-15T16:05:14
2024-03-15T16:05:14
Hi, I'm wondering if https://huggingface.co/docs/datasets/beam_dataset.html has a non-GCP or non-Dataflow version of the example/tutorial? I tried to migrate it to run on DirectRunner and SparkRunner; however, there were way too many runtime errors that I had to fix during the process, and even so I wasn't able to get either runner to correctly produce the desired output. Thanks! Shang
shangw-nvidia
https://github.com/huggingface/datasets/issues/937
null
false
753,915,603
936
Added HANS parses and categories
closed
[]
2020-12-01T00:58:16
2020-12-01T13:19:41
2020-12-01T13:19:40
This pull request adds HANS missing information: the sentence parses, as well as the heuristic category.
TevenLeScao
https://github.com/huggingface/datasets/pull/936
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/936", "html_url": "https://github.com/huggingface/datasets/pull/936", "diff_url": "https://github.com/huggingface/datasets/pull/936.diff", "patch_url": "https://github.com/huggingface/datasets/pull/936.patch", "merged_at": "2020-12-01T13:19:40" }
true
753,863,055
935
add PIB dataset
closed
[]
2020-11-30T22:55:43
2020-12-01T23:17:11
2020-12-01T23:17:11
This pull request will add PIB dataset.
thevasudevgupta
https://github.com/huggingface/datasets/pull/935
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/935", "html_url": "https://github.com/huggingface/datasets/pull/935", "diff_url": "https://github.com/huggingface/datasets/pull/935.diff", "patch_url": "https://github.com/huggingface/datasets/pull/935.patch", "merged_at": "2020-12-01T23:17:11" }
true
753,860,095
934
small updates to the "add new dataset" guide
closed
[]
2020-11-30T22:49:10
2020-12-01T04:56:22
2020-11-30T23:14:00
small updates (corrections/typos) to the "add new dataset" guide
VictorSanh
https://github.com/huggingface/datasets/pull/934
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/934", "html_url": "https://github.com/huggingface/datasets/pull/934", "diff_url": "https://github.com/huggingface/datasets/pull/934.diff", "patch_url": "https://github.com/huggingface/datasets/pull/934.patch", "merged_at": "2020-11-30T23:14:00" }
true
753,854,272
933
Add NumerSense
closed
[]
2020-11-30T22:36:33
2020-12-01T20:25:50
2020-12-01T19:51:56
Adds the NumerSense dataset - Webpage/leaderboard: https://inklab.usc.edu/NumerSense/ - Paper: https://arxiv.org/abs/2005.00683 - Description: NumerSense is a new numerical commonsense reasoning probing task, with a diagnostic dataset consisting of 3,145 masked-word-prediction probes. Basically, it's a benchmark to see whether your MLM can figure out the right number in a fill-in-the-blank task based on commonsense knowledge (a bird has **two** legs)
joeddav
https://github.com/huggingface/datasets/pull/933
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/933", "html_url": "https://github.com/huggingface/datasets/pull/933", "diff_url": "https://github.com/huggingface/datasets/pull/933.diff", "patch_url": "https://github.com/huggingface/datasets/pull/933.patch", "merged_at": "2020-12-01T19:51:56" }
true
753,840,300
932
adding metooma dataset
closed
[]
2020-11-30T22:09:49
2020-12-02T00:37:54
2020-12-02T00:37:54
akash418
https://github.com/huggingface/datasets/pull/932
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/932", "html_url": "https://github.com/huggingface/datasets/pull/932", "diff_url": "https://github.com/huggingface/datasets/pull/932.diff", "patch_url": "https://github.com/huggingface/datasets/pull/932.patch", "merged_at": null }
true
753,818,193
931
[WIP] complex_webqa - Error zipfile.BadZipFile: Bad CRC-32
closed
[]
2020-11-30T21:30:21
2022-10-03T09:40:09
2022-10-03T09:40:09
Getting a `zipfile.BadZipFile: Bad CRC-32 for file 'web_snippets_train.json'` error when downloading the largest file from dropbox: `https://www.dropbox.com/sh/7pkwkrfnwqhsnpo/AABVENv_Q9rFtnM61liyzO0La/web_snippets_train.json.zip?dl=1` Didn't manage to see how to solve it. Putting this aside for now.
thomwolf
https://github.com/huggingface/datasets/pull/931
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/931", "html_url": "https://github.com/huggingface/datasets/pull/931", "diff_url": "https://github.com/huggingface/datasets/pull/931.diff", "patch_url": "https://github.com/huggingface/datasets/pull/931.patch", "merged_at": null }
true
753,801,204
930
Lambada
closed
[]
2020-11-30T21:02:33
2020-12-01T00:37:12
2020-12-01T00:37:11
Added LAMBADA dataset. A couple of points of attention (mostly because I am not sure) - The training data are compressed in a .tar file inside the main tar.gz file. I had to manually un-tar the training file to access the examples. - The dev and test splits don't have the `category` field, so I put `None` by default. Happy to make changes if it doesn't respect the guidelines! Victor
VictorSanh
https://github.com/huggingface/datasets/pull/930
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/930", "html_url": "https://github.com/huggingface/datasets/pull/930", "diff_url": "https://github.com/huggingface/datasets/pull/930.diff", "patch_url": "https://github.com/huggingface/datasets/pull/930.patch", "merged_at": "2020-12-01T00:37:11" }
true
753,737,794
929
Add weibo NER dataset
closed
[]
2020-11-30T19:22:47
2020-12-03T13:36:55
2020-12-03T13:36:54
abhishekkrthakur
https://github.com/huggingface/datasets/pull/929
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/929", "html_url": "https://github.com/huggingface/datasets/pull/929", "diff_url": "https://github.com/huggingface/datasets/pull/929.diff", "patch_url": "https://github.com/huggingface/datasets/pull/929.patch", "merged_at": "2020-12-03T13:36:54" }
true