| column | dtype | min | max |
|---|---|---|---|
| id | int64 | 599M | 3.48B |
| number | int64 | 1 | 7.8k |
| title | string (lengths) | 1 | 290 |
| state | string (2 classes) | | |
| comments | list (lengths) | 0 | 30 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 | 2025-10-05 06:37:50 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 | 2025-10-05 10:32:43 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 | 2025-10-01 13:56:03 |
| body | string (lengths) | 0 | 228k |
| user | string (lengths) | 3 | 26 |
| html_url | string (lengths) | 46 | 51 |
| pull_request | dict | | |
| is_pull_request | bool (2 classes) | | |

Each record below lists its fields in this same order: id, number, title, state, comments, created_at, updated_at, closed_at, body, user, html_url, pull_request, is_pull_request.
774,574,378
1,636
winogrande cannot be downloaded
closed
[]
2020-12-24T22:28:22
2022-10-05T12:35:44
2022-10-05T12:35:44
Hi, I am getting this error when trying to run the codes on the cloud. Thank you for any suggestion and help on this @lhoestq ``` File "./finetune_trainer.py", line 318, in <module> main() File "./finetune_trainer.py", line 148, in main for task in data_args.tasks] File "./finetune_trainer.py", line 148, in <listcomp> for task in data_args.tasks] File "/workdir/seq2seq/data/tasks.py", line 65, in get_dataset dataset = self.load_dataset(split=split) File "/workdir/seq2seq/data/tasks.py", line 466, in load_dataset return datasets.load_dataset('winogrande', 'winogrande_l', split=split) File "/usr/local/lib/python3.6/dist-packages/datasets/load.py", line 589, in load_dataset path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True File "/usr/local/lib/python3.6/dist-packages/datasets/load.py", line 267, in prepare_module local_path = cached_path(file_path, download_config=download_config) File "/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py", line 487, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/winogrande/winogrande.py yo/0 I1224 14:17:46.419031 31226 main shadow.py:122 > Traceback (most recent call last): File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/usr/lib/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/usr/local/lib/python3.6/dist-packages/torch/distributed/launch.py", line 260, in <module> main() File "/usr/local/lib/python3.6/dist-packages/torch/distributed/launch.py", line 256, in main cmd=cmd) ```
ghost
https://github.com/huggingface/datasets/issues/1636
null
false
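A workaround that is sometimes enough for transient errors like the one in #1636 above is simply retrying the download. A minimal sketch (the `load_with_retries` helper below is hypothetical, not part of the library):

```python
import time
from datasets import load_dataset

def load_with_retries(*args, retries=3, wait=10, **kwargs):
    """Retry load_dataset a few times before giving up (hypothetical helper)."""
    for attempt in range(retries):
        try:
            return load_dataset(*args, **kwargs)
        except ConnectionError:
            if attempt == retries - 1:
                raise
            time.sleep(wait)

dataset = load_with_retries("winogrande", "winogrande_l", split="train")
```

If the machine has no outbound network access at all, the dataset script has to be cached or mirrored beforehand instead.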
774,524,492
1,635
Persian Abstractive/Extractive Text Summarization
closed
[]
2020-12-24T17:47:12
2021-01-04T15:11:04
2021-01-04T15:11:04
Assembling datasets tailored to different tasks and languages is a precious target. This would be great to have this dataset included. ## Adding a Dataset - **Name:** *pn-summary* - **Description:** *A well-structured summarization dataset for the Persian language consists of 93,207 records. It is prepared for Abstractive/Extractive tasks (like cnn_dailymail for English). It can also be used in other scopes like Text Generation, Title Generation, and News Category Classification.* - **Paper:** *https://arxiv.org/abs/2012.11204* - **Data:** *https://github.com/hooshvare/pn-summary/#download* - **Motivation:** *It is the first Persian abstractive/extractive Text summarization dataset (like cnn_dailymail for English)!* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
m3hrdadfi
https://github.com/huggingface/datasets/issues/1635
null
false
774,487,934
1,634
Inspecting datasets per category
closed
[]
2020-12-24T15:26:34
2022-10-04T14:57:33
2022-10-04T14:57:33
Hi, is there a way I could get all NLI datasets / all QA datasets, to get some understanding of the available datasets per category? It is hard for me to inspect the datasets one by one on the webpage. Thanks for the suggestions @lhoestq
ghost
https://github.com/huggingface/datasets/issues/1634
null
false
774,422,603
1,633
social_i_qa wrong format of labels
closed
[]
2020-12-24T13:11:54
2020-12-30T17:18:49
2020-12-30T17:18:49
Hi, there is an extra "\n" in the labels of the social_i_qa dataset; no big deal, but I was wondering if you could remove it to make it consistent. Currently the label is 'label': '1\n' rather than '1'. Thanks. ``` >>> import datasets >>> from datasets import load_dataset >>> dataset = load_dataset( ... 'social_i_qa') cahce dir /julia/cache/datasets Downloading: 4.72kB [00:00, 3.52MB/s] cahce dir /julia/cache/datasets Downloading: 2.19kB [00:00, 1.81MB/s] Using custom data configuration default Reusing dataset social_i_qa (/julia/datasets/social_i_qa/default/0.1.0/4a4190cc2d2482d43416c2167c0c5dccdd769d4482e84893614bd069e5c3ba06) >>> dataset['train'][0] {'answerA': 'like attending', 'answerB': 'like staying home', 'answerC': 'a good friend to have', 'context': 'Cameron decided to have a barbecue and gathered her friends together.', 'label': '1\n', 'question': 'How would Others feel as a result?'} ```
ghost
https://github.com/huggingface/datasets/issues/1633
null
false
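Until the loading script strips it upstream, the stray newline reported in #1633 can be removed on the user side with a simple `map`; a minimal sketch, assuming the column layout shown above:

```python
from datasets import load_dataset

dataset = load_dataset("social_i_qa")

# Strip the trailing "\n" from every label, e.g. '1\n' -> '1'
dataset = dataset.map(lambda example: {"label": example["label"].strip()})

print(dataset["train"][0]["label"])  # '1'
```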
774,388,625
1,632
SICK dataset
closed
[]
2020-12-24T12:40:14
2021-02-05T15:49:25
2021-02-05T15:49:25
Hi, it would be great to have this dataset included. I might be missing something, but I could not find it in the list of already included datasets. Thank you. ## Adding a Dataset - **Name:** SICK - **Description:** SICK consists of about 10,000 English sentence pairs that include many examples of lexical, syntactic, and semantic phenomena. - **Paper:** https://www.aclweb.org/anthology/L14-1314/ - **Data:** http://marcobaroni.org/composes/sick.html - **Motivation:** This dataset is well known in the NLP community and is used for recognizing entailment between sentences. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
rabeehk
https://github.com/huggingface/datasets/issues/1632
null
false
774,349,222
1,631
Update README.md
closed
[]
2020-12-24T11:45:52
2020-12-28T17:35:41
2020-12-28T17:16:04
I made a small change to the citation.
savasy
https://github.com/huggingface/datasets/pull/1631
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1631", "html_url": "https://github.com/huggingface/datasets/pull/1631", "diff_url": "https://github.com/huggingface/datasets/pull/1631.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1631.patch", "merged_at": "2020-12-28T17:16:04" }
true
774,332,129
1,630
Adding UKP Argument Aspect Similarity Corpus
closed
[]
2020-12-24T11:01:31
2022-10-05T12:36:12
2022-10-05T12:36:12
Hi, this would be great to have this dataset included. ## Adding a Dataset - **Name:** UKP Argument Aspect Similarity Corpus - **Description:** The UKP Argument Aspect Similarity Corpus (UKP ASPECT) includes 3,595 sentence pairs over 28 controversial topics. Each sentence pair was annotated via crowdsourcing as either “high similarity”, “some similarity”, “no similarity” or “not related” with respect to the topic. - **Paper:** https://www.aclweb.org/anthology/P19-1054/ - **Data:** https://tudatalib.ulb.tu-darmstadt.de/handle/tudatalib/1998 - **Motivation:** this is one of the datasets currently used frequently in recent adapter papers like https://arxiv.org/pdf/2005.00247.pdf Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). Thank you
rabeehk
https://github.com/huggingface/datasets/issues/1630
null
false
774,255,716
1,629
add wongnai_reviews test set labels
closed
[]
2020-12-24T08:02:31
2020-12-28T17:23:39
2020-12-28T17:23:39
- add test set labels provided by @ekapolc - refactor `star_rating` to a `datasets.features.ClassLabel` field
cstorm125
https://github.com/huggingface/datasets/pull/1629
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1629", "html_url": "https://github.com/huggingface/datasets/pull/1629", "diff_url": "https://github.com/huggingface/datasets/pull/1629.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1629.patch", "merged_at": "2020-12-28T17:23:39" }
true
774,091,411
1,628
made suggested changes to hate-speech-and-offensive-language
closed
[]
2020-12-23T23:25:32
2020-12-28T10:11:20
2020-12-28T10:11:20
MisbahKhan789
https://github.com/huggingface/datasets/pull/1628
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1628", "html_url": "https://github.com/huggingface/datasets/pull/1628", "diff_url": "https://github.com/huggingface/datasets/pull/1628.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1628.patch", "merged_at": "2020-12-28T10:11:20" }
true
773,960,255
1,627
`Dataset.map` disable progress bar
closed
[]
2020-12-23T17:53:42
2025-05-16T16:36:24
2020-12-26T19:57:17
I can't find anything to turn off the `tqdm` progress bars while running a preprocessing function using `Dataset.map`. I want to do something akin to `disable_tqdm=True` in the case of `transformers`. Is there something like that?
Nickil21
https://github.com/huggingface/datasets/issues/1627
null
false
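For #1627 above, recent releases expose a global switch; a minimal sketch, assuming a version of `datasets` that provides `disable_progress_bar` (older releases had the equivalent under `datasets.utils` instead):

```python
import datasets

datasets.disable_progress_bar()  # assumed API: silences the tqdm bars globally

dataset = datasets.load_dataset("imdb", split="train")
dataset = dataset.map(lambda example: {"text_len": len(example["text"])})  # no progress bar shown

datasets.enable_progress_bar()  # turn the bars back on afterwards
```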
773,840,368
1,626
Fix dataset_dict.shuffle with single seed
closed
[]
2020-12-23T14:33:36
2021-01-04T10:00:04
2021-01-04T10:00:03
Fix #1610. I added support for a single integer seed in `DatasetDict.shuffle`; previously only a dictionary of seeds was allowed. Moreover, I added the missing `seed` parameter; previously only `seeds` was allowed.
lhoestq
https://github.com/huggingface/datasets/pull/1626
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1626", "html_url": "https://github.com/huggingface/datasets/pull/1626", "diff_url": "https://github.com/huggingface/datasets/pull/1626.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1626.patch", "merged_at": "2021-01-04T10:00:03" }
true
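With the fix from #1626, both calling conventions below should work on a `DatasetDict`; a brief sketch:

```python
from datasets import load_dataset

dsets = load_dataset("glue", "mrpc")  # a DatasetDict with train/validation/test splits

# New: a single integer seed applied to every split
shuffled = dsets.shuffle(seed=42)

# Old behaviour: a dictionary mapping each split name to its own seed
shuffled = dsets.shuffle(seeds={"train": 42, "validation": 43, "test": 44})
```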
773,771,596
1,625
Fixed bug in the shape property
closed
[]
2020-12-23T13:33:21
2021-01-02T23:22:52
2020-12-23T14:13:13
Fix to the bug reported in issue #1622. Just replaced `return tuple(self._indices.num_rows, self._data.num_columns)` by `return (self._indices.num_rows, self._data.num_columns)`.
noaonoszko
https://github.com/huggingface/datasets/pull/1625
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1625", "html_url": "https://github.com/huggingface/datasets/pull/1625", "diff_url": "https://github.com/huggingface/datasets/pull/1625.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1625.patch", "merged_at": "2020-12-23T14:13:13" }
true
773,669,700
1,624
Cannot download ade_corpus_v2
closed
[]
2020-12-23T10:58:14
2021-08-03T05:08:54
2021-08-03T05:08:54
I tried this to get the dataset following this url : https://huggingface.co/datasets/ade_corpus_v2 but received this error : `Traceback (most recent call last): File "/opt/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 267, in prepare_module local_path = cached_path(file_path, download_config=download_config) File "/opt/anaconda3/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/opt/anaconda3/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 486, in get_from_cache raise FileNotFoundError("Couldn't find file at {}".format(url)) FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/ade_corpus_v2/ade_corpus_v2.py During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/opt/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 278, in prepare_module local_path = cached_path(file_path, download_config=download_config) File "/opt/anaconda3/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/opt/anaconda3/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 486, in get_from_cache raise FileNotFoundError("Couldn't find file at {}".format(url)) FileNotFoundError: Couldn't find file at https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/ade_corpus_v2/ade_corpus_v2.py During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/opt/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 589, in load_dataset path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True File "/opt/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 282, in prepare_module combined_path, github_file_path, file_path FileNotFoundError: Couldn't find file locally at ade_corpus_v2/ade_corpus_v2.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/ade_corpus_v2/ade_corpus_v2.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/ade_corpus_v2/ade_corpus_v2.py`
him1411
https://github.com/huggingface/datasets/issues/1624
null
false
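The file is missing because `ade_corpus_v2` was added after the 1.1.3 release, so a 1.1.3 client asks GitHub for a script that does not exist under that tag. A sketch of the usual workarounds, using the `script_version` parameter that appears in the traceback above (later releases renamed it):

```python
from datasets import load_dataset

# Option 1: upgrade, so the pinned release tag actually contains the dataset script
#   pip install -U datasets

# Option 2: on the old client, load the script from the master branch instead of the release tag
dataset = load_dataset("ade_corpus_v2", script_version="master")
# (pass a configuration name as the second argument if the loader asks for one)
```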
772,950,710
1,623
Add CLIMATE-FEVER dataset
closed
[]
2020-12-22T13:34:05
2020-12-22T17:53:53
2020-12-22T17:53:53
As suggested by @SBrandeis , fresh PR that adds CLIMATE-FEVER. Replaces PR #1579. --- A dataset adopting the FEVER methodology that consists of 1,535 real-world claims regarding climate-change collected on the internet. Each claim is accompanied by five manually annotated evidence sentences retrieved from the English Wikipedia that support, refute or do not give enough information to validate the claim totalling in 7,675 claim-evidence pairs. The dataset features challenging claims that relate multiple facets and disputed cases of claims where both supporting and refuting evidence are present. More information can be found at: * Homepage: http://climatefever.ai * Paper: https://arxiv.org/abs/2012.00614
tdiggelm
https://github.com/huggingface/datasets/pull/1623
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1623", "html_url": "https://github.com/huggingface/datasets/pull/1623", "diff_url": "https://github.com/huggingface/datasets/pull/1623.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1623.patch", "merged_at": "2020-12-22T17:53:53" }
true
772,940,768
1,622
Can't call shape on the output of select()
closed
[]
2020-12-22T13:18:40
2020-12-23T13:37:13
2020-12-23T13:37:12
I get the error `TypeError: tuple expected at most 1 argument, got 2` when calling `shape` on the output of `select()`. It's line 531 in `shape` in arrow_dataset.py that causes the problem: ``return tuple(self._indices.num_rows, self._data.num_columns)`` This makes sense, since `tuple(num1, num2)` is not a valid call. Full code to reproduce: ```python dataset = load_dataset("cnn_dailymail", "3.0.0") train_set = dataset["train"] t = train_set.select(range(10)) print(t.shape) ```
noaonoszko
https://github.com/huggingface/datasets/issues/1622
null
false
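The root cause of #1622 is simply that `tuple()` accepts a single iterable, not two separate items; a tiny illustration of the bug and of the fix merged in #1625:

```python
num_rows, num_columns = 10, 4

try:
    shape = tuple(num_rows, num_columns)  # what the buggy shape property did
except TypeError as err:
    print(err)  # tuple expected at most 1 argument, got 2

shape = (num_rows, num_columns)  # the fix: build the 2-tuple directly
print(shape)  # (10, 4)
```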
772,940,417
1,621
updated dutch_social.py for loading jsonl (lines instead of list) files
closed
[]
2020-12-22T13:18:11
2020-12-23T11:51:51
2020-12-23T11:51:51
The data loader is modified to load files on the fly. Earlier it was reading the entire file and then processing the records. Please refer to the previous PR #1321.
skyprince999
https://github.com/huggingface/datasets/pull/1621
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1621", "html_url": "https://github.com/huggingface/datasets/pull/1621", "diff_url": "https://github.com/huggingface/datasets/pull/1621.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1621.patch", "merged_at": "2020-12-23T11:51:51" }
true
772,620,056
1,620
Adding myPOS2017 dataset
closed
[]
2020-12-22T04:04:55
2022-10-03T09:38:23
2022-10-03T09:38:23
myPOS Corpus (Myanmar Part-of-Speech Corpus) for Myanmar-language NLP research and development.
hungluumfc
https://github.com/huggingface/datasets/pull/1620
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1620", "html_url": "https://github.com/huggingface/datasets/pull/1620", "diff_url": "https://github.com/huggingface/datasets/pull/1620.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1620.patch", "merged_at": null }
true
772,508,558
1,619
data loader for reading comprehension task
closed
[]
2020-12-21T22:40:34
2020-12-28T10:32:53
2020-12-28T10:32:53
added doc2dial data loader and dummy data for reading comprehension task.
songfeng
https://github.com/huggingface/datasets/pull/1619
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1619", "html_url": "https://github.com/huggingface/datasets/pull/1619", "diff_url": "https://github.com/huggingface/datasets/pull/1619.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1619.patch", "merged_at": "2020-12-28T10:32:53" }
true
772,248,730
1,618
Can't filter language:EN on https://huggingface.co/datasets
closed
[]
2020-12-21T15:23:23
2020-12-22T17:17:00
2020-12-22T17:16:09
When visiting https://huggingface.co/datasets, I don't see an obvious way to filter only English datasets. This is unexpected for me, am I missing something? I'd expect English to be selectable in the language widget. This problem reproduced on Mozilla Firefox and MS Edge: ![screenshot](https://user-images.githubusercontent.com/4547987/102792244-892e1f00-43a8-11eb-9e89-4826ca201a87.png)
davidefiocco
https://github.com/huggingface/datasets/issues/1618
null
false
772,084,764
1,617
cifar10 initial commit
closed
[]
2020-12-21T11:18:50
2020-12-22T10:18:05
2020-12-22T10:11:28
CIFAR-10 dataset. Didn't add the tagging since there are no vision related tags.
czabo
https://github.com/huggingface/datasets/pull/1617
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1617", "html_url": "https://github.com/huggingface/datasets/pull/1617", "diff_url": "https://github.com/huggingface/datasets/pull/1617.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1617.patch", "merged_at": "2020-12-22T10:11:28" }
true
772,074,229
1,616
added TurkishMovieSentiment dataset
closed
[]
2020-12-21T11:03:16
2020-12-24T07:08:41
2020-12-23T16:50:06
This PR adds **TurkishMovieSentiment**: a dataset that contains Turkish movie reviews. - **Homepage:** [https://www.kaggle.com/mustfkeskin/turkish-movie-sentiment-analysis-dataset/tasks](https://www.kaggle.com/mustfkeskin/turkish-movie-sentiment-analysis-dataset/tasks) - **Point of Contact:** [Mustafa Keskin](https://www.linkedin.com/in/mustfkeskin/)
yavuzKomecoglu
https://github.com/huggingface/datasets/pull/1616
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1616", "html_url": "https://github.com/huggingface/datasets/pull/1616", "diff_url": "https://github.com/huggingface/datasets/pull/1616.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1616.patch", "merged_at": "2020-12-23T16:50:06" }
true
771,641,088
1,615
Bug: Can't download TriviaQA with `load_dataset` - custom `cache_dir`
open
[]
2020-12-20T17:27:38
2021-06-25T13:11:33
null
Hello, I'm having issue downloading TriviaQA dataset with `load_dataset`. ## Environment info - `datasets` version: 1.1.3 - Platform: Linux-4.19.129-aufs-1-x86_64-with-debian-10.1 - Python version: 3.7.3 ## The code I'm running: ```python import datasets dataset = datasets.load_dataset("trivia_qa", "rc", cache_dir = "./datasets") ``` ## The output: 1. Download begins: ``` Downloading and preparing dataset trivia_qa/rc (download: 2.48 GiB, generated: 14.92 GiB, post-processed: Unknown size, total: 17.40 GiB) to /cs/labs/gabis/sapirweissbuch/tr ivia_qa/rc/1.1.0/e734e28133f4d9a353af322aa52b9f266f6f27cbf2f072690a1694e577546b0d... Downloading: 17%|███████████████████▉ | 446M/2.67G [00:37<04:45, 7.77MB/s] ``` 2. 100% is reached 3. It got stuck here for about an hour, and added additional 30G of data to "./datasets" directory. I killed the process eventually. A similar issue can be observed in Google Colab: https://colab.research.google.com/drive/1nn1Lw02GhfGFylzbS2j6yksGjPo7kkN-?usp=sharing ## Expected behaviour: The dataset "TriviaQA" should be successfully downloaded.
SapirWeissbuch
https://github.com/huggingface/datasets/issues/1615
null
false
771,577,050
1,613
Add id_clickbait
closed
[]
2020-12-20T12:24:49
2020-12-22T17:45:27
2020-12-22T17:45:27
This is the CLICK-ID dataset, a collection of annotated clickbait Indonesian news headlines that was collected from 12 local online news
cahya-wirawan
https://github.com/huggingface/datasets/pull/1613
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1613", "html_url": "https://github.com/huggingface/datasets/pull/1613", "diff_url": "https://github.com/huggingface/datasets/pull/1613.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1613.patch", "merged_at": "2020-12-22T17:45:27" }
true
771,558,160
1,612
Adding wiki asp dataset as new PR
closed
[]
2020-12-20T10:25:08
2020-12-21T14:13:33
2020-12-21T14:13:33
Hi @lhoestq, Adding wiki asp as new branch because #1539 has other commits. This version has dummy data for each domain <20/30KB.
katnoria
https://github.com/huggingface/datasets/pull/1612
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1612", "html_url": "https://github.com/huggingface/datasets/pull/1612", "diff_url": "https://github.com/huggingface/datasets/pull/1612.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1612.patch", "merged_at": "2020-12-21T14:13:33" }
true
771,486,456
1,611
shuffle with torch generator
closed
[]
2020-12-20T00:57:14
2022-06-01T15:30:13
2022-06-01T15:30:13
Hi, I need to shuffle multiple large datasets with `generator = torch.Generator()` for a distributed sampler, which needs to make sure the datasets are consistent across different cores. For this it is really necessary for me to use a torch generator, but based on the documentation this generator is not supported with datasets. I really need to make shuffle work with this generator and I was wondering what I can do about this issue. Thanks for your help @lhoestq
rabeehkarimimahabadi
https://github.com/huggingface/datasets/issues/1611
null
false
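`Dataset.shuffle` does not take a `torch.Generator`, but the effect asked for in #1611 can be approximated by drawing a permutation with the generator and feeding it to `Dataset.select`; a minimal workaround sketch:

```python
import torch
from datasets import load_dataset

dataset = load_dataset("imdb", split="train")

generator = torch.Generator()
generator.manual_seed(42)  # e.g. base_seed + epoch, so every core draws the same permutation

permutation = torch.randperm(len(dataset), generator=generator).tolist()
shuffled = dataset.select(permutation)  # same rows, in the torch-generated order
```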
771,453,599
1,610
shuffle does not accept seed
closed
[]
2020-12-19T20:59:39
2021-01-04T10:00:03
2021-01-04T10:00:03
Hi, I need to shuffle the dataset, but this needs to be based on epoch + seed to be consistent across the cores. When I pass a seed to shuffle, it does not accept it. Could you assist me with this? Thanks @lhoestq
rabeehk
https://github.com/huggingface/datasets/issues/1610
null
false
771,421,881
1,609
Not able to use 'jigsaw_toxicity_pred' dataset
closed
[]
2020-12-19T17:35:48
2020-12-22T16:42:24
2020-12-22T16:42:23
When trying to use jigsaw_toxicity_pred dataset, like this in a [colab](https://colab.research.google.com/drive/1LwO2A5M2X5dvhkAFYE4D2CUT3WUdWnkn?usp=sharing): ``` from datasets import list_datasets, list_metrics, load_dataset, load_metric ds = load_dataset("jigsaw_toxicity_pred") ``` I see below error: > FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/jigsaw_toxicity_pred/jigsaw_toxicity_pred.py During handling of the above exception, another exception occurred: FileNotFoundError Traceback (most recent call last) FileNotFoundError: Couldn't find file at https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/jigsaw_toxicity_pred/jigsaw_toxicity_pred.py During handling of the above exception, another exception occurred: FileNotFoundError Traceback (most recent call last) /usr/local/lib/python3.6/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, **download_kwargs) 280 raise FileNotFoundError( 281 "Couldn't find file locally at {}, or remotely at {} or {}".format( --> 282 combined_path, github_file_path, file_path 283 ) 284 ) FileNotFoundError: Couldn't find file locally at jigsaw_toxicity_pred/jigsaw_toxicity_pred.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/jigsaw_toxicity_pred/jigsaw_toxicity_pred.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/jigsaw_toxicity_pred/jigsaw_toxicity_pred.py
jassimran
https://github.com/huggingface/datasets/issues/1609
null
false
771,329,434
1,608
adding ted_talks_iwslt
closed
[]
2020-12-19T07:36:41
2021-01-02T15:44:12
2021-01-02T15:44:11
UPDATE 2 (2nd Jan): Wrote a long writeup on the Slack channel. I don't think this approach is correct. Basically this created language pairs (109*108). Running `pytest` went on for more than 40 hours and it was still running! So I am working on a different approach, such that the number of configs = the number of languages. Will make a new pull request with that. UPDATE: This requires manually downloading the dataset. This is a draft version.
skyprince999
https://github.com/huggingface/datasets/pull/1608
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1608", "html_url": "https://github.com/huggingface/datasets/pull/1608", "diff_url": "https://github.com/huggingface/datasets/pull/1608.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1608.patch", "merged_at": null }
true
771,325,852
1,607
modified tweets hate speech detection
closed
[]
2020-12-19T07:13:40
2020-12-21T16:08:48
2020-12-21T16:08:48
darshan-gandhi
https://github.com/huggingface/datasets/pull/1607
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1607", "html_url": "https://github.com/huggingface/datasets/pull/1607", "diff_url": "https://github.com/huggingface/datasets/pull/1607.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1607.patch", "merged_at": "2020-12-21T16:08:48" }
true
771,116,455
1,606
added Semantic Scholar Open Research Corpus
closed
[]
2020-12-18T19:21:24
2021-02-03T09:30:59
2021-02-03T09:30:59
I picked up this dataset [Semantic Scholar Open Research Corpus](https://allenai.org/data/s2orc) but it contains 6000 files to be downloaded. I tried the current code with 100 files and it worked fine (took ~15GB space). For 6000 files it would occupy ~900GB space which I don’t have. Can someone from the HF team with that much disk space help me generate dataset_infos and dummy_data?
bhavitvyamalik
https://github.com/huggingface/datasets/pull/1606
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1606", "html_url": "https://github.com/huggingface/datasets/pull/1606", "diff_url": "https://github.com/huggingface/datasets/pull/1606.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1606.patch", "merged_at": "2021-02-03T09:30:59" }
true
770,979,620
1,605
Navigation version breaking
closed
[]
2020-12-18T15:36:24
2022-10-05T12:35:11
2022-10-05T12:35:11
Hi, when navigating docs (Chrome, Ubuntu) (e.g. on this page: https://huggingface.co/docs/datasets/loading_metrics.html#using-a-custom-metric-script) the version control dropdown has the wrong string displayed as the current version: ![image](https://user-images.githubusercontent.com/3007947/102632187-02cad080-414f-11eb-813b-28f3c8d80def.png) **Edit:** this actually happens _only_ if you open a link to a concrete subsection. IMO, the best way to fix this without getting too deep into the intricacies of retrieving version numbers from the URL would be to change [this](https://github.com/huggingface/datasets/blob/master/docs/source/_static/js/custom.js#L112) line to: ``` let label = (version in versionMapping) ? version : stableVersion ``` which delegates the check to the (already maintained) keys of the version mapping dictionary & should be more robust. There's a similar ternary expression [here](https://github.com/huggingface/datasets/blob/master/docs/source/_static/js/custom.js#L97) which should also fail in this case. I'd also suggest swapping this [block](https://github.com/huggingface/datasets/blob/master/docs/source/_static/js/custom.js#L80-L90) to `string.contains(version) for version in versionMapping` which might be more robust. I'd add a PR myself but I'm by no means competent in JS :) I also have a side question wrt. docs versioning: I'm trying to make docs for a project which are versioned alike to your dropdown versioning. I was wondering how do you handle storage of multiple doc versions on your server? Do you update what `https://huggingface.co/docs/datasets` points to for every stable release & manually create new folders for each released version? So far I'm building & publishing (scping) the docs to the server with a github action which works well for a single version, but would ideally need to reorder the public files triggered on a new release.
mttk
https://github.com/huggingface/datasets/issues/1605
null
false
770,862,112
1,604
Add tests for the download functions ?
closed
[]
2020-12-18T12:49:25
2022-10-05T13:04:24
2022-10-05T13:04:24
AFAIK the download functions in `DownloadManager` are not tested yet. It could be good to add some to ensure behavior is as expected.
SBrandeis
https://github.com/huggingface/datasets/issues/1604
null
false
770,857,221
1,603
Add retries to HTTP requests
closed
[]
2020-12-18T12:41:31
2020-12-22T15:34:07
2020-12-22T15:34:07
## What does this PR do ? Adding retries to HTTP GET & HEAD requests, when they fail with a `ConnectTimeout` exception. The "canonical" way to do this is to use [urllib's Retry class](https://urllib3.readthedocs.io/en/latest/reference/urllib3.util.html#urllib3.util.Retry) and wrap it in a [HttpAdapter](https://requests.readthedocs.io/en/master/api/#requests.adapters.HTTPAdapter). Seems a bit overkill to me, plus it forces us to use the `requests.Session` object. I prefer this simpler implementation. I'm open to remarks and suggestions @lhoestq @yjernite Fixes #1102
SBrandeis
https://github.com/huggingface/datasets/pull/1603
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1603", "html_url": "https://github.com/huggingface/datasets/pull/1603", "diff_url": "https://github.com/huggingface/datasets/pull/1603.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1603.patch", "merged_at": "2020-12-22T15:34:06" }
true
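For reference, the "canonical" urllib3/requests approach that the PR description mentions (and deliberately does not use) looks roughly like this sketch:

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util import Retry

session = requests.Session()
retry = Retry(total=3, backoff_factor=1, status_forcelist=(500, 502, 503, 504))
adapter = HTTPAdapter(max_retries=retry)
session.mount("https://", adapter)
session.mount("http://", adapter)

# Requests made through this session are retried automatically on the listed failures
response = session.get("https://example.com/some/file", timeout=10)
```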
770,841,810
1,602
second update of id_newspapers_2018
closed
[]
2020-12-18T12:16:37
2020-12-22T10:41:15
2020-12-22T10:41:14
The feature "url" is currently set wrongly to data["date"], this PR fix it to data["url"]. I added also an additional POC.
cahya-wirawan
https://github.com/huggingface/datasets/pull/1602
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1602", "html_url": "https://github.com/huggingface/datasets/pull/1602", "diff_url": "https://github.com/huggingface/datasets/pull/1602.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1602.patch", "merged_at": "2020-12-22T10:41:14" }
true
770,758,914
1,601
second update of the id_newspapers_2018
closed
[]
2020-12-18T10:10:20
2020-12-18T12:15:31
2020-12-18T12:15:31
The feature "url" is currently set wrongly to data["date"], this PR fix it to data["url"]. I added also an additional POC.
cahya-wirawan
https://github.com/huggingface/datasets/pull/1601
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1601", "html_url": "https://github.com/huggingface/datasets/pull/1601", "diff_url": "https://github.com/huggingface/datasets/pull/1601.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1601.patch", "merged_at": null }
true
770,582,960
1,600
AttributeError: 'DatasetDict' object has no attribute 'train_test_split'
closed
[]
2020-12-18T05:37:10
2023-05-03T04:22:55
2020-12-21T07:38:58
The following code fails with "'DatasetDict' object has no attribute 'train_test_split'" - am I doing something wrong? ``` from datasets import load_dataset dataset = load_dataset('csv', data_files='data.txt') dataset = dataset.train_test_split(test_size=0.1) ``` > AttributeError: 'DatasetDict' object has no attribute 'train_test_split'
david-waterworth
https://github.com/huggingface/datasets/issues/1600
null
false
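`load_dataset` returns a `DatasetDict`, and at the time of #1600 `train_test_split` was only defined on `Dataset`, so the usual workaround is to index into the split first; a short sketch:

```python
from datasets import load_dataset

dataset = load_dataset("csv", data_files="data.txt")  # a DatasetDict with a single "train" split

# train_test_split lives on Dataset, so select the split before calling it
splits = dataset["train"].train_test_split(test_size=0.1)
train_set, test_set = splits["train"], splits["test"]
```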
770,431,389
1,599
add Korean Sarcasm Dataset
closed
[]
2020-12-17T22:49:56
2021-09-17T16:54:32
2020-12-23T17:25:59
stevhliu
https://github.com/huggingface/datasets/pull/1599
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1599", "html_url": "https://github.com/huggingface/datasets/pull/1599", "diff_url": "https://github.com/huggingface/datasets/pull/1599.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1599.patch", "merged_at": "2020-12-23T17:25:59" }
true
770,332,440
1,598
made suggested changes in fake-news-english
closed
[]
2020-12-17T20:06:29
2020-12-18T09:43:58
2020-12-18T09:43:57
MisbahKhan789
https://github.com/huggingface/datasets/pull/1598
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1598", "html_url": "https://github.com/huggingface/datasets/pull/1598", "diff_url": "https://github.com/huggingface/datasets/pull/1598.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1598.patch", "merged_at": "2020-12-18T09:43:57" }
true
770,276,140
1,597
adding hate-speech-and-offensive-language
closed
[]
2020-12-17T18:35:15
2020-12-23T23:27:17
2020-12-23T23:27:16
MisbahKhan789
https://github.com/huggingface/datasets/pull/1597
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1597", "html_url": "https://github.com/huggingface/datasets/pull/1597", "diff_url": "https://github.com/huggingface/datasets/pull/1597.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1597.patch", "merged_at": null }
true
770,260,531
1,596
made suggested changes to hate-speech-and-offensive-language
closed
[]
2020-12-17T18:09:26
2020-12-17T18:36:02
2020-12-17T18:35:53
MisbahKhan789
https://github.com/huggingface/datasets/pull/1596
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1596", "html_url": "https://github.com/huggingface/datasets/pull/1596", "diff_url": "https://github.com/huggingface/datasets/pull/1596.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1596.patch", "merged_at": null }
true
770,153,693
1,595
Logiqa en
closed
[]
2020-12-17T15:42:00
2022-10-03T09:38:30
2022-10-03T09:38:30
LogiQA in English.
aclifton314
https://github.com/huggingface/datasets/pull/1595
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1595", "html_url": "https://github.com/huggingface/datasets/pull/1595", "diff_url": "https://github.com/huggingface/datasets/pull/1595.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1595.patch", "merged_at": null }
true
769,747,767
1,594
connection error
closed
[]
2020-12-17T09:18:34
2022-06-01T15:33:42
2022-06-01T15:33:41
Hi I am hitting to this error, thanks ``` > Traceback (most recent call last): File "finetune_t5_trainer.py", line 379, in <module> main() File "finetune_t5_trainer.py", line 208, in main if training_args.do_eval or training_args.evaluation_strategy != EvaluationStrategy.NO File "finetune_t5_trainer.py", line 207, in <dictcomp> for task in data_args.eval_tasks} File "/workdir/seq2seq/data/tasks.py", line 70, in get_dataset dataset = self.load_dataset(split=split) File "/workdir/seq2seq/data/tasks.py", line 66, in load_dataset return datasets.load_dataset(self.task.name, split=split, script_version="master") File "/usr/local/lib/python3.6/dist-packages/datasets/load.py", line 589, in load_dataset path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True File "/usr/local/lib/python3.6/dist-packages/datasets/load.py", line 267, in prepare_module local_path = cached_path(file_path, download_config=download_config) File "/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py", line 487, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/master/datasets/boolq/boolq.py el/0 I1217 01:11:33.898849 354161 main shadow.py:210 Current job status: FINISHED ```
rabeehkarimimahabadi
https://github.com/huggingface/datasets/issues/1594
null
false
769,611,386
1,593
Access to key in DatasetDict map
closed
[]
2020-12-17T07:02:20
2022-10-05T13:47:28
2022-10-05T12:33:06
It is possible that we want to do different things in the `map` function (and possibly other functions too) of a `DatasetDict`, depending on the key. I understand that `DatasetDict.map` is a really thin wrapper of `Dataset.map`, so it is easy to directly implement this functionality in the client code. Still, it'd be nice if there can be a flag, similar to `with_indices`, that allows the callable to know the key inside `DatasetDict`.
ZhaofengWu
https://github.com/huggingface/datasets/issues/1593
null
false
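Until such a flag exists, the per-split behaviour asked for in #1593 can be emulated by mapping each split yourself, since `DatasetDict` is a plain dict of `Dataset` objects; a minimal sketch:

```python
from datasets import DatasetDict, load_dataset

dsets = load_dataset("glue", "mrpc")

# Apply a key-aware function per split by iterating over the dict explicitly
processed = DatasetDict({
    split_name: split.map(lambda example, name=split_name: {"source_split": name})
    for split_name, split in dsets.items()
})

print(processed["train"][0]["source_split"])  # 'train'
```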
769,383,714
1,591
IWSLT-17 Link Broken
closed
[]
2020-12-17T00:46:42
2020-12-18T08:06:36
2020-12-18T08:05:28
``` FileNotFoundError: Couldn't find file at https://wit3.fbk.eu/archive/2017-01-trnmted//texts/DeEnItNlRo/DeEnItNlRo/DeEnItNlRo-DeEnItNlRo.tgz ```
ZhaofengWu
https://github.com/huggingface/datasets/issues/1591
null
false
769,242,858
1,590
Add helper to resolve namespace collision
closed
[]
2020-12-16T20:17:24
2022-06-01T15:32:04
2022-06-01T15:32:04
Many projects use a module called `datasets`, however this is incompatible with huggingface datasets. It would be great if there was some helper or similar function to resolve such a common conflict.
jramapuram
https://github.com/huggingface/datasets/issues/1590
null
false
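A quick diagnostic for the collision described in #1590 (not an official helper) is to check which `datasets` module actually got imported:

```python
import sys
import datasets

# If this prints a path inside your project rather than site-packages,
# a local "datasets" module is shadowing the Hugging Face library.
print(datasets.__file__)

# The first sys.path entry is usually the script's directory, which is what
# makes the local module win; renaming the local package avoids the clash.
print(sys.path[0])
```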
769,187,141
1,589
Update doc2dial.py
closed
[]
2020-12-16T18:50:56
2022-07-06T15:19:57
2022-07-06T15:19:57
Added data loader for machine reading comprehension tasks proposed in the Doc2Dial EMNLP 2020 paper.
songfeng
https://github.com/huggingface/datasets/pull/1589
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1589", "html_url": "https://github.com/huggingface/datasets/pull/1589", "diff_url": "https://github.com/huggingface/datasets/pull/1589.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1589.patch", "merged_at": null }
true
769,068,227
1,588
Modified hind encorp
closed
[]
2020-12-16T16:28:14
2020-12-16T22:41:53
2020-12-16T17:20:28
Description added, unnecessary comments removed from the .py file, and README.md reformatted. @lhoestq, for #1584.
rahul-art
https://github.com/huggingface/datasets/pull/1588
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1588", "html_url": "https://github.com/huggingface/datasets/pull/1588", "diff_url": "https://github.com/huggingface/datasets/pull/1588.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1588.patch", "merged_at": "2020-12-16T17:20:28" }
true
768,929,877
1,587
Add nq_open question answering dataset
closed
[]
2020-12-16T14:22:08
2020-12-17T16:07:10
2020-12-17T16:07:10
This PR is a copy of #1506, due to the messed-up git history in that PR.
Nilanshrajput
https://github.com/huggingface/datasets/pull/1587
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1587", "html_url": "https://github.com/huggingface/datasets/pull/1587", "diff_url": "https://github.com/huggingface/datasets/pull/1587.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1587.patch", "merged_at": "2020-12-17T16:07:10" }
true
768,864,502
1,586
added irc disentangle dataset
closed
[]
2020-12-16T13:25:58
2021-01-29T10:28:53
2021-01-29T10:28:53
added irc disentanglement dataset
dhruvjoshi1998
https://github.com/huggingface/datasets/pull/1586
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1586", "html_url": "https://github.com/huggingface/datasets/pull/1586", "diff_url": "https://github.com/huggingface/datasets/pull/1586.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1586.patch", "merged_at": "2021-01-29T10:28:53" }
true
768,831,171
1,585
FileNotFoundError for `amazon_polarity`
closed
[]
2020-12-16T12:51:05
2020-12-16T16:02:56
2020-12-16T16:02:56
Version: `datasets==v1.1.3` ### Reproduction ```python from datasets import load_dataset data = load_dataset("amazon_polarity") ``` crashes with ```bash FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/amazon_polarity/amazon_polarity.py ``` and ```bash FileNotFoundError: Couldn't find file at https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/amazon_polarity/amazon_polarity.py ``` and ```bash FileNotFoundError: Couldn't find file locally at amazon_polarity/amazon_polarity.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/amazon_polarity/amazon_polarity.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/amazon_polarity/amazon_polarity.py ```
phtephanx
https://github.com/huggingface/datasets/issues/1585
null
false
768,820,406
1,584
Load hind encorp
closed
[]
2020-12-16T12:38:38
2020-12-18T02:27:24
2020-12-18T02:27:24
Code reformatted and well documented, YAML tags added.
rahul-art
https://github.com/huggingface/datasets/pull/1584
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1584", "html_url": "https://github.com/huggingface/datasets/pull/1584", "diff_url": "https://github.com/huggingface/datasets/pull/1584.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1584.patch", "merged_at": null }
true
768,795,986
1,583
Update metrics docstrings.
closed
[]
2020-12-16T12:14:18
2020-12-18T18:39:06
2020-12-18T18:39:06
#1478 Correcting the argument descriptions for metrics. Let me know if there are any issues.
Fraser-Greenlee
https://github.com/huggingface/datasets/pull/1583
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1583", "html_url": "https://github.com/huggingface/datasets/pull/1583", "diff_url": "https://github.com/huggingface/datasets/pull/1583.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1583.patch", "merged_at": "2020-12-18T18:39:06" }
true
768,776,617
1,582
Adding wiki lingua dataset as new branch
closed
[]
2020-12-16T11:53:07
2020-12-17T18:06:46
2020-12-17T18:06:45
Adding the dataset as new branch as advised here: #1470
katnoria
https://github.com/huggingface/datasets/pull/1582
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1582", "html_url": "https://github.com/huggingface/datasets/pull/1582", "diff_url": "https://github.com/huggingface/datasets/pull/1582.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1582.patch", "merged_at": "2020-12-17T18:06:45" }
true
768,320,594
1,581
Installing datasets and transformers in a tensorflow docker image throws Permission Error on 'import transformers'
closed
[]
2020-12-16T00:02:21
2021-06-17T15:40:45
2021-06-17T15:40:45
I am using a docker container, based on latest tensorflow-gpu image, to run transformers and datasets (4.0.1 and 1.1.3 respectively - Dockerfile attached below). Importing transformers throws a Permission Error to access `/.cache`: ``` $ docker run --gpus=all --rm -it -u $(id -u):$(id -g) -v $(pwd)/data:/root/data -v $(pwd):/root -v $(pwd)/models/:/root/models -v $(pwd)/saved_models/:/root/saved_models -e "HOST_HOSTNAME=$(hostname)" hf-error:latest /bin/bash ________ _______________ ___ __/__________________________________ ____/__ /________ __ __ / _ _ \_ __ \_ ___/ __ \_ ___/_ /_ __ /_ __ \_ | /| / / _ / / __/ / / /(__ )/ /_/ / / _ __/ _ / / /_/ /_ |/ |/ / /_/ \___//_/ /_//____/ \____//_/ /_/ /_/ \____/____/|__/ You are running this container as user with ID 1000 and group 1000, which should map to the ID and group for your user on the Docker host. Great! tf-docker /root > python Python 3.6.9 (default, Oct 8 2020, 12:12:24) [GCC 8.4.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import transformers 2020-12-15 23:53:21.165827: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0 Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.6/dist-packages/transformers/__init__.py", line 22, in <module> from .integrations import ( # isort:skip File "/usr/local/lib/python3.6/dist-packages/transformers/integrations.py", line 5, in <module> from .trainer_utils import EvaluationStrategy File "/usr/local/lib/python3.6/dist-packages/transformers/trainer_utils.py", line 25, in <module> from .file_utils import is_tf_available, is_torch_available, is_torch_tpu_available File "/usr/local/lib/python3.6/dist-packages/transformers/file_utils.py", line 88, in <module> import datasets # noqa: F401 File "/usr/local/lib/python3.6/dist-packages/datasets/__init__.py", line 26, in <module> from .arrow_dataset import Dataset, concatenate_datasets File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py", line 40, in <module> from .arrow_reader import ArrowReader File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_reader.py", line 31, in <module> from .utils import cached_path, logging File "/usr/local/lib/python3.6/dist-packages/datasets/utils/__init__.py", line 20, in <module> from .download_manager import DownloadManager, GenerateMode File "/usr/local/lib/python3.6/dist-packages/datasets/utils/download_manager.py", line 25, in <module> from .file_utils import HF_DATASETS_CACHE, cached_path, get_from_cache, hash_url_to_filename File "/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py", line 118, in <module> os.makedirs(HF_MODULES_CACHE, exist_ok=True) File "/usr/lib/python3.6/os.py", line 210, in makedirs makedirs(head, mode, exist_ok) File "/usr/lib/python3.6/os.py", line 210, in makedirs makedirs(head, mode, exist_ok) File "/usr/lib/python3.6/os.py", line 220, in makedirs mkdir(name, mode) PermissionError: [Errno 13] Permission denied: '/.cache' ``` I've pinned the problem to `RUN pip install datasets`, and by commenting it you can actually import transformers correctly. Another workaround I've found is creating the directory and giving permissions to it directly on the Dockerfile. 
```
FROM tensorflow/tensorflow:latest-gpu-jupyter

WORKDIR /root
EXPOSE 80
EXPOSE 8888
EXPOSE 6006

ENV SHELL /bin/bash
ENV PATH="/root/.local/bin:${PATH}"
ENV CUDA_CACHE_PATH="/root/cache/cuda"
ENV CUDA_CACHE_MAXSIZE="4294967296"
ENV TFHUB_CACHE_DIR="/root/cache/tfhub"

RUN pip install --upgrade pip
RUN apt update -y && apt upgrade -y
RUN pip install transformers
# Installing datasets will throw the error, try commenting and rebuilding
RUN pip install datasets
# Another workaround is creating the directory and giving permissions explicitly
# RUN mkdir /.cache
# RUN chmod 777 /.cache
```
eduardofv
https://github.com/huggingface/datasets/issues/1581
null
false
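Since the crash in #1581 happens while `datasets` creates its cache directories under an unwritable default location, another workaround is pointing the caches at a writable path before the import; a sketch, assuming the installed version honours the `HF_HOME` / `HF_DATASETS_CACHE` environment variables:

```python
import os

# Redirect the Hugging Face caches to a writable location *before* importing the libraries
os.environ["HF_HOME"] = "/root/cache/huggingface"         # assumed to be honoured
os.environ["HF_DATASETS_CACHE"] = "/root/cache/datasets"  # assumed to be honoured

import datasets
import transformers
```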
768,111,377
1,580
made suggested changes in diplomacy_detection.py
closed
[]
2020-12-15T19:52:00
2020-12-16T10:27:52
2020-12-16T10:27:52
MisbahKhan789
https://github.com/huggingface/datasets/pull/1580
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1580", "html_url": "https://github.com/huggingface/datasets/pull/1580", "diff_url": "https://github.com/huggingface/datasets/pull/1580.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1580.patch", "merged_at": "2020-12-16T10:27:52" }
true
767,808,465
1,579
Adding CLIMATE-FEVER dataset
closed
[]
2020-12-15T16:49:22
2020-12-22T13:43:16
2020-12-22T13:43:15
This PR request the addition of the CLIMATE-FEVER dataset: A dataset adopting the FEVER methodology that consists of 1,535 real-world claims regarding climate-change collected on the internet. Each claim is accompanied by five manually annotated evidence sentences retrieved from the English Wikipedia that support, refute or do not give enough information to validate the claim totalling in 7,675 claim-evidence pairs. The dataset features challenging claims that relate multiple facets and disputed cases of claims where both supporting and refuting evidence are present. More information can be found at: - Homepage: <http://climatefever.ai> - Paper: <https://arxiv.org/abs/2012.00614>
tdiggelm
https://github.com/huggingface/datasets/pull/1579
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1579", "html_url": "https://github.com/huggingface/datasets/pull/1579", "diff_url": "https://github.com/huggingface/datasets/pull/1579.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1579.patch", "merged_at": null }
true
767,760,513
1,578
update multiwozv22 checksums
closed
[]
2020-12-15T16:13:52
2020-12-15T17:06:29
2020-12-15T17:06:29
a file was updated on the GitHub repo for the dataset
yjernite
https://github.com/huggingface/datasets/pull/1578
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1578", "html_url": "https://github.com/huggingface/datasets/pull/1578", "diff_url": "https://github.com/huggingface/datasets/pull/1578.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1578.patch", "merged_at": "2020-12-15T17:06:29" }
true
767,342,432
1,577
Add comet metric
closed
[]
2020-12-15T08:56:00
2021-01-14T13:33:10
2021-01-14T13:33:10
Hey! I decided to add our new Crosslingual Optimized Metric for Evaluation of Translation (COMET) to the list of the available metrics. COMET was [presented at EMNLP20](https://www.aclweb.org/anthology/2020.emnlp-main.213/) and it is the highest performing metric, so far, on the WMT19 benchmark. We also participated in the [WMT20 Metrics shared task](http://www.statmt.org/wmt20/pdf/2020.wmt-1.101.pdf) where once again COMET was validated as a top-performing metric. I hope that this metric will help researchers and industry workers to better validate their MT systems in the future 🤗 ! Cheers, Ricardo
ricardorei
https://github.com/huggingface/datasets/pull/1577
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1577", "html_url": "https://github.com/huggingface/datasets/pull/1577", "diff_url": "https://github.com/huggingface/datasets/pull/1577.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1577.patch", "merged_at": "2021-01-14T13:33:10" }
true
767,080,645
1,576
Remove the contributors section
closed
[]
2020-12-15T01:47:15
2020-12-15T12:53:47
2020-12-15T12:53:46
sourcerer is down
clmnt
https://github.com/huggingface/datasets/pull/1576
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1576", "html_url": "https://github.com/huggingface/datasets/pull/1576", "diff_url": "https://github.com/huggingface/datasets/pull/1576.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1576.patch", "merged_at": "2020-12-15T12:53:46" }
true
767,076,374
1,575
Hind_Encorp all done
closed
[]
2020-12-15T01:36:02
2020-12-16T15:15:17
2020-12-16T15:15:17
rahul-art
https://github.com/huggingface/datasets/pull/1575
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1575", "html_url": "https://github.com/huggingface/datasets/pull/1575", "diff_url": "https://github.com/huggingface/datasets/pull/1575.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1575.patch", "merged_at": null }
true
767,015,317
1,574
Diplomacy detection 3
closed
[]
2020-12-14T23:28:51
2020-12-14T23:29:32
2020-12-14T23:29:32
MisbahKhan789
https://github.com/huggingface/datasets/pull/1574
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1574", "html_url": "https://github.com/huggingface/datasets/pull/1574", "diff_url": "https://github.com/huggingface/datasets/pull/1574.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1574.patch", "merged_at": null }
true
767,011,938
1,573
adding dataset for diplomacy detection-2
closed
[]
2020-12-14T23:21:37
2020-12-14T23:36:57
2020-12-14T23:36:57
MisbahKhan789
https://github.com/huggingface/datasets/pull/1573
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1573", "html_url": "https://github.com/huggingface/datasets/pull/1573", "diff_url": "https://github.com/huggingface/datasets/pull/1573.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1573.patch", "merged_at": null }
true
767,008,470
1,572
add Gnad10 dataset
closed
[]
2020-12-14T23:15:02
2021-09-17T16:54:37
2020-12-16T16:52:30
reference [PR#1317](https://github.com/huggingface/datasets/pull/1317)
stevhliu
https://github.com/huggingface/datasets/pull/1572
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1572", "html_url": "https://github.com/huggingface/datasets/pull/1572", "diff_url": "https://github.com/huggingface/datasets/pull/1572.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1572.patch", "merged_at": "2020-12-16T16:52:30" }
true
766,981,721
1,571
Fixing the KILT tasks to match our current standards
closed
[]
2020-12-14T22:26:12
2020-12-14T23:07:41
2020-12-14T23:07:41
This introduces a few changes to the Knowledge Intensive Learning task benchmark to bring it more in line with our current datasets, including adding the (minimal) dataset card and having one config per sub-task
yjernite
https://github.com/huggingface/datasets/pull/1571
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1571", "html_url": "https://github.com/huggingface/datasets/pull/1571", "diff_url": "https://github.com/huggingface/datasets/pull/1571.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1571.patch", "merged_at": "2020-12-14T23:07:41" }
true
766,830,545
1,570
Documentation for loading CSV datasets misleads the user
closed
[]
2020-12-14T19:04:37
2020-12-22T19:30:12
2020-12-21T13:47:09
Documentation for loading CSV datasets misleads the user into thinking that setting `quote_char` to False will disable quoting. There are two problems here: i) `quote_char` is misspelled, it must be `quotechar`; ii) the documentation should mention `quoting`.
onurgu
https://github.com/huggingface/datasets/pull/1570
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1570", "html_url": "https://github.com/huggingface/datasets/pull/1570", "diff_url": "https://github.com/huggingface/datasets/pull/1570.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1570.patch", "merged_at": "2020-12-21T13:47:09" }
true
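For context, the CSV loader forwards these arguments to the underlying parser, so the corrected documentation corresponds to calls like the following sketch (using the `quotechar` / `quoting` spellings the PR points out; `data.csv` is a placeholder path):

```python
import csv
from datasets import load_dataset

dataset = load_dataset(
    "csv",
    data_files="data.csv",      # placeholder path
    quotechar='"',              # note: quotechar, not quote_char
    quoting=csv.QUOTE_MINIMAL,  # use csv.QUOTE_NONE to disable quoting entirely
)
```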
766,758,895
1,569
added un_ga dataset
closed
[]
2020-12-14T17:42:04
2020-12-15T15:28:58
2020-12-15T15:28:58
Hi :hugs:, This is a PR for [United nations general assembly resolutions: A six-language parallel corpus](http://opus.nlpl.eu/UN.php) dataset. With suggested changes in #1330
param087
https://github.com/huggingface/datasets/pull/1569
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1569", "html_url": "https://github.com/huggingface/datasets/pull/1569", "diff_url": "https://github.com/huggingface/datasets/pull/1569.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1569.patch", "merged_at": "2020-12-15T15:28:58" }
true
766,722,994
1,568
Added the dataset clickbait_news_bg
closed
[]
2020-12-14T17:03:00
2020-12-15T18:28:56
2020-12-15T18:28:56
There was a problem with my [previous PR 1445](https://github.com/huggingface/datasets/pull/1445) after rebasing, so I'm copying the dataset code into a new branch and submitting a new PR.
tsvm
https://github.com/huggingface/datasets/pull/1568
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1568", "html_url": "https://github.com/huggingface/datasets/pull/1568", "diff_url": "https://github.com/huggingface/datasets/pull/1568.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1568.patch", "merged_at": "2020-12-15T18:28:56" }
true
766,382,609
1,567
[wording] Update Readme.md
closed
[]
2020-12-14T12:34:52
2020-12-15T12:54:07
2020-12-15T12:54:06
Make the features of the library clearer.
thomwolf
https://github.com/huggingface/datasets/pull/1567
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1567", "html_url": "https://github.com/huggingface/datasets/pull/1567", "diff_url": "https://github.com/huggingface/datasets/pull/1567.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1567.patch", "merged_at": "2020-12-15T12:54:06" }
true
766,354,236
1,566
Add Microsoft Research Sequential Question Answering (SQA) Dataset
closed
[]
2020-12-14T12:02:30
2020-12-15T15:24:22
2020-12-15T15:24:22
For more information: https://msropendata.com/datasets/b25190ed-0f59-47b1-9211-5962858142c2
mattbui
https://github.com/huggingface/datasets/pull/1566
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1566", "html_url": "https://github.com/huggingface/datasets/pull/1566", "diff_url": "https://github.com/huggingface/datasets/pull/1566.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1566.patch", "merged_at": "2020-12-15T15:24:22" }
true
766,333,940
1,565
Create README.md
closed
[]
2020-12-14T11:40:23
2021-03-25T14:01:49
2021-03-25T14:01:49
ManuelFay
https://github.com/huggingface/datasets/pull/1565
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1565", "html_url": "https://github.com/huggingface/datasets/pull/1565", "diff_url": "https://github.com/huggingface/datasets/pull/1565.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1565.patch", "merged_at": "2021-03-25T14:01:49" }
true
766,266,609
1,564
added saudinewsnet
closed
[]
2020-12-14T10:35:09
2020-12-22T09:51:04
2020-12-22T09:51:04
I'm having issues creating the dummy data. I'm still investigating how to fix it. I'll close the PR if I can't find a solution.
abdulelahsm
https://github.com/huggingface/datasets/pull/1564
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1564", "html_url": "https://github.com/huggingface/datasets/pull/1564", "diff_url": "https://github.com/huggingface/datasets/pull/1564.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1564.patch", "merged_at": "2020-12-22T09:51:04" }
true
766,211,931
1,563
adding tmu-gfm-dataset
closed
[]
2020-12-14T09:45:30
2020-12-21T10:21:04
2020-12-21T10:07:13
Adding TMU-GFM-Dataset for Grammatical Error Correction. https://github.com/tmu-nlp/TMU-GFM-Dataset A dataset for GEC metrics with manual evaluations of grammaticality, fluency, and meaning preservation for system outputs. More detail about the creation of the dataset can be found in [Yoshimura et al. (2020)](https://www.aclweb.org/anthology/2020.coling-main.573.pdf).
forest1988
https://github.com/huggingface/datasets/pull/1563
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1563", "html_url": "https://github.com/huggingface/datasets/pull/1563", "diff_url": "https://github.com/huggingface/datasets/pull/1563.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1563.patch", "merged_at": "2020-12-21T10:07:13" }
true
765,981,749
1,562
Add dataset COrpus of Urdu News TExt Reuse (COUNTER).
closed
[]
2020-12-14T06:32:48
2020-12-21T13:14:46
2020-12-21T13:14:46
arkhalid
https://github.com/huggingface/datasets/pull/1562
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1562", "html_url": "https://github.com/huggingface/datasets/pull/1562", "diff_url": "https://github.com/huggingface/datasets/pull/1562.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1562.patch", "merged_at": "2020-12-21T13:14:46" }
true
765,831,436
1,561
Lama
closed
[]
2020-12-14T03:27:10
2020-12-28T09:51:47
2020-12-28T09:51:47
This is the LAMA dataset for probing facts and common sense from language models. See https://github.com/facebookresearch/LAMA for more details.
huu4ontocord
https://github.com/huggingface/datasets/pull/1561
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1561", "html_url": "https://github.com/huggingface/datasets/pull/1561", "diff_url": "https://github.com/huggingface/datasets/pull/1561.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1561.patch", "merged_at": "2020-12-28T09:51:47" }
true
765,814,964
1,560
Adding the BrWaC dataset
closed
[]
2020-12-14T03:03:56
2020-12-18T15:56:56
2020-12-18T15:56:55
Adding the BrWaC dataset, a large corpus of Portuguese language texts
jonatasgrosman
https://github.com/huggingface/datasets/pull/1560
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1560", "html_url": "https://github.com/huggingface/datasets/pull/1560", "diff_url": "https://github.com/huggingface/datasets/pull/1560.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1560.patch", "merged_at": "2020-12-18T15:56:55" }
true
765,714,183
1,559
adding dataset card information to CONTRIBUTING.md
closed
[]
2020-12-14T00:08:43
2020-12-14T17:55:03
2020-12-14T17:55:03
Added a documentation line and a link to the full sprint guide in the "How to add a dataset" section, a section on how to contribute to the dataset card of an existing dataset, and a thank-you note at the end :hugs:
yjernite
https://github.com/huggingface/datasets/pull/1559
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1559", "html_url": "https://github.com/huggingface/datasets/pull/1559", "diff_url": "https://github.com/huggingface/datasets/pull/1559.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1559.patch", "merged_at": "2020-12-14T17:55:03" }
true
765,707,907
1,558
Adding Igbo NER data
closed
[]
2020-12-13T23:52:11
2020-12-21T14:38:20
2020-12-21T14:38:20
This PR adds the Igbo NER dataset. Data: https://github.com/IgnatiusEzeani/IGBONLP/tree/master/ig_ner
purvimisal
https://github.com/huggingface/datasets/pull/1558
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1558", "html_url": "https://github.com/huggingface/datasets/pull/1558", "diff_url": "https://github.com/huggingface/datasets/pull/1558.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1558.patch", "merged_at": "2020-12-21T14:38:20" }
true
765,693,927
1,557
HindEncorp committed again
closed
[]
2020-12-13T23:09:02
2020-12-15T10:37:05
2020-12-15T10:37:04
rahul-art
https://github.com/huggingface/datasets/pull/1557
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1557", "html_url": "https://github.com/huggingface/datasets/pull/1557", "diff_url": "https://github.com/huggingface/datasets/pull/1557.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1557.patch", "merged_at": null }
true
765,689,730
1,556
add bswac
closed
[]
2020-12-13T22:55:35
2020-12-18T15:14:28
2020-12-18T15:14:27
IvanZidov
https://github.com/huggingface/datasets/pull/1556
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1556", "html_url": "https://github.com/huggingface/datasets/pull/1556", "diff_url": "https://github.com/huggingface/datasets/pull/1556.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1556.patch", "merged_at": "2020-12-18T15:14:27" }
true
765,681,607
1,555
Added Opus TedTalks
closed
[]
2020-12-13T22:29:33
2020-12-18T09:44:43
2020-12-18T09:44:43
Dataset : http://opus.nlpl.eu/TedTalks.php
rkc007
https://github.com/huggingface/datasets/pull/1555
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1555", "html_url": "https://github.com/huggingface/datasets/pull/1555", "diff_url": "https://github.com/huggingface/datasets/pull/1555.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1555.patch", "merged_at": "2020-12-18T09:44:43" }
true
765,675,148
1,554
Opus CAPES added
closed
[]
2020-12-13T22:11:34
2020-12-18T09:54:57
2020-12-18T08:46:59
Dataset : http://opus.nlpl.eu/CAPES.php
rkc007
https://github.com/huggingface/datasets/pull/1554
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1554", "html_url": "https://github.com/huggingface/datasets/pull/1554", "diff_url": "https://github.com/huggingface/datasets/pull/1554.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1554.patch", "merged_at": null }
true
765,670,083
1,553
added air_dialogue
closed
[]
2020-12-13T21:59:02
2020-12-23T11:20:40
2020-12-23T11:20:39
UPDATE2 (3797ce5): Updated for multiple configs. UPDATE (7018082): Manually created the dummy data; all tests pass locally, so it has been pushed to origin/master. DRAFT VERSION (57fdb20): (_no longer a draft_) Uploaded the air_dialogue dataset. Dummy data creation was failing locally because the original downloaded file contains some nested folders. Pushing it since the tests with real data pass. Will re-check and update by manually creating some dummy data.
skyprince999
https://github.com/huggingface/datasets/pull/1553
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1553", "html_url": "https://github.com/huggingface/datasets/pull/1553", "diff_url": "https://github.com/huggingface/datasets/pull/1553.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1553.patch", "merged_at": "2020-12-23T11:20:39" }
true
765,664,411
1,552
Added OPUS ParaCrawl
closed
[]
2020-12-13T21:44:29
2020-12-21T09:50:26
2020-12-21T09:50:25
Dataset : http://opus.nlpl.eu/ParaCrawl.php
rkc007
https://github.com/huggingface/datasets/pull/1552
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1552", "html_url": "https://github.com/huggingface/datasets/pull/1552", "diff_url": "https://github.com/huggingface/datasets/pull/1552.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1552.patch", "merged_at": "2020-12-21T09:50:25" }
true
765,621,879
1,551
Monero
closed
[]
2020-12-13T19:56:48
2022-10-03T09:38:35
2022-10-03T09:38:35
Biomedical Romanian dataset :)
iliemihai
https://github.com/huggingface/datasets/pull/1551
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1551", "html_url": "https://github.com/huggingface/datasets/pull/1551", "diff_url": "https://github.com/huggingface/datasets/pull/1551.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1551.patch", "merged_at": null }
true
765,620,925
1,550
Add offensive language Dravidian dataset
closed
[]
2020-12-13T19:54:19
2020-12-18T15:52:49
2020-12-18T14:25:30
jamespaultg
https://github.com/huggingface/datasets/pull/1550
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1550", "html_url": "https://github.com/huggingface/datasets/pull/1550", "diff_url": "https://github.com/huggingface/datasets/pull/1550.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1550.patch", "merged_at": "2020-12-18T14:25:30" }
true
765,612,905
1,549
Generics kb new branch
closed
[]
2020-12-13T19:33:10
2020-12-21T13:55:09
2020-12-21T13:55:09
The dataset needs a manual download, so I have also created dummy data, but pytest is failing on both the real and the dummy data. I have completed the README, tags, and the other required items, and I will create the metadata JSON once the tests pass. Opening this PR while working with Yacine Jernite to resolve my pytest issues.
bpatidar
https://github.com/huggingface/datasets/pull/1549
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1549", "html_url": "https://github.com/huggingface/datasets/pull/1549", "diff_url": "https://github.com/huggingface/datasets/pull/1549.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1549.patch", "merged_at": "2020-12-21T13:55:09" }
true
765,592,336
1,548
Fix `🤗Datasets` - `tfds` differences link + a few aesthetics
closed
[]
2020-12-13T18:48:21
2020-12-15T12:55:27
2020-12-15T12:55:27
VIVelev
https://github.com/huggingface/datasets/pull/1548
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1548", "html_url": "https://github.com/huggingface/datasets/pull/1548", "diff_url": "https://github.com/huggingface/datasets/pull/1548.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1548.patch", "merged_at": "2020-12-15T12:55:27" }
true
765,562,792
1,547
Adding PolEval2019 Machine Translation Task dataset
closed
[]
2020-12-13T17:50:03
2023-04-03T09:20:23
2020-12-21T16:13:21
Facing an error with pytest in training. Dummy data is passing. README has to be updated.
vrindaprabhu
https://github.com/huggingface/datasets/pull/1547
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1547", "html_url": "https://github.com/huggingface/datasets/pull/1547", "diff_url": "https://github.com/huggingface/datasets/pull/1547.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1547.patch", "merged_at": "2020-12-21T16:13:21" }
true
765,559,923
1,546
Add persian ner dataset
closed
[]
2020-12-13T17:45:48
2020-12-23T09:53:03
2020-12-23T09:53:03
Adding the following dataset: https://github.com/HaniehP/PersianNER
KMFODA
https://github.com/huggingface/datasets/pull/1546
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1546", "html_url": "https://github.com/huggingface/datasets/pull/1546", "diff_url": "https://github.com/huggingface/datasets/pull/1546.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1546.patch", "merged_at": "2020-12-23T09:53:03" }
true
765,550,283
1,545
add hrwac
closed
[]
2020-12-13T17:31:54
2020-12-18T13:35:17
2020-12-18T13:35:17
IvanZidov
https://github.com/huggingface/datasets/pull/1545
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1545", "html_url": "https://github.com/huggingface/datasets/pull/1545", "diff_url": "https://github.com/huggingface/datasets/pull/1545.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1545.patch", "merged_at": "2020-12-18T13:35:17" }
true
765,514,828
1,544
Added Wiki Summary Dataset
closed
[]
2020-12-13T16:33:46
2020-12-18T16:20:06
2020-12-18T16:17:18
Wiki Summary: a dataset extracted from Persian Wikipedia in the form of articles and highlights. Link: https://github.com/m3hrdadfi/wiki-summary
tanmoyio
https://github.com/huggingface/datasets/pull/1544
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1544", "html_url": "https://github.com/huggingface/datasets/pull/1544", "diff_url": "https://github.com/huggingface/datasets/pull/1544.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1544.patch", "merged_at": "2020-12-18T16:17:18" }
true
765,476,196
1,543
adding HindEncorp
closed
[]
2020-12-13T15:39:07
2020-12-13T23:35:53
2020-12-13T23:35:53
adding Hindi Wikipedia corpus
rahul-art
https://github.com/huggingface/datasets/pull/1543
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1543", "html_url": "https://github.com/huggingface/datasets/pull/1543", "diff_url": "https://github.com/huggingface/datasets/pull/1543.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1543.patch", "merged_at": null }
true
765,439,746
1,542
fix typo readme
closed
[]
2020-12-13T14:41:22
2020-12-13T17:16:41
2020-12-13T17:16:40
clmnt
https://github.com/huggingface/datasets/pull/1542
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1542", "html_url": "https://github.com/huggingface/datasets/pull/1542", "diff_url": "https://github.com/huggingface/datasets/pull/1542.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1542.patch", "merged_at": "2020-12-13T17:16:40" }
true
765,430,586
1,541
connection issue while downloading data
closed
[]
2020-12-13T14:27:00
2022-10-05T12:33:29
2022-10-05T12:33:29
Hi, I am running my code on Google Cloud and it fails with the error below when trying to download the data. Could you help me resolve this? As a temporary workaround, could you also tell me how to increase the number of retries and the timeout so the models can at least run for now (see the retry sketch after this record)? Thanks. ``` Traceback (most recent call last): File "finetune_t5_trainer.py", line 361, in <module> main() File "finetune_t5_trainer.py", line 269, in main add_prefix=False if training_args.train_adapters else True) File "/workdir/seq2seq/data/tasks.py", line 70, in get_dataset dataset = self.load_dataset(split=split) File "/workdir/seq2seq/data/tasks.py", line 306, in load_dataset return datasets.load_dataset('glue', 'cola', split=split) File "/usr/local/lib/python3.6/dist-packages/datasets/load.py", line 589, in load_dataset path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True File "/usr/local/lib/python3.6/dist-packages/datasets/load.py", line 263, in prepare_module head_hf_s3(path, filename=name, dataset=dataset) File "/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py", line 200, in head_hf_s3 return http_head(hf_bucket_url(identifier=identifier, filename=filename, use_cdn=use_cdn, dataset=dataset)) File "/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py", line 403, in http_head url, proxies=proxies, headers=headers, cookies=cookies, allow_redirects=allow_redirects, timeout=timeout File "/usr/local/lib/python3.6/dist-packages/requests/api.py", line 104, in head return request('head', url, **kwargs) File "/usr/local/lib/python3.6/dist-packages/requests/api.py", line 61, in request return session.request(method=method, url=url, **kwargs) File "/usr/local/lib/python3.6/dist-packages/requests/sessions.py", line 542, in request resp = self.send(prep, **send_kwargs) File "/usr/local/lib/python3.6/dist-packages/requests/sessions.py", line 655, in send r = adapter.send(request, **kwargs) File "/usr/local/lib/python3.6/dist-packages/requests/adapters.py", line 504, in send raise ConnectTimeout(e, request=request) requests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /datasets.huggingface.co/datasets/datasets/glue/glue.py (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7f47db511e80>, 'Connection to s3.amazonaws.com timed out. (connect timeout=10)')) ```
rabeehkarimimahabadi
https://github.com/huggingface/datasets/issues/1541
null
false
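The issue above asks how to increase retries and timeouts while the network is flaky. As a hedged illustration only (the `datasets` version shown in the traceback is not confirmed here to expose a retry option), one workaround is to wrap `datasets.load_dataset` in a small retry loop; `load_with_retries`, `max_attempts`, and `wait_seconds` below are hypothetical helper names for this sketch, not part of the `datasets` API.

```python
import time

import requests
from datasets import load_dataset


def load_with_retries(*args, max_attempts=5, wait_seconds=30, **kwargs):
    """Retry load_dataset on transient network failures.

    Hypothetical helper: `max_attempts` and `wait_seconds` are knobs of this
    sketch, not options of the `datasets` library itself.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return load_dataset(*args, **kwargs)
        except (ConnectionError, requests.exceptions.RequestException) as err:
            # Covers both the builtin ConnectionError raised by `datasets`
            # and requests' ConnectTimeout seen in the traceback above.
            if attempt == max_attempts:
                raise
            print(f"Attempt {attempt} failed ({err!r}); retrying in {wait_seconds}s")
            time.sleep(wait_seconds)


# Usage mirroring the failing call in the traceback above:
# dataset = load_with_retries("glue", "cola", split="train")
```

Swapping the failing `datasets.load_dataset('glue', 'cola', split=split)` call for this wrapper would simply re-attempt the download after transient timeout-style failures instead of aborting the whole run.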
765,357,702
1,540
added TTC4900: A Benchmark Data for Turkish Text Categorization dataset
closed
[]
2020-12-13T12:43:33
2020-12-18T10:09:01
2020-12-18T10:09:01
This PR adds the TTC4900 dataset, a Turkish text categorization dataset created by @basakbuluz and me. Homepage: [https://www.kaggle.com/savasy/ttc4900](https://www.kaggle.com/savasy/ttc4900) Point of Contact: [Savaş Yıldırım](mailto:savasy@gmail.com) / @savasy
yavuzKomecoglu
https://github.com/huggingface/datasets/pull/1540
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1540", "html_url": "https://github.com/huggingface/datasets/pull/1540", "diff_url": "https://github.com/huggingface/datasets/pull/1540.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1540.patch", "merged_at": "2020-12-18T10:09:01" }
true
765,338,910
1,539
Added Wiki Asp dataset
closed
[]
2020-12-13T12:18:34
2020-12-22T10:16:01
2020-12-22T10:16:01
Hello, I have added the Wiki Asp dataset. Please review the PR.
katnoria
https://github.com/huggingface/datasets/pull/1539
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1539", "html_url": "https://github.com/huggingface/datasets/pull/1539", "diff_url": "https://github.com/huggingface/datasets/pull/1539.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1539.patch", "merged_at": null }
true
765,139,739
1,538
tweets_hate_speech_detection
closed
[]
2020-12-13T07:37:53
2020-12-21T15:54:28
2020-12-21T15:54:27
darshan-gandhi
https://github.com/huggingface/datasets/pull/1538
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1538", "html_url": "https://github.com/huggingface/datasets/pull/1538", "diff_url": "https://github.com/huggingface/datasets/pull/1538.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1538.patch", "merged_at": null }
true
765,095,210
1,537
added ohsumed
closed
[]
2020-12-13T06:58:23
2020-12-17T18:28:16
2020-12-17T18:28:16
UPDATE2: The PR passed all tests and is now awaiting review. UPDATE: Pushed a new version; fingers crossed that it clears all the tests! :) If it passes all tests then it is no longer a draft. This is a draft version.
skyprince999
https://github.com/huggingface/datasets/pull/1537
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1537", "html_url": "https://github.com/huggingface/datasets/pull/1537", "diff_url": "https://github.com/huggingface/datasets/pull/1537.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1537.patch", "merged_at": "2020-12-17T18:28:16" }
true
765,043,121
1,536
Add Hippocorpus Dataset
closed
[]
2020-12-13T06:13:02
2020-12-15T13:41:17
2020-12-15T13:40:11
manandey
https://github.com/huggingface/datasets/pull/1536
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1536", "html_url": "https://github.com/huggingface/datasets/pull/1536", "diff_url": "https://github.com/huggingface/datasets/pull/1536.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1536.patch", "merged_at": "2020-12-15T13:40:11" }
true
764,977,542
1,535
Adding Igbo monolingual dataset
closed
[]
2020-12-13T05:16:37
2020-12-21T14:39:49
2020-12-21T14:39:49
This PR adds the Igbo Monolingual dataset. Data: https://github.com/IgnatiusEzeani/IGBONLP/tree/master/ig_monoling Paper: https://arxiv.org/abs/2004.00648
purvimisal
https://github.com/huggingface/datasets/pull/1535
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1535", "html_url": "https://github.com/huggingface/datasets/pull/1535", "diff_url": "https://github.com/huggingface/datasets/pull/1535.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1535.patch", "merged_at": "2020-12-21T14:39:48" }
true