| Column | Type | Range / values |
| --- | --- | --- |
| url | string | lengths 58 to 61 |
| repository_url | string | 1 unique value |
| labels_url | string | lengths 72 to 75 |
| comments_url | string | lengths 67 to 70 |
| events_url | string | lengths 65 to 68 |
| html_url | string | lengths 48 to 51 |
| id | int64 | 600M to 3.09B |
| node_id | string | lengths 18 to 24 |
| number | int64 | 2 to 7.59k |
| title | string | lengths 1 to 290 |
| user | dict | |
| labels | list | lengths 0 to 4 |
| state | string | 1 unique value |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | lengths 0 to 4 |
| milestone | dict | |
| comments | list | lengths 0 to 30 |
| created_at | timestamp[ns, tz=UTC] | 2020-04-14 18:18:51 to 2025-05-27 13:46:05 |
| updated_at | timestamp[ns, tz=UTC] | 2020-04-29 09:23:05 to 2025-06-09 22:00:16 |
| closed_at | timestamp[ns, tz=UTC] | 2020-04-29 09:23:05 to 2025-06-06 16:12:36 |
| author_association | string | 4 unique values |
| type | float64 | |
| active_lock_reason | float64 | |
| sub_issues_summary | dict | |
| body | string | lengths 0 to 228k, may be null |
| closed_by | dict | |
| reactions | dict | |
| timeline_url | string | lengths 67 to 70 |
| performed_via_github_app | float64 | |
| state_reason | string | 3 unique values |
| draft | float64 | |
| pull_request | null | |
| time_to_close_hours | float64 | 0.01 to 28.8k |
| __index_level_0__ | int64 | 18 to 7.53k |
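A row of this dataset can be modeled as a plain Python record. The sketch below is illustrative only (the `IssueRow` class and the field subset are not part of the dataset); it checks a couple of the ranges listed in the table above against the first sample record.

```python
from dataclasses import dataclass


@dataclass
class IssueRow:
    # Illustrative subset of the columns in the schema table above.
    url: str
    number: int
    title: str
    state: str
    time_to_close_hours: float


# Values copied from the first sample record (issue 1743).
row = IssueRow(
    url="https://api.github.com/repos/huggingface/datasets/issues/1743",
    number=1743,
    title="Issue while Creating Custom Metric",
    state="closed",
    time_to_close_hours=12008.805556,
)

# The url column is listed with lengths 58 to 61, and state has a single class.
assert 58 <= len(row.url) <= 61
assert row.state == "closed"
```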
#### Issue 1743
- url: https://api.github.com/repos/huggingface/datasets/issues/1743
- repository_url: https://api.github.com/repos/huggingface/datasets
- labels_url: https://api.github.com/repos/huggingface/datasets/issues/1743/labels{/name}
- comments_url: https://api.github.com/repos/huggingface/datasets/issues/1743/comments
- events_url: https://api.github.com/repos/huggingface/datasets/issues/1743/events
- html_url: https://github.com/huggingface/datasets/issues/1743
- id: 787,631,412; node_id: MDU6SXNzdWU3ODc2MzE0MTI=; number: 1,743
- title: Issue while Creating Custom Metric
- user: { "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url"...
- labels: []; state: closed; locked: false
- assignee: null; assignees: []; milestone: null
- comments: [ "Currently it's only possible to define the features for the two columns `references` and `predictions`.\r\nThe data for these columns can then be passed to `metric.add_batch` and `metric.compute`.\r\nInstead of defining more columns `text`, `offset_mapping` and `ground` you must include them in either references a...
- created_at: 2021-01-17T07:01:14Z; updated_at: 2022-06-01T15:49:34Z; closed_at: 2022-06-01T15:49:34Z
- author_association: CONTRIBUTOR; type: null; active_lock_reason: null
- sub_issues_summary: { "completed": 0, "percent_completed": 0, "total": 0 }
- body: Hi Team, I am trying to create a custom metric for my training as follows, where f1 is my own metric: ```python def _info(self): # TODO: Specifies the datasets.MetricInfo object return datasets.MetricInfo( # This is the description that will appear on the metrics page. ...
- closed_by: { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url"...
- reactions: { "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1743/reactions" }
- timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1743/timeline
- performed_via_github_app: null; state_reason: completed; draft: null; pull_request: null
- time_to_close_hours: 12,008.805556; __index_level_0__: 5,805
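The derived `time_to_close_hours` column appears to be the elapsed time between `created_at` and `closed_at` expressed in hours. A minimal sketch (an assumption about how the column was produced, not code from the dataset) reproducing the value for issue 1743 above:

```python
from datetime import datetime, timezone


def time_to_close_hours(created_at: str, closed_at: str) -> float:
    """Elapsed hours between two ISO-8601 UTC timestamps like 2021-01-17T07:01:14Z."""
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    created = datetime.strptime(created_at, fmt).replace(tzinfo=timezone.utc)
    closed = datetime.strptime(closed_at, fmt).replace(tzinfo=timezone.utc)
    return (closed - created).total_seconds() / 3600


# Timestamps copied from the issue 1743 record above.
hours = time_to_close_hours("2021-01-17T07:01:14Z", "2022-06-01T15:49:34Z")
print(round(hours, 6))  # 12008.805556, matching the record
```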
#### Issue 1741
- url: https://api.github.com/repos/huggingface/datasets/issues/1741
- repository_url: https://api.github.com/repos/huggingface/datasets
- labels_url: https://api.github.com/repos/huggingface/datasets/issues/1741/labels{/name}
- comments_url: https://api.github.com/repos/huggingface/datasets/issues/1741/comments
- events_url: https://api.github.com/repos/huggingface/datasets/issues/1741/events
- html_url: https://github.com/huggingface/datasets/issues/1741
- id: 787,327,060; node_id: MDU6SXNzdWU3ODczMjcwNjA=; number: 1,741
- title: error when run fine_tuning on text_classification
- user: { "avatar_url": "https://avatars.githubusercontent.com/u/43234824?v=4", "events_url": "https://api.github.com/users/XiaoYang66/events{/privacy}", "followers_url": "https://api.github.com/users/XiaoYang66/followers", "following_url": "https://api.github.com/users/XiaoYang66/following{/other_user}", "gists_url"...
- labels: []; state: closed; locked: false
- assignee: null; assignees: []; milestone: null
- comments: [ "none" ]
- created_at: 2021-01-16T02:23:19Z; updated_at: 2021-01-16T02:39:28Z; closed_at: 2021-01-16T02:39:18Z
- author_association: NONE; type: null; active_lock_reason: null
- sub_issues_summary: { "completed": 0, "percent_completed": 0, "total": 0 }
- body: dataset:sem_eval_2014_task_1 pretrained_model:bert-base-uncased error description: when i use these resoruce to train fine_tuning a text_classification on sem_eval_2014_task_1,there always be some problem(when i use other dataset ,there exist the error too). And i followed the colab code (url:https://colab.researc...
- closed_by: { "avatar_url": "https://avatars.githubusercontent.com/u/43234824?v=4", "events_url": "https://api.github.com/users/XiaoYang66/events{/privacy}", "followers_url": "https://api.github.com/users/XiaoYang66/followers", "following_url": "https://api.github.com/users/XiaoYang66/following{/other_user}", "gists_url"...
- reactions: { "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1741/reactions" }
- timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1741/timeline
- performed_via_github_app: null; state_reason: completed; draft: null; pull_request: null
- time_to_close_hours: 0.266389; __index_level_0__: 5,807
#### Issue 1733
- url: https://api.github.com/repos/huggingface/datasets/issues/1733
- repository_url: https://api.github.com/repos/huggingface/datasets
- labels_url: https://api.github.com/repos/huggingface/datasets/issues/1733/labels{/name}
- comments_url: https://api.github.com/repos/huggingface/datasets/issues/1733/comments
- events_url: https://api.github.com/repos/huggingface/datasets/issues/1733/events
- html_url: https://github.com/huggingface/datasets/issues/1733
- id: 784,903,002; node_id: MDU6SXNzdWU3ODQ5MDMwMDI=; number: 1,733
- title: connection issue with glue, what is the data url for glue?
- user: { "avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4", "events_url": "https://api.github.com/users/ghost/events{/privacy}", "followers_url": "https://api.github.com/users/ghost/followers", "following_url": "https://api.github.com/users/ghost/following{/other_user}", "gists_url": "https://api.git...
- labels: []; state: closed; locked: false
- assignee: null; assignees: []; milestone: null
- comments: [ "Hello @juliahane, which config of GLUE causes you trouble?\r\nThe URLs are defined in the dataset script source code: https://github.com/huggingface/datasets/blob/master/datasets/glue/glue.py" ]
- created_at: 2021-01-13T08:37:40Z; updated_at: 2021-08-04T18:13:55Z; closed_at: 2021-08-04T18:13:55Z
- author_association: NONE; type: null; active_lock_reason: null
- sub_issues_summary: { "completed": 0, "percent_completed": 0, "total": 0 }
- body: Hi my codes sometimes fails due to connection issue with glue, could you tell me how I can have the URL datasets library is trying to read GLUE from to test the machines I am working on if there is an issue on my side or not thanks
- closed_by: { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",...
- reactions: { "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1733/reactions" }
- timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1733/timeline
- performed_via_github_app: null; state_reason: completed; draft: null; pull_request: null
- time_to_close_hours: 4,881.604167; __index_level_0__: 5,815
#### Issue 1731
- url: https://api.github.com/repos/huggingface/datasets/issues/1731
- repository_url: https://api.github.com/repos/huggingface/datasets
- labels_url: https://api.github.com/repos/huggingface/datasets/issues/1731/labels{/name}
- comments_url: https://api.github.com/repos/huggingface/datasets/issues/1731/comments
- events_url: https://api.github.com/repos/huggingface/datasets/issues/1731/events
- html_url: https://github.com/huggingface/datasets/issues/1731
- id: 784,744,674; node_id: MDU6SXNzdWU3ODQ3NDQ2NzQ=; number: 1,731
- title: Couldn't reach swda.py
- user: { "avatar_url": "https://avatars.githubusercontent.com/u/13365326?v=4", "events_url": "https://api.github.com/users/yangp725/events{/privacy}", "followers_url": "https://api.github.com/users/yangp725/followers", "following_url": "https://api.github.com/users/yangp725/following{/other_user}", "gists_url": "htt...
- labels: []; state: closed; locked: false
- assignee: null; assignees: []; milestone: null
- comments: [ "Hi @yangp725,\r\nThe SWDA has been added very recently and has not been released yet, thus it is not available in the `1.2.0` version of πŸ€—`datasets`.\r\nYou can still access it by installing the latest version of the library (master branch), by following instructions in [this issue](https://github.com/huggingface...
- created_at: 2021-01-13T02:57:40Z; updated_at: 2021-01-13T11:17:40Z; closed_at: 2021-01-13T11:17:40Z
- author_association: NONE; type: null; active_lock_reason: null
- sub_issues_summary: { "completed": 0, "percent_completed": 0, "total": 0 }
- body: ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.2.0/datasets/swda/swda.py
- closed_by: { "avatar_url": "https://avatars.githubusercontent.com/u/13365326?v=4", "events_url": "https://api.github.com/users/yangp725/events{/privacy}", "followers_url": "https://api.github.com/users/yangp725/followers", "following_url": "https://api.github.com/users/yangp725/following{/other_user}", "gists_url": "htt...
- reactions: { "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1731/reactions" }
- timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1731/timeline
- performed_via_github_app: null; state_reason: completed; draft: null; pull_request: null
- time_to_close_hours: 8.333333; __index_level_0__: 5,817
#### Issue 1729
- url: https://api.github.com/repos/huggingface/datasets/issues/1729
- repository_url: https://api.github.com/repos/huggingface/datasets
- labels_url: https://api.github.com/repos/huggingface/datasets/issues/1729/labels{/name}
- comments_url: https://api.github.com/repos/huggingface/datasets/issues/1729/comments
- events_url: https://api.github.com/repos/huggingface/datasets/issues/1729/events
- html_url: https://github.com/huggingface/datasets/issues/1729
- id: 784,565,898; node_id: MDU6SXNzdWU3ODQ1NjU4OTg=; number: 1,729
- title: Is there support for Deep learning datasets?
- user: { "avatar_url": "https://avatars.githubusercontent.com/u/28235457?v=4", "events_url": "https://api.github.com/users/pablodz/events{/privacy}", "followers_url": "https://api.github.com/users/pablodz/followers", "following_url": "https://api.github.com/users/pablodz/following{/other_user}", "gists_url": "https:...
- labels: []; state: closed; locked: false
- assignee: null; assignees: []; milestone: null
- comments: [ "Hi @ZurMaD!\r\nThanks for your interest in πŸ€— `datasets`. Support for image datasets is at an early stage, with CIFAR-10 added in #1617 \r\nMNIST is also on the way: #1730 \r\n\r\nIf you feel like adding another image dataset, I would advise starting by reading the [ADD_NEW_DATASET.md](https://github.com/huggingfa...
- created_at: 2021-01-12T20:22:41Z; updated_at: 2021-03-31T04:24:07Z; closed_at: 2021-03-31T04:24:07Z
- author_association: NONE; type: null; active_lock_reason: null
- sub_issues_summary: { "completed": 0, "percent_completed": 0, "total": 0 }
- body: I looked around this repository and looking the datasets I think that there's no support for images-datasets. Or am I missing something? For example to add a repo like this https://github.com/DZPeru/fish-datasets
- closed_by: { "avatar_url": "https://avatars.githubusercontent.com/u/28235457?v=4", "events_url": "https://api.github.com/users/pablodz/events{/privacy}", "followers_url": "https://api.github.com/users/pablodz/followers", "following_url": "https://api.github.com/users/pablodz/following{/other_user}", "gists_url": "https:...
- reactions: { "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1729/reactions" }
- timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1729/timeline
- performed_via_github_app: null; state_reason: completed; draft: null; pull_request: null
- time_to_close_hours: 1,856.023889; __index_level_0__: 5,819
#### Issue 1728
- url: https://api.github.com/repos/huggingface/datasets/issues/1728
- repository_url: https://api.github.com/repos/huggingface/datasets
- labels_url: https://api.github.com/repos/huggingface/datasets/issues/1728/labels{/name}
- comments_url: https://api.github.com/repos/huggingface/datasets/issues/1728/comments
- events_url: https://api.github.com/repos/huggingface/datasets/issues/1728/events
- html_url: https://github.com/huggingface/datasets/issues/1728
- id: 784,458,342; node_id: MDU6SXNzdWU3ODQ0NTgzNDI=; number: 1,728
- title: Add an entry to an arrow dataset
- user: { "avatar_url": "https://avatars.githubusercontent.com/u/18645407?v=4", "events_url": "https://api.github.com/users/ameet-1997/events{/privacy}", "followers_url": "https://api.github.com/users/ameet-1997/followers", "following_url": "https://api.github.com/users/ameet-1997/following{/other_user}", "gists_url"...
- labels: []; state: closed; locked: false
- assignee: null; assignees: []; milestone: null
- comments: [ "Hi @ameet-1997,\r\nI think what you are looking for is the `concatenate_datasets` function: https://huggingface.co/docs/datasets/processing.html?highlight=concatenate#concatenate-several-datasets\r\n\r\nFor your use case, I would use the [`map` method](https://huggingface.co/docs/datasets/processing.html?highlight...
- created_at: 2021-01-12T18:01:47Z; updated_at: 2021-01-18T19:15:32Z; closed_at: 2021-01-18T19:15:32Z
- author_association: NONE; type: null; active_lock_reason: null
- sub_issues_summary: { "completed": 0, "percent_completed": 0, "total": 0 }
- body: Is it possible to add an entry to a dataset object? **Motivation: I want to transform the sentences in the dataset and add them to the original dataset** For example, say we have the following code: ``` python from datasets import load_dataset # Load a dataset and print the first examples in the training s...
- closed_by: { "avatar_url": "https://avatars.githubusercontent.com/u/18645407?v=4", "events_url": "https://api.github.com/users/ameet-1997/events{/privacy}", "followers_url": "https://api.github.com/users/ameet-1997/followers", "following_url": "https://api.github.com/users/ameet-1997/following{/other_user}", "gists_url"...
- reactions: { "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1728/reactions" }
- timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1728/timeline
- performed_via_github_app: null; state_reason: completed; draft: null; pull_request: null
- time_to_close_hours: 145.229167; __index_level_0__: 5,820
#### Issue 1727
- url: https://api.github.com/repos/huggingface/datasets/issues/1727
- repository_url: https://api.github.com/repos/huggingface/datasets
- labels_url: https://api.github.com/repos/huggingface/datasets/issues/1727/labels{/name}
- comments_url: https://api.github.com/repos/huggingface/datasets/issues/1727/comments
- events_url: https://api.github.com/repos/huggingface/datasets/issues/1727/events
- html_url: https://github.com/huggingface/datasets/issues/1727
- id: 784,435,131; node_id: MDU6SXNzdWU3ODQ0MzUxMzE=; number: 1,727
- title: BLEURT score calculation raises UnrecognizedFlagError
- user: { "avatar_url": "https://avatars.githubusercontent.com/u/6603920?v=4", "events_url": "https://api.github.com/users/nadavo/events{/privacy}", "followers_url": "https://api.github.com/users/nadavo/followers", "following_url": "https://api.github.com/users/nadavo/following{/other_user}", "gists_url": "https://ap...
- labels: []; state: closed; locked: false
- assignee: null; assignees: []; milestone: null
- comments: [ "Upgrading tensorflow to version 2.4.0 solved the issue.", "I still have the same error even with TF 2.4.0.", "And I have the same error with TF 2.4.1. I believe this issue should be reopened. Any ideas?!", "I'm seeing the same issue with TF 2.4.1 when running the following in https://colab.research.google.co...
- created_at: 2021-01-12T17:27:02Z; updated_at: 2022-06-01T16:06:02Z; closed_at: 2022-06-01T16:06:02Z
- author_association: NONE; type: null; active_lock_reason: null
- sub_issues_summary: { "completed": 0, "percent_completed": 0, "total": 0 }
- body: Calling the `compute` method for **bleurt** metric fails with an `UnrecognizedFlagError` for `FLAGS.bleurt_batch_size`. My environment: ``` python==3.8.5 datasets==1.2.0 tensorflow==2.3.1 cudatoolkit==11.0.221 ``` Test code for reproducing the error: ``` from datasets import load_metric bleurt = load_me...
- closed_by: { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url"...
- reactions: { "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1727/reactions" }
- timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1727/timeline
- performed_via_github_app: null; state_reason: completed; draft: null; pull_request: null
- time_to_close_hours: 12,118.65; __index_level_0__: 5,821
#### Issue 1725
- url: https://api.github.com/repos/huggingface/datasets/issues/1725
- repository_url: https://api.github.com/repos/huggingface/datasets
- labels_url: https://api.github.com/repos/huggingface/datasets/issues/1725/labels{/name}
- comments_url: https://api.github.com/repos/huggingface/datasets/issues/1725/comments
- events_url: https://api.github.com/repos/huggingface/datasets/issues/1725/events
- html_url: https://github.com/huggingface/datasets/issues/1725
- id: 784,182,273; node_id: MDU6SXNzdWU3ODQxODIyNzM=; number: 1,725
- title: load the local dataset
- user: { "avatar_url": "https://avatars.githubusercontent.com/u/41193842?v=4", "events_url": "https://api.github.com/users/xinjicong/events{/privacy}", "followers_url": "https://api.github.com/users/xinjicong/followers", "following_url": "https://api.github.com/users/xinjicong/following{/other_user}", "gists_url": "...
- labels: []; state: closed; locked: false
- assignee: null; assignees: []; milestone: null
- comments: [ "You should rephrase your question or give more examples and details on what you want to do.\r\n\r\nit’s not possible to understand it and help you with only this information.", "sorry for that.\r\ni want to know how could i load the train set and the test set from the local ,which api or function should i use .\...
- created_at: 2021-01-12T12:12:55Z; updated_at: 2022-06-01T16:00:59Z; closed_at: 2022-06-01T16:00:59Z
- author_association: NONE; type: null; active_lock_reason: null
- sub_issues_summary: { "completed": 0, "percent_completed": 0, "total": 0 }
- body: your guidebook's example is like >>>from datasets import load_dataset >>> dataset = load_dataset('json', data_files='my_file.json') but the first arg is path... so how should i do if i want to load the local dataset for model training? i will be grateful if you can help me handle this problem! thanks a lot!
- closed_by: { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url"...
- reactions: { "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1725/reactions" }
- timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1725/timeline
- performed_via_github_app: null; state_reason: completed; draft: null; pull_request: null
- time_to_close_hours: 12,123.801111; __index_level_0__: 5,823
#### Issue 1724
- url: https://api.github.com/repos/huggingface/datasets/issues/1724
- repository_url: https://api.github.com/repos/huggingface/datasets
- labels_url: https://api.github.com/repos/huggingface/datasets/issues/1724/labels{/name}
- comments_url: https://api.github.com/repos/huggingface/datasets/issues/1724/comments
- events_url: https://api.github.com/repos/huggingface/datasets/issues/1724/events
- html_url: https://github.com/huggingface/datasets/issues/1724
- id: 784,023,338; node_id: MDU6SXNzdWU3ODQwMjMzMzg=; number: 1,724
- title: could not run models on a offline server successfully
- user: { "avatar_url": "https://avatars.githubusercontent.com/u/49967236?v=4", "events_url": "https://api.github.com/users/lkcao/events{/privacy}", "followers_url": "https://api.github.com/users/lkcao/followers", "following_url": "https://api.github.com/users/lkcao/following{/other_user}", "gists_url": "https://api....
- labels: []; state: closed; locked: false
- assignee: null; assignees: []; milestone: null
- comments: [ "Transferred to `datasets` based on the stack trace.", "Hi @lkcao !\r\nYour issue is indeed related to `datasets`. In addition to installing the package manually, you will need to download the `text.py` script on your server. You'll find it (under `datasets/datasets/text`: https://github.com/huggingface/datasets/...
- created_at: 2021-01-12T06:08:06Z; updated_at: 2022-10-05T12:39:07Z; closed_at: 2022-10-05T12:39:07Z
- author_association: NONE; type: null; active_lock_reason: null
- sub_issues_summary: { "completed": 0, "percent_completed": 0, "total": 0 }
- body: Hi, I really need your help about this. I am trying to fine-tuning a RoBERTa on a remote server, which is strictly banning internet. I try to install all the packages by hand and try to run run_mlm.py on the server. It works well on colab, but when I try to run it on this offline server, it shows: ![image](https://us...
- closed_by: { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url"...
- reactions: { "+1": 0, "-1": 0, "confused": 0, "eyes": 1, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1724/reactions" }
- timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1724/timeline
- performed_via_github_app: null; state_reason: completed; draft: null; pull_request: null
- time_to_close_hours: 15,150.516944; __index_level_0__: 5,825
#### Issue 1718
- url: https://api.github.com/repos/huggingface/datasets/issues/1718
- repository_url: https://api.github.com/repos/huggingface/datasets
- labels_url: https://api.github.com/repos/huggingface/datasets/issues/1718/labels{/name}
- comments_url: https://api.github.com/repos/huggingface/datasets/issues/1718/comments
- events_url: https://api.github.com/repos/huggingface/datasets/issues/1718/events
- html_url: https://github.com/huggingface/datasets/issues/1718
- id: 783,474,753; node_id: MDU6SXNzdWU3ODM0NzQ3NTM=; number: 1,718
- title: Possible cache miss in datasets
- user: { "avatar_url": "https://avatars.githubusercontent.com/u/18296312?v=4", "events_url": "https://api.github.com/users/ofirzaf/events{/privacy}", "followers_url": "https://api.github.com/users/ofirzaf/followers", "following_url": "https://api.github.com/users/ofirzaf/following{/other_user}", "gists_url": "https:...
- labels: []; state: closed; locked: false
- assignee: null; assignees: []; milestone: null
- comments: [ "Thanks for reporting !\r\nI was able to reproduce thanks to your code and find the origin of the bug.\r\nThe cache was not reusing the same file because one object was not deterministic. It comes from a conversion from `set` to `list` in the `datasets.arrrow_dataset.transmit_format` function, where the resulting l...
- created_at: 2021-01-11T15:37:31Z; updated_at: 2022-06-29T14:54:42Z; closed_at: 2021-01-26T02:47:59Z
- author_association: NONE; type: null; active_lock_reason: null
- sub_issues_summary: { "completed": 0, "percent_completed": 0, "total": 0 }
- body: Hi, I am using the datasets package and even though I run the same data processing functions, datasets always recomputes the function instead of using cache. I have attached an example script that for me reproduces the problem. In the attached example the second map function always recomputes instead of loading fr...
- closed_by: { "avatar_url": "https://avatars.githubusercontent.com/u/18296312?v=4", "events_url": "https://api.github.com/users/ofirzaf/events{/privacy}", "followers_url": "https://api.github.com/users/ofirzaf/followers", "following_url": "https://api.github.com/users/ofirzaf/following{/other_user}", "gists_url": "https:...
- reactions: { "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/1718/reactions" }
- timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1718/timeline
- performed_via_github_app: null; state_reason: completed; draft: null; pull_request: null
- time_to_close_hours: 347.174444; __index_level_0__: 5,830
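In each record, the `reactions` dict carries a `total_count` that appears to equal the sum of the individual reaction counters. A quick sketch checking this invariant on the reactions payload of issue 1718 above (the check itself is an assumption about the data, not part of the dataset):

```python
# Reactions payload copied from the issue 1718 record above.
reactions = {
    "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0,
    "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2,
    "url": "https://api.github.com/repos/huggingface/datasets/issues/1718/reactions",
}

# Keep only the per-emoji counters, dropping the metadata keys.
counts = {k: v for k, v in reactions.items() if k not in ("total_count", "url")}

assert sum(counts.values()) == reactions["total_count"]
print(sum(counts.values()))  # 2
```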
#### Issue 1717
- url: https://api.github.com/repos/huggingface/datasets/issues/1717
- repository_url: https://api.github.com/repos/huggingface/datasets
- labels_url: https://api.github.com/repos/huggingface/datasets/issues/1717/labels{/name}
- comments_url: https://api.github.com/repos/huggingface/datasets/issues/1717/comments
- events_url: https://api.github.com/repos/huggingface/datasets/issues/1717/events
- html_url: https://github.com/huggingface/datasets/issues/1717
- id: 783,074,255; node_id: MDU6SXNzdWU3ODMwNzQyNTU=; number: 1,717
- title: SciFact dataset - minor changes
- user: { "avatar_url": "https://avatars.githubusercontent.com/u/3091916?v=4", "events_url": "https://api.github.com/users/dwadden/events{/privacy}", "followers_url": "https://api.github.com/users/dwadden/followers", "following_url": "https://api.github.com/users/dwadden/following{/other_user}", "gists_url": "https:/...
- labels: []; state: closed; locked: false
- assignee: null; assignees: []; milestone: null
- comments: [ "Hi Dave,\r\nYou are more than welcome to open a PR to make these changes! πŸ€—\r\nYou will find the relevant information about opening a PR in the [contributing guide](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md) and in the [dataset addition guide](https://github.com/huggingface/datasets/blob...
- created_at: 2021-01-11T05:26:40Z; updated_at: 2021-01-26T02:52:17Z; closed_at: 2021-01-26T02:52:17Z
- author_association: CONTRIBUTOR; type: null; active_lock_reason: null
- sub_issues_summary: { "completed": 0, "percent_completed": 0, "total": 0 }
- body: Hi, SciFact dataset creator here. First of all, thanks for adding the dataset to Huggingface, much appreciated! I'd like to make a few minor changes, including the citation information and the `_URL` from which to download the dataset. Can I submit a PR for this? It also looks like the dataset is being downloa...
- closed_by: { "avatar_url": "https://avatars.githubusercontent.com/u/3091916?v=4", "events_url": "https://api.github.com/users/dwadden/events{/privacy}", "followers_url": "https://api.github.com/users/dwadden/followers", "following_url": "https://api.github.com/users/dwadden/following{/other_user}", "gists_url": "https:/...
- reactions: { "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1717/reactions" }
- timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1717/timeline
- performed_via_github_app: null; state_reason: completed; draft: null; pull_request: null
- time_to_close_hours: 357.426944; __index_level_0__: 5,831
#### Issue 1713
- url: https://api.github.com/repos/huggingface/datasets/issues/1713
- repository_url: https://api.github.com/repos/huggingface/datasets
- labels_url: https://api.github.com/repos/huggingface/datasets/issues/1713/labels{/name}
- comments_url: https://api.github.com/repos/huggingface/datasets/issues/1713/comments
- events_url: https://api.github.com/repos/huggingface/datasets/issues/1713/events
- html_url: https://github.com/huggingface/datasets/issues/1713
- id: 782,337,723; node_id: MDU6SXNzdWU3ODIzMzc3MjM=; number: 1,713
- title: Installation using conda
- user: { "avatar_url": "https://avatars.githubusercontent.com/u/9393002?v=4", "events_url": "https://api.github.com/users/pranav-s/events{/privacy}", "followers_url": "https://api.github.com/users/pranav-s/followers", "following_url": "https://api.github.com/users/pranav-s/following{/other_user}", "gists_url": "http...
- labels: []; state: closed; locked: false
- assignee: null; assignees: []; milestone: null
- comments: [ "Yes indeed the idea is to have the next release on conda cc @LysandreJik ", "Great! Did you guys have a timeframe in mind for the next release?\r\n\r\nThank you for all the great work in developing this library.", "I think we can have `datasets` on conda by next week. Will see what I can do!", "Thank you. Lo...
- created_at: 2021-01-08T19:12:15Z; updated_at: 2021-09-17T12:47:40Z; closed_at: 2021-09-17T12:47:40Z
- author_association: NONE; type: null; active_lock_reason: null
- sub_issues_summary: { "completed": 0, "percent_completed": 0, "total": 0 }
- body: Will a conda package for installing datasets be added to the huggingface conda channel? I have installed transformers using conda and would like to use the datasets library to use some of the scripts in the transformers/examples folder but am unable to do so at the moment as datasets can only be installed using pip and...
- closed_by: { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",...
- reactions: { "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1713/reactions" }
- timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1713/timeline
- performed_via_github_app: null; state_reason: completed; draft: null; pull_request: null
- time_to_close_hours: 6,041.590278; __index_level_0__: 5,835
#### Issue 1710
- url: https://api.github.com/repos/huggingface/datasets/issues/1710
- repository_url: https://api.github.com/repos/huggingface/datasets
- labels_url: https://api.github.com/repos/huggingface/datasets/issues/1710/labels{/name}
- comments_url: https://api.github.com/repos/huggingface/datasets/issues/1710/comments
- events_url: https://api.github.com/repos/huggingface/datasets/issues/1710/events
- html_url: https://github.com/huggingface/datasets/issues/1710
- id: 781,914,951; node_id: MDU6SXNzdWU3ODE5MTQ5NTE=; number: 1,710
- title: IsADirectoryError when trying to download C4
- user: { "avatar_url": "https://avatars.githubusercontent.com/u/5771366?v=4", "events_url": "https://api.github.com/users/fredriko/events{/privacy}", "followers_url": "https://api.github.com/users/fredriko/followers", "following_url": "https://api.github.com/users/fredriko/following{/other_user}", "gists_url": "http...
- labels: []; state: closed; locked: false
- assignee: null; assignees: []; milestone: null
- comments: [ "I haven't tested C4 on my side so there so there may be a few bugs in the code/adjustments to make.\r\nHere it looks like in c4.py, line 190 one of the `files_to_download` is `'/'` which is invalid.\r\nValid files are paths to local files or URLs to remote files.", "Fixed once processed data is used instead:\r\n...
- created_at: 2021-01-08T07:31:30Z; updated_at: 2022-08-04T11:56:10Z; closed_at: 2022-08-04T11:55:04Z
- author_association: NONE; type: null; active_lock_reason: null
- sub_issues_summary: { "completed": 0, "percent_completed": 0, "total": 0 }
- body: **TLDR**: I fail to download C4 and see a stacktrace originating in `IsADirectoryError` as an explanation for failure. How can the problem be fixed? **VERBOSE**: I use Python version 3.7 and have the following dependencies listed in my project: ``` datasets==1.2.0 apache-beam==2.26.0 ``` When runn...
- closed_by: { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",...
- reactions: { "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1710/reactions" }
- timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1710/timeline
- performed_via_github_app: null; state_reason: completed; draft: null; pull_request: null
- time_to_close_hours: 13,756.392778; __index_level_0__: 5,838
#### Issue 1709
- url: https://api.github.com/repos/huggingface/datasets/issues/1709
- repository_url: https://api.github.com/repos/huggingface/datasets
- labels_url: https://api.github.com/repos/huggingface/datasets/issues/1709/labels{/name}
- comments_url: https://api.github.com/repos/huggingface/datasets/issues/1709/comments
- events_url: https://api.github.com/repos/huggingface/datasets/issues/1709/events
- html_url: https://github.com/huggingface/datasets/issues/1709
- id: 781,875,640; node_id: MDU6SXNzdWU3ODE4NzU2NDA=; number: 1,709
- title: Databases
- user: { "avatar_url": "https://avatars.githubusercontent.com/u/68724553?v=4", "events_url": "https://api.github.com/users/JimmyJim1/events{/privacy}", "followers_url": "https://api.github.com/users/JimmyJim1/followers", "following_url": "https://api.github.com/users/JimmyJim1/following{/other_user}", "gists_url": "...
- labels: []; state: closed; locked: false
- assignee: null; assignees: []; milestone: null
- comments: []
- created_at: 2021-01-08T06:14:03Z; updated_at: 2021-01-08T09:00:08Z; closed_at: 2021-01-08T09:00:08Z
- author_association: NONE; type: null; active_lock_reason: null
- sub_issues_summary: { "completed": 0, "percent_completed": 0, "total": 0 }
- body: ## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons t...
- closed_by: { "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "htt...
- reactions: { "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1709/reactions" }
- timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1709/timeline
- performed_via_github_app: null; state_reason: completed; draft: null; pull_request: null
- time_to_close_hours: 2.768056; __index_level_0__: 5,839
#### Issue 1708
- url: https://api.github.com/repos/huggingface/datasets/issues/1708
- repository_url: https://api.github.com/repos/huggingface/datasets
- labels_url: https://api.github.com/repos/huggingface/datasets/issues/1708/labels{/name}
- comments_url: https://api.github.com/repos/huggingface/datasets/issues/1708/comments
- events_url: https://api.github.com/repos/huggingface/datasets/issues/1708/events
- html_url: https://github.com/huggingface/datasets/issues/1708
- id: 781,631,455; node_id: MDU6SXNzdWU3ODE2MzE0NTU=; number: 1,708
- title: `<html dir="ltr" lang="en" class="focus-outline-visible"><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8">`
- user: { "avatar_url": "https://avatars.githubusercontent.com/u/77126849?v=4", "events_url": "https://api.github.com/users/Louiejay54/events{/privacy}", "followers_url": "https://api.github.com/users/Louiejay54/followers", "following_url": "https://api.github.com/users/Louiejay54/following{/other_user}", "gists_url"...
- labels: []; state: closed; locked: false
- assignee: null; assignees: []; milestone: null
- comments: []
- created_at: 2021-01-07T21:45:24Z; updated_at: 2021-01-08T09:00:01Z; closed_at: 2021-01-08T09:00:01Z
- author_association: NONE; type: null; active_lock_reason: null
- sub_issues_summary: { "completed": 0, "percent_completed": 0, "total": 0 }
- body: ## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons t...
- closed_by: { "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "htt...
- reactions: { "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1708/reactions" }
- timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1708/timeline
- performed_via_github_app: null; state_reason: completed; draft: null; pull_request: null
- time_to_close_hours: 11.243611; __index_level_0__: 5,840
https://api.github.com/repos/huggingface/datasets/issues/1701
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1701/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1701/comments
https://api.github.com/repos/huggingface/datasets/issues/1701/events
https://github.com/huggingface/datasets/issues/1701
781,345,717
MDU6SXNzdWU3ODEzNDU3MTc=
1,701
Some datasets miss dataset_infos.json or dummy_data.zip
{ "avatar_url": "https://avatars.githubusercontent.com/u/272253?v=4", "events_url": "https://api.github.com/users/madlag/events{/privacy}", "followers_url": "https://api.github.com/users/madlag/followers", "following_url": "https://api.github.com/users/madlag/following{/other_user}", "gists_url": "https://api...
[]
closed
false
null
[]
null
[ "Thanks for reporting.\r\nWe should indeed add all the missing dummy_data.zip and also the dataset_infos.json at least for lm1b, reclor and wikihow.\r\n\r\nFor c4 I haven't tested the script and I think we'll require some optimizations regarding beam datasets before processing it.\r\n", "Closing since the dummy d...
2021-01-07T14:17:13Z
2022-11-04T15:11:16Z
2022-11-04T15:06:00Z
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
While working on a dataset README generation script at https://github.com/madlag/datasets_readme_generator , I noticed that some datasets miss a dataset_infos.json: ``` c4 lm1b reclor wikihow ``` And some do not have a dummy_data.zip: ``` kor_nli math_dataset mlqa ms_marco newsgroup qa4mre qanga...
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url"...
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1701/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1701/timeline
null
completed
null
null
15,984.813056
5,847
https://api.github.com/repos/huggingface/datasets/issues/1696
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1696/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1696/comments
https://api.github.com/repos/huggingface/datasets/issues/1696/events
https://github.com/huggingface/datasets/issues/1696
781,096,918
MDU6SXNzdWU3ODEwOTY5MTg=
1,696
Unable to install datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/12635475?v=4", "events_url": "https://api.github.com/users/glee2429/events{/privacy}", "followers_url": "https://api.github.com/users/glee2429/followers", "following_url": "https://api.github.com/users/glee2429/following{/other_user}", "gists_url": "htt...
[]
closed
false
null
[]
null
[ "Maybe try to create a virtual env with python 3.8 or 3.7", "Thanks, @thomwolf! I fixed the issue by downgrading python to 3.7. ", "Damn sorry", "Damn sorry" ]
2021-01-07T07:24:37Z
2021-01-08T00:33:05Z
2021-01-07T22:06:05Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
**Edit:** I believe there's a bug in the package when installing it with Python 3.9. I recommend sticking with previous versions. Thanks, @thomwolf, for the insight! **Short description** I followed the instructions for installing datasets (https://huggingface.co/docs/datasets/installation.html). Howev...
{ "avatar_url": "https://avatars.githubusercontent.com/u/12635475?v=4", "events_url": "https://api.github.com/users/glee2429/events{/privacy}", "followers_url": "https://api.github.com/users/glee2429/followers", "following_url": "https://api.github.com/users/glee2429/following{/other_user}", "gists_url": "htt...
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1696/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1696/timeline
null
completed
null
null
14.691111
5,852
https://api.github.com/repos/huggingface/datasets/issues/1686
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1686/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1686/comments
https://api.github.com/repos/huggingface/datasets/issues/1686/events
https://github.com/huggingface/datasets/issues/1686
778,921,684
MDU6SXNzdWU3Nzg5MjE2ODQ=
1,686
Dataset Error: DaNE contains empty samples at the end
{ "avatar_url": "https://avatars.githubusercontent.com/u/23721977?v=4", "events_url": "https://api.github.com/users/KennethEnevoldsen/events{/privacy}", "followers_url": "https://api.github.com/users/KennethEnevoldsen/followers", "following_url": "https://api.github.com/users/KennethEnevoldsen/following{/other_...
[]
closed
false
null
[]
null
[ "Thanks for reporting, I opened a PR to fix that", "One the PR is merged the fix will be available in the next release of `datasets`.\r\n\r\nIf you don't want to wait the next release you can still load the script from the master branch with\r\n\r\n```python\r\nload_dataset(\"dane\", script_version=\"master\")\r\...
2021-01-05T11:54:26Z
2021-01-05T14:01:09Z
2021-01-05T14:00:13Z
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
The DaNE dataset contains empty samples at the end. They are easy to remove using a filter, but should probably not be there to begin with, as they can cause errors. ```python >>> import datasets [...] >>> dataset = datasets.load_dataset("dane") [...] >>> dataset["test"][-1] {'dep_ids': [], 'dep_labels': ...
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https:...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1686/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1686/timeline
null
completed
null
null
2.096389
5,861
https://api.github.com/repos/huggingface/datasets/issues/1683
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1683/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1683/comments
https://api.github.com/repos/huggingface/datasets/issues/1683/events
https://github.com/huggingface/datasets/issues/1683
778,287,612
MDU6SXNzdWU3NzgyODc2MTI=
1,683
`ArrowInvalid` occurs while running `Dataset.map()` function for DPRContext
{ "avatar_url": "https://avatars.githubusercontent.com/u/6608232?v=4", "events_url": "https://api.github.com/users/abarbosa94/events{/privacy}", "followers_url": "https://api.github.com/users/abarbosa94/followers", "following_url": "https://api.github.com/users/abarbosa94/following{/other_user}", "gists_url":...
[]
closed
false
null
[]
null
[ "Looks like the mapping function returns a dictionary with a 768-dim array in the `embeddings` field. Since the map is batched, we actually expect the `embeddings` field to be an array of shape (batch_size, 768) to have one embedding per example in the batch.\r\n\r\nTo fix that can you try to remove one of the `[0]...
2021-01-04T18:47:53Z
2021-01-04T19:04:45Z
2021-01-04T19:04:45Z
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
It seems to fail on the final batch ): Steps to reproduce: ``` from datasets import load_dataset from elasticsearch import Elasticsearch import torch from transformers import file_utils, set_seed from transformers import DPRContextEncoder, DPRContextEncoderTokenizerFast MAX_SEQ_LENGTH = 256 ctx_encoder = DPRCon...
{ "avatar_url": "https://avatars.githubusercontent.com/u/6608232?v=4", "events_url": "https://api.github.com/users/abarbosa94/events{/privacy}", "followers_url": "https://api.github.com/users/abarbosa94/followers", "following_url": "https://api.github.com/users/abarbosa94/following{/other_user}", "gists_url":...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1683/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1683/timeline
null
completed
null
null
0.281111
5,864
https://api.github.com/repos/huggingface/datasets/issues/1681
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1681/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1681/comments
https://api.github.com/repos/huggingface/datasets/issues/1681/events
https://github.com/huggingface/datasets/issues/1681
777,644,163
MDU6SXNzdWU3Nzc2NDQxNjM=
1,681
Dataset "dane" missing
{ "avatar_url": "https://avatars.githubusercontent.com/u/23721977?v=4", "events_url": "https://api.github.com/users/KennethEnevoldsen/events{/privacy}", "followers_url": "https://api.github.com/users/KennethEnevoldsen/followers", "following_url": "https://api.github.com/users/KennethEnevoldsen/following{/other_...
[]
closed
false
null
[]
null
[ "Hi @KennethEnevoldsen ,\r\nI think the issue might be that this dataset was added during the community sprint and has not been released yet. It will be available with the v2 of datasets.\r\nFor now, you should be able to load the datasets after installing the latest (master) version of datasets using pip:\r\npip i...
2021-01-03T14:03:03Z
2021-01-05T08:35:35Z
2021-01-05T08:35:13Z
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
The `dane` dataset appears to be missing in the latest version (1.1.3). ```python >>> import datasets >>> datasets.__version__ '1.1.3' >>> "dane" in datasets.list_datasets() True ``` As we can see, it should be present, but it can't be found when using `load_dataset`. ```python >>> datasets.load...
{ "avatar_url": "https://avatars.githubusercontent.com/u/23721977?v=4", "events_url": "https://api.github.com/users/KennethEnevoldsen/events{/privacy}", "followers_url": "https://api.github.com/users/KennethEnevoldsen/followers", "following_url": "https://api.github.com/users/KennethEnevoldsen/following{/other_...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1681/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1681/timeline
null
completed
null
null
42.536111
5,866
https://api.github.com/repos/huggingface/datasets/issues/1679
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1679/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1679/comments
https://api.github.com/repos/huggingface/datasets/issues/1679/events
https://github.com/huggingface/datasets/issues/1679
777,587,792
MDU6SXNzdWU3Nzc1ODc3OTI=
1,679
Can't import cc100 dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/14968123?v=4", "events_url": "https://api.github.com/users/alighofrani95/events{/privacy}", "followers_url": "https://api.github.com/users/alighofrani95/followers", "following_url": "https://api.github.com/users/alighofrani95/following{/other_user}", "g...
[]
closed
false
null
[]
null
[ "cc100 was added recently, that's why it wasn't available yet.\r\n\r\nTo load it you can just update `datasets`\r\n```\r\npip install --upgrade datasets\r\n```\r\n\r\nand then you can load `cc100` with\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nlang = \"en\"\r\ndataset = load_dataset(\"cc100\", la...
2021-01-03T07:12:56Z
2022-10-05T12:42:25Z
2022-10-05T12:42:25Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
There is some issue to import cc100 dataset. ``` from datasets import load_dataset dataset = load_dataset("cc100") ``` FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/cc100/cc100.py During handling of the above exception, another exception occur...
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url"...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1679/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1679/timeline
null
completed
null
null
15,365.491389
5,868
https://api.github.com/repos/huggingface/datasets/issues/1675
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1675/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1675/comments
https://api.github.com/repos/huggingface/datasets/issues/1675/events
https://github.com/huggingface/datasets/issues/1675
777,367,320
MDU6SXNzdWU3NzczNjczMjA=
1,675
Add the 800GB Pile dataset?
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://a...
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",...
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
null
[ "The pile dataset would be very nice.\r\nBenchmarks show that pile trained models achieve better results than most of actually trained models", "The pile can very easily be added and adapted using this [tfds implementation](https://github.com/EleutherAI/The-Pile/blob/master/the_pile/tfds_pile.py) from the repo. \...
2021-01-01T22:58:12Z
2021-12-01T15:29:07Z
2021-12-01T15:29:07Z
MEMBER
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
## Adding a Dataset - **Name:** The Pile - **Description:** The Pile is a 825 GiB diverse, open source language modelling data set that consists of 22 smaller, high-quality datasets combined together. See [here](https://twitter.com/nabla_theta/status/1345130408170541056?s=20) for the Twitter announcement - **Paper:*...
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",...
{ "+1": 5, "-1": 0, "confused": 1, "eyes": 2, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 5, "total_count": 13, "url": "https://api.github.com/repos/huggingface/datasets/issues/1675/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1675/timeline
null
completed
null
null
8,008.515278
5,872
https://api.github.com/repos/huggingface/datasets/issues/1674
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1674/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1674/comments
https://api.github.com/repos/huggingface/datasets/issues/1674/events
https://github.com/huggingface/datasets/issues/1674
777,321,840
MDU6SXNzdWU3NzczMjE4NDA=
1,674
dutch_social can't be loaded
{ "avatar_url": "https://avatars.githubusercontent.com/u/10134844?v=4", "events_url": "https://api.github.com/users/koenvandenberge/events{/privacy}", "followers_url": "https://api.github.com/users/koenvandenberge/followers", "following_url": "https://api.github.com/users/koenvandenberge/following{/other_user}"...
[]
closed
false
null
[]
null
[ "exactly the same issue in some other datasets.\r\nDid you find any solution??\r\n", "Hi @koenvandenberge and @alighofrani95!\r\nThe datasets you're experiencing issues with were most likely added recently to the `datasets` library, meaning they have not been released yet. They will be released with the v2 of the...
2021-01-01T17:37:08Z
2022-10-05T13:03:26Z
2022-10-05T13:03:26Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
Hi all, I'm trying to import the `dutch_social` dataset described [here](https://huggingface.co/datasets/dutch_social). However, the code that should load the data doesn't seem to be working, in particular because the corresponding files can't be found at the provided links. ``` (base) Koens-MacBook-Pro:~ koe...
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url"...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1674/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1674/timeline
null
completed
null
null
15,403.438333
5,873
https://api.github.com/repos/huggingface/datasets/issues/1673
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1673/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1673/comments
https://api.github.com/repos/huggingface/datasets/issues/1673/events
https://github.com/huggingface/datasets/issues/1673
777,263,651
MDU6SXNzdWU3NzcyNjM2NTE=
1,673
Unable to Download Hindi Wikipedia Dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/30871963?v=4", "events_url": "https://api.github.com/users/aditya3498/events{/privacy}", "followers_url": "https://api.github.com/users/aditya3498/followers", "following_url": "https://api.github.com/users/aditya3498/following{/other_user}", "gists_url"...
[]
closed
false
null
[]
null
[ "Currently this dataset is only available when the library is installed from source since it was added after the last release.\r\n\r\nWe pin the dataset version with the library version so that people can have a reproducible dataset and processing when pinning the library.\r\n\r\nWe'll see if we can provide access ...
2021-01-01T10:52:53Z
2021-01-05T10:22:12Z
2021-01-05T10:22:12Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
I used the datasets library in Python to load the Wikipedia dataset with the Hindi config 20200501.hi, along with beam_runner='DirectRunner', and it keeps giving me an error that the file is not found. I have attached screenshots of both the error and the code. Please help me to understand how to reso...
{ "avatar_url": "https://avatars.githubusercontent.com/u/30871963?v=4", "events_url": "https://api.github.com/users/aditya3498/events{/privacy}", "followers_url": "https://api.github.com/users/aditya3498/followers", "following_url": "https://api.github.com/users/aditya3498/following{/other_user}", "gists_url"...
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1673/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1673/timeline
null
completed
null
null
95.488611
5,874
https://api.github.com/repos/huggingface/datasets/issues/1672
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1672/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1672/comments
https://api.github.com/repos/huggingface/datasets/issues/1672/events
https://github.com/huggingface/datasets/issues/1672
777,258,941
MDU6SXNzdWU3NzcyNTg5NDE=
1,672
load_dataset hang on file_lock
{ "avatar_url": "https://avatars.githubusercontent.com/u/69860107?v=4", "events_url": "https://api.github.com/users/tomacai/events{/privacy}", "followers_url": "https://api.github.com/users/tomacai/followers", "following_url": "https://api.github.com/users/tomacai/following{/other_user}", "gists_url": "https:...
[]
closed
false
null
[]
null
[ "Can you try to upgrade to a more recent version of datasets?", "Thank, upgrading to 1.1.3 resolved the issue.", "Having the same issue with `datasets 1.1.3` of `1.5.0` (both tracebacks look the same) and `kilt_wikipedia`, Ubuntu 20.04\r\n\r\n```py\r\nIn [1]: from datasets import load_dataset ...
2021-01-01T10:25:07Z
2021-03-31T16:24:13Z
2021-01-01T11:47:36Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
I am trying to load the squad dataset. Fails on Windows 10 but succeeds in Colab. Transformers: 3.3.1 Datasets: 1.0.2 Windows 10 (also tested in WSL) ``` datasets.logging.set_verbosity_debug() datasets. train_dataset = load_dataset('squad', split='train') valid_dataset = load_dataset('squad', split='validat...
{ "avatar_url": "https://avatars.githubusercontent.com/u/69860107?v=4", "events_url": "https://api.github.com/users/tomacai/events{/privacy}", "followers_url": "https://api.github.com/users/tomacai/followers", "following_url": "https://api.github.com/users/tomacai/following{/other_user}", "gists_url": "https:...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1672/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1672/timeline
null
completed
null
null
1.374722
5,875
https://api.github.com/repos/huggingface/datasets/issues/1671
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1671/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1671/comments
https://api.github.com/repos/huggingface/datasets/issues/1671/events
https://github.com/huggingface/datasets/issues/1671
776,652,193
MDU6SXNzdWU3NzY2NTIxOTM=
1,671
connection issue
{ "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}", "followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers", "following_url": "https://api.github.com/users/rabeehkarimimahabadi/followin...
[]
closed
false
null
[]
null
[ "Also, major issue for me is the format issue, even if I go through changing the whole code to use load_from_disk, then if I do \r\n\r\nd = datasets.load_from_disk(\"imdb\")\r\nd = d[\"train\"][:10] => the format of this is no more in datasets format\r\nthis is different from you call load_datasets(\"train[10]\")\...
2020-12-30T21:56:20Z
2022-10-05T12:42:12Z
2022-10-05T12:42:12Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
Hi I am getting this connection issue, resulting in large failure on cloud, @lhoestq I appreciate your help on this. If I want to keep the codes the same, so not using save_to_disk, load_from_disk, but save the datastes in the way load_dataset reads from and copy the files in the same folder the datasets library r...
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url"...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1671/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1671/timeline
null
completed
null
null
15,446.764444
5,876
https://api.github.com/repos/huggingface/datasets/issues/1669
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1669/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1669/comments
https://api.github.com/repos/huggingface/datasets/issues/1669/events
https://github.com/huggingface/datasets/issues/1669
776,608,386
MDU6SXNzdWU3NzY2MDgzODY=
1,669
wiki_dpr dataset pre-processesing performance
{ "avatar_url": "https://avatars.githubusercontent.com/u/753898?v=4", "events_url": "https://api.github.com/users/dbarnhart/events{/privacy}", "followers_url": "https://api.github.com/users/dbarnhart/followers", "following_url": "https://api.github.com/users/dbarnhart/following{/other_user}", "gists_url": "ht...
[]
closed
false
null
[]
null
[ "Sorry, double posted." ]
2020-12-30T19:41:09Z
2020-12-30T19:42:25Z
2020-12-30T19:42:25Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
I've been working with wiki_dpr and noticed that the dataset processing is seriously impaired in performance [1]. It takes about 12h to process the entire dataset. Most of this time is simply loading and processing the data, but the actual indexing is also quite slow (3h). I won't repeat the concerns around multipro...
{ "avatar_url": "https://avatars.githubusercontent.com/u/753898?v=4", "events_url": "https://api.github.com/users/dbarnhart/events{/privacy}", "followers_url": "https://api.github.com/users/dbarnhart/followers", "following_url": "https://api.github.com/users/dbarnhart/following{/other_user}", "gists_url": "ht...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1669/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1669/timeline
null
completed
null
null
0.021111
5,878
https://api.github.com/repos/huggingface/datasets/issues/1662
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1662/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1662/comments
https://api.github.com/repos/huggingface/datasets/issues/1662/events
https://github.com/huggingface/datasets/issues/1662
775,890,154
MDU6SXNzdWU3NzU4OTAxNTQ=
1,662
Arrow file is too large when saving vector data
{ "avatar_url": "https://avatars.githubusercontent.com/u/22360336?v=4", "events_url": "https://api.github.com/users/weiwangorg/events{/privacy}", "followers_url": "https://api.github.com/users/weiwangorg/followers", "following_url": "https://api.github.com/users/weiwangorg/following{/other_user}", "gists_url"...
[]
closed
false
null
[]
null
[ "Hi !\r\nThe arrow file size is due to the embeddings. Indeed if they're stored as float32 then the total size of the embeddings is\r\n\r\n20 000 000 vectors * 768 dimensions * 4 bytes per dimension ~= 60GB\r\n\r\nIf you want to reduce the size you can consider using quantization for example, or maybe using dimensi...
2020-12-29T13:23:12Z
2021-01-21T14:12:39Z
2021-01-21T14:12:39Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
I computed a sentence embedding for each sentence of the BookCorpus data using BERT base and saved them to disk. I used 20M sentences, and the resulting Arrow file is about 59GB while the original text file is only about 1.3GB. Is there any way to reduce the size of the Arrow file?
{ "avatar_url": "https://avatars.githubusercontent.com/u/22360336?v=4", "events_url": "https://api.github.com/users/weiwangorg/events{/privacy}", "followers_url": "https://api.github.com/users/weiwangorg/followers", "following_url": "https://api.github.com/users/weiwangorg/following{/other_user}", "gists_url"...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1662/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1662/timeline
null
completed
null
null
552.824167
5,885
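The maintainer's size estimate in the thread above is simple arithmetic, reproduced here as a sanity check (assuming uncompressed float32 storage):

```python
# 20M vectors x 768 dimensions x 4 bytes per float32 value.
num_vectors = 20_000_000
dim = 768
bytes_per_float32 = 4

total_bytes = num_vectors * dim * bytes_per_float32
gib = total_bytes / 1024**3
print(round(gib, 1))  # 57.2 -- in line with the ~59GB Arrow file observed
```

Halving the precision to float16 would halve this to roughly 28.6 GiB, which is one concrete form of the quantization/precision suggestion made in the thread.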
https://api.github.com/repos/huggingface/datasets/issues/1647
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1647/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1647/comments
https://api.github.com/repos/huggingface/datasets/issues/1647/events
https://github.com/huggingface/datasets/issues/1647
775,525,799
MDU6SXNzdWU3NzU1MjU3OTk=
1,647
NarrativeQA fails to load with `load_dataset`
{ "avatar_url": "https://avatars.githubusercontent.com/u/56408839?v=4", "events_url": "https://api.github.com/users/eric-mitchell/events{/privacy}", "followers_url": "https://api.github.com/users/eric-mitchell/followers", "following_url": "https://api.github.com/users/eric-mitchell/following{/other_user}", "g...
[]
closed
false
null
[]
null
[ "Hi @eric-mitchell,\r\nI think the issue might be that this dataset was added during the community sprint and has not been released yet. It will be available with the v2 of `datasets`.\r\nFor now, you should be able to load the datasets after installing the latest (master) version of `datasets` using pip:\r\n`pip i...
2020-12-28T18:16:09Z
2021-01-05T12:05:08Z
2021-01-03T17:58:05Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
When loading the NarrativeQA dataset with `load_dataset('narrativeqa')` as given in the documentation [here](https://huggingface.co/datasets/narrativeqa), I receive a cascade of exceptions, ending with FileNotFoundError: Couldn't find file locally at narrativeqa/narrativeqa.py, or remotely at https://r...
{ "avatar_url": "https://avatars.githubusercontent.com/u/56408839?v=4", "events_url": "https://api.github.com/users/eric-mitchell/events{/privacy}", "followers_url": "https://api.github.com/users/eric-mitchell/followers", "following_url": "https://api.github.com/users/eric-mitchell/following{/other_user}", "g...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1647/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1647/timeline
null
completed
null
null
143.698889
5,900
https://api.github.com/repos/huggingface/datasets/issues/1644
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1644/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1644/comments
https://api.github.com/repos/huggingface/datasets/issues/1644/events
https://github.com/huggingface/datasets/issues/1644
775,375,880
MDU6SXNzdWU3NzUzNzU4ODA=
1,644
HoVeR dataset fails to load
{ "avatar_url": "https://avatars.githubusercontent.com/u/1473778?v=4", "events_url": "https://api.github.com/users/urikz/events{/privacy}", "followers_url": "https://api.github.com/users/urikz/followers", "following_url": "https://api.github.com/users/urikz/following{/other_user}", "gists_url": "https://api.g...
[]
closed
false
null
[]
null
[ "Hover was added recently, that's why it wasn't available yet.\r\n\r\nTo load it you can just update `datasets`\r\n```\r\npip install --upgrade datasets\r\n```\r\n\r\nand then you can load `hover` with\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"hover\")\r\n```" ]
2020-12-28T12:27:07Z
2022-10-05T12:40:34Z
2022-10-05T12:40:34Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
Hi! I'm getting an error when trying to load **HoVeR** dataset. Another one (**SQuAD**) does work for me. I'm using the latest (1.1.3) version of the library. Steps to reproduce the error: ```python >>> from datasets import load_dataset >>> dataset = load_dataset("hover") Traceback (most recent call last): ...
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url"...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1644/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1644/timeline
null
completed
null
null
15,504.224167
5,903
https://api.github.com/repos/huggingface/datasets/issues/1643
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1643/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1643/comments
https://api.github.com/repos/huggingface/datasets/issues/1643/events
https://github.com/huggingface/datasets/issues/1643
775,280,046
MDU6SXNzdWU3NzUyODAwNDY=
1,643
Dataset social_bias_frames 404
{ "avatar_url": "https://avatars.githubusercontent.com/u/7501517?v=4", "events_url": "https://api.github.com/users/atemate/events{/privacy}", "followers_url": "https://api.github.com/users/atemate/followers", "following_url": "https://api.github.com/users/atemate/following{/other_user}", "gists_url": "https:/...
[]
closed
false
null
[]
null
[ "I see, master is already fixed in https://github.com/huggingface/datasets/commit/9e058f098a0919efd03a136b9b9c3dec5076f626" ]
2020-12-28T08:35:34Z
2020-12-28T08:38:07Z
2020-12-28T08:38:07Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
``` >>> from datasets import load_dataset >>> dataset = load_dataset("social_bias_frames") ... Downloading and preparing dataset social_bias_frames/default ... ~/.pyenv/versions/3.7.6/lib/python3.7/site-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, ...
{ "avatar_url": "https://avatars.githubusercontent.com/u/7501517?v=4", "events_url": "https://api.github.com/users/atemate/events{/privacy}", "followers_url": "https://api.github.com/users/atemate/followers", "following_url": "https://api.github.com/users/atemate/following{/other_user}", "gists_url": "https:/...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1643/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1643/timeline
null
completed
null
null
0.0425
5,904
https://api.github.com/repos/huggingface/datasets/issues/1641
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1641/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1641/comments
https://api.github.com/repos/huggingface/datasets/issues/1641/events
https://github.com/huggingface/datasets/issues/1641
775,110,872
MDU6SXNzdWU3NzUxMTA4NzI=
1,641
muchocine dataset cannot be dowloaded
{ "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https:/...
[ { "color": "ffffff", "default": true, "description": "This will not be worked on", "id": 1935892913, "name": "wontfix", "node_id": "MDU6TGFiZWwxOTM1ODkyOTEz", "url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix" }, { "color": "2edb81", "default": false, ...
closed
false
null
[]
null
[ "I have encountered the same error with `v1.0.1` and `v1.0.2` on both Windows and Linux environments. However, cloning the repo and using the path to the dataset's root directory worked for me. Even after having the dataset cached - passing the path is the only way (for now) to load the dataset.\r\n\r\n```python\r\...
2020-12-27T21:26:28Z
2021-08-03T05:07:29Z
2021-08-03T05:07:29Z
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
```python --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) /usr/local/lib/python3.6/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, ...
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1641/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1641/timeline
null
completed
null
null
5,239.683611
5,906
https://api.github.com/repos/huggingface/datasets/issues/1639
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1639/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1639/comments
https://api.github.com/repos/huggingface/datasets/issues/1639/events
https://github.com/huggingface/datasets/issues/1639
774,903,472
MDU6SXNzdWU3NzQ5MDM0NzI=
1,639
bug with sst2 in glue
{ "avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4", "events_url": "https://api.github.com/users/ghost/events{/privacy}", "followers_url": "https://api.github.com/users/ghost/followers", "following_url": "https://api.github.com/users/ghost/following{/other_user}", "gists_url": "https://api.git...
[]
closed
false
null
[]
null
[ "Maybe you can use nltk's treebank detokenizer ?\r\n```python\r\nfrom nltk.tokenize.treebank import TreebankWordDetokenizer\r\n\r\nTreebankWordDetokenizer().detokenize(\"it 's a charming and often affecting journey . \".split())\r\n# \"it's a charming and often affecting journey.\"\r\n```", "I am looking for alte...
2020-12-26T16:57:23Z
2022-10-05T12:40:16Z
2022-10-05T12:40:16Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
Hi, I am getting very low accuracy on SST2. I investigated this and observed that for this dataset the sentences are tokenized, whereas the other datasets in GLUE are correctly untokenized; please see below. Are there any alternatives by which I could get untokenized sentences? I am unfortunately under time pressure to report some results on ...
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url"...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1639/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1639/timeline
null
completed
null
null
15,547.714722
5,908
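The thread above suggests NLTK's `TreebankWordDetokenizer` for recovering untokenized SST2 sentences. As a dependency-free sketch of the same idea (a hypothetical `detokenize` helper covering only the most common Penn Treebank artifacts, not a full replacement for NLTK):

```python
import re

def detokenize(text: str) -> str:
    """Roughly undo Penn Treebank tokenization: drop spaces before
    punctuation and re-attach split contractions. Heuristic sketch only."""
    out = text.strip()
    out = re.sub(r"\s+([.,!?;:%])", r"\1", out)                 # " ." -> "."
    out = re.sub(r"\s+('s|'re|'ve|'ll|'d|n't|')", r"\1", out)   # " 's" -> "'s"
    return out

print(detokenize("it 's a charming and often affecting journey . "))
```

This is only a heuristic; the detokenizer from the comment above handles far more cases (quotes, brackets, currency symbols).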
https://api.github.com/repos/huggingface/datasets/issues/1636
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1636/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1636/comments
https://api.github.com/repos/huggingface/datasets/issues/1636/events
https://github.com/huggingface/datasets/issues/1636
774,574,378
MDU6SXNzdWU3NzQ1NzQzNzg=
1,636
winogrande cannot be dowloaded
{ "avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4", "events_url": "https://api.github.com/users/ghost/events{/privacy}", "followers_url": "https://api.github.com/users/ghost/followers", "following_url": "https://api.github.com/users/ghost/following{/other_user}", "gists_url": "https://api.git...
[]
closed
false
null
[]
null
[ "I have same issue for other datasets (`myanmar_news` in my case).\r\n\r\nA version of `datasets` runs correctly on my local machine (**without GPU**) which looking for the dataset at \r\n```\r\nhttps://raw.githubusercontent.com/huggingface/datasets/master/datasets/myanmar_news/myanmar_news.py\r\n```\r\n\r\nMeanwhi...
2020-12-24T22:28:22Z
2022-10-05T12:35:44Z
2022-10-05T12:35:44Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
Hi, I am getting this error when trying to run the code on the cloud. Thank you for any suggestion and help on this @lhoestq ``` File "./finetune_trainer.py", line 318, in <module> main() File "./finetune_trainer.py", line 148, in main for task in data_args.tasks] File "./finetune_trainer.py", ...
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url"...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1636/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1636/timeline
null
completed
null
null
15,590.122778
5,911
https://api.github.com/repos/huggingface/datasets/issues/1635
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1635/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1635/comments
https://api.github.com/repos/huggingface/datasets/issues/1635/events
https://github.com/huggingface/datasets/issues/1635
774,524,492
MDU6SXNzdWU3NzQ1MjQ0OTI=
1,635
Persian Abstractive/Extractive Text Summarization
{ "avatar_url": "https://avatars.githubusercontent.com/u/2601833?v=4", "events_url": "https://api.github.com/users/m3hrdadfi/events{/privacy}", "followers_url": "https://api.github.com/users/m3hrdadfi/followers", "following_url": "https://api.github.com/users/m3hrdadfi/following{/other_user}", "gists_url": "h...
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
[]
null
[]
2020-12-24T17:47:12Z
2021-01-04T15:11:04Z
2021-01-04T15:11:04Z
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
Assembling datasets tailored to different tasks and languages is a valuable goal. It would be great to have this dataset included. ## Adding a Dataset - **Name:** *pn-summary* - **Description:** *A well-structured summarization dataset for the Persian language, consisting of 93,207 records. It is prepared for Abs...
{ "avatar_url": "https://avatars.githubusercontent.com/u/2601833?v=4", "events_url": "https://api.github.com/users/m3hrdadfi/events{/privacy}", "followers_url": "https://api.github.com/users/m3hrdadfi/followers", "following_url": "https://api.github.com/users/m3hrdadfi/following{/other_user}", "gists_url": "h...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1635/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1635/timeline
null
completed
null
null
261.397778
5,912
https://api.github.com/repos/huggingface/datasets/issues/1634
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1634/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1634/comments
https://api.github.com/repos/huggingface/datasets/issues/1634/events
https://github.com/huggingface/datasets/issues/1634
774,487,934
MDU6SXNzdWU3NzQ0ODc5MzQ=
1,634
Inspecting datasets per category
{ "avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4", "events_url": "https://api.github.com/users/ghost/events{/privacy}", "followers_url": "https://api.github.com/users/ghost/followers", "following_url": "https://api.github.com/users/ghost/following{/other_user}", "gists_url": "https://api.git...
[]
closed
false
null
[]
null
[ "That's interesting, can you tell me what you think would be useful to access to inspect a dataset?\r\n\r\nYou can filter them in the hub with the search by the way: https://huggingface.co/datasets have you seen it?", "Hi @thomwolf \r\nthank you, I was not aware of this, I was looking into the data viewer linked ...
2020-12-24T15:26:34Z
2022-10-04T14:57:33Z
2022-10-04T14:57:33Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
Hi, is there a way I could get all NLI datasets/all QA datasets, to get some understanding of the available datasets per category? It is hard for me to inspect the datasets one by one on the webpage. Thanks for the suggestions @lhoestq
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url"...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1634/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1634/timeline
null
completed
null
null
15,575.516389
5,913
https://api.github.com/repos/huggingface/datasets/issues/1633
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1633/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1633/comments
https://api.github.com/repos/huggingface/datasets/issues/1633/events
https://github.com/huggingface/datasets/issues/1633
774,422,603
MDU6SXNzdWU3NzQ0MjI2MDM=
1,633
social_i_qa wrong format of labels
{ "avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4", "events_url": "https://api.github.com/users/ghost/events{/privacy}", "followers_url": "https://api.github.com/users/ghost/followers", "following_url": "https://api.github.com/users/ghost/following{/other_user}", "gists_url": "https://api.git...
[]
closed
false
null
[]
null
[ "@lhoestq, should I raise a PR for this? Just a minor change while reading labels text file", "Sure feel free to open a PR thanks !" ]
2020-12-24T13:11:54Z
2020-12-30T17:18:49Z
2020-12-30T17:18:49Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
Hi, there is an extra "\n" in the labels of the social_i_qa dataset; no big deal, but I was wondering if you could remove it to make it consistent. So the label is 'label': '1\n', not '1'. Thanks ``` >>> import datasets >>> from datasets import load_dataset >>> dataset = load_dataset( ... 'social_i_qa') cahce dir /jul...
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https:...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1633/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1633/timeline
null
completed
null
null
148.115278
5,914
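Until the trailing `\n` reported above is removed upstream, it is easy to strip in a transform you would pass to `Dataset.map`. A minimal pure-Python sketch (the `fix_label` helper is hypothetical, not part of the library):

```python
def fix_label(example: dict) -> dict:
    # Strip the stray trailing newline from the raw label, e.g. '1\n' -> '1'
    example["label"] = example["label"].strip()
    return example

# Stand-in for dataset.map(fix_label) applied to a couple of rows:
rows = [{"label": "1\n"}, {"label": "3\n"}]
cleaned = [fix_label(dict(row)) for row in rows]
print(cleaned)
```

With the real library, `dataset = dataset.map(fix_label)` would apply the same cleanup to every example.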
https://api.github.com/repos/huggingface/datasets/issues/1632
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1632/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1632/comments
https://api.github.com/repos/huggingface/datasets/issues/1632/events
https://github.com/huggingface/datasets/issues/1632
774,388,625
MDU6SXNzdWU3NzQzODg2MjU=
1,632
SICK dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https:/...
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
[]
null
[]
2020-12-24T12:40:14Z
2021-02-05T15:49:25Z
2021-02-05T15:49:25Z
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
Hi, this would be great to have this dataset included. I might be missing something, but I could not find it in the list of already included datasets. Thank you. ## Adding a Dataset - **Name:** SICK - **Description:** SICK consists of about 10,000 English sentence pairs that include many examples of the lexical,...
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https:...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1632/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1632/timeline
null
completed
null
null
1,035.153056
5,915
https://api.github.com/repos/huggingface/datasets/issues/1630
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1630/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1630/comments
https://api.github.com/repos/huggingface/datasets/issues/1630/events
https://github.com/huggingface/datasets/issues/1630
774,332,129
MDU6SXNzdWU3NzQzMzIxMjk=
1,630
Adding UKP Argument Aspect Similarity Corpus
{ "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https:/...
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
[]
null
[ "Adding a link to the guide on adding a dataset if someone want to give it a try: https://github.com/huggingface/datasets#add-a-new-dataset-to-the-hub\r\n\r\nwe should add this guide to the issue template @lhoestq ", "thanks @thomwolf , this is added now. The template is correct, sorry my mistake not to include i...
2020-12-24T11:01:31Z
2022-10-05T12:36:12Z
2022-10-05T12:36:12Z
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
Hi, this would be great to have this dataset included. ## Adding a Dataset - **Name:** UKP Argument Aspect Similarity Corpus - **Description:** The UKP Argument Aspect Similarity Corpus (UKP ASPECT) includes 3,595 sentence pairs over 28 controversial topics. Each sentence pair was annotated via crowdsourcing as ei...
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url"...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1630/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1630/timeline
null
completed
null
null
15,601.578056
5,917
https://api.github.com/repos/huggingface/datasets/issues/1627
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1627/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1627/comments
https://api.github.com/repos/huggingface/datasets/issues/1627/events
https://github.com/huggingface/datasets/issues/1627
773,960,255
MDU6SXNzdWU3NzM5NjAyNTU=
1,627
`Dataset.map` disable progress bar
{ "avatar_url": "https://avatars.githubusercontent.com/u/8767964?v=4", "events_url": "https://api.github.com/users/Nickil21/events{/privacy}", "followers_url": "https://api.github.com/users/Nickil21/followers", "following_url": "https://api.github.com/users/Nickil21/following{/other_user}", "gists_url": "http...
[]
closed
false
null
[]
null
[ "Progress bar can be disabled like this:\r\n```python\r\nfrom datasets.utils.logging import set_verbosity_error\r\nset_verbosity_error()\r\n```\r\n\r\nThere is this line in `Dataset.map`:\r\n```python\r\nnot_verbose = bool(logger.getEffectiveLevel() > WARNING)\r\n```\r\n\r\nSo any logging level higher than `WARNING...
2020-12-23T17:53:42Z
2025-05-16T16:36:24Z
2020-12-26T19:57:17Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
I can't find anything to turn off the `tqdm` progress bars while running a preprocessing function using `Dataset.map`. I want something akin to `disable_tqdm=True` in `transformers`. Is there something like that?
{ "avatar_url": "https://avatars.githubusercontent.com/u/8767964?v=4", "events_url": "https://api.github.com/users/Nickil21/events{/privacy}", "followers_url": "https://api.github.com/users/Nickil21/followers", "following_url": "https://api.github.com/users/Nickil21/following{/other_user}", "gists_url": "http...
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/1627/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1627/timeline
null
completed
null
null
74.059722
5,920
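The answer quoted above hinges on `not_verbose = bool(logger.getEffectiveLevel() > WARNING)`: any logging level above WARNING hides the bars. That check can be reproduced with the stdlib `logging` module alone (the logger name here is made up; the real one lives inside `datasets`):

```python
import logging

logger = logging.getLogger("some_library")
logger.setLevel(logging.ERROR)  # ERROR (40) is above WARNING (30)

# Mirror of the check from Dataset.map quoted in the thread:
not_verbose = bool(logger.getEffectiveLevel() > logging.WARNING)
print("progress bars disabled:", not_verbose)  # prints True
```

Lowering the level back to WARNING or below would re-enable the bars under the same check.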
https://api.github.com/repos/huggingface/datasets/issues/1624
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1624/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1624/comments
https://api.github.com/repos/huggingface/datasets/issues/1624/events
https://github.com/huggingface/datasets/issues/1624
773,669,700
MDU6SXNzdWU3NzM2Njk3MDA=
1,624
Cannot download ade_corpus_v2
{ "avatar_url": "https://avatars.githubusercontent.com/u/20259310?v=4", "events_url": "https://api.github.com/users/him1411/events{/privacy}", "followers_url": "https://api.github.com/users/him1411/followers", "following_url": "https://api.github.com/users/him1411/following{/other_user}", "gists_url": "https:...
[]
closed
false
null
[]
null
[ "Hi @him1411, the dataset you are trying to load has been added during the community sprint and has not been released yet. It will be available with the v2 of `datasets`.\r\nFor now, you should be able to load the datasets after installing the latest (master) version of `datasets` using pip:\r\n`pip install git+htt...
2020-12-23T10:58:14Z
2021-08-03T05:08:54Z
2021-08-03T05:08:54Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
I tried to get the dataset following this URL: https://huggingface.co/datasets/ade_corpus_v2 but received this error: `Traceback (most recent call last): File "/opt/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 267, in prepare_module local_path = cached_path(file_path, download_con...
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",...
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/1624/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1624/timeline
null
completed
null
null
5,346.177778
5,923
https://api.github.com/repos/huggingface/datasets/issues/1622
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1622/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1622/comments
https://api.github.com/repos/huggingface/datasets/issues/1622/events
https://github.com/huggingface/datasets/issues/1622
772,940,768
MDU6SXNzdWU3NzI5NDA3Njg=
1,622
Can't call shape on the output of select()
{ "avatar_url": "https://avatars.githubusercontent.com/u/47183162?v=4", "events_url": "https://api.github.com/users/noaonoszko/events{/privacy}", "followers_url": "https://api.github.com/users/noaonoszko/followers", "following_url": "https://api.github.com/users/noaonoszko/following{/other_user}", "gists_url"...
[]
closed
false
null
[]
null
[ "Indeed that's a typo, do you want to open a PR to fix it?", "Yes, created a PR" ]
2020-12-22T13:18:40Z
2020-12-23T13:37:13Z
2020-12-23T13:37:12Z
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
I get the error `TypeError: tuple expected at most 1 argument, got 2` when calling `shape` on the output of `select()`. It's line 531 in shape in arrow_dataset.py that causes the problem: ``return tuple(self._indices.num_rows, self._data.num_columns)`` This makes sense, since `tuple(num1, num2)` is not a valid call....
{ "avatar_url": "https://avatars.githubusercontent.com/u/47183162?v=4", "events_url": "https://api.github.com/users/noaonoszko/events{/privacy}", "followers_url": "https://api.github.com/users/noaonoszko/followers", "following_url": "https://api.github.com/users/noaonoszko/following{/other_user}", "gists_url"...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1622/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1622/timeline
null
completed
null
null
24.308889
5,925
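The typo above is worth spelling out: `tuple(a, b)` passes two positional arguments to the `tuple` constructor, which accepts at most one iterable, hence the `TypeError`; the fix is a plain tuple literal. A minimal reproduction (with made-up values for the two counts):

```python
num_rows, num_columns = 100, 5

# Buggy form from arrow_dataset.py: tuple() takes at most one
# argument (an iterable), so two positional args raise TypeError.
try:
    shape = tuple(num_rows, num_columns)
except TypeError as err:
    print("buggy call failed:", err)

# Fixed form: a tuple literal.
shape = (num_rows, num_columns)
print(shape)  # prints (100, 5)
```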
https://api.github.com/repos/huggingface/datasets/issues/1618
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1618/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1618/comments
https://api.github.com/repos/huggingface/datasets/issues/1618/events
https://github.com/huggingface/datasets/issues/1618
772,248,730
MDU6SXNzdWU3NzIyNDg3MzA=
1,618
Can't filter language:EN on https://huggingface.co/datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/4547987?v=4", "events_url": "https://api.github.com/users/davidefiocco/events{/privacy}", "followers_url": "https://api.github.com/users/davidefiocco/followers", "following_url": "https://api.github.com/users/davidefiocco/following{/other_user}", "gists...
[]
closed
false
null
[]
null
[ "cc'ing @mapmeld ", "Full language list is now deployed to https://huggingface.co/datasets ! Recommend close", "Cool @mapmeld ! My 2 cents (for a next iteration), it would be cool to have a small search widget in the filter dropdown as you have a ton of languages now here! Closing this in the meantime." ]
2020-12-21T15:23:23Z
2020-12-22T17:17:00Z
2020-12-22T17:16:09Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
When visiting https://huggingface.co/datasets, I don't see an obvious way to filter only English datasets. This is unexpected to me; am I missing something? I'd expect English to be selectable in the language widget. This problem is reproducible on Mozilla Firefox and MS Edge: ![screenshot](https://user-images.githubuse...
{ "avatar_url": "https://avatars.githubusercontent.com/u/4547987?v=4", "events_url": "https://api.github.com/users/davidefiocco/events{/privacy}", "followers_url": "https://api.github.com/users/davidefiocco/followers", "following_url": "https://api.github.com/users/davidefiocco/following{/other_user}", "gists...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1618/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1618/timeline
null
completed
null
null
25.879444
5,929
https://api.github.com/repos/huggingface/datasets/issues/1611
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1611/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1611/comments
https://api.github.com/repos/huggingface/datasets/issues/1611/events
https://github.com/huggingface/datasets/issues/1611
771,486,456
MDU6SXNzdWU3NzE0ODY0NTY=
1,611
shuffle with torch generator
{ "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}", "followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers", "following_url": "https://api.github.com/users/rabeehkarimimahabadi/followin...
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[ "Is there a way one can convert the two generator? not sure overall what alternatives I could have to shuffle the datasets with a torch generator, thanks ", "@lhoestq let me please expalin in more details, maybe you could help me suggesting an alternative to solve the issue for now, I have multiple large dataset...
2020-12-20T00:57:14Z
2022-06-01T15:30:13Z
2022-06-01T15:30:13Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
Hi, I need to shuffle multiple large datasets with `generator = torch.Generator()` for a distributed sampler, which needs to make sure the datasets are consistent across different cores; for this it is really necessary for me to use a torch generator. Based on the documentation, this generator is not supported with datasets, I...
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url"...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1611/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1611/timeline
null
completed
null
null
12,686.549722
5,935
https://api.github.com/repos/huggingface/datasets/issues/1610
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1610/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1610/comments
https://api.github.com/repos/huggingface/datasets/issues/1610/events
https://github.com/huggingface/datasets/issues/1610
771,453,599
MDU6SXNzdWU3NzE0NTM1OTk=
1,610
shuffle does not accept seed
{ "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https:/...
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Hi, did you check the doc on `shuffle`?\r\nhttps://huggingface.co/docs/datasets/package_reference/main_classes.html?datasets.Dataset.shuffle#datasets.Dataset.shuffle", "Hi Thomas\r\nthanks for reponse, yes, I did checked it, but this does not work for me please see \r\n\r\n```\r\n(internship) rkarimi@italix17:/i...
2020-12-19T20:59:39Z
2021-01-04T10:00:03Z
2021-01-04T10:00:03Z
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
Hi, I need to shuffle the dataset based on epoch+seed so that it is consistent across the cores, but when I pass a seed to shuffle, it does not accept the seed. Could you assist me with this? Thanks @lhoestq
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https:...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1610/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1610/timeline
null
completed
null
null
373.006667
5,936
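For reference, `Dataset.shuffle(seed=...)` does accept a seed in current releases; the epoch+seed-consistent ordering the reporter asks about can be sketched in plain Python (the helper name here is illustrative, not part of the `datasets` API):

```python
import random

def epoch_shuffled_indices(num_rows: int, seed: int, epoch: int) -> list:
    """Deterministically shuffle row indices from (seed, epoch).

    Every worker that uses the same seed and epoch derives the same
    permutation, which is what a distributed sampler needs.
    """
    indices = list(range(num_rows))
    # Derive a per-epoch RNG so each epoch reshuffles, but reproducibly.
    random.Random(seed + epoch).shuffle(indices)
    return indices

# The same (seed, epoch) pair always yields the same permutation:
order_a = epoch_shuffled_indices(10, seed=42, epoch=3)
order_b = epoch_shuffled_indices(10, seed=42, epoch=3)
```

With `datasets` itself, the equivalent would be calling `dataset.shuffle(seed=seed + epoch)` at the start of each epoch.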
https://api.github.com/repos/huggingface/datasets/issues/1609
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1609/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1609/comments
https://api.github.com/repos/huggingface/datasets/issues/1609/events
https://github.com/huggingface/datasets/issues/1609
771,421,881
MDU6SXNzdWU3NzE0MjE4ODE=
1,609
Not able to use 'jigsaw_toxicity_pred' dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/7424133?v=4", "events_url": "https://api.github.com/users/jassimran/events{/privacy}", "followers_url": "https://api.github.com/users/jassimran/followers", "following_url": "https://api.github.com/users/jassimran/following{/other_user}", "gists_url": "h...
[]
closed
false
null
[]
null
[ "Hi @jassimran,\r\nThe `jigsaw_toxicity_pred` dataset has not been released yet, it will be available with version 2 of `datasets`, coming soon.\r\nYou can still access it by installing the master (unreleased) version of datasets directly :\r\n`pip install git+https://github.com/huggingface/datasets.git@master`\r\n...
2020-12-19T17:35:48Z
2020-12-22T16:42:24Z
2020-12-22T16:42:23Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
When trying to use jigsaw_toxicity_pred dataset, like this in a [colab](https://colab.research.google.com/drive/1LwO2A5M2X5dvhkAFYE4D2CUT3WUdWnkn?usp=sharing): ``` from datasets import list_datasets, list_metrics, load_dataset, load_metric ds = load_dataset("jigsaw_toxicity_pred") ``` I see below error: >...
{ "avatar_url": "https://avatars.githubusercontent.com/u/7424133?v=4", "events_url": "https://api.github.com/users/jassimran/events{/privacy}", "followers_url": "https://api.github.com/users/jassimran/followers", "following_url": "https://api.github.com/users/jassimran/following{/other_user}", "gists_url": "h...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1609/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1609/timeline
null
completed
null
null
71.109722
5,937
https://api.github.com/repos/huggingface/datasets/issues/1605
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1605/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1605/comments
https://api.github.com/repos/huggingface/datasets/issues/1605/events
https://github.com/huggingface/datasets/issues/1605
770,979,620
MDU6SXNzdWU3NzA5Nzk2MjA=
1,605
Navigation version breaking
{ "avatar_url": "https://avatars.githubusercontent.com/u/3007947?v=4", "events_url": "https://api.github.com/users/mttk/events{/privacy}", "followers_url": "https://api.github.com/users/mttk/followers", "following_url": "https://api.github.com/users/mttk/following{/other_user}", "gists_url": "https://api.gith...
[]
closed
false
null
[]
null
[ "Not relevant for our current docs :)." ]
2020-12-18T15:36:24Z
2022-10-05T12:35:11Z
2022-10-05T12:35:11Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
Hi, when navigating docs (Chrome, Ubuntu) (e.g. on this page: https://huggingface.co/docs/datasets/loading_metrics.html#using-a-custom-metric-script) the version control dropdown has the wrong string displayed as the current version: ![image](https://user-images.githubusercontent.com/3007947/102632187-02cad080-...
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url"...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 1, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1605/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1605/timeline
null
completed
null
null
15,740.979722
5,941
https://api.github.com/repos/huggingface/datasets/issues/1604
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1604/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1604/comments
https://api.github.com/repos/huggingface/datasets/issues/1604/events
https://github.com/huggingface/datasets/issues/1604
770,862,112
MDU6SXNzdWU3NzA4NjIxMTI=
1,604
Add tests for the download functions ?
{ "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "...
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[ "We have some tests now for it under `tests/test_download_manager.py`." ]
2020-12-18T12:49:25Z
2022-10-05T13:04:24Z
2022-10-05T13:04:24Z
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
AFAIK the download functions in `DownloadManager` are not tested yet. It would be good to add some tests to ensure their behavior is as expected.
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url"...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1604/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1604/timeline
null
completed
null
null
15,744.249722
5,942
https://api.github.com/repos/huggingface/datasets/issues/1600
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1600/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1600/comments
https://api.github.com/repos/huggingface/datasets/issues/1600/events
https://github.com/huggingface/datasets/issues/1600
770,582,960
MDU6SXNzdWU3NzA1ODI5NjA=
1,600
AttributeError: 'DatasetDict' object has no attribute 'train_test_split'
{ "avatar_url": "https://avatars.githubusercontent.com/u/5028974?v=4", "events_url": "https://api.github.com/users/david-waterworth/events{/privacy}", "followers_url": "https://api.github.com/users/david-waterworth/followers", "following_url": "https://api.github.com/users/david-waterworth/following{/other_user...
[ { "color": "d876e3", "default": true, "description": "Further information is requested", "id": 1935892912, "name": "question", "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question" } ]
closed
false
null
[]
null
[ "Hi @david-waterworth!\r\n\r\nAs indicated in the error message, `load_dataset(\"csv\")` returns a `DatasetDict` object, which is mapping of `str` to `Dataset` objects. I believe in this case the behavior is to return a `train` split with all the data.\r\n`train_test_split` is a method of the `Dataset` object, so y...
2020-12-18T05:37:10Z
2023-05-03T04:22:55Z
2020-12-21T07:38:58Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
The following code fails with "'DatasetDict' object has no attribute 'train_test_split'" - am I doing something wrong? ``` from datasets import load_dataset dataset = load_dataset('csv', data_files='data.txt') dataset = dataset.train_test_split(test_size=0.1) ``` > AttributeError: 'DatasetDict' object has no at...
{ "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1600/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1600/timeline
null
completed
null
null
74.03
5,946
https://api.github.com/repos/huggingface/datasets/issues/1594
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1594/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1594/comments
https://api.github.com/repos/huggingface/datasets/issues/1594/events
https://github.com/huggingface/datasets/issues/1594
769,747,767
MDU6SXNzdWU3Njk3NDc3Njc=
1,594
connection error
{ "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}", "followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers", "following_url": "https://api.github.com/users/rabeehkarimimahabadi/followin...
[]
closed
false
null
[]
null
[ "This happen quite often when they are too many concurrent requests to github.\r\n\r\ni can understand it’s a bit cumbersome to handle on the user side. Maybe we should try a few times in the lib (eg with timeout) before failing, what do you think @lhoestq ?", "Yes currently there's no retry afaik. We should add ...
2020-12-17T09:18:34Z
2022-06-01T15:33:42Z
2022-06-01T15:33:41Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
Hi I am hitting to this error, thanks ``` > Traceback (most recent call last): File "finetune_t5_trainer.py", line 379, in <module> main() File "finetune_t5_trainer.py", line 208, in main if training_args.do_eval or training_args.evaluation_strategy != EvaluationStrategy.NO File "finetune_t5_tr...
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url"...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1594/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1594/timeline
null
completed
null
null
12,750.251944
5,952
https://api.github.com/repos/huggingface/datasets/issues/1593
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1593/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1593/comments
https://api.github.com/repos/huggingface/datasets/issues/1593/events
https://github.com/huggingface/datasets/issues/1593
769,611,386
MDU6SXNzdWU3Njk2MTEzODY=
1,593
Access to key in DatasetDict map
{ "avatar_url": "https://avatars.githubusercontent.com/u/11954789?v=4", "events_url": "https://api.github.com/users/ZhaofengWu/events{/privacy}", "followers_url": "https://api.github.com/users/ZhaofengWu/followers", "following_url": "https://api.github.com/users/ZhaofengWu/following{/other_user}", "gists_url"...
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[ "Indeed that would be cool\r\n\r\nAlso FYI right now the easiest way to do this is\r\n```python\r\ndataset_dict[\"train\"] = dataset_dict[\"train\"].map(my_transform_for_the_train_set)\r\ndataset_dict[\"test\"] = dataset_dict[\"test\"].map(my_transform_for_the_test_set)\r\n```", "I don't feel like adding an extra...
2020-12-17T07:02:20Z
2022-10-05T13:47:28Z
2022-10-05T12:33:06Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
It is possible that we want to do different things in the `map` function (and possibly other functions too) of a `DatasetDict`, depending on the key. I understand that `DatasetDict.map` is a really thin wrapper of `Dataset.map`, so it is easy to directly implement this functionality in the client code. Still, it'd be n...
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url"...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1593/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1593/timeline
null
completed
null
null
15,773.512778
5,953
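The workaround suggested in the comments, applying a different function per split, can be written as a small helper over any mapping of splits (a sketch; `map_per_key` is a hypothetical name, not a `DatasetDict` method, and the "splits" here are plain lists to keep it self-contained):

```python
def map_per_key(dataset_dict: dict, fn) -> dict:
    """Apply fn(key, split) to each split, mimicking a DatasetDict.map
    variant that exposes the split name to the mapped function."""
    return {key: fn(key, ds) for key, ds in dataset_dict.items()}

# Toy splits: double values in "train" only, leave "test" untouched.
dd = {"train": [1, 2, 3], "test": [4, 5]}
out = map_per_key(dd, lambda key, ds: [x * 2 for x in ds] if key == "train" else ds)
```

With real `Dataset` objects, `fn` would call `ds.map(...)` with per-split arguments.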
https://api.github.com/repos/huggingface/datasets/issues/1591
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1591/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1591/comments
https://api.github.com/repos/huggingface/datasets/issues/1591/events
https://github.com/huggingface/datasets/issues/1591
769,383,714
MDU6SXNzdWU3NjkzODM3MTQ=
1,591
IWSLT-17 Link Broken
{ "avatar_url": "https://avatars.githubusercontent.com/u/11954789?v=4", "events_url": "https://api.github.com/users/ZhaofengWu/events{/privacy}", "followers_url": "https://api.github.com/users/ZhaofengWu/followers", "following_url": "https://api.github.com/users/ZhaofengWu/following{/other_user}", "gists_url"...
[ { "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists", "id": 1935892865, "name": "duplicate", "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate" }, { "color": "2edb81", ...
closed
false
null
[]
null
[ "Sorry, this is a duplicate of #1287. Not sure why it didn't come up when I searched `iwslt` in the issues list.", "Closing this since its a duplicate" ]
2020-12-17T00:46:42Z
2020-12-18T08:06:36Z
2020-12-18T08:05:28Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
``` FileNotFoundError: Couldn't find file at https://wit3.fbk.eu/archive/2017-01-trnmted//texts/DeEnItNlRo/DeEnItNlRo/DeEnItNlRo-DeEnItNlRo.tgz ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1591/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1591/timeline
null
completed
null
null
31.312778
5,954
https://api.github.com/repos/huggingface/datasets/issues/1590
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1590/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1590/comments
https://api.github.com/repos/huggingface/datasets/issues/1590/events
https://github.com/huggingface/datasets/issues/1590
769,242,858
MDU6SXNzdWU3NjkyNDI4NTg=
1,590
Add helper to resolve namespace collision
{ "avatar_url": "https://avatars.githubusercontent.com/u/8204807?v=4", "events_url": "https://api.github.com/users/jramapuram/events{/privacy}", "followers_url": "https://api.github.com/users/jramapuram/followers", "following_url": "https://api.github.com/users/jramapuram/following{/other_user}", "gists_url":...
[]
closed
false
null
[]
null
[ "Do you have an example?", "I was thinking about using something like [importlib](https://docs.python.org/3/library/importlib.html#importing-a-source-file-directly) to over-ride the collision. \r\n\r\n**Reason requested**: I use the [following template](https://github.com/jramapuram/ml_base/) repo where I house a...
2020-12-16T20:17:24Z
2022-06-01T15:32:04Z
2022-06-01T15:32:04Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
Many projects use a module called `datasets`; however, this is incompatible with huggingface datasets. It would be great if there was some helper or similar function to resolve such a common conflict.
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url"...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1590/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1590/timeline
null
completed
null
null
12,763.244444
5,955
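The `importlib` approach mentioned in the comments, loading a local file that shadows `datasets` under an explicit module name, looks roughly like this (a sketch; the file name and module name are illustrative):

```python
import importlib.util
import os
import tempfile

def load_module_as(path: str, name: str):
    """Load a Python file under an explicit module name, sidestepping
    a name collision with an installed package of the same name."""
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module

# Demo: a local file that could have been called datasets.py.
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "datasets.py")
    with open(path, "w") as f:
        f.write("ANSWER = 42\n")
    local_datasets = load_module_as(path, "my_local_datasets")
```

Because the module is registered under `my_local_datasets`, `import datasets` still resolves to the installed huggingface package.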
https://api.github.com/repos/huggingface/datasets/issues/1585
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1585/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1585/comments
https://api.github.com/repos/huggingface/datasets/issues/1585/events
https://github.com/huggingface/datasets/issues/1585
768,831,171
MDU6SXNzdWU3Njg4MzExNzE=
1,585
FileNotFoundError for `amazon_polarity`
{ "avatar_url": "https://avatars.githubusercontent.com/u/24647404?v=4", "events_url": "https://api.github.com/users/phtephanx/events{/privacy}", "followers_url": "https://api.github.com/users/phtephanx/followers", "following_url": "https://api.github.com/users/phtephanx/following{/other_user}", "gists_url": "...
[]
closed
false
null
[]
null
[ "Hi @phtephanx , the `amazon_polarity` dataset has not been released yet. It will be available in the coming soon v2of `datasets` :) \r\n\r\nYou can still access it now if you want, but you will need to install datasets via the master branch:\r\n`pip install git+https://github.com/huggingface/datasets.git@master`" ...
2020-12-16T12:51:05Z
2020-12-16T16:02:56Z
2020-12-16T16:02:56Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
Version: `datasets==v1.1.3` ### Reproduction ```python from datasets import load_dataset data = load_dataset("amazon_polarity") ``` crashes with ```bash FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/amazon_polarity/amazon_polarity.py ``` and ...
{ "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1585/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1585/timeline
null
completed
null
null
3.1975
5,960
https://api.github.com/repos/huggingface/datasets/issues/1581
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1581/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1581/comments
https://api.github.com/repos/huggingface/datasets/issues/1581/events
https://github.com/huggingface/datasets/issues/1581
768,320,594
MDU6SXNzdWU3NjgzMjA1OTQ=
1,581
Installing datasets and transformers in a tensorflow docker image throws Permission Error on 'import transformers'
{ "avatar_url": "https://avatars.githubusercontent.com/u/702586?v=4", "events_url": "https://api.github.com/users/eduardofv/events{/privacy}", "followers_url": "https://api.github.com/users/eduardofv/followers", "following_url": "https://api.github.com/users/eduardofv/following{/other_user}", "gists_url": "ht...
[]
closed
false
null
[]
null
[ "Thanks for reporting !\r\nYou can override the directory in which cache file are stored using for example\r\n```\r\nENV HF_HOME=\"/root/cache/hf_cache_home\"\r\n```\r\n\r\nThis way both `transformers` and `datasets` will use this directory instead of the default `.cache`", "Great, thanks. I didn't see documentat...
2020-12-16T00:02:21Z
2021-06-17T15:40:45Z
2021-06-17T15:40:45Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
I am using a docker container, based on latest tensorflow-gpu image, to run transformers and datasets (4.0.1 and 1.1.3 respectively - Dockerfile attached below). Importing transformers throws a Permission Error to access `/.cache`: ``` $ docker run --gpus=all --rm -it -u $(id -u):$(id -g) -v $(pwd)/data:/root/data ...
{ "avatar_url": "https://avatars.githubusercontent.com/u/702586?v=4", "events_url": "https://api.github.com/users/eduardofv/events{/privacy}", "followers_url": "https://api.github.com/users/eduardofv/followers", "following_url": "https://api.github.com/users/eduardofv/following{/other_user}", "gists_url": "ht...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1581/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1581/timeline
null
completed
null
null
4,407.64
5,964
https://api.github.com/repos/huggingface/datasets/issues/1541
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1541/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1541/comments
https://api.github.com/repos/huggingface/datasets/issues/1541/events
https://github.com/huggingface/datasets/issues/1541
765,430,586
MDU6SXNzdWU3NjU0MzA1ODY=
1,541
connection issue while downloading data
{ "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}", "followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers", "following_url": "https://api.github.com/users/rabeehkarimimahabadi/followin...
[]
closed
false
null
[]
null
[ "could you tell me how I can avoid download, by pre-downloading the data first, put them in a folder so the code does not try to redownload? could you tell me the path to put the downloaded data, and how to do it? thanks\r\n@lhoestq ", "Does your instance have an internet connection ?\r\n\r\nIf you don't have an ...
2020-12-13T14:27:00Z
2022-10-05T12:33:29Z
2022-10-05T12:33:29Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
Hi, I am running my code on Google Cloud and I am getting this error, which makes the code fail when trying to download the data; could you assist me in solving this? Also, as a temporary solution, could you tell me how I can increase the number of retries and the timeout to at least let the models run for now. t...
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url"...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1541/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1541/timeline
null
completed
null
null
15,862.108056
6,004
https://api.github.com/repos/huggingface/datasets/issues/1514
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1514/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1514/comments
https://api.github.com/repos/huggingface/datasets/issues/1514/events
https://github.com/huggingface/datasets/issues/1514
764,017,148
MDU6SXNzdWU3NjQwMTcxNDg=
1,514
how to get all the options of a property in datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https:/...
[ { "color": "d876e3", "default": true, "description": "Further information is requested", "id": 1935892912, "name": "question", "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question" } ]
closed
false
null
[]
null
[ "In a dataset, labels correspond to the `ClassLabel` feature that has the `names` property that returns string represenation of the integer classes (or `num_classes` to get the number of different classes).", "I think the `features` attribute of the dataset object is what you are looking for:\r\n```\r\n>>> datase...
2020-12-12T16:24:08Z
2022-05-25T16:27:29Z
2022-05-25T16:27:29Z
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
Hi, could you tell me how I can get all unique options of a property of a dataset? For instance, in the case of boolq, if the user wants to know which unique labels it has, is there a way to access the unique labels without getting all the training data labels and then forming a set? Thanks
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url"...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1514/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1514/timeline
null
completed
null
null
12,696.055833
6,031
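As the comments explain, the answer is the `ClassLabel` feature: `dataset.features["label"].names` lists the classes without scanning the data. A minimal stand-in shows what such a feature stores (`TinyClassLabel` is a hypothetical class for illustration, not the real `datasets.ClassLabel` implementation):

```python
class TinyClassLabel:
    """Minimal stand-in for a ClassLabel feature: integer classes plus
    their string names, available without touching the training data."""

    def __init__(self, names):
        self.names = list(names)
        self.num_classes = len(self.names)

    def int2str(self, idx: int) -> str:
        return self.names[idx]

    def str2int(self, name: str) -> int:
        return self.names.index(name)

# boolq-style binary labels.
label_feature = TinyClassLabel(["False", "True"])
```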
https://api.github.com/repos/huggingface/datasets/issues/1478
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1478/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1478/comments
https://api.github.com/repos/huggingface/datasets/issues/1478/events
https://github.com/huggingface/datasets/issues/1478
762,293,076
MDU6SXNzdWU3NjIyOTMwNzY=
1,478
Inconsistent argument names.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8402500?v=4", "events_url": "https://api.github.com/users/Fraser-Greenlee/events{/privacy}", "followers_url": "https://api.github.com/users/Fraser-Greenlee/followers", "following_url": "https://api.github.com/users/Fraser-Greenlee/following{/other_user}",...
[]
closed
false
null
[]
null
[ "Also for the `Accuracy` metric the `accuracy_score` method should have its args in the opposite order so `accuracy_score(predictions, references,,,)`.", "Thanks for pointing this out ! πŸ•΅πŸ» \r\nPredictions and references should indeed be swapped in the docstring.\r\nHowever, the call to `accuracy_score` should n...
2020-12-11T12:19:38Z
2020-12-19T15:03:39Z
2020-12-19T15:03:39Z
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
Just find it a wee bit odd that in the transformers library `predictions` are those made by the model: https://github.com/huggingface/transformers/blob/master/src/transformers/trainer_utils.py#L51-L61 While in many datasets metrics they are the ground truth labels: https://github.com/huggingface/datasets/blob/c3f5...
{ "avatar_url": "https://avatars.githubusercontent.com/u/8402500?v=4", "events_url": "https://api.github.com/users/Fraser-Greenlee/events{/privacy}", "followers_url": "https://api.github.com/users/Fraser-Greenlee/followers", "following_url": "https://api.github.com/users/Fraser-Greenlee/following{/other_user}",...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1478/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1478/timeline
null
completed
null
null
194.733611
6,067
https://api.github.com/repos/huggingface/datasets/issues/1452
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1452/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1452/comments
https://api.github.com/repos/huggingface/datasets/issues/1452/events
https://github.com/huggingface/datasets/issues/1452
761,104,924
MDU6SXNzdWU3NjExMDQ5MjQ=
1,452
SNLI dataset contains labels with value -1
{ "avatar_url": "https://avatars.githubusercontent.com/u/11405654?v=4", "events_url": "https://api.github.com/users/aarnetalman/events{/privacy}", "followers_url": "https://api.github.com/users/aarnetalman/followers", "following_url": "https://api.github.com/users/aarnetalman/following{/other_user}", "gists_u...
[]
closed
false
null
[]
null
[ "I believe the `-1` label is used for missing/NULL data as per HuggingFace Dataset conventions. If I recall correctly SNLI has some entries with no (gold) labels in the dataset.", "Ah, you're right. The dataset has some pairs with missing labels. Thanks for reminding me." ]
2020-12-10T10:16:55Z
2020-12-10T17:49:55Z
2020-12-10T17:49:55Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
``` import datasets nli_data = datasets.load_dataset("snli") train_data = nli_data['train'] train_labels = train_data['label'] label_set = set(train_labels) print(label_set) ``` **Output:** `{0, 1, 2, -1}`
{ "avatar_url": "https://avatars.githubusercontent.com/u/11405654?v=4", "events_url": "https://api.github.com/users/aarnetalman/events{/privacy}", "followers_url": "https://api.github.com/users/aarnetalman/followers", "following_url": "https://api.github.com/users/aarnetalman/following{/other_user}", "gists_u...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1452/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1452/timeline
null
completed
null
null
7.55
6,093
https://api.github.com/repos/huggingface/datasets/issues/1444
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1444/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1444/comments
https://api.github.com/repos/huggingface/datasets/issues/1444/events
https://github.com/huggingface/datasets/issues/1444
761,055,651
MDU6SXNzdWU3NjEwNTU2NTE=
1,444
FileNotFound remotly, can't load a dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/18331629?v=4", "events_url": "https://api.github.com/users/sadakmed/events{/privacy}", "followers_url": "https://api.github.com/users/sadakmed/followers", "following_url": "https://api.github.com/users/sadakmed/following{/other_user}", "gists_url": "htt...
[]
closed
false
null
[]
null
[ "This dataset will be available in version-2 of the library. If you want to use this dataset now, install datasets from `master` branch rather.\r\n\r\nCommand to install datasets from `master` branch:\r\n`!pip install git+https://github.com/huggingface/datasets.git@master`", "Closing this, thanks @VasudevGupta7 "...
2020-12-10T09:14:47Z
2020-12-15T17:41:14Z
2020-12-15T17:41:14Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
```py !pip install datasets import datasets as ds corpus = ds.load_dataset('large_spanish_corpus') ``` gives the error > FileNotFoundError: Couldn't find file locally at large_spanish_corpus/large_spanish_corpus.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/large_spa...
{ "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1444/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1444/timeline
null
completed
null
null
128.440833
6,101
https://api.github.com/repos/huggingface/datasets/issues/1422
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1422/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1422/comments
https://api.github.com/repos/huggingface/datasets/issues/1422/events
https://github.com/huggingface/datasets/issues/1422
760,707,113
MDU6SXNzdWU3NjA3MDcxMTM=
1,422
Can't map dataset (loaded from csv)
{ "avatar_url": "https://avatars.githubusercontent.com/u/28161779?v=4", "events_url": "https://api.github.com/users/SolomidHero/events{/privacy}", "followers_url": "https://api.github.com/users/SolomidHero/followers", "following_url": "https://api.github.com/users/SolomidHero/following{/other_user}", "gists_u...
[]
closed
false
null
[]
null
[ "Please could you post the whole script? I can't reproduce your issue. After updating the feature names/labels to match with the data, everything works fine for me. Try to update datasets/transformers to the newest version.", "Actually, the problem was how `tokenize` function was defined. This was completely my s...
2020-12-09T22:05:42Z
2020-12-17T18:13:40Z
2020-12-17T18:13:40Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
Hello! I am trying to load single csv file with two columns: ('label': str, 'text' str), where is label is str of two possible classes. Below steps are similar with [this notebook](https://colab.research.google.com/drive/1-JIJlao4dI-Ilww_NnTc0rxtp-ymgDgM?usp=sharing), where bert model and tokenizer are used to class...
{ "avatar_url": "https://avatars.githubusercontent.com/u/28161779?v=4", "events_url": "https://api.github.com/users/SolomidHero/events{/privacy}", "followers_url": "https://api.github.com/users/SolomidHero/followers", "following_url": "https://api.github.com/users/SolomidHero/following{/other_user}", "gists_u...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1422/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1422/timeline
null
completed
null
null
188.132778
6,123
https://api.github.com/repos/huggingface/datasets/issues/1299
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1299/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1299/comments
https://api.github.com/repos/huggingface/datasets/issues/1299/events
https://github.com/huggingface/datasets/issues/1299
759,414,566
MDU6SXNzdWU3NTk0MTQ1NjY=
1,299
can't load "german_legal_entity_recognition" dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/59837137?v=4", "events_url": "https://api.github.com/users/nataly-obr/events{/privacy}", "followers_url": "https://api.github.com/users/nataly-obr/followers", "following_url": "https://api.github.com/users/nataly-obr/following{/other_user}", "gists_url"...
[]
closed
false
null
[]
null
[ "Please if you could tell me more about the error? \r\n\r\n1. Please check the directory you've been working on\r\n2. Check for any typos", "> Please if you could tell me more about the error?\r\n> \r\n> 1. Please check the directory you've been working on\r\n> 2. Check for any typos\r\n\r\nError happens during t...
2020-12-08T12:42:01Z
2020-12-16T16:03:13Z
2020-12-16T16:03:13Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
FileNotFoundError: Couldn't find file locally at german_legal_entity_recognition/german_legal_entity_recognition.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/german_legal_entity_recognition/german_legal_entity_recognition.py or https://s3.amazonaws.com/datasets.huggingface.co...
{ "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1299/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1299/timeline
null
completed
null
null
195.353333
6,246
https://api.github.com/repos/huggingface/datasets/issues/1290
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1290/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1290/comments
https://api.github.com/repos/huggingface/datasets/issues/1290/events
https://github.com/huggingface/datasets/issues/1290
759,339,989
MDU6SXNzdWU3NTkzMzk5ODk=
1,290
imdb dataset cannot be downloaded
{ "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https:/...
[]
closed
false
null
[]
null
[ "Hi @rabeehk , I am unable to reproduce your problem locally.\r\nCan you try emptying the cache (removing the content of `/idiap/temp/rkarimi/cache_home_1/datasets`) and retry ?", "Hi,\r\nthanks, I did remove the cache and still the same error here\r\n\r\n```\r\n>>> a = datasets.load_dataset(\"imdb\", split=\"tra...
2020-12-08T10:47:36Z
2020-12-24T17:38:09Z
2020-12-24T17:38:09Z
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
hi please find error below getting imdb train spli: thanks ` datasets.load_dataset>>> datasets.load_dataset("imdb", split="train")` errors ``` cahce dir /idiap/temp/rkarimi/cache_home_1/datasets cahce dir /idiap/temp/rkarimi/cache_home_1/datasets Downloading and preparing dataset imdb/plain_text (d...
{ "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https:/...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1290/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1290/timeline
null
completed
null
null
390.8425
6,255
https://api.github.com/repos/huggingface/datasets/issues/1287
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1287/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1287/comments
https://api.github.com/repos/huggingface/datasets/issues/1287/events
https://github.com/huggingface/datasets/issues/1287
759,300,992
MDU6SXNzdWU3NTkzMDA5OTI=
1,287
'iwslt2017-ro-nl', cannot be downloaded
{ "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https:/...
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https:...
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists...
null
[ "the same issue with datasets.load_dataset(\"iwslt2017\", 'iwslt2017-en-nl', split=split), ..... ", "even with setting master like the following command, still remains \r\n\r\ndatasets.load_dataset(\"iwslt2017\", 'iwslt2017-en-nl', split=\"train\", script_version=\"master\")\r\n", "Looks like the data has been ...
2020-12-08T09:56:55Z
2022-06-13T10:41:33Z
2022-06-13T10:41:33Z
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
Hi I am trying `>>> datasets.load_dataset("iwslt2017", 'iwslt2017-ro-nl', split="train")` getting this error thank you for your help ``` cahce dir /idiap/temp/rkarimi/cache_home_1/datasets cahce dir /idiap/temp/rkarimi/cache_home_1/datasets Downloading and preparing dataset iwsl_t217/iwslt2017-ro-nl (downlo...
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https:...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1287/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1287/timeline
null
completed
null
null
13,248.743889
6,258
https://api.github.com/repos/huggingface/datasets/issues/1286
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1286/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1286/comments
https://api.github.com/repos/huggingface/datasets/issues/1286/events
https://github.com/huggingface/datasets/issues/1286
759,291,509
MDU6SXNzdWU3NTkyOTE1MDk=
1,286
[libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1505] CHECK failed: (index) >= (0): terminate called after throwing an instance of 'google::protobuf::FatalException' what(): CHECK failed: (index) >= (0): Aborted
{ "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https:/...
[]
closed
false
null
[]
null
[ "I remember also getting the same issue for several other translation datasets like all the iwslt2017 group, this is blokcing me and I really need to fix it and I was wondering if you have an idea on this. @lhoestq thanks,. ", "maybe there is an empty line or something inside these datasets? could you tell me wh...
2020-12-08T09:44:15Z
2020-12-12T19:36:22Z
2020-12-12T16:22:36Z
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
Hi I am getting this error when evaluating on wmt16-ro-en using finetune_trainer.py of huggingface repo. thank for your help {'epoch': 20.0} 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ...
{ "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https:/...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1286/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1286/timeline
null
completed
null
null
102.639167
6,259
https://api.github.com/repos/huggingface/datasets/issues/1285
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1285/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1285/comments
https://api.github.com/repos/huggingface/datasets/issues/1285/events
https://github.com/huggingface/datasets/issues/1285
759,278,758
MDU6SXNzdWU3NTkyNzg3NTg=
1,285
boolq does not work
{ "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https:/...
[]
closed
false
null
[]
null
[ "here is the minimal code to reproduce\r\n\r\n`datasets>>> datasets.load_dataset(\"boolq\", \"train\")\r\n\r\nthe errors\r\n\r\n```\r\n`cahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\nUsing custom data configuration train\r\nDownloading and preparing d...
2020-12-08T09:28:47Z
2020-12-08T09:47:10Z
2020-12-08T09:47:10Z
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
Hi I am getting this error when trying to load boolq, thanks for your help ts_boolq_default_0.1.0_2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11.lock Traceback (most recent call last): File "finetune_t5_trainer.py", line 274, in <module> main() File "finetune_t5_trainer.py", line 147, ...
{ "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https:/...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1285/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1285/timeline
null
completed
null
null
0.306389
6,260
https://api.github.com/repos/huggingface/datasets/issues/1167
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1167/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1167/comments
https://api.github.com/repos/huggingface/datasets/issues/1167/events
https://github.com/huggingface/datasets/issues/1167
757,722,921
MDU6SXNzdWU3NTc3MjI5MjE=
1,167
❓ On-the-fly tokenization with datasets, tokenizers, and torch Datasets and Dataloaders
{ "avatar_url": "https://avatars.githubusercontent.com/u/61748653?v=4", "events_url": "https://api.github.com/users/pietrolesci/events{/privacy}", "followers_url": "https://api.github.com/users/pietrolesci/followers", "following_url": "https://api.github.com/users/pietrolesci/following{/other_user}", "gists_u...
[ { "color": "d876e3", "default": true, "description": "Further information is requested", "id": 1935892912, "name": "question", "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question" }, { "color": "c5def5", "default": ...
closed
false
null
[]
null
[ "We're working on adding on-the-fly transforms in datasets.\r\nCurrently the only on-the-fly functions that can be applied are in `set_format` in which we transform the data in either numpy/torch/tf tensors or pandas.\r\nFor example\r\n```python\r\ndataset.set_format(\"torch\")\r\n```\r\napplies `torch.Tensor` to t...
2020-12-05T17:02:56Z
2023-07-20T15:49:42Z
2023-07-20T15:49:42Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
Hi there, I have a question regarding "on-the-fly" tokenization. This question was elicited by reading the "How to train a new language model from scratch using Transformers and Tokenizers" [here](https://huggingface.co/blog/how-to-train). Towards the end there is this sentence: "If your dataset is very large, you c...
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url"...
{ "+1": 3, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/1167/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1167/timeline
null
completed
null
null
22,966.779444
6,377
https://api.github.com/repos/huggingface/datasets/issues/1115
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1115/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1115/comments
https://api.github.com/repos/huggingface/datasets/issues/1115/events
https://github.com/huggingface/datasets/issues/1115
757,127,527
MDU6SXNzdWU3NTcxMjc1Mjc=
1,115
Incorrect URL for MRQA SQuAD train subset
{ "avatar_url": "https://avatars.githubusercontent.com/u/6259768?v=4", "events_url": "https://api.github.com/users/yuxiang-wu/events{/privacy}", "followers_url": "https://api.github.com/users/yuxiang-wu/followers", "following_url": "https://api.github.com/users/yuxiang-wu/following{/other_user}", "gists_url":...
[]
closed
false
null
[]
null
[ "good catch !" ]
2020-12-04T14:05:24Z
2020-12-06T17:14:22Z
2020-12-06T17:14:22Z
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
https://github.com/huggingface/datasets/blob/4ef4c8f8b7a60e35c6fa21115fca9faae91c9f74/datasets/mrqa/mrqa.py#L53 The URL for `train+SQuAD` subset of MRQA points to the dev set instead of train set. It should be `https://s3.us-east-2.amazonaws.com/mrqa/release/v2/train/SQuAD.jsonl.gz`.
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https:...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1115/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1115/timeline
null
completed
null
null
51.149444
6,429
https://api.github.com/repos/huggingface/datasets/issues/1110
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1110/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1110/comments
https://api.github.com/repos/huggingface/datasets/issues/1110/events
https://github.com/huggingface/datasets/issues/1110
757,082,677
MDU6SXNzdWU3NTcwODI2Nzc=
1,110
Using a feature named "_type" fails with certain operations
{ "avatar_url": "https://avatars.githubusercontent.com/u/15979778?v=4", "events_url": "https://api.github.com/users/dcfidalgo/events{/privacy}", "followers_url": "https://api.github.com/users/dcfidalgo/followers", "following_url": "https://api.github.com/users/dcfidalgo/following{/other_user}", "gists_url": "...
[]
closed
false
null
[]
null
[ "Thanks for reporting !\r\n\r\nIndeed this is a keyword in the library that is used to encode/decode features to a python dictionary that we can save/load to json.\r\nWe can probably change `_type` to something that is less likely to collide with user feature names.\r\nIn this case we would want something backward ...
2020-12-04T12:56:33Z
2022-01-14T18:07:00Z
2022-01-14T18:07:00Z
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
A column named `_type` leads to a `TypeError: unhashable type: 'dict'` for certain operations: ```python from datasets import Dataset, concatenate_datasets ds = Dataset.from_dict({"_type": ["whatever"]}).map() concatenate_datasets([ds]) # or simply Dataset(ds._data) ``` Context: We are using datasets to persi...
{ "avatar_url": "https://avatars.githubusercontent.com/u/15979778?v=4", "events_url": "https://api.github.com/users/dcfidalgo/events{/privacy}", "followers_url": "https://api.github.com/users/dcfidalgo/followers", "following_url": "https://api.github.com/users/dcfidalgo/following{/other_user}", "gists_url": "...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1110/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1110/timeline
null
completed
null
null
9,749.174167
6,434
https://api.github.com/repos/huggingface/datasets/issues/1103
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1103/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1103/comments
https://api.github.com/repos/huggingface/datasets/issues/1103/events
https://github.com/huggingface/datasets/issues/1103
757,016,820
MDU6SXNzdWU3NTcwMTY4MjA=
1,103
Add support to download kaggle datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4", "events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}", "followers_url": "https://api.github.com/users/abhishekkrthakur/followers", "following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user...
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[ "Hey, I think this is great idea. Any plan to integrate kaggle private datasets loading to `datasets`?", "The workflow for downloading a Kaggle dataset and turning it into an HF dataset is pretty simple:\r\n```python\r\n!kaggle datasets download -p path\r\nds = load_dataset(path)\r\n```\r\n\r\nNative support woul...
2020-12-04T11:08:37Z
2023-07-20T15:22:24Z
2023-07-20T15:22:23Z
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
We can use API key
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url"...
{ "+1": 3, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/1103/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1103/timeline
null
not_planned
null
null
22,996.229444
6,441
https://api.github.com/repos/huggingface/datasets/issues/1102
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1102/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1102/comments
https://api.github.com/repos/huggingface/datasets/issues/1102/events
https://github.com/huggingface/datasets/issues/1102
757,016,515
MDU6SXNzdWU3NTcwMTY1MTU=
1,102
Add retries to download manager
{ "avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4", "events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}", "followers_url": "https://api.github.com/users/abhishekkrthakur/followers", "following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user...
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "...
[ { "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", ...
null
[]
2020-12-04T11:08:11Z
2020-12-22T15:34:06Z
2020-12-22T15:34:06Z
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https:...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1102/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1102/timeline
null
completed
null
null
436.431944
6,442
https://api.github.com/repos/huggingface/datasets/issues/1064
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1064/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1064/comments
https://api.github.com/repos/huggingface/datasets/issues/1064/events
https://github.com/huggingface/datasets/issues/1064
756,382,186
MDU6SXNzdWU3NTYzODIxODY=
1,064
Not support links with 302 redirect
{ "avatar_url": "https://avatars.githubusercontent.com/u/6429850?v=4", "events_url": "https://api.github.com/users/chameleonTK/events{/privacy}", "followers_url": "https://api.github.com/users/chameleonTK/followers", "following_url": "https://api.github.com/users/chameleonTK/following{/other_user}", "gists_ur...
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" }, { "color": "a2eeef", "default": true, "descript...
closed
false
null
[]
null
[ "Hi !\r\nThis kind of links is now supported by the library since #1316", "> Hi !\r\n> This kind of links is now supported by the library since #1316\r\n\r\nI updated links in TLC datasets to be the github links in this pull request \r\n https://github.com/huggingface/datasets/pull/1737\r\n\r\nEverything works no...
2020-12-03T17:04:43Z
2021-01-14T02:51:25Z
2021-01-14T02:51:25Z
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
I have an issue adding this download link https://github.com/jitkapat/thailitcorpus/releases/download/v.2.0/tlc_v.2.0.tar.gz it might be because it is not a direct link (it returns 302 and redirects to aws that returns 403 for head requests). ``` r.head("https://github.com/jitkapat/thailitcorpus/releases/downlo...
{ "avatar_url": "https://avatars.githubusercontent.com/u/6429850?v=4", "events_url": "https://api.github.com/users/chameleonTK/events{/privacy}", "followers_url": "https://api.github.com/users/chameleonTK/followers", "following_url": "https://api.github.com/users/chameleonTK/following{/other_user}", "gists_ur...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1064/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1064/timeline
null
completed
null
null
993.778333
6,480
https://api.github.com/repos/huggingface/datasets/issues/1046
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1046/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1046/comments
https://api.github.com/repos/huggingface/datasets/issues/1046/events
https://github.com/huggingface/datasets/issues/1046
756,122,709
MDU6SXNzdWU3NTYxMjI3MDk=
1,046
Dataset.map() turns tensors into lists?
{ "avatar_url": "https://avatars.githubusercontent.com/u/5270804?v=4", "events_url": "https://api.github.com/users/tombosc/events{/privacy}", "followers_url": "https://api.github.com/users/tombosc/followers", "following_url": "https://api.github.com/users/tombosc/following{/other_user}", "gists_url": "https:/...
[]
closed
false
null
[]
null
[ "A solution is to have the tokenizer return a list instead of a tensor, and then use `dataset_tok.set_format(type = 'torch')` to convert that list into a tensor. Still not sure if bug.", "It is expected behavior, you should set the format to `\"torch\"` as you mentioned to get pytorch tensors back.\r\nBy default ...
2020-12-03T11:43:46Z
2022-10-05T12:12:41Z
2022-10-05T12:12:41Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
I apply `Dataset.map()` to a function that returns a dict of torch tensors (like a tokenizer from the repo transformers). However, in the mapped dataset, these tensors have turned to lists! ```import datasets import torch from datasets import load_dataset ...
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url"...
{ "+1": 4, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 4, "url": "https://api.github.com/repos/huggingface/datasets/issues/1046/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1046/timeline
null
completed
null
null
16,104.481944
6,498
https://api.github.com/repos/huggingface/datasets/issues/1027
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1027/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1027/comments
https://api.github.com/repos/huggingface/datasets/issues/1027/events
https://github.com/huggingface/datasets/issues/1027
755,695,420
MDU6SXNzdWU3NTU2OTU0MjA=
1,027
Hi
{ "avatar_url": "https://avatars.githubusercontent.com/u/75398394?v=4", "events_url": "https://api.github.com/users/suemori87/events{/privacy}", "followers_url": "https://api.github.com/users/suemori87/followers", "following_url": "https://api.github.com/users/suemori87/following{/other_user}", "gists_url": "...
[]
closed
false
null
[]
null
[]
2020-12-02T23:47:14Z
2020-12-03T16:42:41Z
2020-12-03T16:42:41Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons t...
{ "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "htt...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1027/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1027/timeline
null
completed
null
null
16.924167
6,517
https://api.github.com/repos/huggingface/datasets/issues/1026
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1026/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1026/comments
https://api.github.com/repos/huggingface/datasets/issues/1026/events
https://github.com/huggingface/datasets/issues/1026
755,689,195
MDU6SXNzdWU3NTU2ODkxOTU=
1,026
LΓ­o o
{ "avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4", "events_url": "https://api.github.com/users/ghost/events{/privacy}", "followers_url": "https://api.github.com/users/ghost/followers", "following_url": "https://api.github.com/users/ghost/following{/other_user}", "gists_url": "https://api.git...
[]
closed
false
null
[]
null
[]
2020-12-02T23:32:25Z
2020-12-03T16:42:47Z
2020-12-03T16:42:47Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
````l````````` ``` O ``` ````` Γ‘o ``` ```` ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "htt...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1026/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1026/timeline
null
completed
null
null
17.172778
6,518
https://api.github.com/repos/huggingface/datasets/issues/1004
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1004/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1004/comments
https://api.github.com/repos/huggingface/datasets/issues/1004/events
https://github.com/huggingface/datasets/issues/1004
755,325,368
MDU6SXNzdWU3NTUzMjUzNjg=
1,004
how large datasets are handled under the hood
{ "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}", "followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers", "following_url": "https://api.github.com/users/rabeehkarimimahabadi/followin...
[]
closed
false
null
[]
null
[ "This library uses Apache Arrow under the hood to store datasets on disk.\r\nThe advantage of Apache Arrow is that it allows to memory map the dataset. This allows to load datasets bigger than memory and with almost no RAM usage. It also offers excellent I/O speed.\r\n\r\nFor example when you access one element or ...
2020-12-02T14:32:40Z
2022-10-05T12:13:29Z
2022-10-05T12:13:29Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
Hi I want to use multiple large datasets with a mapping style dataloader, where they cannot fit into memory, could you tell me how you handled the datasets under the hood? is this you bring all in memory in case of mapping style ones? or is this some sharding under the hood and you bring in memory when necessary, than...
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url"...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1004/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1004/timeline
null
completed
null
null
16,125.680278
6,540
https://api.github.com/repos/huggingface/datasets/issues/996
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/996/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/996/comments
https://api.github.com/repos/huggingface/datasets/issues/996/events
https://github.com/huggingface/datasets/issues/996
755,176,084
MDU6SXNzdWU3NTUxNzYwODQ=
996
NotADirectoryError while loading the CNN/Dailymail dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/75367920?v=4", "events_url": "https://api.github.com/users/arc-bu/events{/privacy}", "followers_url": "https://api.github.com/users/arc-bu/followers", "following_url": "https://api.github.com/users/arc-bu/following{/other_user}", "gists_url": "https://a...
[]
closed
false
null
[]
null
[ "Looks like the google drive download failed.\r\nI'm getting a `Google Drive - Quota exceeded` error while looking at the downloaded file.\r\n\r\nWe should consider finding a better host than google drive for this dataset imo\r\nrelated : #873 #864 ", "It is working now, thank you. \r\n\r\nShould I leave this iss...
2020-12-02T11:07:56Z
2022-02-17T14:13:39Z
2022-02-17T14:13:39Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602... ---------------------------------------...
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/996/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/996/timeline
null
completed
null
null
10,611.095278
6,548
https://api.github.com/repos/huggingface/datasets/issues/993
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/993/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/993/comments
https://api.github.com/repos/huggingface/datasets/issues/993/events
https://github.com/huggingface/datasets/issues/993
755,135,768
MDU6SXNzdWU3NTUxMzU3Njg=
993
Problem downloading amazon_reviews_multi
{ "avatar_url": "https://avatars.githubusercontent.com/u/29229602?v=4", "events_url": "https://api.github.com/users/hfawaz/events{/privacy}", "followers_url": "https://api.github.com/users/hfawaz/followers", "following_url": "https://api.github.com/users/hfawaz/following{/other_user}", "gists_url": "https://a...
[]
closed
false
null
[]
null
[ "Hi @hfawaz ! This is working fine for me. Is it a repeated occurence? Have you tried from the latest verion?", "Hi, it seems a connection problem. \r\nNow it says: \r\n`ConnectionError: Couldn't reach https://amazon-reviews-ml.s3-us-west-2.amazonaws.com/json/train/dataset_ja_train.json`" ]
2020-12-02T10:15:57Z
2022-10-05T12:21:34Z
2022-10-05T12:21:34Z
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
Thanks for adding the dataset. After trying to load the dataset, I am getting the following error: `ConnectionError: Couldn't reach https://amazon-reviews-ml.s3-us-west-2.amazonaws.com/json/train/dataset_fr_train.json ` I used the following code to load the dataset: `load_dataset( dataset_name, ...
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url"...
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/993/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/993/timeline
null
completed
null
null
16,130.093611
6,551
https://api.github.com/repos/huggingface/datasets/issues/988
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/988/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/988/comments
https://api.github.com/repos/huggingface/datasets/issues/988/events
https://github.com/huggingface/datasets/issues/988
755,069,159
MDU6SXNzdWU3NTUwNjkxNTk=
988
making sure datasets are not loaded in memory, and distributed training over them
{ "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https:/...
[]
closed
false
null
[]
null
[ "my implementation of sharding per TPU core: https://github.com/google-research/ruse/blob/d4dd58a2d8efe0ffb1a9e9e77e3228d6824d3c3c/seq2seq/trainers/t5_trainer.py#L316 \r\nmy implementation of dataloader for this case https://github.com/google-research/ruse/blob/d4dd58a2d8efe0ffb1a9e9e77e3228d6824d3c3c/seq2seq/tasks...
2020-12-02T08:45:15Z
2022-10-05T13:00:42Z
2022-10-05T13:00:42Z
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
Hi I am dealing with large-scale datasets which I need to train distributedly, I used the shard function to divide the dataset across the cores, without any sampler, this does not work for distributed training and does not become any faster than 1 TPU core. 1) how I can make sure data is not loaded in memory 2) in cas...
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url"...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/988/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/988/timeline
null
completed
null
null
16,132.2575
6,556
https://api.github.com/repos/huggingface/datasets/issues/961
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/961/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/961/comments
https://api.github.com/repos/huggingface/datasets/issues/961/events
https://github.com/huggingface/datasets/issues/961
754,434,398
MDU6SXNzdWU3NTQ0MzQzOTg=
961
sample multiple datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https:/...
[]
closed
false
null
[]
null
[ "here I share my dataloader currently for multiple tasks: https://gist.github.com/rabeehkarimimahabadi/39f9444a4fb6f53dcc4fca5d73bf8195 \r\n\r\nI need to train my model distributedly with this dataloader, \"MultiTasksataloader\", currently this does not work in distributed fasion,\r\nto save on memory I tried to us...
2020-12-01T14:20:02Z
2024-06-17T08:23:20Z
2023-07-20T14:08:57Z
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
Hi I am dealing with multiple datasets, I need to have a dataloader over them with a condition that in each batch data samples are coming from one of the datasets. My main question is: - I need to have a way to sample the datasets first with some weights, lets say 2x dataset1 1x dataset2, could you point me how I c...
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url"...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/961/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/961/timeline
null
completed
null
null
23,063.815278
6,583
https://api.github.com/repos/huggingface/datasets/issues/942
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/942/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/942/comments
https://api.github.com/repos/huggingface/datasets/issues/942/events
https://github.com/huggingface/datasets/issues/942
754,162,318
MDU6SXNzdWU3NTQxNjIzMTg=
942
D
{ "avatar_url": "https://avatars.githubusercontent.com/u/74238514?v=4", "events_url": "https://api.github.com/users/CryptoMiKKi/events{/privacy}", "followers_url": "https://api.github.com/users/CryptoMiKKi/followers", "following_url": "https://api.github.com/users/CryptoMiKKi/following{/other_user}", "gists_u...
[]
closed
false
null
[]
null
[]
2020-12-01T08:17:10Z
2020-12-03T16:42:53Z
2020-12-03T16:42:53Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons t...
{ "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "htt...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/942/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/942/timeline
null
completed
null
null
56.428611
6,602
https://api.github.com/repos/huggingface/datasets/issues/937
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/937/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/937/comments
https://api.github.com/repos/huggingface/datasets/issues/937/events
https://github.com/huggingface/datasets/issues/937
753,921,078
MDU6SXNzdWU3NTM5MjEwNzg=
937
Local machine/cluster Beam Datasets example/tutorial
{ "avatar_url": "https://avatars.githubusercontent.com/u/66387198?v=4", "events_url": "https://api.github.com/users/shangw-nvidia/events{/privacy}", "followers_url": "https://api.github.com/users/shangw-nvidia/followers", "following_url": "https://api.github.com/users/shangw-nvidia/following{/other_user}", "g...
[]
closed
false
null
[]
null
[ "I tried to make it run once on the SparkRunner but it seems that this runner has some issues when it is run locally.\r\nFrom my experience the DirectRunner is fine though, even if it's clearly not memory efficient.\r\n\r\nIt would be awesome though to make it work locally on a SparkRunner !\r\nDid you manage to ma...
2020-12-01T01:11:43Z
2024-03-15T16:05:14Z
2024-03-15T16:05:14Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
Hi, I'm wondering if https://huggingface.co/docs/datasets/beam_dataset.html has an non-GCP or non-Dataflow version example/tutorial? I tried to migrate it to run on DirectRunner and SparkRunner, however, there were way too many runtime errors that I had to fix during the process, and even so I wasn't able to get eit...
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url"...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/937/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/937/timeline
null
completed
null
null
28,814.891944
6,607
https://api.github.com/repos/huggingface/datasets/issues/927
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/927/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/927/comments
https://api.github.com/repos/huggingface/datasets/issues/927/events
https://github.com/huggingface/datasets/issues/927
753,679,020
MDU6SXNzdWU3NTM2NzkwMjA=
927
Hello
{ "avatar_url": "https://avatars.githubusercontent.com/u/75259546?v=4", "events_url": "https://api.github.com/users/k125-ak/events{/privacy}", "followers_url": "https://api.github.com/users/k125-ak/followers", "following_url": "https://api.github.com/users/k125-ak/following{/other_user}", "gists_url": "https:...
[]
closed
false
null
[]
null
[]
2020-11-30T17:50:05Z
2020-11-30T17:50:30Z
2020-11-30T17:50:30Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "avatar_url": "https://avatars.githubusercontent.com/u/75259546?v=4", "events_url": "https://api.github.com/users/k125-ak/events{/privacy}", "followers_url": "https://api.github.com/users/k125-ak/followers", "following_url": "https://api.github.com/users/k125-ak/following{/other_user}", "gists_url": "https:...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/927/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/927/timeline
null
completed
null
null
0.006944
6,617
https://api.github.com/repos/huggingface/datasets/issues/919
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/919/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/919/comments
https://api.github.com/repos/huggingface/datasets/issues/919/events
https://github.com/huggingface/datasets/issues/919
753,434,472
MDU6SXNzdWU3NTM0MzQ0NzI=
919
wrong length with datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https:/...
[]
closed
false
null
[]
null
[ "Also, I cannot first convert it to torch format, since huggingface seq2seq_trainer codes process the datasets afterwards during datacollector function to make it optimize for TPUs. ", "sorry I misunderstood length of dataset with dataloader, closed. thanks " ]
2020-11-30T12:23:39Z
2020-11-30T12:37:27Z
2020-11-30T12:37:26Z
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
Hi I have a MRPC dataset which I convert it to seq2seq format, then this is of this format: `Dataset(features: {'src_texts': Value(dtype='string', id=None), 'tgt_texts': Value(dtype='string', id=None)}, num_rows: 10) ` I feed it to a dataloader: ``` dataloader = DataLoader( train_dataset, ...
{ "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https:/...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/919/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/919/timeline
null
completed
null
null
0.229722
6,624
https://api.github.com/repos/huggingface/datasets/issues/911
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/911/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/911/comments
https://api.github.com/repos/huggingface/datasets/issues/911/events
https://github.com/huggingface/datasets/issues/911
752,806,215
MDU6SXNzdWU3NTI4MDYyMTU=
911
datasets module not found
{ "avatar_url": "https://avatars.githubusercontent.com/u/15836274?v=4", "events_url": "https://api.github.com/users/sbassam/events{/privacy}", "followers_url": "https://api.github.com/users/sbassam/followers", "following_url": "https://api.github.com/users/sbassam/following{/other_user}", "gists_url": "https:...
[]
closed
false
null
[]
null
[ "nvm, I'd made an assumption that the library gets installed with transformers. " ]
2020-11-29T01:24:15Z
2020-11-29T14:33:09Z
2020-11-29T14:33:09Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
Currently, running `from datasets import load_dataset` will throw a `ModuleNotFoundError: No module named 'datasets'` error.
{ "avatar_url": "https://avatars.githubusercontent.com/u/15836274?v=4", "events_url": "https://api.github.com/users/sbassam/events{/privacy}", "followers_url": "https://api.github.com/users/sbassam/followers", "following_url": "https://api.github.com/users/sbassam/following{/other_user}", "gists_url": "https:...
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/911/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/911/timeline
null
completed
null
null
13.148333
6,631
https://api.github.com/repos/huggingface/datasets/issues/910
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/910/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/910/comments
https://api.github.com/repos/huggingface/datasets/issues/910/events
https://github.com/huggingface/datasets/issues/910
752,772,723
MDU6SXNzdWU3NTI3NzI3MjM=
910
Grindr meeting app web.Grindr
{ "avatar_url": "https://avatars.githubusercontent.com/u/75184749?v=4", "events_url": "https://api.github.com/users/jackin34/events{/privacy}", "followers_url": "https://api.github.com/users/jackin34/followers", "following_url": "https://api.github.com/users/jackin34/following{/other_user}", "gists_url": "htt...
[]
closed
false
null
[]
null
[]
2020-11-28T21:36:23Z
2020-11-29T10:11:51Z
2020-11-29T10:11:51Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons t...
{ "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "http...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/910/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/910/timeline
null
completed
null
null
12.591111
6,632
https://api.github.com/repos/huggingface/datasets/issues/900
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/900/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/900/comments
https://api.github.com/repos/huggingface/datasets/issues/900/events
https://github.com/huggingface/datasets/issues/900
752,214,066
MDU6SXNzdWU3NTIyMTQwNjY=
900
datasets.load_dataset() custom caching directory bug
{ "avatar_url": "https://avatars.githubusercontent.com/u/44585792?v=4", "events_url": "https://api.github.com/users/SapirWeissbuch/events{/privacy}", "followers_url": "https://api.github.com/users/SapirWeissbuch/followers", "following_url": "https://api.github.com/users/SapirWeissbuch/following{/other_user}", ...
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https:...
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists...
null
[ "Thanks for reporting ! I'm looking into it." ]
2020-11-27T12:18:53Z
2020-11-29T22:48:53Z
2020-11-29T22:48:53Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
Hello, I'm having issue with loading a dataset with a custom `cache_dir`. Despite specifying the output dir, it is still downloaded to `~/.cache`. ## Environment info - `datasets` version: 1.1.3 - Platform: Linux-4.19.129-aufs-1-x86_64-with-debian-10.1 - Python version: 3.7.3 ## The code I'm running: ```p...
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https:...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/900/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/900/timeline
null
completed
null
null
58.5
6,642
https://api.github.com/repos/huggingface/datasets/issues/897
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/897/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/897/comments
https://api.github.com/repos/huggingface/datasets/issues/897/events
https://github.com/huggingface/datasets/issues/897
752,100,256
MDU6SXNzdWU3NTIxMDAyNTY=
897
Dataset viewer issues
{ "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url":...
[ { "color": "94203D", "default": false, "description": "", "id": 2107841032, "name": "nlp-viewer", "node_id": "MDU6TGFiZWwyMTA3ODQxMDMy", "url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer" } ]
closed
false
null
[]
null
[ "Thanks for reporting !\r\ncc @srush for the empty feature list issue and the encoding issue\r\ncc @julien-c maybe we can update the url and just have a redirection from the old url to the new one ?", "Ok, I redirected on our side to a new url. ⚠️ @srush: if you update the Streamlit config too to `/datasets/viewe...
2020-11-27T09:14:34Z
2021-10-31T09:12:01Z
2021-10-31T09:12:01Z
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
I was looking through the dataset viewer and I like it a lot. Version numbers, citation information, everything's there! I've spotted a few issues/bugs though: - the URL is still under `nlp`, perhaps an alias for `datasets` can be made - when I remove a **feature** (and the feature list is empty), I get an error. T...
{ "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url":...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/897/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/897/timeline
null
completed
null
null
8,111.9575
6,645
https://api.github.com/repos/huggingface/datasets/issues/888
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/888/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/888/comments
https://api.github.com/repos/huggingface/datasets/issues/888/events
https://github.com/huggingface/datasets/issues/888
750,944,422
MDU6SXNzdWU3NTA5NDQ0MjI=
888
Nested lists are zipped unexpectedly
{ "avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4", "events_url": "https://api.github.com/users/AmitMY/events{/privacy}", "followers_url": "https://api.github.com/users/AmitMY/followers", "following_url": "https://api.github.com/users/AmitMY/following{/other_user}", "gists_url": "https://ap...
[]
closed
false
null
[]
null
[ "Yes following the Tensorflow Datasets convention, objects with type `Sequence of a Dict` are actually stored as a `dictionary of lists`.\r\nSee the [documentation](https://huggingface.co/docs/datasets/features.html?highlight=features) for more details", "Thanks.\r\nThis is a bit (very) confusing, but I guess if ...
2020-11-25T16:07:46Z
2020-11-25T17:30:39Z
2020-11-25T17:30:39Z
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
I might misunderstand something, but I expect that if I define: ```python "top": datasets.features.Sequence({ "middle": datasets.features.Sequence({ "bottom": datasets.Value("int32") }) }) ``` And I then create an example: ```python yield 1, { "top": [{ "middle": [ {"bottom": 1}, ...
{ "avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4", "events_url": "https://api.github.com/users/AmitMY/events{/privacy}", "followers_url": "https://api.github.com/users/AmitMY/followers", "following_url": "https://api.github.com/users/AmitMY/following{/other_user}", "gists_url": "https://ap...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/888/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/888/timeline
null
completed
null
null
1.381389
6,654
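The record above (issue 888) notes that, following the Tensorflow Datasets convention, a `Sequence` of a dict is stored as a dict of lists. A minimal pure-Python sketch of that zipping transformation (the `dict_of_lists` helper is illustrative, not the library's actual internals):

```python
def dict_of_lists(records):
    """Convert a list of dicts (one per example) into a dict of lists,
    mirroring how a Sequence-of-dict feature is stored."""
    keys = records[0].keys() if records else []
    return {key: [record[key] for record in records] for key in keys}

# A "top" sequence whose items each contain a nested "middle" sequence,
# as in the example from the issue body.
top = [
    {"middle": [{"bottom": 1}, {"bottom": 2}]},
    {"middle": [{"bottom": 3}, {"bottom": 4}]},
]

# Zip the inner level first, then the outer one, as the storage convention does.
zipped = dict_of_lists([
    {"middle": dict_of_lists(item["middle"])} for item in top
])
print(zipped)  # {'middle': [{'bottom': [1, 2]}, {'bottom': [3, 4]}]}
```

This reproduces the "unexpected" shape the reporter observed: the dict keys move outward and the per-example values collapse into lists.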
https://api.github.com/repos/huggingface/datasets/issues/885
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/885/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/885/comments
https://api.github.com/repos/huggingface/datasets/issues/885/events
https://github.com/huggingface/datasets/issues/885
750,789,052
MDU6SXNzdWU3NTA3ODkwNTI=
885
Very slow cold-start
{ "avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4", "events_url": "https://api.github.com/users/AmitMY/events{/privacy}", "followers_url": "https://api.github.com/users/AmitMY/followers", "following_url": "https://api.github.com/users/AmitMY/following{/other_user}", "gists_url": "https://ap...
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
[]
null
[ "Good point!", "Yes indeed. We can probably improve that by using lazy imports", "#1690 added fast start-up of the library " ]
2020-11-25T12:47:58Z
2021-01-13T11:31:25Z
2021-01-13T11:31:25Z
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
Hi, I expect when importing `datasets` that nothing major happens in the background, and so the import should be insignificant. When I load a metric, or a dataset, its fine that it takes time. The following ranges from 3 to 9 seconds: ``` python -m timeit -n 1 -r 1 'from datasets import load_dataset' ``` edi...
{ "avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4", "events_url": "https://api.github.com/users/AmitMY/events{/privacy}", "followers_url": "https://api.github.com/users/AmitMY/followers", "following_url": "https://api.github.com/users/AmitMY/following{/other_user}", "gists_url": "https://ap...
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/885/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/885/timeline
null
completed
null
null
1,174.724167
6,657
https://api.github.com/repos/huggingface/datasets/issues/880
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/880/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/880/comments
https://api.github.com/repos/huggingface/datasets/issues/880/events
https://github.com/huggingface/datasets/issues/880
748,949,606
MDU6SXNzdWU3NDg5NDk2MDY=
880
Add SQA
{ "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url"...
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
[]
null
[ "I’ll take this one to test the workflow for the sprint next week cc @yjernite @lhoestq ", "@thomwolf here's a slightly adapted version of the code from the [official Tapas repository](https://github.com/google-research/tapas/blob/master/tapas/utils/interaction_utils.py) that is used to turn the `answer_coordinat...
2020-11-23T16:31:55Z
2020-12-23T13:58:24Z
2020-12-23T13:58:23Z
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
## Adding a Dataset - **Name:** SQA (Sequential Question Answering) by Microsoft. - **Description:** The SQA dataset was created to explore the task of answering sequences of inter-related questions on HTML tables. It has 6,066 sequences with 17,553 questions in total. - **Paper:** https://www.microsoft.com/en-us/r...
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https:...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/880/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/880/timeline
null
completed
null
null
717.441111
6,662
https://api.github.com/repos/huggingface/datasets/issues/879
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/879/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/879/comments
https://api.github.com/repos/huggingface/datasets/issues/879/events
https://github.com/huggingface/datasets/issues/879
748,848,847
MDU6SXNzdWU3NDg4NDg4NDc=
879
boolq does not load
{ "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https:/...
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
closed
false
null
[]
null
[ "Hi ! It runs on my side without issues. I tried\r\n```python\r\nfrom datasets import load_dataset\r\nload_dataset(\"boolq\")\r\n```\r\n\r\nWhat version of datasets and tensorflow are your runnning ?\r\nAlso if you manage to get a minimal reproducible script (on google colab for example) that would be useful.", "...
2020-11-23T14:28:28Z
2022-10-05T12:23:32Z
2022-10-05T12:23:32Z
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
Hi I am getting these errors trying to load boolq thanks Traceback (most recent call last): File "test.py", line 5, in <module> data = AutoTask().get("boolq").get_dataset("train", n_obs=10) File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks/tasks.py", line 42, in get_dataset d...
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url"...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/879/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/879/timeline
null
completed
null
null
16,341.917778
6,663
https://api.github.com/repos/huggingface/datasets/issues/877
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/877/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/877/comments
https://api.github.com/repos/huggingface/datasets/issues/877/events
https://github.com/huggingface/datasets/issues/877
748,234,438
MDU6SXNzdWU3NDgyMzQ0Mzg=
877
DataLoader(datasets) become more and more slowly within iterations
{ "avatar_url": "https://avatars.githubusercontent.com/u/25664170?v=4", "events_url": "https://api.github.com/users/shexuan/events{/privacy}", "followers_url": "https://api.github.com/users/shexuan/followers", "following_url": "https://api.github.com/users/shexuan/following{/other_user}", "gists_url": "https:...
[]
closed
false
null
[]
null
[ "Hi ! Thanks for reporting.\r\nDo you have the same slowdown when you iterate through the raw dataset object as well ? (no dataloader)\r\nIt would be nice to know whether it comes from the dataloader or not", "> Hi ! Thanks for reporting.\r\n> Do you have the same slowdown when you iterate through the raw dataset...
2020-11-22T12:41:10Z
2024-11-22T03:02:53Z
2020-11-29T15:45:12Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
Hello, when I for loop my dataloader, the loading speed is becoming more and more slowly! ``` dataset = load_from_disk(dataset_path) # around 21,000,000 lines lineloader = tqdm(DataLoader(dataset, batch_size=1)) for idx, line in enumerate(lineloader): # do some thing for each line ``` In the begining, th...
{ "avatar_url": "https://avatars.githubusercontent.com/u/25664170?v=4", "events_url": "https://api.github.com/users/shexuan/events{/privacy}", "followers_url": "https://api.github.com/users/shexuan/followers", "following_url": "https://api.github.com/users/shexuan/following{/other_user}", "gists_url": "https:...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/877/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/877/timeline
null
completed
null
null
171.067222
6,665
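For slowdowns like the one reported in issue 877 above, the maintainers first ask whether the cost comes from the `DataLoader` or from the raw dataset iteration. A generic, framework-free way to check is to time fixed-size windows of iterations and see whether per-window cost grows:

```python
import time

def time_windows(iterable, window=5):
    """Yield (window_index, seconds) for each fixed-size window of
    iterations, to reveal whether per-item cost grows over a long loop."""
    start = time.perf_counter()
    for i, _ in enumerate(iterable, 1):
        if i % window == 0:
            now = time.perf_counter()
            yield i // window, now - start
            start = now

# With a constant-cost iterable, window times stay roughly flat;
# a steadily growing series would point at the iterable itself.
timings = list(time_windows(range(20), window=5))
print(len(timings))  # 4
```

Running this once over the bare dataset and once over the wrapped loader separates the two suspects without any profiler.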
https://api.github.com/repos/huggingface/datasets/issues/876
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/876/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/876/comments
https://api.github.com/repos/huggingface/datasets/issues/876/events
https://github.com/huggingface/datasets/issues/876
748,195,104
MDU6SXNzdWU3NDgxOTUxMDQ=
876
imdb dataset cannot be loaded
{ "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https:/...
[]
closed
false
null
[]
null
[ "It looks like there was an issue while building the imdb dataset.\r\nCould you provide more information about your OS and the version of python and `datasets` ?\r\n\r\nAlso could you try again with \r\n```python\r\ndataset = datasets.load_dataset(\"imdb\", split=\"train\", download_mode=\"force_redownload\")\r\n``...
2020-11-22T08:24:43Z
2024-05-10T03:03:29Z
2020-12-24T17:38:47Z
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
Hi I am trying to load the imdb train dataset `dataset = datasets.load_dataset("imdb", split="train")` getting following errors, thanks for your help ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/...
{ "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https:/...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/876/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/876/timeline
null
completed
null
null
777.234444
6,666
https://api.github.com/repos/huggingface/datasets/issues/875
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/875/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/875/comments
https://api.github.com/repos/huggingface/datasets/issues/875/events
https://github.com/huggingface/datasets/issues/875
748,194,311
MDU6SXNzdWU3NDgxOTQzMTE=
875
bug in boolq dataset loading
{ "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https:/...
[]
closed
false
null
[]
null
[ "I just opened a PR to fix this.\r\nThanks for reporting !" ]
2020-11-22T08:18:34Z
2020-11-24T10:12:33Z
2020-11-24T10:12:33Z
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
Hi I am trying to load boolq dataset: ``` import datasets datasets.load_dataset("boolq") ``` I am getting the following errors, thanks for your help ``` >>> import datasets 2020-11-22 09:16:30.070332: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcuda...
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https:...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/875/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/875/timeline
null
completed
null
null
49.899722
6,667
https://api.github.com/repos/huggingface/datasets/issues/874
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/874/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/874/comments
https://api.github.com/repos/huggingface/datasets/issues/874/events
https://github.com/huggingface/datasets/issues/874
748,193,140
MDU6SXNzdWU3NDgxOTMxNDA=
874
trec dataset unavailable
{ "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https:/...
[]
closed
false
null
[]
null
[ "This was fixed in #740 \r\nCould you try to update `datasets` and try again ?", "This has been fixed in datasets 1.1.3" ]
2020-11-22T08:09:36Z
2020-11-27T13:56:42Z
2020-11-27T13:56:42Z
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
Hi when I try to load the trec dataset I am getting these errors, thanks for your help `datasets.load_dataset("trec", split="train") ` ``` File "<stdin>", line 1, in <module> File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ...
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https:...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/874/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/874/timeline
null
completed
null
null
125.785
6,668
https://api.github.com/repos/huggingface/datasets/issues/873
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/873/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/873/comments
https://api.github.com/repos/huggingface/datasets/issues/873/events
https://github.com/huggingface/datasets/issues/873
747,959,523
MDU6SXNzdWU3NDc5NTk1MjM=
873
load_dataset('cnn_dalymail', '3.0.0') gives a 'Not a directory' error
{ "avatar_url": "https://avatars.githubusercontent.com/u/19861874?v=4", "events_url": "https://api.github.com/users/vishal-burman/events{/privacy}", "followers_url": "https://api.github.com/users/vishal-burman/followers", "following_url": "https://api.github.com/users/vishal-burman/following{/other_user}", "g...
[]
closed
false
null
[]
null
[ "I get the same error. It was fixed some days ago, but again it appears", "Hi @mrm8488 it's working again today without any fix so I am closing this issue.", "I see the issue happening again today - \r\n\r\n[nltk_data] Downloading package stopwords to /root/nltk_data...\r\n[nltk_data] Package stopwords is alr...
2020-11-21T06:30:45Z
2023-08-03T12:07:03Z
2020-11-22T12:18:05Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
``` from datasets import load_dataset dataset = load_dataset('cnn_dailymail', '3.0.0') ``` Stack trace: ``` --------------------------------------------------------------------------- NotADirectoryError Traceback (most recent call last) <ipython-input-6-2e06a8332652> in <module>() ...
{ "avatar_url": "https://avatars.githubusercontent.com/u/19861874?v=4", "events_url": "https://api.github.com/users/vishal-burman/events{/privacy}", "followers_url": "https://api.github.com/users/vishal-burman/followers", "following_url": "https://api.github.com/users/vishal-burman/following{/other_user}", "g...
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/873/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/873/timeline
null
completed
null
null
29.788889
6,669
https://api.github.com/repos/huggingface/datasets/issues/871
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/871/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/871/comments
https://api.github.com/repos/huggingface/datasets/issues/871/events
https://github.com/huggingface/datasets/issues/871
747,470,136
MDU6SXNzdWU3NDc0NzAxMzY=
871
terminate called after throwing an instance of 'google::protobuf::FatalException'
{ "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https:/...
[]
closed
false
null
[]
null
[ "Loading the iwslt2017-en-nl config of iwslt2017 works fine on my side. \r\nMaybe you can open an issue on transformers as well ? And also add more details about your environment (OS, python version, version of transformers and datasets etc.)", "closing now, figured out this is because the max length of decoder w...
2020-11-20T12:56:24Z
2020-12-12T21:16:32Z
2020-12-12T21:16:32Z
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
Hi I am using the dataset "iwslt2017-en-nl", and after downloading it I am getting this error when trying to evaluate it on T5-base with seq2seq_trainer.py in the huggingface repo could you assist me please? thanks 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ...
{ "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https:/...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/871/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/871/timeline
null
completed
null
null
536.335556
6,671
https://api.github.com/repos/huggingface/datasets/issues/870
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/870/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/870/comments
https://api.github.com/repos/huggingface/datasets/issues/870/events
https://github.com/huggingface/datasets/issues/870
747,021,996
MDU6SXNzdWU3NDcwMjE5OTY=
870
[Feature Request] Add optional parameter in text loading script to preserve linebreaks
{ "avatar_url": "https://avatars.githubusercontent.com/u/31020859?v=4", "events_url": "https://api.github.com/users/jncasey/events{/privacy}", "followers_url": "https://api.github.com/users/jncasey/followers", "following_url": "https://api.github.com/users/jncasey/following{/other_user}", "gists_url": "https:...
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[ "Hi ! Thanks for your message.\r\nIndeed it's a free feature we can add and that can be useful.\r\nIf you want to contribute, feel free to open a PR to add it to the text dataset script :)", "Resolved via #1913." ]
2020-11-19T23:51:31Z
2022-06-01T15:25:53Z
2022-06-01T15:25:52Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
I'm working on a project about rhyming verse using phonetic poetry and song lyrics, and line breaks are a vital part of the data. I recently switched over to use the datasets library when my various corpora grew larger than my computer's memory. And so far, it is SO great. But the first time I processed all of ...
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url"...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/870/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/870/timeline
null
completed
null
null
13,407.5725
6,672
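Issue 870 above asks for an optional parameter in the text loading script to keep line breaks on each example (resolved via #1913). A minimal sketch of such an option, using a hypothetical `keep_linebreaks` flag rather than the script's actual signature:

```python
def split_into_examples(text, keep_linebreaks=False):
    """Split raw text into one example per line, optionally keeping the
    trailing newline on each example (useful for verse/lyrics data)."""
    if keep_linebreaks:
        return text.splitlines(keepends=True)
    return text.splitlines()

poem = "Roses are red,\nViolets are blue.\n"
print(split_into_examples(poem))
# ['Roses are red,', 'Violets are blue.']
print(split_into_examples(poem, keep_linebreaks=True))
# ['Roses are red,\n', 'Violets are blue.\n']
```

`str.splitlines(keepends=True)` does the heavy lifting: the same text round-trips exactly when the flag is on, which is what rhyme/verse corpora need.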
https://api.github.com/repos/huggingface/datasets/issues/866
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/866/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/866/comments
https://api.github.com/repos/huggingface/datasets/issues/866/events
https://github.com/huggingface/datasets/issues/866
745,719,222
MDU6SXNzdWU3NDU3MTkyMjI=
866
OSCAR from Inria group
{ "avatar_url": "https://avatars.githubusercontent.com/u/34098722?v=4", "events_url": "https://api.github.com/users/jchwenger/events{/privacy}", "followers_url": "https://api.github.com/users/jchwenger/followers", "following_url": "https://api.github.com/users/jchwenger/following{/other_user}", "gists_url": "...
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
[]
null
[ "PR is already open here : #348 \r\nThe only thing remaining is to compute the metadata of each subdataset (one per language + shuffled/unshuffled).\r\nAs soon as #863 is merged we can start computing them. This will take a bit of time though", "Grand, thanks for this!" ]
2020-11-18T14:40:54Z
2020-11-18T15:01:30Z
2020-11-18T15:01:30Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
## Adding a Dataset - **Name:** *OSCAR* (Open Super-large Crawled ALMAnaCH coRpus), multilingual parsing of Common Crawl (separate crawls for many different languages), [here](https://oscar-corpus.com/). - **Description:** *OSCAR or Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by la...
{ "avatar_url": "https://avatars.githubusercontent.com/u/34098722?v=4", "events_url": "https://api.github.com/users/jchwenger/events{/privacy}", "followers_url": "https://api.github.com/users/jchwenger/followers", "following_url": "https://api.github.com/users/jchwenger/following{/other_user}", "gists_url": "...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/866/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/866/timeline
null
completed
null
null
0.343333
6,676