Dataset columns (viewer summary):

| column | type | min | max |
|---|---|---|---|
| `id` | int64 | 599M | 3.48B |
| `number` | int64 | 1 | 7.8k |
| `title` | string (length) | 1 | 290 |
| `state` | string (2 classes) | | |
| `comments` | list (length) | 0 | 30 |
| `created_at` | timestamp[s] | 2020-04-14 10:18:02 | 2025-10-05 06:37:50 |
| `updated_at` | timestamp[s] | 2020-04-27 16:04:17 | 2025-10-05 10:32:43 |
| `closed_at` | timestamp[s] | 2020-04-14 12:01:40 | 2025-10-01 13:56:03 |
| `body` | string (length) | 0 | 228k |
| `user` | string (length) | 3 | 26 |
| `html_url` | string (length) | 46 | 51 |
| `pull_request` | dict | | |
| `is_pull_request` | bool (2 classes) | | |
931,849,724
2,559
Memory usage consistently increases when processing a dataset with `.map`
closed
[]
2021-06-28T18:31:58
2023-07-20T13:34:10
2023-07-20T13:34:10
## Describe the bug

I have a HF dataset with image paths stored in it, and I am trying to load those image paths using `.map` with `num_proc=80`. I am noticing that the memory usage consistently keeps increasing over time. I tried using `DEFAULT_WRITER_BATCH_SIZE=10` in the builder to decrease the arrow writer's batch size, but that doesn't seem to help.

## Steps to reproduce the bug

Providing the code as-is would be hard. I can provide an MVP if that helps.

## Expected results

Memory usage should become consistent some time after processing starts.

## Actual results

Memory usage keeps increasing.

## Environment info

- `datasets` version: 1.8.0
- Platform: Linux-5.4.0-52-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.7.7
- PyArrow version: 3.0.0
apsdehal
https://github.com/huggingface/datasets/issues/2559
null
false
931,736,647
2,558
Update: WebNLG - update checksums
closed
[]
2021-06-28T16:16:37
2021-06-28T17:23:17
2021-06-28T17:23:16
The master branch changed so I computed the new checksums. I also pinned a specific revision so that it doesn't happen again in the future. Fix https://github.com/huggingface/datasets/issues/2553
lhoestq
https://github.com/huggingface/datasets/pull/2558
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2558", "html_url": "https://github.com/huggingface/datasets/pull/2558", "diff_url": "https://github.com/huggingface/datasets/pull/2558.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2558.patch", "merged_at": "2021-06-28T17:23:16" }
true
931,633,823
2,557
Fix `fever` keys
closed
[]
2021-06-28T14:27:02
2021-06-28T16:11:30
2021-06-28T16:11:29
The keys had duplicates since they were reset to 0 after each file. I fixed it by taking the file index into account as well.
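The fix described above can be sketched in a few lines; the helper name and key format here are illustrative, not the actual dataset script:

```python
# Build example keys that stay unique across files by combining the file
# index with the per-file row index, instead of using a counter that
# resets to 0 for each file. Hypothetical sketch, not the `fever` script.
def make_keys(files):
    keys = []
    for file_idx, rows in enumerate(files):
        for row_idx, _ in enumerate(rows):
            keys.append(f"{file_idx}_{row_idx}")
    return keys

# Two files of two rows each yield four distinct keys.
keys = make_keys([["a", "b"], ["c", "d"]])
```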
lhoestq
https://github.com/huggingface/datasets/pull/2557
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2557", "html_url": "https://github.com/huggingface/datasets/pull/2557", "diff_url": "https://github.com/huggingface/datasets/pull/2557.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2557.patch", "merged_at": "2021-06-28T16:11:29" }
true
931,595,872
2,556
Better DuplicateKeysError error to help the user debug the issue
closed
[]
2021-06-28T13:50:57
2022-06-28T09:26:04
2022-06-28T09:26:04
As mentioned in https://github.com/huggingface/datasets/issues/2552 it would be nice to improve the error message when a dataset fails to build because there are duplicate example keys. The current one is

```python
datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET !
Found duplicate Key: 48
Keys should be unique and deterministic in nature
```

and we could have something that guides the user to debugging the issue:

```python
DuplicateKeysError: both the 42nd and 1337th examples have the same key `48`.
Please fix the dataset script at <path/to/the/dataset/script>
```
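A minimal sketch of the improved check (names and message format are illustrative, not the actual `datasets.arrow_writer` code):

```python
class DuplicatedKeysError(Exception):
    pass

def check_duplicate_keys(keys):
    # Remember the index where each key was first seen, so the error can
    # point at both offending examples as suggested above.
    first_seen = {}
    for idx, key in enumerate(keys):
        if key in first_seen:
            raise DuplicatedKeysError(
                f"both examples {first_seen[key]} and {idx} have the same key `{key}`"
            )
        first_seen[key] = idx

try:
    check_duplicate_keys([47, 48, 49, 48])
except DuplicatedKeysError as err:
    message = str(err)
```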
lhoestq
https://github.com/huggingface/datasets/issues/2556
null
false
931,585,485
2,555
Fix code_search_net keys
closed
[]
2021-06-28T13:40:23
2021-09-02T08:24:43
2021-06-28T14:10:35
There were duplicate keys in the `code_search_net` dataset, as reported in https://github.com/huggingface/datasets/issues/2552. I fixed the keys (the key was the sum of the file and row indices, which was causing collisions). Fix #2552.
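The collision mentioned above follows from addition being commutative: distinct (file, row) index pairs can sum to the same key, while concatenating the indices keeps every key distinct. A tiny illustration (not the actual loader code):

```python
# With 2 files of 2 rows each, addition maps (0, 1) and (1, 0) to the
# same key, whereas joining the indices as a string does not collide.
added = {(f, r): f + r for f in range(2) for r in range(2)}
joined = {(f, r): f"{f}_{r}" for f in range(2) for r in range(2)}

n_collisions = len(added) - len(set(added.values()))
```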
lhoestq
https://github.com/huggingface/datasets/pull/2555
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2555", "html_url": "https://github.com/huggingface/datasets/pull/2555", "diff_url": "https://github.com/huggingface/datasets/pull/2555.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2555.patch", "merged_at": "2021-06-28T14:10:35" }
true
931,453,855
2,554
Multilabel metrics not supported
closed
[]
2021-06-28T11:09:46
2021-10-13T12:29:13
2021-07-08T08:40:15
When I try to use a metric like macro F1 I get the following error:

```
TypeError: int() argument must be a string, a bytes-like object or a number, not 'list'
```

There is an explicit cast here:
https://github.com/huggingface/datasets/blob/fc79f61cbbcfa0e8c68b28c0a8257f17e768a075/src/datasets/features.py#L274

And it looks like this is because here
https://github.com/huggingface/datasets/blob/fc79f61cbbcfa0e8c68b28c0a8257f17e768a075/metrics/f1/f1.py#L88
the features can only be integers, so we cannot use this F1 for multilabel. Instead, if I create the following F1 (ints replaced with a sequence of ints), it works:

```python
class F1(datasets.Metric):
    def _info(self):
        return datasets.MetricInfo(
            description=_DESCRIPTION,
            citation=_CITATION,
            inputs_description=_KWARGS_DESCRIPTION,
            features=datasets.Features(
                {
                    "predictions": datasets.Sequence(datasets.Value("int32")),
                    "references": datasets.Sequence(datasets.Value("int32")),
                }
            ),
            reference_urls=["https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html"],
        )

    def _compute(self, predictions, references, labels=None, pos_label=1, average="binary", sample_weight=None):
        return {
            "f1": f1_score(
                references,
                predictions,
                labels=labels,
                pos_label=pos_label,
                average=average,
                sample_weight=sample_weight,
            ),
        }
```
GuillemGSubies
https://github.com/huggingface/datasets/issues/2554
null
false
931,365,926
2,553
load_dataset("web_nlg") NonMatchingChecksumError
closed
[]
2021-06-28T09:26:46
2021-06-28T17:23:39
2021-06-28T17:23:16
Hi! It seems the WebNLG dataset gives a NonMatchingChecksumError.

## Steps to reproduce the bug

```python
from datasets import load_dataset
dataset = load_dataset('web_nlg', name="release_v3.0_en", split="dev")
```

gives

```
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://gitlab.com/shimorina/webnlg-dataset/-/archive/master/webnlg-dataset-master.zip']
```

## Environment info

- `datasets` version: 1.8.0
- Platform: macOS-11.3.1-x86_64-i386-64bit
- Python version: 3.9.4
- PyArrow version: 3.0.0

Also tested on Linux, with Python 3.6.8.
alxthm
https://github.com/huggingface/datasets/issues/2553
null
false
931,354,687
2,552
Keys should be unique error on code_search_net
closed
[]
2021-06-28T09:15:20
2021-09-06T14:08:30
2021-09-02T08:25:29
## Describe the bug

Loading `code_search_net` does not seem possible at the moment.

## Steps to reproduce the bug

```python
>>> load_dataset('code_search_net')
Downloading: 8.50kB [00:00, 3.09MB/s]
Downloading: 19.1kB [00:00, 10.1MB/s]
No config specified, defaulting to: code_search_net/all
Downloading and preparing dataset code_search_net/all (download: 4.77 GiB, generated: 5.99 GiB, post-processed: Unknown size, total: 10.76 GiB) to /Users/thomwolf/.cache/huggingface/datasets/code_search_net/all/1.0.0/b3e8278faf5d67da1d06981efbeac3b76a2900693bd2239bbca7a4a3b0d6e52a...
Traceback (most recent call last):
  File "/Users/thomwolf/Documents/GitHub/datasets/src/datasets/builder.py", line 1067, in _prepare_split
    writer.write(example, key)
  File "/Users/thomwolf/Documents/GitHub/datasets/src/datasets/arrow_writer.py", line 343, in write
    self.check_duplicate_keys()
  File "/Users/thomwolf/Documents/GitHub/datasets/src/datasets/arrow_writer.py", line 354, in check_duplicate_keys
    raise DuplicatedKeysError(key)
datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET !
Found duplicate Key: 48
Keys should be unique and deterministic in nature
```

## Environment info

- `datasets` version: 1.8.1.dev0
- Platform: macOS-10.15.7-x86_64-i386-64bit
- Python version: 3.8.5
- PyArrow version: 2.0.0
thomwolf
https://github.com/huggingface/datasets/issues/2552
null
false
930,967,978
2,551
Fix FileSystems documentation
closed
[]
2021-06-27T16:18:42
2021-06-28T13:09:55
2021-06-28T13:09:54
### What this fixes:

This PR resolves several issues I discovered in the documentation of the `datasets.filesystems` module ([this page](https://huggingface.co/docs/datasets/filesystems.html)).

### What were the issues?

When I originally tried implementing the code examples I faced several bugs attributed to:

- out-of-date [botocore](https://github.com/boto/botocore) call signatures
- capitalization errors in the `S3FileSystem` class name (written as `S3Filesystem` in one place)
- call signature errors for the `S3FileSystem` class constructor (uses the parameter `sessions` instead of `session` in some places; see [`s3fs`](https://s3fs.readthedocs.io/en/latest/api.html#s3fs.core.S3FileSystem) for where this constructor signature is defined)

### Testing/reviewing notes

Instructions for generating the documentation locally: [here](https://github.com/huggingface/datasets/tree/master/docs#generating-the-documentation).
connor-mccarthy
https://github.com/huggingface/datasets/pull/2551
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2551", "html_url": "https://github.com/huggingface/datasets/pull/2551", "diff_url": "https://github.com/huggingface/datasets/pull/2551.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2551.patch", "merged_at": "2021-06-28T13:09:54" }
true
930,951,287
2,550
Allow for incremental cumulative metric updates in a distributed setup
closed
[]
2021-06-27T15:00:58
2021-09-26T13:42:39
2021-09-26T13:42:39
Currently, using a metric allows for one of the following:

- per-example/batch metrics
- cumulative metrics over the whole data

What I'd like is an efficient way to get cumulative metrics over the examples/batches added so far, in order to display them as part of the progress bar during training/evaluation. Since most metrics are just an average of per-example metrics (though not all are), an efficient calculation can be done as follows:

`((score_cumulative * n_cumulative) + (score_new * n_new)) / (n_cumulative + n_new)`

where `n` and `score` refer to the number of examples and the metric score, `cumulative` refers to the cumulative metric, and `new` refers to the addition of new examples.

If you don't want to add this capability to the library, a simple solution exists so users can do it themselves: it is easy to implement for a single-process setup, but in a distributed one there is no way to get the correct `n_new`. The solution for this is to return the number of examples that was used to compute the metrics in `.compute()` by adding the following line here:
https://github.com/huggingface/datasets/blob/5a3221785311d0ce86c2785b765e86bd6997d516/src/datasets/metric.py#L402-L403

```
output["number_of_examples"] = len(predictions)
```

and also remove the log message here so it won't spam:
https://github.com/huggingface/datasets/blob/3db67f5ff6cbf807b129d2b4d1107af27623b608/src/datasets/metric.py#L411

If this change is ok with you, I'll open a pull request.
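The incremental formula above can be folded into a small helper; this is a sketch of the idea, not `datasets` API:

```python
# Running-mean update: fold a new batch's mean score into the cumulative
# mean without re-scoring past examples.
def update_cumulative(score_cum, n_cum, score_new, n_new):
    n_total = n_cum + n_new
    return (score_cum * n_cum + score_new * n_new) / n_total, n_total

score, n = 0.0, 0
score, n = update_cumulative(score, n, 0.8, 10)  # first batch: mean 0.8 over 10 examples
score, n = update_cumulative(score, n, 0.6, 30)  # second batch: mean 0.6 over 30 examples
# score is now the mean over all 40 examples: (0.8 * 10 + 0.6 * 30) / 40 = 0.65
```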
eladsegal
https://github.com/huggingface/datasets/issues/2550
null
false
929,819,093
2,549
Handling unlabeled datasets
closed
[]
2021-06-25T04:32:23
2021-06-25T21:07:57
2021-06-25T21:07:56
Hi! Is there a way for datasets to produce unlabeled instances (e.g., can the `ClassLabel` be nullable)?

For example, I want to use the MNLI dataset reader (https://github.com/huggingface/datasets/blob/master/datasets/multi_nli/multi_nli.py) on a file that doesn't have the `gold_label` field. I tried setting `"label": data.get("gold_label")`, but got the following error:

```
File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/load.py", line 748, in load_dataset
    use_auth_token=use_auth_token,
  File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/builder.py", line 575, in download_and_prepare
    dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
  File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/builder.py", line 652, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/builder.py", line 989, in _prepare_split
    example = self.info.features.encode_example(record)
  File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/features.py", line 953, in encode_example
    return encode_nested_example(self, example)
  File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/features.py", line 848, in encode_nested_example
    k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj)
  File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/features.py", line 848, in <dictcomp>
    k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj)
  File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/features.py", line 875, in encode_nested_example
    return schema.encode_example(obj)
  File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/features.py", line 653, in encode_example
    if not -1 <= example_data < self.num_classes:
TypeError: '<=' not supported between instances of 'int' and 'NoneType'
```

What's the proper way to handle reading unlabeled datasets, especially for downstream usage with Transformers?
nelson-liu
https://github.com/huggingface/datasets/issues/2549
null
false
929,232,831
2,548
Field order issue in loading json
closed
[]
2021-06-24T13:29:53
2021-06-24T14:36:43
2021-06-24T14:34:05
## Describe the bug

The `load_dataset` function expects columns in alphabetical order when loading json files. A similar bug was previously reported for csv in #623 and fixed in #684.

## Steps to reproduce the bug

For a json file `j.json`:

```
{"c": 321, "a": 1, "b": 2}
```

run the following:

```
f = datasets.Features({'a': Value('int32'), 'b': Value('int32'), 'c': Value('int32')})
json_data = datasets.load_dataset('json', data_files='j.json', features=f)
```

## Expected results

A successful load.

## Actual results

```
File "pyarrow/table.pxi", line 1409, in pyarrow.lib.Table.cast
ValueError: Target schema's field names are not matching the table's field names: ['c', 'a', 'b'], ['a', 'b', 'c']
```

## Environment info

- `datasets` version: 1.8.0
- Platform: Linux-3.10.0-957.1.3.el7.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.8
- PyArrow version: 3.0.0
luyug
https://github.com/huggingface/datasets/issues/2548
null
false
929,192,329
2,547
Dataset load_from_disk is too slow
open
[]
2021-06-24T12:45:44
2021-06-25T14:56:38
null
@lhoestq

## Describe the bug

It's not normal that I have to wait 7-8 hours for a dataset to be loaded from disk; there are no preprocessing steps, it's only loading with `load_from_disk`. I have 96 CPUs, but only 1 is used for this, which is inefficient. Moreover, its usage is at 1%... This is happening in the context of language model training, so I'm wasting $100 each time I have to load the dataset from disk again (for example because the spot instance was stopped by AWS and I need to relaunch it).

## Steps to reproduce the bug

Just get OSCAR in Spanish (around 150GB), save it to disk, and then load the processed dataset. It's not dependent on the task you're doing; it just depends on the size of the text dataset.

## Expected results

I expect the dataset to be loaded in a normal time, using the whole machine: if you store the dataset in multiple files (.arrow) and then load it from multiple files, you can use multiprocessing and therefore not waste so much time.

## Environment info

- `datasets` version: 1.8.0
- Platform: Ubuntu 18
- Python version: 3.8

I've seen you're planning to include a streaming mode for `load_dataset`, but that only saves the downloading and processing time, which is not the problem for me. It cannot save the pure load-from-disk time, so that's not a solution for my use case or for anyone who wants to use your library for training a language model.
avacaondata
https://github.com/huggingface/datasets/issues/2547
null
false
929,091,689
2,546
Add license to the Cambridge English Write & Improve + LOCNESS dataset card
closed
[]
2021-06-24T10:39:29
2021-06-24T10:52:01
2021-06-24T10:52:01
As noticed in https://github.com/huggingface/datasets/pull/2539, the licensing information was missing for this dataset. I added it and I also filled a few other empty sections.
lhoestq
https://github.com/huggingface/datasets/pull/2546
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2546", "html_url": "https://github.com/huggingface/datasets/pull/2546", "diff_url": "https://github.com/huggingface/datasets/pull/2546.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2546.patch", "merged_at": "2021-06-24T10:52:01" }
true
929,016,580
2,545
Fix DuplicatedKeysError in drop dataset
closed
[]
2021-06-24T09:10:39
2021-06-24T14:57:08
2021-06-24T14:57:08
Close #2542. cc: @VictorSanh.
albertvillanova
https://github.com/huggingface/datasets/pull/2545
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2545", "html_url": "https://github.com/huggingface/datasets/pull/2545", "diff_url": "https://github.com/huggingface/datasets/pull/2545.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2545.patch", "merged_at": "2021-06-24T14:57:08" }
true
928,900,827
2,544
Fix logging levels
closed
[]
2021-06-24T06:41:36
2021-06-25T13:40:19
2021-06-25T13:40:19
Sometimes default `datasets` logging can be too verbose. One approach could be reducing some logging levels, from info to debug, or from warning to info. Close #2543. cc: @stas00
albertvillanova
https://github.com/huggingface/datasets/pull/2544
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2544", "html_url": "https://github.com/huggingface/datasets/pull/2544", "diff_url": "https://github.com/huggingface/datasets/pull/2544.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2544.patch", "merged_at": "2021-06-25T13:40:19" }
true
928,571,915
2,543
switching some low-level log.info's to log.debug?
closed
[]
2021-06-23T19:26:55
2021-06-25T13:40:19
2021-06-25T13:40:19
In https://github.com/huggingface/transformers/pull/12276 we are now changing the examples to have `datasets` on the same log level as `transformers`, so that one setting can do consistent logging across all involved components.

The trouble is that now we get a ton of these:

```
06/23/2021 12:15:31 - INFO - datasets.utils.filelock - Lock 139627640431136 acquired on /home/stas/.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow.lock
06/23/2021 12:15:31 - INFO - datasets.arrow_writer - Done writing 50 examples in 12280 bytes /home/stas/.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow.
06/23/2021 12:15:31 - INFO - datasets.arrow_dataset - Set __getitem__(key) output type to python objects for no columns (when key is int or slice) and don't output other (un-formatted) columns.
06/23/2021 12:15:31 - INFO - datasets.utils.filelock - Lock 139627640431136 released on /home/stas/.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow.lock
```

May I suggest that these can be `log.debug`, as they are not informative to the user.

More examples. These are not informative (too much information):

```
06/23/2021 12:14:26 - INFO - datasets.load - Checking /home/stas/.cache/huggingface/datasets/downloads/459933f1fe47711fad2f6ff8110014ff189120b45ad159ef5b8e90ea43a174fa.e23e7d1259a8c6274a82a42a8936dd1b87225302c6dc9b7261beb3bc2daac640.py for additional imports.
06/23/2021 12:14:27 - INFO - datasets.builder - Constructing Dataset for split train, validation, test, from /home/stas/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/0d9fb3e814712c785176ad8cdb9f465fbe6479000ee6546725db30ad8a8b5f8a
```

While these are:

```
06/23/2021 12:14:27 - INFO - datasets.info - Loading Dataset Infos from /home/stas/.cache/huggingface/modules/datasets_modules/datasets/wmt16/0d9fb3e814712c785176ad8cdb9f465fbe6479000ee6546725db30ad8a8b5f8a
06/23/2021 12:14:27 - WARNING - datasets.builder - Reusing dataset wmt16 (/home/stas/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/0d9fb3e814712c785176ad8cdb9f465fbe6479000ee6546725db30ad8a8b5f8a)
```

I also realize that `transformers` examples don't have to use `info` for `datasets`, letting the default `warning` level keep the logging less noisy. But I think the log levels are currently slightly misused and skewed by one level: many `warning`s would be better as `info`s, and most `info`s as `debug`. E.g.:

```
06/23/2021 12:14:27 - WARNING - datasets.builder - Reusing dataset wmt16 (/home/stas/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/0d9fb3e814712c785176ad8cdb9f465fbe6479000ee6546725db30ad8a8b5f8a)
```

Why is this a warning? It is informing me that the cache is used; there is nothing to worry about. I'd have it as `info`. Warnings are typically something bordering on an error, or the first thing to check when things don't work as expected. Infrequent `info` is there to inform of the different stages or important events. Everything else is `debug`. At least that's the way I understand things.
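In the meantime, a script can raise the threshold on the `datasets` logger itself with the standard `logging` machinery; a minimal sketch:

```python
import logging

# Silence INFO/DEBUG records from the `datasets` logger hierarchy while
# keeping WARNING and above. Child loggers such as `datasets.builder`
# inherit the effective level from their parent.
logging.getLogger("datasets").setLevel(logging.WARNING)

info_enabled = logging.getLogger("datasets.builder").isEnabledFor(logging.INFO)
warn_enabled = logging.getLogger("datasets.builder").isEnabledFor(logging.WARNING)
```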
stas00
https://github.com/huggingface/datasets/issues/2543
null
false
928,540,382
2,542
`datasets.keyhash.DuplicatedKeysError` for `drop` and `adversarial_qa/adversarialQA`
closed
[]
2021-06-23T18:41:16
2021-06-25T21:50:05
2021-06-24T14:57:08
## Describe the bug

Failure to generate the datasets (`drop` and the `adversarialQA` subset of `adversarial_qa`) because of duplicate keys.

## Steps to reproduce the bug

```python
from datasets import load_dataset

load_dataset("drop")
load_dataset("adversarial_qa", "adversarialQA")
```

## Expected results

The example keys should be unique.

## Actual results

```bash
>>> load_dataset("drop")
Using custom data configuration default
Downloading and preparing dataset drop/default (download: 7.92 MiB, generated: 111.88 MiB, post-processed: Unknown size, total: 119.80 MiB) to /home/hf/.cache/huggingface/datasets/drop/default/0.1.0/7a94f1e2bb26c4b5c75f89857c06982967d7416e5af935a9374b9bccf5068026...
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/load.py", line 751, in load_dataset
    use_auth_token=use_auth_token,
  File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/builder.py", line 575, in download_and_prepare
    dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
  File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/builder.py", line 652, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/builder.py", line 992, in _prepare_split
    num_examples, num_bytes = writer.finalize()
  File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/arrow_writer.py", line 409, in finalize
    self.check_duplicate_keys()
  File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/arrow_writer.py", line 349, in check_duplicate_keys
    raise DuplicatedKeysError(key)
datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET !
Found duplicate Key: 28553293-d719-441b-8f00-ce3dc6df5398
Keys should be unique and deterministic in nature
```

## Environment info

- `datasets` version: 1.7.0
- Platform: Linux-5.4.0-1044-gcp-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyArrow version: 3.0.0
VictorSanh
https://github.com/huggingface/datasets/issues/2542
null
false
928,529,078
2,541
update discofuse link cc @ekQ
closed
[]
2021-06-23T18:24:58
2021-06-28T14:34:51
2021-06-28T14:34:50
Updating the discofuse link: https://github.com/google-research-datasets/discofuse/commit/fd4b120cb3dd19a417e7f3b5432010b574b5eeee
VictorSanh
https://github.com/huggingface/datasets/pull/2541
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2541", "html_url": "https://github.com/huggingface/datasets/pull/2541", "diff_url": "https://github.com/huggingface/datasets/pull/2541.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2541.patch", "merged_at": "2021-06-28T14:34:50" }
true
928,433,892
2,540
Remove task templates if required features are removed during `Dataset.map`
closed
[]
2021-06-23T16:20:25
2021-06-24T14:41:15
2021-06-24T13:34:03
This PR fixes a bug reported by @craffel where removing a dataset's columns during `Dataset.map` triggered a `KeyError`, because the `TextClassification` template tried to access the removed columns during `DatasetInfo.__post_init__`:

```python
from datasets import load_dataset

# `yelp_polarity` comes with a `TextClassification` template
ds = load_dataset("yelp_polarity", split="test")
ds
# Dataset({
#     features: ['text', 'label'],
#     num_rows: 38000
# })

# Triggers KeyError: 'label' - oh noes!
ds.map(lambda x: {"inputs": 0}, remove_columns=ds.column_names)
```

I wrote a unit test to make sure I could reproduce the error and then patched a fix.
lewtun
https://github.com/huggingface/datasets/pull/2540
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2540", "html_url": "https://github.com/huggingface/datasets/pull/2540", "diff_url": "https://github.com/huggingface/datasets/pull/2540.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2540.patch", "merged_at": "2021-06-24T13:34:03" }
true
927,952,429
2,539
remove wi_locness dataset due to licensing issues
closed
[]
2021-06-23T07:35:32
2021-06-25T14:52:42
2021-06-25T14:52:42
It was brought to my attention that this dataset's license is not only missing, but also prohibits redistribution. I contacted the original author to apologize for this oversight and asked if we could still use it, but unfortunately we can't and the author kindly asked to take down this dataset.
aseifert
https://github.com/huggingface/datasets/pull/2539
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2539", "html_url": "https://github.com/huggingface/datasets/pull/2539", "diff_url": "https://github.com/huggingface/datasets/pull/2539.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2539.patch", "merged_at": null }
true
927,940,691
2,538
Loading partial dataset when debugging
open
[]
2021-06-23T07:19:52
2023-04-19T11:05:38
null
I am using PyTorch Lightning along with `datasets` (thanks for so many datasets already prepared and the great splits). Every time I execute `load_dataset` for the imdb dataset it takes some time, even if I specify a split involving very few samples. I guess this is due to hashing, as per the other issues. Is there a way to only load part of the dataset with `load_dataset`? This would really speed up my workflow. Something like a debug mode would really help. Thanks!
reachtarunhere
https://github.com/huggingface/datasets/issues/2538
null
false
927,472,659
2,537
Add Parquet loader + from_parquet and to_parquet
closed
[]
2021-06-22T17:28:23
2021-06-30T16:31:03
2021-06-30T16:30:58
Continuation of #2247. I added a "parquet" dataset builder, as well as the methods `Dataset.from_parquet` and `Dataset.to_parquet`. As usual, the data are converted to Arrow in a batched way to avoid loading everything in memory.
lhoestq
https://github.com/huggingface/datasets/pull/2537
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2537", "html_url": "https://github.com/huggingface/datasets/pull/2537", "diff_url": "https://github.com/huggingface/datasets/pull/2537.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2537.patch", "merged_at": "2021-06-30T16:30:58" }
true
927,338,639
2,536
Use `Audio` features for `AutomaticSpeechRecognition` task template
closed
[]
2021-06-22T15:07:21
2022-06-01T17:18:16
2022-06-01T17:18:16
In #2533 we added a task template for speech recognition that relies on the file paths to the audio files. As pointed out by @SBrandeis, this is brittle as it doesn't port easily across different operating systems. The solution is to use dedicated `Audio` features when casting the dataset. These features are not yet available in `datasets`, but should be included in the `AutomaticSpeechRecognition` template once they are.
lewtun
https://github.com/huggingface/datasets/issues/2536
null
false
927,334,349
2,535
Improve Features docs
closed
[]
2021-06-22T15:03:27
2021-06-23T13:40:43
2021-06-23T13:40:43
- Fix rendering and cross-references in Features docs - Add docstrings to Features methods
albertvillanova
https://github.com/huggingface/datasets/pull/2535
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2535", "html_url": "https://github.com/huggingface/datasets/pull/2535", "diff_url": "https://github.com/huggingface/datasets/pull/2535.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2535.patch", "merged_at": "2021-06-23T13:40:43" }
true
927,201,435
2,534
Sync with transformers disabling NOTSET
closed
[]
2021-06-22T12:54:21
2021-06-24T14:42:47
2021-06-24T14:42:47
Close #2528.
albertvillanova
https://github.com/huggingface/datasets/pull/2534
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2534", "html_url": "https://github.com/huggingface/datasets/pull/2534", "diff_url": "https://github.com/huggingface/datasets/pull/2534.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2534.patch", "merged_at": "2021-06-24T14:42:47" }
true
927,193,264
2,533
Add task template for automatic speech recognition
closed
[]
2021-06-22T12:45:02
2021-06-23T16:14:46
2021-06-23T15:56:57
This PR adds a task template for automatic speech recognition. In this task, the input is a path to an audio file which the model consumes to produce a transcription.

Usage:

```python
from datasets import load_dataset
from datasets.tasks import AutomaticSpeechRecognition

ds = load_dataset("timit_asr", split="train[:10]")
# Dataset({
#     features: ['file', 'text', 'phonetic_detail', 'word_detail', 'dialect_region', 'sentence_type', 'speaker_id', 'id'],
#     num_rows: 10
# })

task = AutomaticSpeechRecognition(audio_file_column="file", transcription_column="text")
ds.prepare_for_task(task)
# Dataset({
#     features: ['audio_file', 'transcription'],
#     num_rows: 10
# })
```
lewtun
https://github.com/huggingface/datasets/pull/2533
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2533", "html_url": "https://github.com/huggingface/datasets/pull/2533", "diff_url": "https://github.com/huggingface/datasets/pull/2533.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2533.patch", "merged_at": "2021-06-23T15:56:57" }
true
927,063,196
2,532
Tokenizer's normalization preprocessor cause misalignment in return_offsets_mapping for tokenizer classification task
closed
[]
2021-06-22T10:08:18
2021-06-23T05:17:25
2021-06-23T05:17:25
[This colab notebook](https://colab.research.google.com/drive/151gKyo0YIwnlznrOHst23oYH_a3mAe3Z?usp=sharing) implements a token classification input pipeline extending the logic from [this Hugging Face example](https://huggingface.co/transformers/custom_datasets.html#tok-ner). The pipeline works fine with most instances in different languages, but unfortunately [the Japanese kana ligature (a form of abbreviation? I don't know Japanese well)](https://en.wikipedia.org/wiki/Kana_ligature) breaks the alignment of `return_offsets_mapping`:

![image](https://user-images.githubusercontent.com/50871412/122904371-db192700-d382-11eb-8917-1775db76db69.png)

Without the try/except block, it raises `ValueError: NumPy boolean array indexing assignment cannot assign 88 input values to the 87 output values where the mask is true`; an example is shown in [another colab notebook](https://colab.research.google.com/drive/1MmOqf3ppzzdKKyMWkn0bJy6DqzOO0SSm?usp=sharing).

It is clear that the normalizer is the process that breaks the alignment, as `tokenizer._tokenizer.normalizer.normalize_str('ヿ')` is observed to return 'コト'.

One workaround is to apply `tokenizer._tokenizer.normalizer.normalize_str` before the tokenizer preprocessing pipeline, which is also provided in the [first colab notebook](https://colab.research.google.com/drive/151gKyo0YIwnlznrOHst23oYH_a3mAe3Z?usp=sharing) under the name `udposTestDatasetWorkaround`.

I guess similar logic should be included inside the tokenizer and the offsets_mapping generation process so that users don't need to include it in their own code, but I don't understand the tokenizer code well enough to do this myself.

p.s. **I am using my own dataset building script in the provided example, but the script should be equivalent to the changes made by this [update](https://github.com/huggingface/datasets/pull/2466).** `get_dataset` is just a simple wrapper around `load_dataset`, and the `tokenizer` is just `XLMRobertaTokenizerFast.from_pretrained("xlm-roberta-large")`.
cosmeowpawlitan
https://github.com/huggingface/datasets/issues/2532
null
false
927,017,924
2,531
Fix dev version
closed
[]
2021-06-22T09:17:10
2021-06-22T09:47:10
2021-06-22T09:47:09
The dev version that ends in `.dev0` should be greater than the current version. However, it happens that `1.8.0 > 1.8.0.dev0`. Therefore we need to use `1.8.1.dev0` in this case. I updated the dev version to use `1.8.1.dev0`, and I also added a comment about this in the release steps in setup.py.
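The ordering described above can be checked directly with the `packaging` library (a quick sketch of the PEP 440 comparison the fix relies on; assumes `packaging` is installed, which it is in most Python environments):

```python
from packaging.version import Version

# a .dev0 suffix sorts *before* the release it precedes (PEP 440)
assert Version("1.8.0.dev0") < Version("1.8.0")

# hence the fix: bump the patch number so the dev version sorts after the release
assert Version("1.8.1.dev0") > Version("1.8.0")
```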
lhoestq
https://github.com/huggingface/datasets/pull/2531
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2531", "html_url": "https://github.com/huggingface/datasets/pull/2531", "diff_url": "https://github.com/huggingface/datasets/pull/2531.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2531.patch", "merged_at": "2021-06-22T09:47:09" }
true
927,013,773
2,530
Fixed label parsing in the ProductReviews dataset
closed
[]
2021-06-22T09:12:45
2021-06-22T12:55:20
2021-06-22T12:52:40
Fixed issue with parsing dataset labels.
yavuzKomecoglu
https://github.com/huggingface/datasets/pull/2530
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2530", "html_url": "https://github.com/huggingface/datasets/pull/2530", "diff_url": "https://github.com/huggingface/datasets/pull/2530.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2530.patch", "merged_at": "2021-06-22T12:52:40" }
true
926,378,812
2,529
Add summarization template
closed
[]
2021-06-21T16:08:31
2021-06-23T14:22:11
2021-06-23T13:30:10
This PR adds a task template for text summarization. As far as I can tell, we do not need to distinguish between "extractive" or "abstractive" summarization - both can be handled with this template. Usage: ```python from datasets import load_dataset from datasets.tasks import Summarization ds = load_dataset("xsum", split="train") # Dataset({ # features: ['document', 'summary', 'id'], # num_rows: 204045 # }) summarization = Summarization(text_column="document", summary_column="summary") ds.prepare_for_task(summarization) # Dataset({ # features: ['text', 'summary'], # num_rows: 204045 # }) ```
lewtun
https://github.com/huggingface/datasets/pull/2529
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2529", "html_url": "https://github.com/huggingface/datasets/pull/2529", "diff_url": "https://github.com/huggingface/datasets/pull/2529.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2529.patch", "merged_at": "2021-06-23T13:30:10" }
true
926,314,656
2,528
Logging cannot be set to NOTSET similar to transformers
closed
[]
2021-06-21T15:04:54
2021-06-24T14:42:47
2021-06-24T14:42:47
## Describe the bug In the transformers library you can set the verbosity level to logging.NOTSET to work around the usage of tqdm and IPywidgets, however in Datasets this is no longer possible. This is because transformers set the verbosity level of tqdm with [this](https://github.com/huggingface/transformers/blob/b53bc55ba9bb10d5ee279eab51a2f0acc5af2a6b/src/transformers/file_utils.py#L1449) `disable=bool(logging.get_verbosity() == logging.NOTSET)` and datasets accomplishes this like [so](https://github.com/huggingface/datasets/blob/83554e410e1ab8c6f705cfbb2df7953638ad3ac1/src/datasets/utils/file_utils.py#L493) `not_verbose = bool(logger.getEffectiveLevel() > WARNING)` ## Steps to reproduce the bug ```python import datasets import logging datasets.logging.get_verbosity = lambda : logging.NOTSET datasets.load_dataset("patrickvonplaten/librispeech_asr_dummy") ``` ## Expected results The code should download and load the dataset as normal without displaying progress bars ## Actual results ```ImportError Traceback (most recent call last) <ipython-input-4-aec65c0509c6> in <module> ----> 1 datasets.load_dataset("patrickvonplaten/librispeech_asr_dummy") ~/venv/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, **config_kwargs) 713 dataset=True, 714 return_resolved_file_path=True, --> 715 use_auth_token=use_auth_token, 716 ) 717 # Set the base path for downloads as the parent of the script location ~/venv/lib/python3.7/site-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, dynamic_modules_path, return_resolved_file_path, **download_kwargs) 350 file_path = hf_bucket_url(path, filename=name, dataset=False) 351 try: --> 352 local_path = cached_path(file_path, download_config=download_config) 353 except 
FileNotFoundError: 354 raise FileNotFoundError( ~/venv/lib/python3.7/site-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs) 289 use_etag=download_config.use_etag, 290 max_retries=download_config.max_retries, --> 291 use_auth_token=download_config.use_auth_token, 292 ) 293 elif os.path.exists(url_or_filename): ~/venv/lib/python3.7/site-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token) 668 headers=headers, 669 cookies=cookies, --> 670 max_retries=max_retries, 671 ) 672 ~/venv/lib/python3.7/site-packages/datasets/utils/file_utils.py in http_get(url, temp_file, proxies, resume_size, headers, cookies, timeout, max_retries) 493 initial=resume_size, 494 desc="Downloading", --> 495 disable=not_verbose, 496 ) 497 for chunk in response.iter_content(chunk_size=1024): ~/venv/lib/python3.7/site-packages/tqdm/notebook.py in __init__(self, *args, **kwargs) 217 total = self.total * unit_scale if self.total else self.total 218 self.container = self.status_printer( --> 219 self.fp, total, self.desc, self.ncols) 220 self.sp = self.display 221 ~/venv/lib/python3.7/site-packages/tqdm/notebook.py in status_printer(_, total, desc, ncols) 95 if IProgress is None: # #187 #451 #558 #872 96 raise ImportError( ---> 97 "IProgress not found. Please update jupyter and ipywidgets." 98 " See https://ipywidgets.readthedocs.io/en/stable" 99 "/user_install.html") ImportError: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. 
--> - `datasets` version: 1.8.0 - Platform: Linux-5.4.95-42.163.amzn2.x86_64-x86_64-with-debian-10.8 - Python version: 3.7.10 - PyArrow version: 3.0.0 I am running this code on Deepnote, which, importantly for this issue, **does not** support IPywidgets
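The mismatch the report describes can be reduced to a two-line comparison; a minimal sketch of the two checks (variable names are illustrative, not the libraries' actual internals):

```python
import logging

level = logging.NOTSET  # user requests "no logging / no widgets" (NOTSET == 0)

# transformers-style check: tqdm is disabled when verbosity is exactly NOTSET
transformers_disables_tqdm = bool(level == logging.NOTSET)

# datasets-style check: tqdm is hidden only when the level is *above* WARNING,
# so NOTSET (0) still shows progress bars and triggers the IPywidgets import
datasets_hides_tqdm = bool(level > logging.WARNING)

print(transformers_disables_tqdm, datasets_hides_tqdm)  # True False
```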
joshzwiebel
https://github.com/huggingface/datasets/issues/2528
null
false
926,031,525
2,527
Replace bad `n>1M` size tag
closed
[]
2021-06-21T09:42:35
2021-06-21T15:06:50
2021-06-21T15:06:49
Some datasets were still using the old `n>1M` tag which has been replaced with tags `1M<n<10M`, etc. This resulted in unexpected results when searching for datasets bigger than 1M on the hub, since it was only showing the ones with the tag `n>1M`.
lhoestq
https://github.com/huggingface/datasets/pull/2527
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2527", "html_url": "https://github.com/huggingface/datasets/pull/2527", "diff_url": "https://github.com/huggingface/datasets/pull/2527.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2527.patch", "merged_at": "2021-06-21T15:06:49" }
true
925,929,228
2,526
Add COCO datasets
open
[]
2021-06-21T07:48:32
2023-06-22T14:12:18
null
## Adding a Dataset - **Name:** COCO - **Description:** COCO is a large-scale object detection, segmentation, and captioning dataset. - **Paper + website:** https://cocodataset.org/#home - **Data:** https://cocodataset.org/#download - **Motivation:** It would be great to have COCO available in HuggingFace datasets, as we are moving beyond just text. COCO includes multi-modalities (images + text), as well as a huge amount of images annotated with objects, segmentation masks, keypoints etc., on which models like DETR (which I recently added to HuggingFace Transformers) are trained. Currently, one needs to download everything from the website and place it in a local folder, but it would be much easier if we can directly access it through the datasets API. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
NielsRogge
https://github.com/huggingface/datasets/issues/2526
null
false
925,896,358
2,525
Use scikit-learn package rather than sklearn in setup.py
closed
[]
2021-06-21T07:04:25
2021-06-21T10:01:13
2021-06-21T08:57:33
The sklearn package is an historical thing and should probably not be used by anyone, see https://github.com/scikit-learn/scikit-learn/issues/8215#issuecomment-344679114 for some caveats. Note: this affects only TESTS_REQUIRE so I guess only developers not end users.
lesteve
https://github.com/huggingface/datasets/pull/2525
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2525", "html_url": "https://github.com/huggingface/datasets/pull/2525", "diff_url": "https://github.com/huggingface/datasets/pull/2525.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2525.patch", "merged_at": "2021-06-21T08:57:33" }
true
925,610,934
2,524
Raise FileNotFoundError in WindowsFileLock
closed
[]
2021-06-20T14:25:11
2021-06-28T09:56:22
2021-06-28T08:47:39
Closes #2443
mariosasko
https://github.com/huggingface/datasets/pull/2524
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2524", "html_url": "https://github.com/huggingface/datasets/pull/2524", "diff_url": "https://github.com/huggingface/datasets/pull/2524.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2524.patch", "merged_at": "2021-06-28T08:47:39" }
true
925,421,008
2,523
Fr
closed
[]
2021-06-19T15:56:32
2021-06-19T18:48:23
2021-06-19T18:48:23
__Originally posted by @lewtun in https://github.com/huggingface/datasets/pull/2469__
aDrIaNo34500
https://github.com/huggingface/datasets/issues/2523
null
false
925,334,379
2,522
Documentation Mistakes in Dataset: emotion
closed
[]
2021-06-19T07:08:57
2023-01-02T12:04:58
2023-01-02T12:04:58
As per documentation, Dataset: emotion Homepage: https://github.com/dair-ai/emotion_dataset Dataset: https://github.com/huggingface/datasets/blob/master/datasets/emotion/emotion.py Permalink: https://huggingface.co/datasets/viewer/?dataset=emotion Emotion is a dataset of English Twitter messages with eight basic emotions: anger, anticipation, disgust, fear, joy, sadness, surprise, and trust. For more detailed information please refer to the paper. But when we view the data, there are only 6 emotions, anger, fear, joy, sadness, surprise, and trust.
GDGauravDutta
https://github.com/huggingface/datasets/issues/2522
null
false
925,030,685
2,521
Insert text classification template for Emotion dataset
closed
[]
2021-06-18T15:56:19
2021-06-21T09:22:31
2021-06-21T09:22:31
This PR includes a template and updated `dataset_infos.json` for the `emotion` dataset.
lewtun
https://github.com/huggingface/datasets/pull/2521
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2521", "html_url": "https://github.com/huggingface/datasets/pull/2521", "diff_url": "https://github.com/huggingface/datasets/pull/2521.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2521.patch", "merged_at": "2021-06-21T09:22:31" }
true
925,015,004
2,520
Datasets with tricky task templates
closed
[]
2021-06-18T15:33:57
2023-07-20T13:20:32
2023-07-20T13:20:32
I'm collecting a list of datasets here that don't follow the "standard" taxonomy and require further investigation to implement task templates for. ## Text classification * [hatexplain](https://huggingface.co/datasets/hatexplain): ostensibly a form of text classification, but not in the standard `(text, target)` format, and each sample appears to be tokenized. * [muchocine](https://huggingface.co/datasets/muchocine): contains two candidate text columns (long-form and summary), which in principle requires two `TextClassification` templates; this is not currently supported
lewtun
https://github.com/huggingface/datasets/issues/2520
null
false
924,903,240
2,519
Improve performance of pandas arrow extractor
closed
[]
2021-06-18T13:24:41
2021-06-21T09:06:06
2021-06-21T09:06:06
While reviewing PR #2505, I noticed that pandas arrow extractor could be refactored to be faster.
albertvillanova
https://github.com/huggingface/datasets/pull/2519
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2519", "html_url": "https://github.com/huggingface/datasets/pull/2519", "diff_url": "https://github.com/huggingface/datasets/pull/2519.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2519.patch", "merged_at": "2021-06-21T09:06:06" }
true
924,654,100
2,518
Add task templates for tydiqa and xquad
closed
[]
2021-06-18T08:06:34
2021-06-18T15:01:17
2021-06-18T14:50:33
This PR adds question-answering templates to the remaining datasets that are linked to a model on the Hub. Notes: * I could not test the tydiqa implementation since I don't have enough disk space 😢 . But I am confident the template works :) * there exist other datasets like `fquad` and `mlqa` which are candidates for question-answering templates, but some work is needed to handle the ordering of nested columns described in #2434
lewtun
https://github.com/huggingface/datasets/pull/2518
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2518", "html_url": "https://github.com/huggingface/datasets/pull/2518", "diff_url": "https://github.com/huggingface/datasets/pull/2518.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2518.patch", "merged_at": "2021-06-18T14:50:33" }
true
924,643,345
2,517
Fix typo in MatthewsCorrelation class name
closed
[]
2021-06-18T07:53:06
2021-06-18T08:43:55
2021-06-18T08:43:55
Close #2513.
albertvillanova
https://github.com/huggingface/datasets/pull/2517
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2517", "html_url": "https://github.com/huggingface/datasets/pull/2517", "diff_url": "https://github.com/huggingface/datasets/pull/2517.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2517.patch", "merged_at": "2021-06-18T08:43:55" }
true
924,597,470
2,516
datasets.map pickle issue resulting in invalid mapping function
open
[]
2021-06-18T06:47:26
2021-06-23T13:47:49
null
I trained my own tokenizer, and I needed to use a python custom class. Because of this I have to detach the custom step before saving and reattach after restore. I did this using the standard pickle `__get_state__` / `__set_state__` mechanism. I think it's correct but it fails when I use it inside a function which is mapped to a dataset, i.e. in the manner of run_mlm.py and other huggingface scripts. The following reproduces the issue - most likely I'm missing something A simulated tokeniser which can be pickled ``` class CustomTokenizer: def __init__(self): self.state = "init" def __getstate__(self): print("__getstate__ called") out = self.__dict__.copy() self.state = "pickled" return out def __setstate__(self, d): print("__setstate__ called") self.__dict__ = d self.state = "restored" tokenizer = CustomTokenizer() ``` Test that it actually works - prints "__getstate__ called" and "__setstate__ called" ``` import pickle serialized = pickle.dumps(tokenizer) restored = pickle.loads(serialized) assert restored.state == "restored" ``` Simulate a function that tokenises examples, when dataset.map is called, this function ``` def tokenize_function(examples): assert tokenizer.state == "restored" # this shouldn't fail but it does output = tokenizer(examples) # this will fail as tokenizer isn't really a tokenizer return output ``` Use map to simulate tokenization ``` import glob from datasets import load_dataset assert tokenizer.state == "restored" train_files = glob.glob('train*.csv') validation_files = glob.glob('validation*.csv') datasets = load_dataset("csv", data_files=dict(train=train_files, validation=validation_files)) tokenized_datasets = datasets.map( tokenize_function, batched=True, ) ``` What's happening is I can see that __getstate__ is called but not __setstate__, so the state of `tokenize_function` is invalid at the point that it's actually executed. 
This doesn't matter as far as I can see for the standard tokenizers as they don't use __getstate__ / __setstate__. I'm not sure if there's another hook I'm supposed to implement as well? --------------------------------------------------------------------------- AssertionError Traceback (most recent call last) <ipython-input-22-a2aef4f74aaa> in <module> 8 tokenized_datasets = datasets.map( 9 tokenize_function, ---> 10 batched=True, 11 ) ~/.pyenv/versions/3.7.6/envs/xxx/lib/python3.7/site-packages/datasets/dataset_dict.py in map(self, function, with_indices, input_columns, batched, batch_size, remove_columns, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, desc) 487 desc=desc, 488 ) --> 489 for k, dataset in self.items() 490 } 491 ) ~/.pyenv/versions/3.7.6/envs/xxx/lib/python3.7/site-packages/datasets/dataset_dict.py in <dictcomp>(.0) 487 desc=desc, 488 ) --> 489 for k, dataset in self.items() 490 } 491 ) ~/.pyenv/versions/3.7.6/envs/xxx/lib/python3.7/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc) 1633 fn_kwargs=fn_kwargs, 1634 new_fingerprint=new_fingerprint, -> 1635 desc=desc, 1636 ) 1637 else: ~/.pyenv/versions/3.7.6/envs/xxx/lib/python3.7/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs) 184 } 185 # apply actual function --> 186 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 187 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 188 # re-apply format to the output ~/.pyenv/versions/3.7.6/envs/xxx/lib/python3.7/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs) 395 # Call actual function 396 --> 397 out = func(self, *args, **kwargs) 398 399 # 
Update fingerprint of in-place transforms + update in-place history of transforms ~/.pyenv/versions/3.7.6/envs/xxx/lib/python3.7/site-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, desc) 1961 indices, 1962 check_same_num_examples=len(input_dataset.list_indexes()) > 0, -> 1963 offset=offset, 1964 ) 1965 except NumExamplesMismatch: ~/.pyenv/versions/3.7.6/envs/xxx/lib/python3.7/site-packages/datasets/arrow_dataset.py in apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples, offset) 1853 effective_indices = [i + offset for i in indices] if isinstance(indices, list) else indices + offset 1854 processed_inputs = ( -> 1855 function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs) 1856 ) 1857 if update_data is None: <ipython-input-21-8ee4a8ba5b1b> in tokenize_function(examples) 1 def tokenize_function(examples): ----> 2 assert tokenizer.state == "restored" 3 tokenizer(examples) 4 return examples
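For what it's worth, the symptom above is consistent with the mapped function being *dumped* (e.g. for fingerprinting/hashing) without ever being *loaded* again in the same process, so only `__getstate__` runs and its side effect sticks. A minimal sketch of that effect using plain `pickle` (not the actual `datasets` hashing code):

```python
import pickle

class Stateful:
    def __init__(self):
        self.state = "init"

    def __getstate__(self):
        out = self.__dict__.copy()
        self.state = "pickled"  # side effect on the *live* object
        return out

    def __setstate__(self, d):
        self.__dict__ = d
        self.state = "restored"

obj = Stateful()
pickle.dumps(obj)  # a dump-only pass, as a hashing step would perform
# pickle.loads() never runs in this process, so __setstate__ is never called
print(obj.state)  # pickled
```

If this is what happens inside `map`, making `__getstate__` side-effect free (or restoring state at the top of the mapped function) would avoid the broken intermediate state.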
david-waterworth
https://github.com/huggingface/datasets/issues/2516
null
false
924,435,447
2,515
CRD3 dataset card
closed
[]
2021-06-18T00:24:07
2021-06-21T10:18:44
2021-06-21T10:18:44
This PR adds additional information to the CRD3 dataset card.
wilsonyhlee
https://github.com/huggingface/datasets/pull/2515
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2515", "html_url": "https://github.com/huggingface/datasets/pull/2515", "diff_url": "https://github.com/huggingface/datasets/pull/2515.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2515.patch", "merged_at": "2021-06-21T10:18:44" }
true
924,417,172
2,514
Can datasets remove duplicated rows?
open
[]
2021-06-17T23:35:38
2024-07-19T13:23:01
null
**Is your feature request related to a problem? Please describe.** I find myself relying on datasets more and more to do all the preprocessing. One thing I couldn't find out how to do, however, is removing duplicated rows, so I am always converting datasets to pandas for that. **Describe the solution you'd like** A "remove duplicated rows" functionality. **Describe alternatives you've considered** Convert the dataset to pandas, remove duplicates, and convert back... **Additional context** No
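Until something built-in exists, the pandas round-trip described above looks roughly like this (a sketch; the `datasets` calls are shown as comments and assume the usual `Dataset.to_pandas`/`Dataset.from_pandas` API):

```python
import pandas as pd

# df = dataset.to_pandas()                     # Dataset -> DataFrame
df = pd.DataFrame({"text": ["a", "b", "a"], "label": [0, 1, 0]})

# drop exact-duplicate rows and renumber the index
deduped = df.drop_duplicates().reset_index(drop=True)

# dataset = Dataset.from_pandas(deduped)       # DataFrame -> Dataset

print(len(deduped))  # 2
```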
liuxinglan
https://github.com/huggingface/datasets/issues/2514
null
false
924,174,413
2,513
Corelation should be Correlation
closed
[]
2021-06-17T17:28:48
2021-06-18T08:43:55
2021-06-18T08:43:55
https://github.com/huggingface/datasets/blob/0e87e1d053220e8ecddfa679bcd89a4c7bc5af62/metrics/matthews_correlation/matthews_correlation.py#L66
colbym-MM
https://github.com/huggingface/datasets/issues/2513
null
false
924,069,353
2,512
seqeval metric does not work with a recent version of sklearn: classification_report() got an unexpected keyword argument 'output_dict'
closed
[]
2021-06-17T15:36:02
2021-06-17T15:46:07
2021-06-17T15:46:07
## Describe the bug A clear and concise description of what the bug is. ## Steps to reproduce the bug ```python from datasets import load_dataset, load_metric seqeval = load_metric("seqeval") seqeval.compute(predictions=[['A']], references=[['A']]) ``` ## Expected results The function computes a dict with metrics ## Actual results ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-39-69a57f5cf06f> in <module> 1 from datasets import load_dataset, load_metric 2 seqeval = load_metric("seqeval") ----> 3 seqeval.compute(predictions=[['A']], references=[['A']]) ~/p3/lib/python3.7/site-packages/datasets/metric.py in compute(self, *args, **kwargs) 396 references = self.data["references"] 397 with temp_seed(self.seed): --> 398 output = self._compute(predictions=predictions, references=references, **kwargs) 399 400 if self.buf_writer is not None: ~/.cache/huggingface/modules/datasets_modules/metrics/seqeval/81eda1ff004361d4fa48754a446ec69bb7aa9cf4d14c7215f407d1475941c5ff/seqeval.py in _compute(self, predictions, references, suffix) 95 96 def _compute(self, predictions, references, suffix=False): ---> 97 report = classification_report(y_true=references, y_pred=predictions, suffix=suffix, output_dict=True) 98 report.pop("macro avg") 99 report.pop("weighted avg") TypeError: classification_report() got an unexpected keyword argument 'output_dict' ``` ## Environment info sklearn=0.24 datasets=1.1.3
avidale
https://github.com/huggingface/datasets/issues/2512
null
false
923,762,133
2,511
Add C4
closed
[]
2021-06-17T10:31:04
2021-07-05T12:36:58
2021-07-05T12:36:57
## Adding a Dataset - **Name:** *C4* - **Description:** *https://github.com/allenai/allennlp/discussions/5056* - **Paper:** *https://arxiv.org/abs/1910.10683* - **Data:** *https://huggingface.co/datasets/allenai/c4* - **Motivation:** *Used a lot for pretraining* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). Should fix https://github.com/huggingface/datasets/issues/1710
lhoestq
https://github.com/huggingface/datasets/issues/2511
null
false
923,735,485
2,510
Add align_labels_with_mapping to DatasetDict
closed
[]
2021-06-17T10:03:35
2021-06-17T10:45:25
2021-06-17T10:45:24
https://github.com/huggingface/datasets/pull/2457 added the `Dataset.align_labels_with_mapping` method. In this PR I also added `DatasetDict.align_labels_with_mapping`
lhoestq
https://github.com/huggingface/datasets/pull/2510
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2510", "html_url": "https://github.com/huggingface/datasets/pull/2510", "diff_url": "https://github.com/huggingface/datasets/pull/2510.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2510.patch", "merged_at": "2021-06-17T10:45:24" }
true
922,846,035
2,509
Fix fingerprint when moving cache dir
closed
[]
2021-06-16T16:45:09
2021-06-21T15:05:04
2021-06-21T15:05:03
The fingerprint of a dataset changes if the cache directory is moved. I fixed that by setting the fingerprint to be the hash of: - the relative cache dir (dataset_name/version/config_id) - the requested split Close #2496 I had to fix an issue with the filelock filename that was too long (>255). It prevented the tests from running on my machine. I just added `hash_filename_if_too_long` in case this happens, to not get filenames longer than 255. We usually have long filenames for filelocks because they are named after the path that is being locked. In case the path is a cache directory that has long directory names, then the filelock filename could end up being very long.
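The description above boils down to hashing only move-invariant components; a rough illustration (hypothetical helper, not the actual `datasets` implementation):

```python
import hashlib

def split_fingerprint(relative_cache_dir: str, split: str) -> str:
    # only dataset_name/version/config_id plus the split go into the hash,
    # so moving the cache *root* does not change the result
    payload = f"{relative_cache_dir}::{split}".encode()
    return hashlib.sha256(payload).hexdigest()[:16]

old = split_fingerprint("squad/plain_text/1.0.0", "train")
new = split_fingerprint("squad/plain_text/1.0.0", "train")  # after a move
assert old == new
assert old != split_fingerprint("squad/plain_text/1.0.0", "validation")
```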
lhoestq
https://github.com/huggingface/datasets/pull/2509
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2509", "html_url": "https://github.com/huggingface/datasets/pull/2509", "diff_url": "https://github.com/huggingface/datasets/pull/2509.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2509.patch", "merged_at": "2021-06-21T15:05:03" }
true
921,863,173
2,508
Load Image Classification Dataset from Local
closed
[]
2021-06-15T22:43:33
2022-03-01T16:29:44
2022-03-01T16:29:44
**Is your feature request related to a problem? Please describe.** Yes - we would like to load an image classification dataset with datasets without having to write a custom data loader. **Describe the solution you'd like** Given a folder structure with images of each class in each folder, the ability to load these folders into a HuggingFace dataset like "cifar10". **Describe alternatives you've considered** Implement ViT training outside of the HuggingFace Trainer and without datasets (we did this but prefer to stay on the main path) Write custom data loader logic **Additional context** We're training ViT on custom dataset
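A minimal sketch of the requested behaviour, scanning a class-per-folder tree into parallel path/label columns (the `Dataset.from_dict` call is shown as a comment; the folder names here are toy examples built on the fly):

```python
import tempfile
from pathlib import Path

# toy layout: root/cat/0.jpg, root/dog/0.jpg
root = Path(tempfile.mkdtemp())
for name in ("cat", "dog"):
    (root / name).mkdir()
    (root / name / "0.jpg").touch()

# folder names become class labels, sorted for a stable label mapping
classes = sorted(p.name for p in root.iterdir() if p.is_dir())
files, labels = [], []
for idx, name in enumerate(classes):
    for f in sorted((root / name).glob("*.jpg")):
        files.append(str(f))
        labels.append(idx)

# datasets.Dataset.from_dict({"image_file": files, "label": labels})
print(classes, labels)  # ['cat', 'dog'] [0, 1]
```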
Jacobsolawetz
https://github.com/huggingface/datasets/issues/2508
null
false
921,441,962
2,507
Rearrange JSON field names to match passed features schema field names
closed
[]
2021-06-15T14:10:02
2021-06-16T10:47:49
2021-06-16T10:47:49
This PR depends on PR #2453 (which must be merged first). Close #2366.
albertvillanova
https://github.com/huggingface/datasets/pull/2507
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2507", "html_url": "https://github.com/huggingface/datasets/pull/2507", "diff_url": "https://github.com/huggingface/datasets/pull/2507.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2507.patch", "merged_at": "2021-06-16T10:47:49" }
true
921,435,598
2,506
Add course banner
closed
[]
2021-06-15T14:03:54
2021-06-15T16:25:36
2021-06-15T16:25:35
This PR adds a course banner similar to the one you can now see in the [Transformers repo](https://github.com/huggingface/transformers) that links to the course. Let me know if placement seems right to you or not, I can move it just below the badges too.
sgugger
https://github.com/huggingface/datasets/pull/2506
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2506", "html_url": "https://github.com/huggingface/datasets/pull/2506", "diff_url": "https://github.com/huggingface/datasets/pull/2506.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2506.patch", "merged_at": "2021-06-15T16:25:35" }
true
921,234,797
2,505
Make numpy arrow extractor faster
closed
[]
2021-06-15T10:11:32
2021-06-28T09:53:39
2021-06-28T09:53:38
I changed the NumpyArrowExtractor to call directly to_numpy and see if it can lead to speed-ups as discussed in https://github.com/huggingface/datasets/issues/2498 This could make the numpy/torch/tf/jax formatting faster
lhoestq
https://github.com/huggingface/datasets/pull/2505
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2505", "html_url": "https://github.com/huggingface/datasets/pull/2505", "diff_url": "https://github.com/huggingface/datasets/pull/2505.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2505.patch", "merged_at": "2021-06-28T09:53:38" }
true
920,636,186
2,503
SubjQA wrong boolean values in entries
open
[]
2021-06-14T17:42:46
2021-08-25T03:52:06
null
## Describe the bug SubjQA seems to have a boolean that's consistently wrong. It defines: - question_subj_level: The subjectivity level of the question (on a 1 to 5 scale with 1 being the most subjective). - is_ques_subjective: A boolean subjectivity label derived from question_subj_level (i.e., scores below 4 are considered subjective). However, `is_ques_subjective` seems to have wrong values in the entire dataset. For instance, in the example in the dataset card, we have: - "question_subj_level": 2 - "is_ques_subjective": false However, according to the description, the question should be subjective since the `question_subj_level` is below 4
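The inconsistency reduces to a one-line rule from the dataset card (a sketch; field names as in the card):

```python
def expected_is_ques_subjective(question_subj_level: int) -> bool:
    # per the dataset card: scores below 4 are considered subjective
    return question_subj_level < 4

# dataset-card example: question_subj_level == 2 but is_ques_subjective == False
print(expected_is_ques_subjective(2))  # True, contradicting the stored value
```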
arnaudstiegler
https://github.com/huggingface/datasets/issues/2503
null
false
920,623,572
2,502
JAX integration
closed
[]
2021-06-14T17:24:23
2021-06-21T16:15:50
2021-06-21T16:15:49
Hi ! I just added the "jax" formatting, as we already have for pytorch, tensorflow, numpy (and also pandas and arrow). It does pretty much the same thing as the pytorch formatter except it creates jax.numpy.ndarray objects. ```python from datasets import Dataset d = Dataset.from_dict({"foo": [[0., 1., 2.]]}) d = d.with_format("jax") d[0] # {'foo': DeviceArray([0., 1., 2.], dtype=float32)} ``` A few details: - The default integer precision for jax depends on the jax configuration `jax_enable_x64` (see [here](https://jax.readthedocs.io/en/latest/notebooks/Common_Gotchas_in_JAX.html#double-64bit-precision)), I took that into account. Unless `jax_enable_x64` is specified, it is int32 by default - AFAIK it's not possible to do a full conversion from arrow data to jax data. We are doing arrow -> numpy -> jax but the numpy -> jax part doesn't do zero copy unfortunately (see [here](https://github.com/google/jax/issues/4486)) - the env var for disabling JAX is `USE_JAX`. However I noticed that in `transformers` it is `USE_FLAX`. This is not an issue though IMO I also updated `convert_to_python_objects` to allow users to pass jax.numpy.ndarray objects to build a dataset. Since the `convert_to_python_objects` method became slow because it's the time when pytorch, tf (and now jax) are imported, I fixed it by checking the `sys.modules` to avoid unnecessary imports of pytorch, tf or jax. Close #2495
lhoestq
https://github.com/huggingface/datasets/pull/2502
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2502", "html_url": "https://github.com/huggingface/datasets/pull/2502", "diff_url": "https://github.com/huggingface/datasets/pull/2502.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2502.patch", "merged_at": "2021-06-21T16:15:48" }
true
920,579,634
2,501
Add Zenodo metadata file with license
closed
[]
2021-06-14T16:28:12
2021-06-14T16:49:42
2021-06-14T16:49:42
This Zenodo metadata file fixes the name of the `Datasets` license appearing in the DOI as `"Apache-2.0"`, which otherwise by default is `"other-open"`. Close #2472.
albertvillanova
https://github.com/huggingface/datasets/pull/2501
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2501", "html_url": "https://github.com/huggingface/datasets/pull/2501", "diff_url": "https://github.com/huggingface/datasets/pull/2501.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2501.patch", "merged_at": "2021-06-14T16:49:42" }
true
920,471,411
2,500
Add load_dataset_builder
closed
[]
2021-06-14T14:27:45
2025-06-20T18:07:24
2021-07-05T10:45:58
Adds the `load_dataset_builder` function. The good thing is that we can reuse this function to load the dataset info without downloading the dataset itself. TODOs: - [x] Add docstring and entry in the docs - [x] Add tests Closes #2484
mariosasko
https://github.com/huggingface/datasets/pull/2500
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2500", "html_url": "https://github.com/huggingface/datasets/pull/2500", "diff_url": "https://github.com/huggingface/datasets/pull/2500.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2500.patch", "merged_at": "2021-07-05T10:45:57" }
true
920,413,021
2,499
Python Programming Puzzles
open
[]
2021-06-14T13:27:18
2021-06-15T18:14:14
null
## Adding a Dataset - **Name:** Python Programming Puzzles - **Description:** Programming challenge called programming puzzles, as an objective and comprehensive evaluation of program synthesis - **Paper:** https://arxiv.org/pdf/2106.05784.pdf - **Data:** https://github.com/microsoft/PythonProgrammingPuzzles ([Scrolling through the data](https://github.com/microsoft/PythonProgrammingPuzzles/blob/main/problems/README.md)) - **Motivation:** Spans a large range of difficulty, problems, and domains. A useful resource for evaluation as we don't have a clear understanding of the abilities and skills of extremely large LMs. Note: it's a growing dataset (contributions are welcome), so we'll need careful versioning for this dataset. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
VictorSanh
https://github.com/huggingface/datasets/issues/2499
null
false
920,411,285
2,498
Improve torch formatting performance
open
[]
2021-06-14T13:25:24
2022-07-15T17:12:04
null
**Is your feature request related to a problem? Please describe.** It would be great, if possible, to further improve read performance of raw encoded datasets and their subsequent conversion to torch tensors. A bit more background. I am working on LM pre-training using HF ecosystem. We use encoded HF Wikipedia and BookCorpus datasets. The training machines are similar to DGX-1 workstations. We use HF trainer torch.distributed training approach on a single machine with 8 GPUs. The current performance is about 30% slower than NVidia optimized BERT [examples](https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/LanguageModeling) baseline. Quite a bit of customized code and training loop tricks were used to achieve the baseline performance. It would be great to achieve the same performance while using nothing more than off the shelf HF ecosystem. Perhaps, in the future, with @stas00 work on deepspeed integration, it could even be exceeded. **Describe the solution you'd like** Using profiling tools we've observed that appx. 25% of cumulative run time is spent on data loader next call. ![dataloader_next](https://user-images.githubusercontent.com/458335/121895543-59742a00-ccee-11eb-85fb-f07715e3f1f6.png) As you can observe most of the data loader next call is spent in HF datasets torch_formatter.py format_batch call. Digging a bit deeper into format_batch we can see the following profiler data: ![torch_formatter](https://user-images.githubusercontent.com/458335/121895944-c7b8ec80-ccee-11eb-95d5-5875c5716c30.png) Once again, a lot of time is spent in pyarrow table conversion to pandas which seems like an intermediary step. Offline @lhoestq told me that this approach was, for some unknown reason, faster than direct to numpy conversion. **Describe alternatives you've considered** I am not familiar with pyarrow and have not yet considered the alternatives to the current approach. 
Most of the online advice around data loader performance improvements revolves around increasing the number of workers and using pinned memory for copying tensors from the host device to GPUs, but we've already tried these avenues without much performance improvement. The Weights & Biases dashboard for the pre-training task reports CPU utilization of ~ 10%, GPUs are completely saturated (GPU utilization is above 95% on all GPUs), while disk utilization is above 90%.
vblagoje
https://github.com/huggingface/datasets/issues/2498
null
false
920,250,382
2,497
Use default cast for sliced list arrays if pyarrow >= 4
closed
[]
2021-06-14T10:02:47
2021-06-15T18:06:18
2021-06-14T14:24:37
From pyarrow version 4, it is supported to cast sliced lists. This PR uses default pyarrow cast in Datasets to cast sliced list arrays if pyarrow version is >= 4. In relation with PR #2461 and #2490. cc: @lhoestq, @abhi1thakur, @SBrandeis
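The version gate could be sketched as follows (a hypothetical helper; the real check would inspect `pyarrow.__version__`):

```python
def supports_sliced_list_cast(pyarrow_version: str) -> bool:
    # pyarrow >= 4.0.0 implements casting of sliced (non-zero offset) list
    # arrays, so the default cast can be used directly; older versions need
    # the offset-reset workaround from PR #2461.
    major = int(pyarrow_version.split(".")[0])
    return major >= 4
```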
albertvillanova
https://github.com/huggingface/datasets/pull/2497
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2497", "html_url": "https://github.com/huggingface/datasets/pull/2497", "diff_url": "https://github.com/huggingface/datasets/pull/2497.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2497.patch", "merged_at": "2021-06-14T14:24:37" }
true
920,216,314
2,496
Dataset fingerprint changes after moving the cache directory, which prevent cache reload when using `map`
closed
[]
2021-06-14T09:20:26
2021-06-21T15:05:03
2021-06-21T15:05:03
`Dataset.map` uses the dataset fingerprint (a hash) for caching. However the fingerprint seems to change when someone moves the cache directory of the dataset. This is because it uses the default fingerprint generation: 1. the dataset path is used to get the fingerprint 2. the modification times of the arrow file are also used to get the fingerprint To fix that we could set the fingerprint of the dataset to be a hash of (<dataset_name>, <config_name>, <version>, <script_hash>), i.e. a hash of the cache path relative to the cache directory.
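A sketch of the proposed path-independent fingerprint (illustrative only; the function name and hash choice are assumptions, not the actual `datasets` code):

```python
from hashlib import md5

def deterministic_fingerprint(dataset_name: str, config_name: str,
                              version: str, script_hash: str) -> str:
    # Hash only identifiers that survive a cache move: absolute paths and the
    # arrow file's mtime both change when the cache directory is relocated,
    # so neither should feed the fingerprint.
    m = md5()
    for part in (dataset_name, config_name, version, script_hash):
        m.update(part.encode("utf-8"))
    return m.hexdigest()
```

With this, the same (name, config, version, script) always yields the same fingerprint, so `map` results are found in the cache after the directory is moved.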
lhoestq
https://github.com/huggingface/datasets/issues/2496
null
false
920,170,030
2,495
JAX formatting
closed
[]
2021-06-14T08:32:07
2021-06-21T16:15:49
2021-06-21T16:15:49
We already support pytorch, tensorflow, numpy, pandas and arrow dataset formatting. Let's add jax as well
lhoestq
https://github.com/huggingface/datasets/issues/2495
null
false
920,149,183
2,494
Improve docs on Enhancing performance
open
[]
2021-06-14T08:11:48
2025-06-28T18:55:38
null
In the ["Enhancing performance"](https://huggingface.co/docs/datasets/loading_datasets.html#enhancing-performance) section of docs, add specific use cases: - How to make datasets the fastest - How to make datasets use the least RAM - How to make datasets use the least hard drive space cc: @thomwolf
albertvillanova
https://github.com/huggingface/datasets/issues/2494
null
false
919,833,281
2,493
add tensorflow-macos support
closed
[]
2021-06-13T16:20:08
2021-06-15T08:53:06
2021-06-15T08:53:06
ref - https://github.com/huggingface/datasets/issues/2068
slayerjain
https://github.com/huggingface/datasets/pull/2493
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2493", "html_url": "https://github.com/huggingface/datasets/pull/2493", "diff_url": "https://github.com/huggingface/datasets/pull/2493.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2493.patch", "merged_at": "2021-06-15T08:53:06" }
true
919,718,102
2,492
Eduge
closed
[]
2021-06-13T05:10:59
2021-06-22T09:49:04
2021-06-16T10:41:46
Hi, awesome folks behind the huggingface! Here is my PR for the text classification dataset in Mongolian. Please do let me know in case you have anything to clarify. Thanks & Regards, Enod
enod
https://github.com/huggingface/datasets/pull/2492
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2492", "html_url": "https://github.com/huggingface/datasets/pull/2492", "diff_url": "https://github.com/huggingface/datasets/pull/2492.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2492.patch", "merged_at": "2021-06-16T10:41:46" }
true
919,714,506
2,491
add eduge classification dataset
closed
[]
2021-06-13T04:37:01
2021-06-13T05:06:48
2021-06-13T05:06:38
enod
https://github.com/huggingface/datasets/pull/2491
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2491", "html_url": "https://github.com/huggingface/datasets/pull/2491", "diff_url": "https://github.com/huggingface/datasets/pull/2491.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2491.patch", "merged_at": null }
true
919,571,385
2,490
Allow latest pyarrow version
closed
[]
2021-06-12T14:17:34
2021-07-06T16:54:52
2021-06-14T07:53:23
Allow latest pyarrow version, now that version 4.0.1 fixes the segfault bug introduced in version 4.0.0. Close #2489.
albertvillanova
https://github.com/huggingface/datasets/pull/2490
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2490", "html_url": "https://github.com/huggingface/datasets/pull/2490", "diff_url": "https://github.com/huggingface/datasets/pull/2490.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2490.patch", "merged_at": "2021-06-14T07:53:23" }
true
919,569,749
2,489
Allow latest pyarrow version once segfault bug is fixed
closed
[]
2021-06-12T14:09:52
2021-06-14T07:53:23
2021-06-14T07:53:23
As pointed out by @symeneses (see https://github.com/huggingface/datasets/pull/2268#issuecomment-860048613), pyarrow has fixed the segfault bug present in version 4.0.0 (see https://issues.apache.org/jira/browse/ARROW-12568): - it was fixed on 3 May 2021 - version 4.0.1 was released on 19 May 2021 with the bug fix
albertvillanova
https://github.com/huggingface/datasets/issues/2489
null
false
919,500,756
2,488
Set configurable downloaded datasets path
closed
[]
2021-06-12T09:09:03
2021-06-14T09:13:27
2021-06-14T08:29:07
Part of #2480.
albertvillanova
https://github.com/huggingface/datasets/pull/2488
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2488", "html_url": "https://github.com/huggingface/datasets/pull/2488", "diff_url": "https://github.com/huggingface/datasets/pull/2488.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2488.patch", "merged_at": "2021-06-14T08:29:07" }
true
919,452,407
2,487
Set configurable extracted datasets path
closed
[]
2021-06-12T05:47:29
2021-06-14T09:30:17
2021-06-14T09:02:56
Part of #2480.
albertvillanova
https://github.com/huggingface/datasets/pull/2487
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2487", "html_url": "https://github.com/huggingface/datasets/pull/2487", "diff_url": "https://github.com/huggingface/datasets/pull/2487.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2487.patch", "merged_at": "2021-06-14T09:02:56" }
true
919,174,898
2,486
Add Rico Dataset
closed
[]
2021-06-11T20:17:41
2022-10-03T09:38:18
2022-10-03T09:38:18
Hi there! I'm wanting to add the Rico datasets for software engineering type data to y'alls awesome library. However, as I have started coding, I've run into a few hiccups so I thought it best to open the PR early to get a bit of discussion on how the Rico datasets should be added to the `datasets` lib. 1) There are 7 different datasets under Rico and so I was wondering, should I make a folder for each or should I put them as different configurations of a single dataset? You can see the datasets available for Rico here: http://interactionmining.org/rico 2) As of right now, I have a semi-working version of the first dataset which has pairs of screenshots and hierarchies from android applications. However, these screenshots are very large (1440, 2560, 3) and there are 66,000 of them so I am not able to perform the processing that the `datasets` lib does after downloading and extracting the dataset since I run out of memory very fast. Is there a way to have the `datasets` lib not put everything into memory while it is processing the dataset? 2.1) If there is not a way, would it be better to just return the path to the screenshots instead of the actual image? 3) The hierarchies are JSON objects and looking through the documentation of `datasets`, I didn't see any feature that I could use for this type of data. So, for now I just have it being read in as a string, is this okay or should I be doing it differently? 4) One of the Rico datasets is a bunch of animations (GIFs), is there a `datasets` feature that I can put this type of data into or should I just return the path as a string? I appreciate any and all help I can get for this PR, I think the Rico datasets will be an awesome addition to the library :nerd_face: !
ncoop57
https://github.com/huggingface/datasets/pull/2486
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2486", "html_url": "https://github.com/huggingface/datasets/pull/2486", "diff_url": "https://github.com/huggingface/datasets/pull/2486.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2486.patch", "merged_at": null }
true
919,099,218
2,485
Implement layered building
open
[]
2021-06-11T18:54:25
2021-06-11T18:54:25
null
As discussed with @stas00 and @lhoestq (see also here https://github.com/huggingface/datasets/issues/2481#issuecomment-859712190): > My suggestion for this would be to have this enabled by default. > > Plus I don't know if there should be a dedicated issue to that is another functionality. But I propose layered building rather than all at once. That is: > > 1. uncompress a handful of files via a generator enough to generate one arrow file > 2. process arrow file 1 > 3. delete all the files that went in and aren't needed anymore. > > rinse and repeat. > > 1. This way much less disc space will be required - e.g. on JZ we won't be running into inode limitation, also it'd help with the collaborative hub training project > 2. The user doesn't need to go and manually clean up all the huge files that were left after pre-processing > 3. It would already include deleting temp files this issue is talking about > > I wonder if the new streaming API would be of help, except here the streaming would be into arrow files as the destination, rather than dataloaders.
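The uncompress/process/delete loop proposed above can be sketched as a generator (a rough illustration of the idea, not an implementation):

```python
from typing import Iterable, Iterator, List

def shard_batches(extracted_files: Iterable[str],
                  files_per_shard: int) -> Iterator[List[str]]:
    # Yield just enough extracted files to build one arrow shard; the caller
    # processes the batch into an arrow file, then deletes the inputs before
    # requesting the next batch, keeping peak disk usage near a single batch.
    batch: List[str] = []
    for path in extracted_files:
        batch.append(path)
        if len(batch) == files_per_shard:
            yield batch
            batch = []
    if batch:
        yield batch
```

A caller would then do something like `for batch in shard_batches(files, n): write_arrow_shard(batch); delete(batch)` (with `write_arrow_shard` and `delete` being hypothetical helpers).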
albertvillanova
https://github.com/huggingface/datasets/issues/2485
null
false
919,092,635
2,484
Implement loading a dataset builder
closed
[]
2021-06-11T18:47:22
2021-07-05T10:45:57
2021-07-05T10:45:57
As discussed with @stas00 and @lhoestq, this would allow things like: ```python from datasets import load_dataset_builder dataset_name = "openwebtext" builder = load_dataset_builder(dataset_name) print(builder.cache_dir) ```
albertvillanova
https://github.com/huggingface/datasets/issues/2484
null
false
918,871,712
2,483
Use gc.collect only when needed to avoid slow downs
closed
[]
2021-06-11T15:09:30
2021-06-18T19:25:06
2021-06-11T15:31:36
In https://github.com/huggingface/datasets/commit/42320a110d9d072703814e1f630a0d90d626a1e6 we added a call to gc.collect to resolve some issues on windows (see https://github.com/huggingface/datasets/pull/2482). However calling gc.collect too often causes significant slowdowns (the CI run time doubled). So I just moved the gc.collect call to the exact place where it's actually needed: when post-processing a dataset
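The "collect only where needed" idea can be sketched as a small context manager (illustrative only; the actual change simply moved the existing `gc.collect()` call into the post-processing path):

```python
import gc
from contextlib import contextmanager

@contextmanager
def collect_on_exit():
    # Defer garbage collection to the single point where dangling pyarrow
    # file handles must actually be closed, instead of collecting on every
    # call, which is what doubled the CI run time.
    try:
        yield
    finally:
        gc.collect()
```

Usage would look like `with collect_on_exit(): post_process(dataset)`.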
lhoestq
https://github.com/huggingface/datasets/pull/2483
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2483", "html_url": "https://github.com/huggingface/datasets/pull/2483", "diff_url": "https://github.com/huggingface/datasets/pull/2483.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2483.patch", "merged_at": "2021-06-11T15:31:35" }
true
918,846,027
2,482
Allow to use tqdm>=4.50.0
closed
[]
2021-06-11T14:49:21
2021-06-11T15:11:51
2021-06-11T15:11:50
We used to have permission errors on windows with the latest versions of tqdm (see [here](https://app.circleci.com/pipelines/github/huggingface/datasets/6365/workflows/24f7c960-3176-43a5-9652-7830a23a981e/jobs/39232)) They were due to open arrow files not properly closed by pyarrow. Since https://github.com/huggingface/datasets/commit/42320a110d9d072703814e1f630a0d90d626a1e6 gc.collect is called each time we don't need an arrow file to make sure that the files are closed. close https://github.com/huggingface/datasets/issues/2471 cc @lewtun
lhoestq
https://github.com/huggingface/datasets/pull/2482
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2482", "html_url": "https://github.com/huggingface/datasets/pull/2482", "diff_url": "https://github.com/huggingface/datasets/pull/2482.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2482.patch", "merged_at": "2021-06-11T15:11:50" }
true
918,680,168
2,481
Delete extracted files to save disk space
closed
[]
2021-06-11T12:21:52
2021-07-19T09:08:18
2021-07-19T09:08:18
As discussed with @stas00 and @lhoestq, allowing the deletion of extracted files would save a great amount of disk space for the typical user.
albertvillanova
https://github.com/huggingface/datasets/issues/2481
null
false
918,678,578
2,480
Set download/extracted paths configurable
open
[]
2021-06-11T12:20:24
2021-06-15T14:23:49
null
As discussed with @stas00 and @lhoestq, setting these paths configurable may allow to overcome disk space limitation on different partitions/drives. TODO: - [x] Set configurable extracted datasets path: #2487 - [x] Set configurable downloaded datasets path: #2488 - [ ] Set configurable "incomplete" datasets path?
albertvillanova
https://github.com/huggingface/datasets/issues/2480
null
false
918,672,431
2,479
❌ load_datasets ❌
closed
[]
2021-06-11T12:14:36
2021-06-11T14:46:25
2021-06-11T14:46:25
julien-c
https://github.com/huggingface/datasets/pull/2479
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2479", "html_url": "https://github.com/huggingface/datasets/pull/2479", "diff_url": "https://github.com/huggingface/datasets/pull/2479.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2479.patch", "merged_at": "2021-06-11T14:46:24" }
true
918,507,510
2,478
Create release script
open
[]
2021-06-11T09:38:02
2023-07-20T13:22:23
null
Create a script so that releases can be done automatically (as done in `transformers`).
albertvillanova
https://github.com/huggingface/datasets/issues/2478
null
false
918,334,431
2,477
Fix docs custom stable version
closed
[]
2021-06-11T07:26:03
2021-06-14T09:14:20
2021-06-14T08:20:18
Currently the docs default version is 1.5.0. This PR fixes this and sets the latest version instead.
albertvillanova
https://github.com/huggingface/datasets/pull/2477
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2477", "html_url": "https://github.com/huggingface/datasets/pull/2477", "diff_url": "https://github.com/huggingface/datasets/pull/2477.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2477.patch", "merged_at": "2021-06-14T08:20:18" }
true
917,686,662
2,476
Add TimeDial
closed
[]
2021-06-10T18:33:07
2021-07-30T12:57:54
2021-07-30T12:57:54
Dataset: https://github.com/google-research-datasets/TimeDial To-Do: Update README.md and add YAML tags
bhavitvyamalik
https://github.com/huggingface/datasets/pull/2476
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2476", "html_url": "https://github.com/huggingface/datasets/pull/2476", "diff_url": "https://github.com/huggingface/datasets/pull/2476.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2476.patch", "merged_at": "2021-07-30T12:57:54" }
true
917,650,882
2,475
Issue in timit_asr database
closed
[]
2021-06-10T18:05:29
2021-06-13T08:13:50
2021-06-13T08:13:13
## Describe the bug I am trying to load the timit_asr dataset, however only the first record is shown (duplicated over all the rows). I am using this line: `dataset = load_dataset("timit_asr", split="test").shuffle().select(range(10))`. The above code results in the same sentence duplicated ten times. It also happens when I use the dataset viewer at Streamlit. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("timit_asr", split="test").shuffle().select(range(10)) data = dataset.to_pandas() ``` ## Expected results A table with different row information. ## Actual results The same sentence duplicated in every row. ## Environment info - `datasets` version: 1.4.1 (also occurs in the latest version) - Platform: Linux-4.15.0-143-generic-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.8.1+cu102 (False) - Tensorflow version (GPU?): 1.15.3 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No
hrahamim
https://github.com/huggingface/datasets/issues/2475
null
false
917,622,055
2,474
cache_dir parameter for load_from_disk ?
closed
[]
2021-06-10T17:39:36
2022-02-16T14:55:01
2022-02-16T14:55:00
**Is your feature request related to a problem? Please describe.** When using Google Colab, big datasets can be an issue, as they won't fit on the VM's disk. Therefore mounting Google Drive could be a possible solution. Unfortunately, when loading my own dataset with the _load_from_disk_ function, the data gets cached to the VM's disk: ` from datasets import load_from_disk myPreprocessedData = load_from_disk("/content/gdrive/MyDrive/ASR_data/myPreprocessedData") ` I know that caching on Google Drive could slow down learning. But at least it would run. **Describe the solution you'd like** Add a cache_dir parameter to the load_from_disk function. **Describe alternatives you've considered** It looks like you could write a custom loading script for the load_dataset function. But this seems to be much too complex for my use case. Is there perhaps a template here that uses the load_from_disk function?
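Until a `cache_dir` parameter exists, a workaround of this shape could stage the serialized dataset onto local disk first (hypothetical helper; `stage_dataset` is not a real `datasets` API):

```python
import os
import shutil

def stage_dataset(src_dir: str, cache_dir: str) -> str:
    # Copy the saved dataset from e.g. a mounted Google Drive folder into a
    # chosen local cache directory, then load from the local copy:
    #   dataset = datasets.load_from_disk(stage_dataset(src, cache))
    dest = os.path.join(cache_dir, os.path.basename(src_dir.rstrip("/")))
    if not os.path.exists(dest):
        shutil.copytree(src_dir, dest)
    return dest
```

This trades a one-time copy for fast local reads, which is usually preferable to reading arrow files over the Drive mount during training.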
chbensch
https://github.com/huggingface/datasets/issues/2474
null
false
917,538,629
2,473
Add Disfl-QA
closed
[]
2021-06-10T16:18:00
2021-07-29T11:56:19
2021-07-29T11:56:18
Dataset: https://github.com/google-research-datasets/disfl-qa To-Do: Update README.md and add YAML tags
bhavitvyamalik
https://github.com/huggingface/datasets/pull/2473
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2473", "html_url": "https://github.com/huggingface/datasets/pull/2473", "diff_url": "https://github.com/huggingface/datasets/pull/2473.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2473.patch", "merged_at": "2021-07-29T11:56:18" }
true
917,463,821
2,472
Fix automatic generation of Zenodo DOI
closed
[]
2021-06-10T15:15:46
2021-06-14T16:49:42
2021-06-14T16:49:42
After the last release of Datasets (1.8.0), the automatic generation of the Zenodo DOI failed: it appears in yellow as "Received", instead of in green as "Published". I have contacted Zenodo support to fix this issue. TODO: - [x] Check with Zenodo to fix the issue - [x] Check BibTeX entry is right
albertvillanova
https://github.com/huggingface/datasets/issues/2472
null
false
917,067,165
2,471
Fix PermissionError on Windows when using tqdm >=4.50.0
closed
[]
2021-06-10T08:31:49
2021-06-11T15:11:50
2021-06-11T15:11:50
See: https://app.circleci.com/pipelines/github/huggingface/datasets/235/workflows/cfb6a39f-68eb-4802-8b17-2cd5e8ea7369/jobs/1111 ``` PermissionError: [WinError 32] The process cannot access the file because it is being used by another process ```
albertvillanova
https://github.com/huggingface/datasets/issues/2471
null
false
916,724,260
2,470
Crash when `num_proc` > dataset length for `map()` on a `datasets.Dataset`.
closed
[]
2021-06-09T22:40:22
2021-07-01T09:34:54
2021-07-01T09:11:13
## Describe the bug Crash when using `num_proc` > 1 (I used 16) for `map()` on a `datasets.Dataset`. I believe I've had cases where `num_proc` > 1 works before, but now it seems either inconsistent, or depends on my data. I'm not sure whether the issue is on my end, because it's difficult for me to debug! Any tips greatly appreciated, I'm happy to provide more info if it would help us diagnose. ## Steps to reproduce the bug ```python # this function will be applied with map() def tokenize_function(examples): return tokenizer( examples["text"], padding=PaddingStrategy.DO_NOT_PAD, truncation=True, ) # data_files is a Dict[str, str] mapping name -> path datasets = load_dataset("text", data_files={...}) # this is where the error happens if num_proc = 16, # but is fine if num_proc = 1 tokenized_datasets = datasets.map( tokenize_function, batched=True, num_proc=num_workers, ) ``` ## Expected results The `map()` function succeeds with `num_proc` > 1. ## Actual results ![image](https://user-images.githubusercontent.com/1170062/121404271-a6cc5200-c910-11eb-8e27-5c893bd04042.png) ![image](https://user-images.githubusercontent.com/1170062/121404362-be0b3f80-c910-11eb-9117-658943029aef.png) ## Environment info - `datasets` version: 1.6.2 - Platform: Linux-5.4.0-73-generic-x86_64-with-glibc2.31 - Python version: 3.9.5 - PyTorch version (GPU?): 1.8.1+cu111 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: Yes, but I think N/A for this issue - Using distributed or parallel set-up in script?: Multi-GPU on one machine, but I think also N/A for this issue
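The title's condition (`num_proc` greater than the dataset length) suggests a guard of this shape — a hedged sketch of the kind of clamp that would avoid spawning workers with empty shards, not the actual fix in `datasets`:

```python
def effective_num_proc(num_proc: int, num_rows: int) -> int:
    # With num_proc > num_rows some worker processes receive empty shards,
    # which can crash map(); clamp the worker count to the dataset length
    # (and never go below one worker).
    return max(1, min(num_proc, num_rows))
```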
mbforbes
https://github.com/huggingface/datasets/issues/2470
null
false
916,440,418
2,469
Bump tqdm version
closed
[]
2021-06-09T17:24:40
2021-06-11T15:03:42
2021-06-11T15:03:36
lewtun
https://github.com/huggingface/datasets/pull/2469
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2469", "html_url": "https://github.com/huggingface/datasets/pull/2469", "diff_url": "https://github.com/huggingface/datasets/pull/2469.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2469.patch", "merged_at": null }
true
916,427,320
2,468
Implement ClassLabel encoding in JSON loader
closed
[]
2021-06-09T17:08:54
2021-06-28T15:39:54
2021-06-28T15:05:35
Close #2365.
albertvillanova
https://github.com/huggingface/datasets/pull/2468
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2468", "html_url": "https://github.com/huggingface/datasets/pull/2468", "diff_url": "https://github.com/huggingface/datasets/pull/2468.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2468.patch", "merged_at": "2021-06-28T15:05:34" }
true
915,914,098
2,466
change udpos features structure
closed
[]
2021-06-09T08:03:31
2021-06-18T11:55:09
2021-06-16T10:41:37
The structure is changed so that each example is a sentence. The change addresses issues #2061 and #2444. Close #2061, close #2444.
cosmeowpawlitan
https://github.com/huggingface/datasets/pull/2466
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2466", "html_url": "https://github.com/huggingface/datasets/pull/2466", "diff_url": "https://github.com/huggingface/datasets/pull/2466.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2466.patch", "merged_at": "2021-06-16T10:41:37" }
true
915,525,071
2,465
adding masahaner dataset
closed
[]
2021-06-08T21:20:25
2021-06-14T14:59:05
2021-06-14T14:59:05
Adding Masakhane dataset https://github.com/masakhane-io/masakhane-ner @lhoestq , can you please review
dadelani
https://github.com/huggingface/datasets/pull/2465
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2465", "html_url": "https://github.com/huggingface/datasets/pull/2465", "diff_url": "https://github.com/huggingface/datasets/pull/2465.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2465.patch", "merged_at": "2021-06-14T14:59:05" }
true
915,485,601
2,464
fix: adjusting indexing for the labels.
closed
[]
2021-06-08T20:47:25
2021-06-09T10:15:46
2021-06-09T09:10:28
The label indices were mismatching the actual ones used in the dataset. Specifically `0` is used for `SUPPORTS` and `1` is used for `REFUTES`. After this change, the `README.md` now reflects the content of `dataset_infos.json`. Signed-off-by: Matteo Manica <drugilsberg@gmail.com>
drugilsberg
https://github.com/huggingface/datasets/pull/2464
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2464", "html_url": "https://github.com/huggingface/datasets/pull/2464", "diff_url": "https://github.com/huggingface/datasets/pull/2464.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2464.patch", "merged_at": "2021-06-09T09:10:28" }
true
915,454,788
2,463
Fix proto_qa download link
closed
[]
2021-06-08T20:23:16
2021-06-10T12:49:56
2021-06-10T08:31:10
Fixes #2459 Instead of updating the path, this PR fixes a commit hash as suggested by @lhoestq.
mariosasko
https://github.com/huggingface/datasets/pull/2463
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2463", "html_url": "https://github.com/huggingface/datasets/pull/2463", "diff_url": "https://github.com/huggingface/datasets/pull/2463.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2463.patch", "merged_at": "2021-06-10T08:31:09" }
true
915,384,613
2,462
Merge DatasetDict and Dataset
open
[]
2021-06-08T19:22:04
2023-08-16T09:34:34
null
As discussed in #2424 and #2437 (please see there for detailed conversation): - It would be desirable to improve UX with respect to the confusion between DatasetDict and Dataset. - The difference between Dataset and DatasetDict is an additional abstraction complexity that confuses "typical" end users. - A user expects a "Dataset" (whether it contains multiple splits or a single one) and maybe it could be interesting to try to simplify the user-facing API as much as possible to hide this complexity from the end user. Here is a proposal for discussion and refinement (and potential abandonment if it's not good enough): - let's consider that a DatasetDict is also a Dataset with the various splits concatenated one after the other - let's disallow the use of integers in split names (probably not a very big breaking change) - when you index with integers you access the examples progressively, one split after the other (in a deterministic order) - when you index with strings/split name you have the same behavior as now (full backward compat) - let's then also have all the methods of a Dataset on the DatasetDict The end goal would be to merge both Dataset and DatasetDict objects into a single object that would be (pretty much totally) backward compatible with both. There are a few things that we could discuss if we want to merge Dataset and DatasetDict: 1. what happens if you index by a string ? Does it return the column or the split ? We could disallow conflicts between column names and split names to avoid ambiguities. It can be surprising to be able to get a column or a split using the same indexing feature ``` from datasets import load_dataset dataset = load_dataset(...) dataset["train"] dataset["input_ids"] ``` 2. what happens when you iterate over the object ? I guess it should iterate over the examples as a Dataset object, but a DatasetDict used to iterate over the splits as they are the dictionary keys. This is a breaking change that we can discuss.
Moreover regarding your points: - integers are not allowed as split names already - it's definitely doable to have all the methods. Maybe some of them like train_test_split that is currently only available for Dataset can be tweaked to work for a split dataset cc: @thomwolf @lhoestq
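Point 1 of the proposal — disallowing conflicts between column names and split names — could be sketched like this (hypothetical helper, not an API proposal):

```python
def resolve_string_key(key, split_names, column_names):
    # A string index may name a split or a column, but never both:
    # rejecting name collisions up front keeps dataset[key] unambiguous.
    if key in split_names and key in column_names:
        raise KeyError(f"'{key}' is both a split and a column name; ambiguous")
    if key in split_names:
        return ("split", key)
    if key in column_names:
        return ("column", key)
    raise KeyError(key)
```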
albertvillanova
https://github.com/huggingface/datasets/issues/2462
null
false
915,286,150
2,461
Support sliced list arrays in cast
closed
[]
2021-06-08T17:38:47
2021-06-08T17:56:24
2021-06-08T17:56:23
There is this issue in pyarrow: ```python import pyarrow as pa arr = pa.array([[i * 10] for i in range(4)]) arr.cast(pa.list_(pa.int32())) # works arr = arr.slice(1) arr.cast(pa.list_(pa.int32())) # fails # ArrowNotImplementedError("Casting sliced lists (non-zero offset) not yet implemented") ``` However in `Dataset.cast` we slice tables to cast their types (it's memory intensive), so we have the same issue. Because of this it is currently not possible to cast a Dataset with a Sequence feature type (unless the table is small enough to not be sliced). In this PR I fixed this by resetting the offset of `pyarrow.ListArray` arrays to zero in the table before casting. I used `pyarrow.compute.subtract` function to update the offsets of the ListArray. cc @abhi1thakur @SBrandeis
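The offset-reset trick can be modeled in plain Python (a simplified model of list-array storage — the real fix operates on `pyarrow.ListArray` buffers via `pyarrow.compute.subtract`):

```python
def reset_list_offsets(offsets, values):
    # A list array stores all inner values in one flat buffer plus an
    # offsets array; slicing leaves a non-zero starting offset. "Resetting"
    # subtracts that first offset from every offset and drops the values
    # that precede it, so the array starts at offset zero and casts cleanly.
    start = offsets[0]
    return [o - start for o in offsets], values[start:]

# List i is reconstructed as values[offsets[i]:offsets[i + 1]], which is
# unchanged by the reset.
```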
lhoestq
https://github.com/huggingface/datasets/pull/2461
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2461", "html_url": "https://github.com/huggingface/datasets/pull/2461", "diff_url": "https://github.com/huggingface/datasets/pull/2461.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2461.patch", "merged_at": "2021-06-08T17:56:23" }
true
915,268,536
2,460
Revert default in-memory for small datasets
closed
[]
2021-06-08T17:14:23
2021-06-08T18:04:14
2021-06-08T17:55:43
Close #2458
albertvillanova
https://github.com/huggingface/datasets/pull/2460
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2460", "html_url": "https://github.com/huggingface/datasets/pull/2460", "diff_url": "https://github.com/huggingface/datasets/pull/2460.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2460.patch", "merged_at": "2021-06-08T17:55:43" }
true
915,222,015
2,459
`Proto_qa` hosting seems to be broken
closed
[]
2021-06-08T16:16:32
2021-06-10T08:31:09
2021-06-10T08:31:09
## Describe the bug
The hosting (on Github) of the `proto_qa` dataset seems broken. I haven't investigated more yet, just flagging it for now.
@zaidalyafeai if you want to dive into it, I think it's just a matter of changing the links in `proto_qa.py`

## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("proto_qa")
```

## Actual results
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/load.py", line 751, in load_dataset
    use_auth_token=use_auth_token,
  File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/builder.py", line 575, in download_and_prepare
    dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
  File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/builder.py", line 630, in _download_and_prepare
    split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
  File "/home/hf/.cache/huggingface/modules/datasets_modules/datasets/proto_qa/445346efaad5c5f200ecda4aa7f0fb50ff1b55edde3003be424a2112c3e8102e/proto_qa.py", line 131, in _split_generators
    train_fpath = dl_manager.download(_URLs[self.config.name]["train"])
  File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 199, in download
    num_proc=download_config.num_proc,
  File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 195, in map_nested
    return function(data_struct)
  File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 218, in _download
    return cached_path(url_or_filename, download_config=download_config)
  File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 291, in cached_path
    use_auth_token=download_config.use_auth_token,
  File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 621, in get_from_cache
    raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/iesl/protoqa-data/master/data/train/protoqa_train.jsonl
```
VictorSanh
https://github.com/huggingface/datasets/issues/2459
null
false
915,199,693
2,458
Revert default in-memory for small datasets
closed
[]
2021-06-08T15:51:41
2021-06-08T18:57:11
2021-06-08T17:55:43
Users are reporting issues and confusion about setting default in-memory to True for small datasets.

We see 2 clear use cases of Datasets:
- the "canonical" way, where you can work with very large datasets, as they are memory-mapped and cached (after every transformation)
- some edge cases (speed benchmarks, interactive/exploratory analysis, ...), where default in-memory can explicitly be enabled, and no caching will be done

After discussing with @lhoestq we have agreed to:
- revert this feature (implemented in #2182)
- explain in the docs how to optimize speed/performance by setting default in-memory

cc: @stas00 https://github.com/huggingface/datasets/pull/2409#issuecomment-856210552
albertvillanova
https://github.com/huggingface/datasets/issues/2458
null
false
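The resulting opt-in behavior can be summarized as a tiny decision function. This is an illustrative sketch only — `resolve_in_memory` and its parameter names are made up for this example and are not the `datasets` API:

```python
def resolve_in_memory(keep_in_memory, dataset_size, max_in_memory_size):
    """Decide whether to load a dataset in memory (illustrative names).

    keep_in_memory: the user's explicit choice, or None if unset.
    dataset_size: size of the serialized dataset in bytes.
    max_in_memory_size: opt-in threshold in bytes; 0 or None disables it.
    """
    if keep_in_memory is not None:
        return keep_in_memory  # an explicit user choice always wins
    if not max_in_memory_size:
        return False  # default after the revert: memory-map, don't keep in RAM
    return dataset_size <= max_in_memory_size
```

With the threshold disabled (the reverted default), even tiny datasets stay memory-mapped unless the user explicitly opts in.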