Column schema (types and observed min/max values; βŒ€ marks columns containing nulls):

| Column | Type | Min | Max |
|---|---|---|---|
| id | int64 | 599M | 3.48B |
| number | int64 | 1 | 7.8k |
| title | string (length) | 1 | 290 |
| state | string (2 classes) | | |
| comments | list (length) | 0 | 30 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 | 2025-10-05 06:37:50 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 | 2025-10-05 10:32:43 |
| closed_at | timestamp[s], nullable (βŒ€) | 2020-04-14 12:01:40 | 2025-10-01 13:56:03 |
| body | string (length), nullable (βŒ€) | 0 | 228k |
| user | string (length) | 3 | 26 |
| html_url | string (length) | 46 | 51 |
| pull_request | dict | | |
| is_pull_request | bool (2 classes) | | |
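The records below follow this schema. As a rough orientation only, here is a minimal sketch of how a dataset with these columns could be loaded and inspected with the `datasets` library; the repository id `user/datasets-issues` is a placeholder, not the actual location of this data.

```python
# Minimal sketch, assuming the data is hosted on the Hugging Face Hub;
# the repository id "user/datasets-issues" is a placeholder.
from datasets import load_dataset

ds = load_dataset("user/datasets-issues", split="train")
print(ds.features)  # id, number, title, state, comments, created_at, ..., is_pull_request

# Keep only pull requests and show the three most recently created ones.
prs = ds.filter(lambda row: row["is_pull_request"])
for row in prs.sort("created_at", reverse=True).select(range(3)):
    print(row["number"], row["title"], row["html_url"])
```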
1,287,941,058
4,590
Generalize meta_path json file creation in load.py [#4540]
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "@albertvillanova, Can you please review this PR for Issue #4540 ", "@lhoestq Thank you for merging the PR . Is there a slack channel for contributing to the datasets library. I would love to work on the library and make meaningful contributions.", "Hi ! Sure feel free to join our discord ^^ \r\nhttps://discuss.huggingface.co/t/join-the-hugging-face-discord/11263 so that we can discuss together mor eeasily. Otherwise everything happens on github ;)" ]
2022-06-28T21:48:06
2022-07-08T14:55:13
2022-07-07T13:17:45
# What does this PR do? ## Summary *In function `_copy_script_and_other_resources_in_importable_dir`, using string split when generating `meta_path` throws error in the edge case raised in #4540.* ## Additions - ## Changes - Changed meta_path to use `os.path.splitext` instead of using `str.split` to generalize code. ## Deletions - ## Issues Addressed : Fixes #4540
VijayKalmath
https://github.com/huggingface/datasets/pull/4590
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4590", "html_url": "https://github.com/huggingface/datasets/pull/4590", "diff_url": "https://github.com/huggingface/datasets/pull/4590.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4590.patch", "merged_at": "2022-07-07T13:17:44" }
true
1,287,600,029
4,589
Permission denied: '/home/.cache' when load_dataset with local script
closed
[]
2022-06-28T16:26:03
2022-06-29T06:26:28
2022-06-29T06:25:08
null
jiangh0
https://github.com/huggingface/datasets/issues/4589
null
false
1,287,368,751
4,588
Host head_qa data on the Hub and fix NonMatchingChecksumError
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hi @albertvillanova ! Thanks for the fix ;)\r\nCan I safely checkout from this branch to build `datasets` or it is preferable to wait until all CI tests pass?\r\nThanks πŸ™ ", "@younesbelkada we have just merged this PR." ]
2022-06-28T13:39:28
2022-07-05T16:01:15
2022-07-05T15:49:52
This PR: - Hosts head_qa data on the Hub instead of Google Drive - Fixes NonMatchingChecksumError Fix https://huggingface.co/datasets/head_qa/discussions/1
albertvillanova
https://github.com/huggingface/datasets/pull/4588
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4588", "html_url": "https://github.com/huggingface/datasets/pull/4588", "diff_url": "https://github.com/huggingface/datasets/pull/4588.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4588.patch", "merged_at": "2022-07-05T15:49:52" }
true
1,287,291,494
4,587
Validate new_fingerprint passed by user
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-06-28T12:46:21
2022-06-28T14:11:57
2022-06-28T14:00:44
Users can pass the dataset fingerprint they want in `map` and other dataset transforms. However the fingerprint is used to name cache files so we need to make sure it doesn't contain bad characters as mentioned in https://github.com/huggingface/datasets/issues/1718, and that it's not too long
lhoestq
https://github.com/huggingface/datasets/pull/4587
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4587", "html_url": "https://github.com/huggingface/datasets/pull/4587", "diff_url": "https://github.com/huggingface/datasets/pull/4587.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4587.patch", "merged_at": "2022-06-28T14:00:44" }
true
1,287,105,636
4,586
Host pn_summary data on the Hub instead of Google Drive
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-06-28T10:05:05
2022-06-28T14:52:56
2022-06-28T14:42:03
Fix #4581.
albertvillanova
https://github.com/huggingface/datasets/pull/4586
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4586", "html_url": "https://github.com/huggingface/datasets/pull/4586", "diff_url": "https://github.com/huggingface/datasets/pull/4586.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4586.patch", "merged_at": "2022-06-28T14:42:03" }
true
1,287,064,929
4,585
Host multi_news data on the Hub instead of Google Drive
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-06-28T09:32:06
2022-06-28T14:19:35
2022-06-28T14:08:48
Host data files of multi_news dataset on the Hub. They were on Google Drive. Fix #4580.
albertvillanova
https://github.com/huggingface/datasets/pull/4585
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4585", "html_url": "https://github.com/huggingface/datasets/pull/4585", "diff_url": "https://github.com/huggingface/datasets/pull/4585.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4585.patch", "merged_at": "2022-06-28T14:08:48" }
true
1,286,911,993
4,584
Add binary classification task IDs
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4584). All of your documentation changes will be reflected on that endpoint.", "> Awesome thanks ! Can you add it to https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts first please ? This is where we define the cross libraries tasks taxonomy ;)\r\n\r\nThanks for the tip! Done in https://github.com/huggingface/hub-docs/pull/217", "I don't think we need to update this file anymore. We should remove it IMO, and simply update the dataset [tagging app](https://huggingface.co/spaces/huggingface/datasets-tagging)", "I'm closing this PR." ]
2022-06-28T07:30:39
2023-09-24T10:04:04
2023-01-26T09:27:52
As a precursor to aligning the task IDs in `datasets` and AutoTrain, we need a way to distinguish binary vs multiclass vs multilabel classification. This PR adds binary classification to the task IDs to enable this. Related AutoTrain issue: https://github.com/huggingface/autonlp-backend/issues/597 cc @abhishekkrthakur @SBrandeis
lewtun
https://github.com/huggingface/datasets/pull/4584
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4584", "html_url": "https://github.com/huggingface/datasets/pull/4584", "diff_url": "https://github.com/huggingface/datasets/pull/4584.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4584.patch", "merged_at": null }
true
1,286,790,871
4,583
<code> implementation of FLAC support using torchaudio
closed
[]
2022-06-28T05:24:21
2022-06-28T05:47:02
2022-06-28T05:47:02
I had added Audio FLAC support with torchaudio given that Librosa and SoundFile can give problems. Also, FLAC is been used as audio from https://mlcommons.org/en/peoples-speech/
rafael-ariascalles
https://github.com/huggingface/datasets/pull/4583
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4583", "html_url": "https://github.com/huggingface/datasets/pull/4583", "diff_url": "https://github.com/huggingface/datasets/pull/4583.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4583.patch", "merged_at": null }
true
1,286,517,060
4,582
add_column should preserve _indexes
open
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4582). All of your documentation changes will be reflected on that endpoint." ]
2022-06-27T22:35:47
2022-07-06T15:19:54
null
https://github.com/huggingface/datasets/issues/3769#issuecomment-1167146126 doing `.add_column("x",x_data)` also removed any `_indexes` on the dataset, decided this shouldn't be the case. This was because `add_column` was creating a new `Dataset(...)` and wasn't possible to pass indexes on init. with this PR now can pass 'indexes' on init through `IndexableMixin` - [x] Added test
cceyda
https://github.com/huggingface/datasets/pull/4582
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4582", "html_url": "https://github.com/huggingface/datasets/pull/4582", "diff_url": "https://github.com/huggingface/datasets/pull/4582.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4582.patch", "merged_at": null }
true
1,286,362,907
4,581
Dataset Viewer issue for pn_summary
closed
[ "linked to https://github.com/huggingface/datasets/issues/4580#issuecomment-1168373066?", "Note that I refreshed twice this dataset, and I still have (another) error on one of the splits\r\n\r\n```\r\nStatus code: 400\r\nException: ClientResponseError\r\nMessage: 403, message='Forbidden', url=URL('https://doc-14-4c-docs.googleusercontent.com/docs/securesc/ha0ro937gcuc7l7deffksulhg5h7mbp1/pgotjmcuh77q0lk7p44rparfrhv459kp/1656403650000/11771870722949762109/*/16OgJ_OrfzUF_i3ftLjFn9kpcyoi7UJeO?e=download')\r\n```\r\n\r\nLike the three splits are processed in parallel by the workers, I imagine that the Google hosting is rate-limiting us.\r\n\r\ncc @albertvillanova \r\n\r\n", "Exactly, Google Drive bans our loading scripts.\r\n\r\nWhen possible, we should host somewhere else." ]
2022-06-27T20:56:12
2022-06-28T14:42:03
2022-06-28T14:42:03
### Link https://huggingface.co/datasets/pn_summary/viewer/1.0.0/validation ### Description Getting an index error on the `validation` and `test` splits: ``` Server error Status code: 400 Exception: IndexError Message: list index out of range ``` ### Owner No
lewtun
https://github.com/huggingface/datasets/issues/4581
null
false
1,286,312,912
4,580
Dataset Viewer issue for multi_news
closed
[ "Thanks for reporting, @lewtun.\r\n\r\nI forced the refreshing of the preview and it worked OK for train and validation splits.\r\n\r\nI guess the error has to do with the data files being hosted at Google Drive: this gives errors when requested automatically using scripts.\r\nWe should host them to fix the error. Let's see if the license allows that.", "I guess we can host the data: https://github.com/Alex-Fabbri/Multi-News/blob/master/LICENSE.txt" ]
2022-06-27T20:25:25
2022-06-28T14:08:48
2022-06-28T14:08:48
### Link https://huggingface.co/datasets/multi_news ### Description Not sure what the index error is referring to here: ``` Status code: 400 Exception: IndexError Message: list index out of range ``` ### Owner No
lewtun
https://github.com/huggingface/datasets/issues/4580
null
false
1,286,106,285
4,579
Support streaming cfq dataset
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "@lhoestq I've been refactoring a little the code:\r\n- Use less RAM by loading only the required samples: only if its index is in the splits file\r\n- Start yielding \"earlier\" in streaming mode: for each `split_idx`:\r\n - either yield from buffer\r\n - or iterate over samples and either yield or buffer the sample\r\n \r\n The speed gain obviously depends on how the indexes are sorted in the split file:\r\n - Best case: indices are [1, 2, 3]\r\n - Worst case (no speed gain): indices are [3, 1, 2] or [3, 2, 1]\r\n\r\nLet me know what you think.", "I have to update the dummy data so that it aligns with the real data (inside the archive, the samples file `dataset.json` is the last member).", "There is an issue when testing `test_load_dataset_cfq` with dummy data:\r\n- `MockDownloadManager.iter_archive` yields FIRST `'cfq/dataset.json'`\r\n- [`Streaming`]`DownloadManager.iter_archive` yields LAST `'cfq/dataset.json'` when using real data tar.gz archive\r\n\r\nNote that this issue arises only with dummy data: loading the real dataset works smoothly for all configurations: I recreated the `dataset_infos.json` file to check it (it generated the same file).", "This PR should be merged first:\r\n- #4611", "Impressive, thank you ! :o \r\n\r\nfeel free to merge master into this branch, now that the files order is respected. You can merge if the CI is green :)" ]
2022-06-27T17:11:23
2022-07-04T19:35:01
2022-07-04T19:23:57
Support streaming cfq dataset.
albertvillanova
https://github.com/huggingface/datasets/pull/4579
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4579", "html_url": "https://github.com/huggingface/datasets/pull/4579", "diff_url": "https://github.com/huggingface/datasets/pull/4579.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4579.patch", "merged_at": "2022-07-04T19:23:57" }
true
1,286,086,400
4,578
[Multi Configs] Use directories to differentiate between subsets/configurations
open
[ "I want to be able to create folders in a model.", "How to set new split names, instead of train/test/validation? For example, I have a local dataset, consists of several subsets, named \"A\", \"B\", and \"C\". How can I create a huggingface dataset, with splits A/B/C ?\r\n\r\nThe document in https://huggingface.co/docs/datasets/dataset_script only tells me how to create datasets with subsets that is hosted on another server. How to do it if my datasets are local?", "> The document in https://huggingface.co/docs/datasets/dataset_script only tells me how to create datasets with subsets that is hosted on another server. How to do it if my datasets are local?\r\n\r\nIt works the same - you just need to use local paths instead of URLs" ]
2022-06-27T16:55:11
2023-06-14T15:43:05
null
Currently to define several subsets/configurations of your dataset, you need to use a dataset script. However it would be nice to have a no-code way to to this. For example we could specify different configurations of a dataset (for example, if a dataset contains different languages) with one directory per configuration. These structures are not supported right now, but would be nice to have: ``` my_dataset_repository/ β”œβ”€β”€ README.md β”œβ”€β”€ en/ β”‚ β”œβ”€β”€ train.csv β”‚ └── test.csv └── fr/ β”œβ”€β”€ train.csv └── test.csv ``` Or with one directory per split: ``` my_dataset_repository/ β”œβ”€β”€ README.md β”œβ”€β”€ en/ β”‚ β”œβ”€β”€ train/ β”‚ β”‚ β”œβ”€β”€ shard_0.csv β”‚ β”‚ └── shard_1.csv β”‚ └── test/ β”‚ β”œβ”€β”€ shard_0.csv β”‚ └── shard_1.csv └── fr/ β”œβ”€β”€ train/ β”‚ β”œβ”€β”€ shard_0.csv β”‚ └── shard_1.csv └── test/ β”œβ”€β”€ shard_0.csv └── shard_1.csv ``` cc @stevhliu @albertvillanova This can be specified in the README as YAML with ``` configs: - config_name: en data_dir: en - config_name: fr data_dir: fr ```
lhoestq
https://github.com/huggingface/datasets/issues/4578
null
false
1,285,703,775
4,577
Add authentication tip to `load_dataset`
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-06-27T12:05:34
2022-07-04T13:13:15
2022-07-04T13:01:30
Add an authentication tip similar to the one in transformers' `PreTrainedModel.from_pretrained` to `load_dataset`/`load_dataset_builder`.
mariosasko
https://github.com/huggingface/datasets/pull/4577
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4577", "html_url": "https://github.com/huggingface/datasets/pull/4577", "diff_url": "https://github.com/huggingface/datasets/pull/4577.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4577.patch", "merged_at": "2022-07-04T13:01:30" }
true
1,285,698,576
4,576
Include `metadata.jsonl` in resolved data files
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "I still don't know if the way we implemented data files resolution could support the metadata.jsonl file without bad side effects for the other packaged builders. In particular here if you have a folder of csv/parquet/whatever files and a metadata.jsonl file, it would return \r\n```\r\nsplit: patterns_dict[split] + [METADATA_PATTERN]\r\n```\r\nwhich is a bit unexpected and can lead to errors.\r\n\r\nMaybe this logic can be specific to imagefolder somehow ? This could be an additional pattern `[\"metadata.jsonl\", \"**/metadata.jsonl\"]` just for imagefolder, that is only used when `data_files=` is not specified by the user.\r\n\r\nI guess it's ok to have patterns that lead to duplicate metadata.jsonl files for imagefolder, since the imagefolder logic only considers the closest metadata file for each image.\r\n\r\nWhat do you think ?", "Yes, that's indeed the problem. My solution in https://github.com/huggingface/datasets/commit/4d20618ea7a19bc143ddc5fdff9d79e671fcbb95 that accounts for that (include metadata files only if image files are present; not ideal): https://github.com/huggingface/datasets/blob/4d20618ea7a19bc143ddc5fdff9d79e671fcbb95/src/datasets/data_files.py#L119-L125.\r\nPerhaps a cleaner approach would be to check for metadata files after the packaged module type is inferred as `imagefolder` and append metadata files to already resolved data files (if there are any). WDYT?", "@lhoestq \r\n\r\n> Perhaps a cleaner approach would be to check for metadata files after the packaged module type is inferred as imagefolder and append metadata files to already resolved data files (if there are any). WDYT?\r\n\r\nI decided to go with this approach.\r\n\r\n Not sure if you meant the same thing with this comment:\r\n\r\n> Maybe this logic can be specific to imagefolder somehow ? This could be an additional pattern [\"metadata.jsonl\", \"**/metadata.jsonl\"] just for imagefolder, that is only used when data_files= is not specified by the user.\r\n\r\n\r\nIt adds more code but is easy to follow IMO.\r\n", "The CI still struggles but you can merge since at least one of the two WIN CI succeeded" ]
2022-06-27T12:01:29
2022-07-01T12:44:55
2022-06-30T10:15:32
Include `metadata.jsonl` in resolved data files. Fix #4548 @lhoestq ~~https://github.com/huggingface/datasets/commit/d94336d30eef17fc9abc67f67fa1c139661f4e75 adds support for metadata files placed at the root, and https://github.com/huggingface/datasets/commit/4d20618ea7a19bc143ddc5fdff9d79e671fcbb95 accounts for nested metadata files also, but this results in more complex code. Let me know which one of these two approaches you prefer.~~ Maybe https://github.com/huggingface/datasets/commit/d94336d30eef17fc9abc67f67fa1c139661f4e75 is good enough for now (for the sake of simplicity). https://github.com/huggingface/datasets/commit/4d20618ea7a19bc143ddc5fdff9d79e671fcbb95 breaks the imagefolder tests due to duplicates in the resolved metadata files. One way to fix this would be to resolve the metadata pattern only on parent directories, but this adds even more logic to `_get_data_files_patterns`, so not sure if this is what we should do.
mariosasko
https://github.com/huggingface/datasets/pull/4576
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4576", "html_url": "https://github.com/huggingface/datasets/pull/4576", "diff_url": "https://github.com/huggingface/datasets/pull/4576.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4576.patch", "merged_at": "2022-06-30T10:15:31" }
true
1,285,446,700
4,575
Problem about wmt17 zh-en dataset
closed
[ "Running into the same error with `wmt17/zh-en`, `wmt18/zh-en` and `wmt19/zh-en`.", "@albertvillanova @lhoestq Could you take a look at this issue?", "@winterfell2021 Hi, I wonder where the code you provided should be added. I tried to add them in the `datasets/table.py` in `array_cast` function, however, the 'zh' item is none.", "I found some 'zh' item is none while 'c[hn]' is not.\r\nSo the code may change to:\r\n```python\r\nif 'c[hn]' in str(array.type):\r\n py_array = array.to_pylist()\r\n data_list = []\r\n for vo in py_array:\r\n tmp = {\r\n 'en': vo['en'],\r\n }\r\n if vo.get('zh'):\r\n tmp['zh'] = vo['zh']\r\n else:\r\n tmp['zh'] = vo['c[hn]']\r\n data_list.append(tmp)\r\n array = pa.array(data_list, type=pa.struct([\r\n pa.field('en', pa.string()),\r\n pa.field('zh', pa.string()),\r\n ]))\r\n```", "I just pushed a fix, we'll do a new release of `datasets` soon to include this fix. In the meantime you can use the fixed dataset by passing `revision=\"main\"` to `load_dataset`" ]
2022-06-27T08:35:42
2022-08-23T10:01:02
2022-08-23T10:00:21
It seems that in subset casia2015, some samples are like `{'c[hn]':'xxx', 'en': 'aa'}`. So when using `data = load_dataset('wmt17', "zh-en")` to load the wmt17 zh-en dataset, which will raise the exception: ``` Traceback (most recent call last): File "train.py", line 78, in <module> data = load_dataset(args.dataset, "zh-en") File "/usr/local/lib/python3.7/dist-packages/datasets/load.py", line 1684, in load_dataset use_auth_token=use_auth_token, File "/usr/local/lib/python3.7/dist-packages/datasets/builder.py", line 705, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/usr/local/lib/python3.7/dist-packages/datasets/builder.py", line 1221, in _download_and_prepare super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos) File "/usr/local/lib/python3.7/dist-packages/datasets/builder.py", line 793, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/usr/local/lib/python3.7/dist-packages/datasets/builder.py", line 1215, in _prepare_split num_examples, num_bytes = writer.finalize() File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py", line 533, in finalize self.write_examples_on_file() File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py", line 410, in write_examples_on_file self.write_batch(batch_examples=batch_examples) File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py", line 503, in write_batch arrays.append(pa.array(typed_sequence)) File "pyarrow/array.pxi", line 230, in pyarrow.lib.array File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py", line 198, in __arrow_array__ out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type) File "/usr/local/lib/python3.7/dist-packages/datasets/table.py", line 1675, in wrapper return func(array, *args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/datasets/table.py", line 1846, in cast_array_to_feature return array_cast(array, feature(), allow_number_to_str=allow_number_to_str) File "/usr/local/lib/python3.7/dist-packages/datasets/table.py", line 1675, in wrapper return func(array, *args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/datasets/table.py", line 1756, in array_cast raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{pa_type}") TypeError: Couldn't cast array of type struct<c[hn]: string, en: string, zh: string> to struct<en: string, zh: string> ``` So the solution of this problem is to change the original array manually: ``` if 'c[hn]' in str(array.type): py_array = array.to_pylist() data_list = [] for vo in py_array: tmp = { 'en': vo['en'], } if 'zh' not in vo: tmp['zh'] = vo['c[hn]'] else: tmp['zh'] = vo['zh'] data_list.append(tmp) array = pa.array(data_list, type=pa.struct([ pa.field('en', pa.string()), pa.field('zh', pa.string()), ])) ``` Therefore, maybe a correct version of original casia2015 file need to be updated
winterfell2021
https://github.com/huggingface/datasets/issues/4575
null
false
1,285,380,616
4,574
Support streaming mlsum dataset
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "After unpinning `s3fs` and pinning `fsspec[http]>=2021.11.1`, the CI installs\r\n- `fsspec-2022.1.0`\r\n- `s3fs-0.5.1`\r\n\r\nand raises the following error:\r\n```\r\n ImportError while loading conftest '/home/runner/work/datasets/datasets/tests/conftest.py'.\r\ntests/conftest.py:13: in <module>\r\n import datasets\r\n/opt/hostedtoolcache/Python/3.6.15/x64/lib/python3.6/site-packages/datasets/__init__.py:37: in <module>\r\n from .arrow_dataset import Dataset\r\n/opt/hostedtoolcache/Python/3.6.15/x64/lib/python3.6/site-packages/datasets/arrow_dataset.py:62: in <module>\r\n from .arrow_reader import ArrowReader\r\n/opt/hostedtoolcache/Python/3.6.15/x64/lib/python3.6/site-packages/datasets/arrow_reader.py:29: in <module>\r\n from .download.download_config import DownloadConfig\r\n/opt/hostedtoolcache/Python/3.6.15/x64/lib/python3.6/site-packages/datasets/download/__init__.py:10: in <module>\r\n from .streaming_download_manager import StreamingDownloadManager\r\n/opt/hostedtoolcache/Python/3.6.15/x64/lib/python3.6/site-packages/datasets/download/streaming_download_manager.py:20: in <module>\r\n from ..filesystems import COMPRESSION_FILESYSTEMS\r\n/opt/hostedtoolcache/Python/3.6.15/x64/lib/python3.6/site-packages/datasets/filesystems/__init__.py:13: in <module>\r\n from .s3filesystem import S3FileSystem # noqa: F401\r\n/opt/hostedtoolcache/Python/3.6.15/x64/lib/python3.6/site-packages/datasets/filesystems/s3filesystem.py:1: in <module>\r\n import s3fs\r\n/opt/hostedtoolcache/Python/3.6.15/x64/lib/python3.6/site-packages/s3fs/__init__.py:1: in <module>\r\n from .core import S3FileSystem, S3File\r\n/opt/hostedtoolcache/Python/3.6.15/x64/lib/python3.6/site-packages/s3fs/core.py:12: in <module>\r\n from fsspec.asyn import AsyncFileSystem, sync, sync_wrapper, maybe_sync\r\nE ImportError: cannot import name 'maybe_sync'\r\n```\r\n\r\nThe installed `s3fs` version is too old. What about pinning a min version?", "Maybe you can try setting the same minimum version as fsspec ? `s3fs>=2021.11.1`", "Yes, I have checked that they both require to have the same version. \r\n\r\nThe issue then was coming from aiobotocore, boto3, botocore. I have changed them from strict to min version requirements.\r\n> s3fs 2021.11.1 depends on aiobotocore~=2.0.1", "I have updated all min versions so that they are compatible one with each other. I'm pushing again...", "Thanks !", "Nice!" ]
2022-06-27T07:37:03
2022-07-21T13:37:30
2022-07-21T12:40:00
Support streaming mlsum dataset. This PR: - pins `fsspec` min version with fixed BlockSizeError: `fsspec[http]>=2021.11.1` - https://github.com/fsspec/filesystem_spec/pull/830 - unpins `s3fs==2021.08.1` to align it with `fsspec` requirement: `s3fs>=2021.11.1` > s3fs 2021.8.1 requires fsspec==2021.08.1 - see discussion: https://github.com/huggingface/datasets/pull/2858/files#r700027326 - updates the following requirements to be compatible with the previous ones and one with each other: - `aiobotocore==1.4.2` to `aiobotocore>=2.0.1` (required by s3fs>=2021.11.1) - `boto3==1.17.106` to `boto3>=1.19.8` (to be compatible with aiobotocore>=2.0.1) - `botocore==1.20.106` to `botocore>=1.22.8` (to be compatible with aiobotocore and boto3) Fix #4572.
albertvillanova
https://github.com/huggingface/datasets/pull/4574
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4574", "html_url": "https://github.com/huggingface/datasets/pull/4574", "diff_url": "https://github.com/huggingface/datasets/pull/4574.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4574.patch", "merged_at": "2022-07-21T12:40:00" }
true
1,285,023,629
4,573
Fix evaluation metadata for ncbi_disease
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "As discussed with @lewtun, we are closing this PR, because it requires first the task names to be aligned between AutoTrain and datasets." ]
2022-06-26T20:29:32
2023-09-24T09:35:07
2022-09-23T09:38:02
This PR fixes the task in the evaluation metadata and removes the metrics info as we've decided this is not a great way to propagate this information downstream.
lewtun
https://github.com/huggingface/datasets/pull/4573
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4573", "html_url": "https://github.com/huggingface/datasets/pull/4573", "diff_url": "https://github.com/huggingface/datasets/pull/4573.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4573.patch", "merged_at": null }
true
1,285,022,499
4,572
Dataset Viewer issue for mlsum
closed
[ "Thanks for reporting, @lewtun.\r\n\r\nAfter investigation, it seems that the server https://gitlab.lip6.fr does not allow HTTP Range requests.\r\n\r\nWe are trying to find a workaround..." ]
2022-06-26T20:24:17
2022-07-21T12:40:01
2022-07-21T12:40:01
### Link https://huggingface.co/datasets/mlsum/viewer/de/train ### Description There's seems to be a problem with the download / streaming of this dataset: ``` Server error Status code: 400 Exception: BadZipFile Message: File is not a zip file ``` ### Owner No
lewtun
https://github.com/huggingface/datasets/issues/4572
null
false
1,284,883,289
4,571
move under the facebook org?
open
[ "Related to https://github.com/huggingface/datasets/issues/4562#issuecomment-1166911751\r\n\r\nI'll assign @albertvillanova ", "I'm just wondering why we don't have this dataset under:\r\n- the `facebook` namespace\r\n- or the canonical dataset `flores`: why does this only have 2 languages?", "fwiw: the dataset viewer is working. Renaming the issue" ]
2022-06-26T11:19:09
2023-09-25T12:05:18
null
### Link https://huggingface.co/datasets/gsarti/flores_101 ### Description It seems like streaming isn't supported for this dataset: ``` Server Error Status code: 400 Exception: NotImplementedError Message: Extraction protocol for TAR archives like 'https://dl.fbaipublicfiles.com/flores101/dataset/flores101_dataset.tar.gz' is not implemented in streaming mode. Please use `dl_manager.iter_archive` instead. ``` ### Owner No
lewtun
https://github.com/huggingface/datasets/issues/4571
null
false
1,284,846,168
4,570
Dataset sharding non-contiguous?
closed
[ "This was silly; I was sure I'd looked for a `contiguous` argument, and was certain there wasn't one the first time I looked :smile:\r\n\r\nSorry about that.", "Hi! You can pass `contiguous=True` to `.shard()` get contiguous shards. More info on this and the default behavior can be found in the [docs](https://huggingface.co/docs/datasets/v2.3.2/en/package_reference/main_classes#datasets.Dataset.shard).\r\n\r\nEDIT: Answered as you closed the thread πŸ˜„ ", "Hahaha I'm sorry; my excuse is: it's Sunday. (Which makes me all the more grateful for your response :smiley: ", "@mariosasko Sorry for reviving this, but I was curious as to why `contiguous=False` was the default. This might be a personal bias, but I feel that a user would expect the opposite to be the default. :thinking: ", "This project started as a fork of TFDS, and `contiguous=False` is the default behavior [there](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#shard)." ]
2022-06-26T08:34:05
2022-06-30T11:00:47
2022-06-26T14:36:20
## Describe the bug I'm not sure if this is a bug; more likely normal behavior but i wanted to double check. Is it normal that `datasets.shard` does not produce chunks that, when concatenated produce the original ordering of the sharded dataset? This might be related to this pull request (https://github.com/huggingface/datasets/pull/4466) but I have to admit I did not properly look into the changes made. ## Steps to reproduce the bug ```python max_shard_size = convert_file_size_to_int('300MB') dataset_nbytes = dataset.data.nbytes num_shards = int(dataset_nbytes / max_shard_size) + 1 num_shards = max(num_shards, 1) print(f"{num_shards=}") for shard_index in range(num_shards): shard = dataset.shard(num_shards=num_shards, index=shard_index) shard.to_parquet(f"tokenized/tokenized-{shard_index:03d}.parquet") os.listdir('tokenized/') ``` ## Expected results I expected the shards to match the order of the data of the original dataset; i.e. `dataset[10]` being the same as `shard_1[10]` for example ## Actual results Only the first element is the same; i.e. `dataset[0]` is the same as `shard_1[0]` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.3.2 - Platform: Linux-4.15.0-176-generic-x86_64-with-glibc2.31 - Python version: 3.10.4 - PyArrow version: 8.0.0 - Pandas version: 1.4.2
cakiki
https://github.com/huggingface/datasets/issues/4570
null
false
1,284,833,694
4,569
Dataset Viewer issue for sst2
closed
[ "Hi @lewtun, thanks for reporting.\r\n\r\nI have checked locally and refreshed the preview and it seems working smooth now:\r\n```python\r\nIn [8]: ds\r\nOut[8]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['idx', 'sentence', 'label'],\r\n num_rows: 67349\r\n })\r\n validation: Dataset({\r\n features: ['idx', 'sentence', 'label'],\r\n num_rows: 872\r\n })\r\n test: Dataset({\r\n features: ['idx', 'sentence', 'label'],\r\n num_rows: 1821\r\n })\r\n})\r\n```\r\n\r\nCould you confirm? ", "Thanks @albertvillanova - it is indeed working now (not sure what caused the error in the first place). Closing this :)" ]
2022-06-26T07:32:54
2022-06-27T06:37:48
2022-06-27T06:37:48
### Link https://huggingface.co/datasets/sst2 ### Description Not sure what is causing this, however it seems that `load_dataset("sst2")` also hangs (even though it downloads the files without problem): ``` Status code: 400 Exception: Exception Message: Give up after 5 attempts with ConnectionError ``` ### Owner No
lewtun
https://github.com/huggingface/datasets/issues/4569
null
false
1,284,655,624
4,568
XNLI cache reload is very slow
closed
[ "Hi,\r\nCould you tell us how you are running this code?\r\nI tested on my machine (M1 Mac). And it is running fine both on and off internet.\r\n\r\n<img width=\"1033\" alt=\"Screen Shot 2022-07-03 at 1 32 25 AM\" src=\"https://user-images.githubusercontent.com/8711912/177026364-4ad7cedb-e524-4513-97f7-7961bbb34c90.png\">\r\nTested on both stable and dev version. ", "Sure, I was running it on a Linux machine.\r\nI found that if I turn the Internet off, it would still try to make a HTTPS call which would slow down the cache loading. If you can't reproduce then we can close the issue.", "Hi @Muennighoff! You can set the env variable `HF_DATASETS_OFFLINE` to `1` to avoid this behavior in offline mode. More info is available [here](https://huggingface.co/docs/datasets/master/en/loading#offline)." ]
2022-06-25T16:43:56
2022-07-04T14:29:40
2022-07-04T14:29:40
### Reproduce Using `2.3.3.dev0` `from datasets import load_dataset` `load_dataset("xnli", "en")` Turn off Internet `load_dataset("xnli", "en")` I cancelled the second `load_dataset` eventually cuz it took super long. It would be great to have something to specify e.g. `only_load_from_cache` and avoid the library trying to download when there is no Internet. If I leave it running it works but takes way longer than when there is Internet. I would expect loading from cache to take the same amount of time regardless of whether there is Internet. ``` --------------------------------------------------------------------------- gaierror Traceback (most recent call last) /opt/conda/lib/python3.7/site-packages/urllib3/connection.py in _new_conn(self) 174 conn = connection.create_connection( --> 175 (self._dns_host, self.port), self.timeout, **extra_kw 176 ) /opt/conda/lib/python3.7/site-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options) 71 ---> 72 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 73 af, socktype, proto, canonname, sa = res /opt/conda/lib/python3.7/socket.py in getaddrinfo(host, port, family, type, proto, flags) 751 addrlist = [] --> 752 for res in _socket.getaddrinfo(host, port, family, type, proto, flags): 753 af, socktype, proto, canonname, sa = res gaierror: [Errno -3] Temporary failure in name resolution During handling of the above exception, another exception occurred: KeyboardInterrupt Traceback (most recent call last) /tmp/ipykernel_33/3594208039.py in <module> ----> 1 load_dataset("xnli", "en") /opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) 1673 revision=revision, 1674 use_auth_token=use_auth_token, -> 1675 **config_kwargs, 1676 ) 1677 /opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs) 1494 download_mode=download_mode, 1495 data_dir=data_dir, -> 1496 data_files=data_files, 1497 ) 1498 /opt/conda/lib/python3.7/site-packages/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_dir, data_files, **download_kwargs) 1182 download_config=download_config, 1183 download_mode=download_mode, -> 1184 dynamic_modules_path=dynamic_modules_path, 1185 ).get_module() 1186 elif path.count("/") == 1: # community dataset on the Hub /opt/conda/lib/python3.7/site-packages/datasets/load.py in __init__(self, name, revision, download_config, download_mode, dynamic_modules_path) 506 self.dynamic_modules_path = dynamic_modules_path 507 assert self.name.count("/") == 0 --> 508 increase_load_count(name, resource_type="dataset") 509 510 def download_loading_script(self, revision: Optional[str]) -> str: /opt/conda/lib/python3.7/site-packages/datasets/load.py in increase_load_count(name, resource_type) 166 if not config.HF_DATASETS_OFFLINE and config.HF_UPDATE_DOWNLOAD_COUNTS: 167 try: --> 168 head_hf_s3(name, filename=name + ".py", dataset=(resource_type == "dataset")) 169 except Exception: 170 pass /opt/conda/lib/python3.7/site-packages/datasets/utils/file_utils.py in head_hf_s3(identifier, filename, use_cdn, dataset, max_retries) 93 return http_head( 94 
hf_bucket_url(identifier=identifier, filename=filename, use_cdn=use_cdn, dataset=dataset), ---> 95 max_retries=max_retries, 96 ) 97 /opt/conda/lib/python3.7/site-packages/datasets/utils/file_utils.py in http_head(url, proxies, headers, cookies, allow_redirects, timeout, max_retries) 445 allow_redirects=allow_redirects, 446 timeout=timeout, --> 447 max_retries=max_retries, 448 ) 449 return response /opt/conda/lib/python3.7/site-packages/datasets/utils/file_utils.py in _request_with_retry(method, url, max_retries, base_wait_time, max_wait_time, timeout, **params) 366 tries += 1 367 try: --> 368 response = requests.request(method=method.upper(), url=url, timeout=timeout, **params) 369 success = True 370 except (requests.exceptions.ConnectTimeout, requests.exceptions.ConnectionError) as err: /opt/conda/lib/python3.7/site-packages/requests/api.py in request(method, url, **kwargs) 59 # cases, and look like a memory leak in others. 60 with sessions.Session() as session: ---> 61 return session.request(method=method, url=url, **kwargs) 62 63 /opt/conda/lib/python3.7/site-packages/requests/sessions.py in request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json) 527 } 528 send_kwargs.update(settings) --> 529 resp = self.send(prep, **send_kwargs) 530 531 return resp /opt/conda/lib/python3.7/site-packages/requests/sessions.py in send(self, request, **kwargs) 643 644 # Send the request --> 645 r = adapter.send(request, **kwargs) 646 647 # Total elapsed time of the request (approximately) /opt/conda/lib/python3.7/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 448 decode_content=False, 449 retries=self.max_retries, --> 450 timeout=timeout 451 ) 452 /opt/conda/lib/python3.7/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 708 body=body, 709 headers=headers, --> 710 chunked=chunked, 711 ) 712 /opt/conda/lib/python3.7/site-packages/urllib3/connectionpool.py in _make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw) 384 # Trigger any extra validation we need to do. 385 try: --> 386 self._validate_conn(conn) 387 except (SocketTimeout, BaseSSLError) as e: 388 # Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout. /opt/conda/lib/python3.7/site-packages/urllib3/connectionpool.py in _validate_conn(self, conn) 1038 # Force connect early to allow us to validate the connection. 1039 if not getattr(conn, "sock", None): # AppEngine might not have `.sock` -> 1040 conn.connect() 1041 1042 if not conn.is_verified: /opt/conda/lib/python3.7/site-packages/urllib3/connection.py in connect(self) 356 def connect(self): 357 # Add certificate verification --> 358 self.sock = conn = self._new_conn() 359 hostname = self.host 360 tls_in_tls = False /opt/conda/lib/python3.7/site-packages/urllib3/connection.py in _new_conn(self) 173 try: 174 conn = connection.create_connection( --> 175 (self._dns_host, self.port), self.timeout, **extra_kw 176 ) 177 KeyboardInterrupt: ```
Muennighoff
https://github.com/huggingface/datasets/issues/4568
null
false
1,284,528,474
4,567
Add evaluation data for amazon_reviews_multi
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "As discussed with @lewtun, we are closing this PR, because it requires first the task names to be aligned between AutoTrain and datasets." ]
2022-06-25T09:40:52
2023-09-24T09:35:22
2022-09-23T09:37:23
null
lewtun
https://github.com/huggingface/datasets/pull/4567
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4567", "html_url": "https://github.com/huggingface/datasets/pull/4567", "diff_url": "https://github.com/huggingface/datasets/pull/4567.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4567.patch", "merged_at": null }
true
1,284,397,594
4,566
Document link #load_dataset_enhancing_performance points to nowhere
closed
[ "Hi! This is indeed the link the docstring should point to. Are you interested in submitting a PR to fix this?", "https://github.com/huggingface/datasets/blame/master/docs/source/cache.mdx#L93\r\n\r\nThere seems already an anchor here. Somehow it doesn't work. I am not very familiar with how this online documentation works." ]
2022-06-25T01:18:19
2023-01-24T16:33:40
2023-01-24T16:33:40
## Describe the bug A clear and concise description of what the bug is. ![image](https://user-images.githubusercontent.com/11674033/175752806-5b066b92-9d28-4771-9112-5c8606f07741.png) The [load_dataset_enhancing_performance](https://huggingface.co/docs/datasets/v2.3.2/en/package_reference/main_classes#load_dataset_enhancing_performance) link [here](https://huggingface.co/docs/datasets/v2.3.2/en/package_reference/main_classes#datasets.Dataset.load_from_disk.keep_in_memory) points to nowhere, I guess it should point to https://huggingface.co/docs/datasets/v2.3.2/en/cache#improve-performance?
subercui
https://github.com/huggingface/datasets/issues/4566
null
false
1,284,141,666
4,565
Add UFSC OCPap dataset
closed
[ "I will add this directly on the hub (same as #4486)β€”in https://huggingface.co/lapix" ]
2022-06-24T20:07:54
2022-07-06T19:03:02
2022-07-06T19:03:02
## Adding a Dataset - **Name:** UFSC OCPap: Papanicolaou Stained Oral Cytology Dataset (v4) - **Description:** The UFSC OCPap dataset comprises 9,797 labeled images of 1200x1600 pixels acquired from 5 slides of cancer diagnosed and 3 healthy of oral brush samples, from distinct patients. - **Paper:** https://dx.doi.org/10.2139/ssrn.4119212 - **Data:** https://data.mendeley.com/datasets/dr7ydy9xbk/1 - **Motivation:** real data of pap stained oral cytology samples Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
johnnv1
https://github.com/huggingface/datasets/issues/4565
null
false
1,283,932,333
4,564
Support streaming bookcorpus dataset
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-06-24T16:13:39
2022-07-06T09:34:48
2022-07-06T09:23:04
Support streaming bookcorpus dataset.
albertvillanova
https://github.com/huggingface/datasets/pull/4564
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4564", "html_url": "https://github.com/huggingface/datasets/pull/4564", "diff_url": "https://github.com/huggingface/datasets/pull/4564.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4564.patch", "merged_at": "2022-07-06T09:23:04" }
true
1,283,914,383
4,563
Support streaming allocine dataset
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-06-24T15:55:03
2022-06-24T16:54:57
2022-06-24T16:44:41
Support streaming allocine dataset. Fix #4562.
albertvillanova
https://github.com/huggingface/datasets/pull/4563
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4563", "html_url": "https://github.com/huggingface/datasets/pull/4563", "diff_url": "https://github.com/huggingface/datasets/pull/4563.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4563.patch", "merged_at": "2022-06-24T16:44:41" }
true
1,283,779,557
4,562
Dataset Viewer issue for allocine
closed
[ "I removed my assignment as @huggingface/datasets should be able to answer better than me\r\n", "Let me have a look...", "Thanks for the quick fix @albertvillanova ", "Note that the underlying issue is that datasets containing TAR files are not streamable out of the box: they need being iterated with `dl_manager.iter_archive` to avoid performance issues because they access their file content *sequentially* (no random access).", "> Note that the underlying issue is that datasets containing TAR files are not streamable out of the box: they need being iterated with `dl_manager.iter_archive` to avoid performance issues because they access their file content _sequentially_ (no random access).\r\n\r\nAh thanks for the clarification! I'll look out for this next time and implement the fix myself :)" ]
2022-06-24T13:50:38
2022-06-27T06:39:32
2022-06-24T16:44:41
### Link https://huggingface.co/datasets/allocine ### Description Not sure if this is a problem with `bz2` compression, but I thought these datasets could be streamed: ``` Status code: 400 Exception: AttributeError Message: 'TarContainedFile' object has no attribute 'readable' ``` ### Owner No
lewtun
https://github.com/huggingface/datasets/issues/4562
null
false
1,283,624,242
4,561
Add evaluation data to acronym_identification
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-06-24T11:17:33
2022-06-27T09:37:55
2022-06-27T08:49:22
null
lewtun
https://github.com/huggingface/datasets/pull/4561
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4561", "html_url": "https://github.com/huggingface/datasets/pull/4561", "diff_url": "https://github.com/huggingface/datasets/pull/4561.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4561.patch", "merged_at": "2022-06-27T08:49:22" }
true
1,283,558,873
4,560
Add evaluation metadata to imagenet-1k
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "As discussed with @lewtun, we are closing this PR, because it requires first the task names to be aligned between AutoTrain and datasets." ]
2022-06-24T10:12:41
2023-09-24T09:35:32
2022-09-23T09:37:03
null
lewtun
https://github.com/huggingface/datasets/pull/4560
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4560", "html_url": "https://github.com/huggingface/datasets/pull/4560", "diff_url": "https://github.com/huggingface/datasets/pull/4560.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4560.patch", "merged_at": null }
true
1,283,544,937
4,559
Add action names in schema_guided_dstc8 dataset card
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-06-24T10:00:01
2022-06-24T10:54:28
2022-06-24T10:43:47
As aseked in https://huggingface.co/datasets/schema_guided_dstc8/discussions/1, I added the action names in the dataset card
lhoestq
https://github.com/huggingface/datasets/pull/4559
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4559", "html_url": "https://github.com/huggingface/datasets/pull/4559", "diff_url": "https://github.com/huggingface/datasets/pull/4559.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4559.patch", "merged_at": "2022-06-24T10:43:47" }
true
1,283,479,650
4,558
Add evaluation metadata to wmt14
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4558). All of your documentation changes will be reflected on that endpoint.", "As discussed with @lewtun, we are closing this PR, because it requires first the task names to be aligned between AutoTrain and datasets." ]
2022-06-24T09:08:54
2023-09-24T09:35:39
2022-09-23T09:36:50
null
lewtun
https://github.com/huggingface/datasets/pull/4558
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4558", "html_url": "https://github.com/huggingface/datasets/pull/4558", "diff_url": "https://github.com/huggingface/datasets/pull/4558.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4558.patch", "merged_at": null }
true
1,283,473,889
4,557
Add evaluation metadata to wmt16
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4557). All of your documentation changes will be reflected on that endpoint.", "> Just to confirm: we should add this metadata via GitHub and not Hub PRs for canonical datasets right?\r\n\r\nyes :)", "As discussed with @lewtun, we are closing this PR, because it requires first the task names to be aligned between AutoTrain and datasets." ]
2022-06-24T09:04:23
2023-09-24T09:35:49
2022-09-23T09:36:32
Just to confirm: we should add this metadata via GitHub and not Hub PRs for canonical datasets right?
lewtun
https://github.com/huggingface/datasets/pull/4557
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4557", "html_url": "https://github.com/huggingface/datasets/pull/4557", "diff_url": "https://github.com/huggingface/datasets/pull/4557.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4557.patch", "merged_at": null }
true
1,283,462,881
4,556
Dataset Viewer issue for conll2003
closed
[ "Fixed, thanks." ]
2022-06-24T08:55:18
2022-06-24T09:50:39
2022-06-24T09:50:39
### Link https://huggingface.co/datasets/conll2003/viewer/conll2003/test ### Description Seems like a cache problem with this config / split: ``` Server error Status code: 400 Exception: FileNotFoundError Message: [Errno 2] No such file or directory: '/cache/modules/datasets_modules/datasets/conll2003/__init__.py' ``` ### Owner No
lewtun
https://github.com/huggingface/datasets/issues/4556
null
false
1,283,451,651
4,555
Dataset Viewer issue for xtreme
closed
[ "Fixed, thanks." ]
2022-06-24T08:46:08
2022-06-24T09:50:45
2022-06-24T09:50:45
### Link https://huggingface.co/datasets/xtreme/viewer/PAN-X.de/test ### Description There seems to be a problem with the cache of this config / split: ``` Server error Status code: 400 Exception: FileNotFoundError Message: [Errno 2] No such file or directory: '/cache/modules/datasets_modules/datasets/xtreme/349258adc25bb45e47de193222f95e68a44f7a7ab53c4283b3f007208a11bf7e/xtreme.py' ``` ### Owner No
lewtun
https://github.com/huggingface/datasets/issues/4555
null
false
1,283,369,453
4,554
Fix WMT dataset loading issue and docs update (Re-opened)
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-06-24T07:26:16
2022-07-08T15:39:20
2022-07-08T15:27:44
This PR is a fix for #4354 Changes are made for `wmt14`, `wmt15`, `wmt16`, `wmt17`, `wmt18`, `wmt19` and `wmt_t2t`. And READMEs are updated for the corresponding datasets. Let me know, if any additional changes are required. Thanks
khushmeeet
https://github.com/huggingface/datasets/pull/4554
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4554", "html_url": "https://github.com/huggingface/datasets/pull/4554", "diff_url": "https://github.com/huggingface/datasets/pull/4554.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4554.patch", "merged_at": "2022-07-08T15:27:44" }
true
1,282,779,560
4,553
Stop dropping columns in to_tf_dataset() before we load batches
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "@lhoestq Rebasing fixed the test failures, so this should be ready to review now! There's still a failure on Win but it seems unrelated.", "Gentle ping @lhoestq ! This is a simple fix (dropping columns after loading a batch from the dataset rather than with `.remove_columns()` to make sure we don't break transforms), and tests are green so we're ready for review!", "@lhoestq Test is in!" ]
2022-06-23T18:21:05
2022-07-04T19:00:13
2022-07-04T18:49:01
`to_tf_dataset()` dropped unnecessary columns before loading batches from the dataset, but this is causing problems when using a transform, because the dropped columns might be needed to compute the transform. Since there's no real way to check which columns the transform might need, we skip dropping columns and instead drop keys from the batch after we load it. cc @amyeroberts and https://github.com/huggingface/notebooks/pull/202
Rocketknight1
https://github.com/huggingface/datasets/pull/4553
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4553", "html_url": "https://github.com/huggingface/datasets/pull/4553", "diff_url": "https://github.com/huggingface/datasets/pull/4553.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4553.patch", "merged_at": "2022-07-04T18:49:01" }
true
1,282,615,646
4,552
Tell users to upload on the hub directly
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks ! I updated the two remaining files" ]
2022-06-23T15:47:52
2022-06-26T15:49:46
2022-06-26T15:39:11
As noted in https://github.com/huggingface/datasets/pull/4534, it is still not clear that it is recommended to add datasets on the Hugging Face Hub directly instead of GitHub, so I updated some docs. Moreover since users won't be able to get reviews from us on the Hub, I added a paragraph to tell users that they can open a discussion and tag `datasets` maintainers for reviews. Finally I removed the _previous good reasons_ to add a dataset on GitHub to only keep this one: > In some rare cases it makes more sense to open a PR on GitHub. For example when you are not the author of the dataset and there is no clear organization / namespace that you can put the dataset under. Does it sound good to you @albertvillanova @julien-c ?
lhoestq
https://github.com/huggingface/datasets/pull/4552
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4552", "html_url": "https://github.com/huggingface/datasets/pull/4552", "diff_url": "https://github.com/huggingface/datasets/pull/4552.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4552.patch", "merged_at": "2022-06-26T15:39:11" }
true
1,282,534,807
4,551
Perform hidden file check on relative data file path
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "I'm aware of this behavior, which is tricky to solve due to fsspec's hidden file handling (see https://github.com/huggingface/datasets/issues/4115#issuecomment-1108819538). I've tested some regex patterns to address this, and they seem to work (will push them on Monday; btw they don't break any of fsspec's tests, so maybe we can contribute this as an enhancement to them). Also, perhaps we should include the files starting with `__` in the results again (we hadn't had issues with this pattern before). WDYT?", "I see. Feel free to merge this one if it's good for you btw :)\r\n\r\n> Also, perhaps we should include the files starting with __ in the results again (we hadn't had issues with this pattern before)\r\n\r\nThe point was mainly to ignore `__pycache__` directories for example. Also also for consistency with the iter_files/iter_archive which are already ignoring them", "Very elegant solution! Feel free to merge if the CI is green after adding the tests.", "CI failure is unrelated to this PR" ]
2022-06-23T14:49:11
2022-06-30T14:49:20
2022-06-30T14:38:18
Fix #4549
mariosasko
https://github.com/huggingface/datasets/pull/4551
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4551", "html_url": "https://github.com/huggingface/datasets/pull/4551", "diff_url": "https://github.com/huggingface/datasets/pull/4551.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4551.patch", "merged_at": "2022-06-30T14:38:18" }
true
1,282,374,441
4,550
imdb source error
closed
[ "Thanks for reporting, @Muhtasham.\r\n\r\nIndeed IMDB dataset is not accessible from yesterday, because the data is hosted on the data owners servers at Stanford (http://ai.stanford.edu/) and these are down due to a power outage originated by a fire: https://twitter.com/StanfordAILab/status/1539472302399623170?s=20&t=1HU1hrtaXprtn14U61P55w\r\n\r\nAs a temporary workaroud, you can load the IMDB dataset with this tweak:\r\n```python\r\nds = load_dataset(\"imdb\", revision=\"tmp-fix-imdb\")\r\n```\r\n" ]
2022-06-23T13:02:52
2022-06-23T13:47:05
2022-06-23T13:47:04
## Describe the bug imdb dataset not loading ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("imdb") ``` ## Expected results ## Actual results ```bash 06/23/2022 14:45:18 - INFO - datasets.builder - Dataset not on Hf google storage. Downloading and preparing it from source 06/23/2022 14:46:34 - INFO - datasets.utils.file_utils - HEAD request to http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz timed out, retrying... [1.0] ..... ConnectionError: Couldn't reach http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz (ConnectTimeout(MaxRetryError("HTTPConnectionPool(host='ai.stanford.edu', port=80): Max retries exceeded with url: /~amaas/data/sentiment/aclImdb_v1.tar.gz (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x7f2d750cf690>, 'Connection to ai.stanford.edu timed out. (connect timeout=100)'))"))) ``` ## Environment info - `datasets` version: 2.3.2 - Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - PyArrow version: 6.0.1 - Pandas version: 1.3.5
Muhtasham
https://github.com/huggingface/datasets/issues/4550
null
false
1,282,312,975
4,549
FileNotFoundError when passing a data_file inside a directory starting with double underscores
closed
[ "I have consistently experienced this bug on GitHub actions when bumping to `2.3.2`", "We're working on a fix ;)" ]
2022-06-23T12:19:24
2022-06-30T14:38:18
2022-06-30T14:38:18
Bug experienced in the `accelerate` CI: https://github.com/huggingface/accelerate/runs/7016055148?check_suite_focus=true This is related to https://github.com/huggingface/datasets/pull/4505 and the changes from https://github.com/huggingface/datasets/pull/4412
lhoestq
https://github.com/huggingface/datasets/issues/4549
null
false
1,282,218,096
4,548
Metadata.jsonl for Imagefolder is ignored if it's in a parent directory to the split directories / does not have a "{split}_" prefix
closed
[ "I agree it would be nice to support this. It doesn't fit really well in the current data_files.py, where files of each splits are separated in different folder though, maybe we have to modify a bit the logic here. \r\n\r\nOne idea would be to extend `get_patterns_in_dataset_repository` and `get_patterns_locally` to additionally check for `metadata.json`, but feel free to comment if you have better ideas (I feel like we're reaching the limits of what the current implementation IMO, so we could think of a different way of resolving the data files if necessary)" ]
2022-06-23T10:58:57
2022-06-30T10:15:32
2022-06-30T10:15:32
If data contains a single `metadata.jsonl` file shared by several splits, it won't be included in a dataset's `data_files` and is therefore ignored. This happens when a directory is structured as follows: ``` train/ file_1.jpg file_2.jpg test/ file_3.jpg file_4.jpg metadata.jsonl ``` or as follows: ``` train_file_1.jpg train_file_2.jpg test_file_3.jpg test_file_4.jpg metadata.jsonl ``` The same goes for HF repos, because it's ignored by the patterns [here](https://github.com/huggingface/datasets/blob/master/src/datasets/data_files.py#L29) @lhoestq @mariosasko Do you think it's better to add this functionality in `data_files.py` or just specifically in the imagefolder/audiofolder code? Doing it in `data_files.py` would be more general, but I don't know if there are any other cases where that might be needed.
polinaeterna
https://github.com/huggingface/datasets/issues/4548
null
false
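A minimal sketch (an editor's illustration, not part of the report above) of the layout that the current `imagefolder` packaged loader does pick up, i.e. one `metadata.jsonl` per split directory instead of a single shared one; the directory and column names are hypothetical:

```python
from datasets import load_dataset

# Hypothetical layout that works with the current behavior:
# data/
#   train/
#     file_1.jpg
#     file_2.jpg
#     metadata.jsonl   # one JSON line per image, e.g. {"file_name": "file_1.jpg", "caption": "..."}
#   test/
#     file_3.jpg
#     file_4.jpg
#     metadata.jsonl
ds = load_dataset("imagefolder", data_dir="data")
print(ds["train"].column_names)  # e.g. ['image', 'caption']
```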
1,282,160,517
4,547
[CI] Fix some warnings
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "There is a CI failure only related to the missing content of the universal_dependencies dataset card, we can ignore this failure in this PR", "good catch, I thought I resolved them all sorry", "Alright it should be good now" ]
2022-06-23T10:10:49
2022-06-28T14:10:57
2022-06-28T13:59:54
There are some warnings in the CI that are annoying; I tried to remove most of them
lhoestq
https://github.com/huggingface/datasets/pull/4547
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4547", "html_url": "https://github.com/huggingface/datasets/pull/4547", "diff_url": "https://github.com/huggingface/datasets/pull/4547.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4547.patch", "merged_at": "2022-06-28T13:59:54" }
true
1,282,093,288
4,546
[CI] fixing seqeval install in ci by pinning setuptools-scm
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-06-23T09:24:37
2022-06-23T10:24:16
2022-06-23T10:13:44
The latest setuptools-scm version supported on 3.6 is 6.4.2. However, for some reason circleci has version 7, which doesn't work. I fixed this by pinning the version of setuptools-scm in the circleci job. Fix https://github.com/huggingface/datasets/issues/4544
lhoestq
https://github.com/huggingface/datasets/pull/4546
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4546", "html_url": "https://github.com/huggingface/datasets/pull/4546", "diff_url": "https://github.com/huggingface/datasets/pull/4546.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4546.patch", "merged_at": "2022-06-23T10:13:44" }
true
1,280,899,028
4,545
Make DuplicateKeysError more user friendly [For Issue #2556]
closed
[ "> Nice thanks !\r\n> \r\n> After your changes feel free to mark this PR as \"ready for review\" ;)\r\n\r\nMarking PR ready for review.\r\n\r\n@lhoestq Let me know if there is anything else required or if we are good to go ahead and merge.", "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-06-22T21:01:34
2022-06-28T09:37:06
2022-06-28T09:26:04
# What does this PR do? ## Summary *The DuplicateKeysError does not provide any information regarding the examples which have the same key.* *This information is very helpful for debugging the dataset generator script.* ## Additions - ## Changes - Changed the `DuplicateKeysError` class in `src/datasets/keyhash.py` to add the current index and duplicate_key_indices to the error message. - Changed the `check_duplicate_keys` function in `src/datasets/arrow_writer.py` to find the indices of examples with a duplicate hash if duplicate keys are found. ## Deletions - ## To do : - [x] Find way to find and print path `<Path to Dataset>` in Error message ## Issues Addressed : Fixes #2556
VijayKalmath
https://github.com/huggingface/datasets/pull/4545
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4545", "html_url": "https://github.com/huggingface/datasets/pull/4545", "diff_url": "https://github.com/huggingface/datasets/pull/4545.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4545.patch", "merged_at": "2022-06-28T09:26:04" }
true
1,280,500,340
4,544
[CI] seqeval installation fails sometimes on python 3.6
closed
[]
2022-06-22T16:35:23
2022-06-23T10:13:44
2022-06-23T10:13:44
The CI sometimes fails to install seqeval, which cause the `seqeval` metric tests to fail. The installation fails because of this error: ``` Collecting seqeval Downloading seqeval-1.2.2.tar.gz (43 kB) |β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 10 kB 42.1 MB/s eta 0:00:01 |β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 20 kB 53.3 MB/s eta 0:00:01 |β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 30 kB 67.2 MB/s eta 0:00:01 |β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 40 kB 76.1 MB/s eta 0:00:01 |β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 43 kB 10.0 MB/s Preparing metadata (setup.py) ... - error ERROR: Command errored out with exit status 1: command: /home/circleci/.pyenv/versions/3.6.15/bin/python3.6 -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-1l96tbyj/seqeval_b31086f711d84743abe6905d2aa9dade/setup.py'"'"'; __file__='"'"'/tmp/pip-install-1l96tbyj/seqeval_b31086f711d84743abe6905d2aa9dade/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-pf54_vqy cwd: /tmp/pip-install-1l96tbyj/seqeval_b31086f711d84743abe6905d2aa9dade/ Complete output (22 lines): Traceback (most recent call last): File "<string>", line 1, in <module> File "/tmp/pip-install-1l96tbyj/seqeval_b31086f711d84743abe6905d2aa9dade/setup.py", line 56, in <module> 'Programming Language :: Python :: Implementation :: PyPy' File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/site-packages/setuptools/__init__.py", line 143, in setup return distutils.core.setup(**attrs) File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/distutils/core.py", line 108, in setup _setup_distribution = dist = klass(attrs) File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/site-packages/setuptools/dist.py", line 442, in __init__ k: v for k, v in attrs.items() File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/distutils/dist.py", line 281, in __init__ self.finalize_options() File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/site-packages/setuptools/dist.py", line 601, in finalize_options ep.load()(self, ep.name, value) File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2346, in load return self.resolve() File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2352, in resolve module = __import__(self.module_name, fromlist=['__name__'], level=0) File "/tmp/pip-install-1l96tbyj/seqeval_b31086f711d84743abe6905d2aa9dade/.eggs/setuptools_scm-7.0.2-py3.6.egg/setuptools_scm/__init__.py", line 5 from __future__ import annotations ^ SyntaxError: future feature annotations is not defined ---------------------------------------- WARNING: Discarding https://files.pythonhosted.org/packages/9d/2d/233c79d5b4e5ab1dbf111242299153f3caddddbb691219f363ad55ce783d/seqeval-1.2.2.tar.gz#sha256=f28e97c3ab96d6fcd32b648f6438ff2e09cfba87f05939da9b3970713ec56e6f (from https://pypi.org/simple/seqeval/). Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output. 
``` for example in https://app.circleci.com/pipelines/github/huggingface/datasets/12665/workflows/93878eb9-a923-4b35-b2e7-c5e9b22f10ad/jobs/75300 Here is a diff of the pip install logs until the error is reached: https://www.diffchecker.com/VkQDLeQT This could be caused by the latest updates of setuptools-scm
lhoestq
https://github.com/huggingface/datasets/issues/4544
null
false
1,280,379,781
4,543
[CI] Fix upstream hub test url
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "Remaining CI failures are unrelated to this fix, merging" ]
2022-06-22T15:34:27
2022-06-22T16:37:40
2022-06-22T16:27:37
Some tests were still using moon-staging instead of hub-ci. I also updated the token to use one dedicated to `datasets`.
lhoestq
https://github.com/huggingface/datasets/pull/4543
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4543", "html_url": "https://github.com/huggingface/datasets/pull/4543", "diff_url": "https://github.com/huggingface/datasets/pull/4543.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4543.patch", "merged_at": "2022-06-22T16:27:37" }
true
1,280,269,445
4,542
[to_tf_dataset] Use Feather for better compatibility with TensorFlow ?
open
[ "This has so much potential to be great! Also I think you tagged some poor random dude on the internet whose name is also Joao, lol, edited that for you! ", "cc @sayakpaul here too, since he was interested in our new approaches to converting datasets!", "Noted and I will look into the thread in detail tomorrow once I log back in. ", "@lhoestq I have used TFRecords with `tf.data` for both vision and text and I can say that they are quite performant. I haven't worked with Feather yet as similarly as I have with TFRecords. If you haven't started the benchmarking script yet, I can prepare a Colab notebook that loads Feather files, converts them into a `tf.data` pipeline, and does some basic preprocessing. \r\n\r\nBut in my limited understanding, Feather might be better suited for CSV files. Not yet sure if it's good for modalities like images. ", "> Not yet sure if it's good for modalities like images.\r\n\r\nWe store images pretty much the same way as tensorflow_datasets (i.e. storing the encoded image bytes, or a path to the local image, so that the image can be decoded on-the-fly), so as long as we use something similar as TFDS for image decoding it should be ok", "So for image datasets, we could potentially store the paths in the feather format and decode and read them on the fly? But it introduces an I/O redundancy of having to read the images every time.\r\n\r\nWith caching it could be somewhat mitigated but it's not a good solution for bigger image datasets. ", "> So for image datasets, we could potentially store the paths in the feather format and decode and read them on the fly?\r\n\r\nhopefully yes :) \r\n\r\nI double-checked the TFDS source code and they always save the bytes actually, not the path. Anyway we'll see if we run into issues or not (as a first step we can require the bytes to be in the feather file)", "Yes. For images, TFDS actually prepares TFRecords first for encoding and then reuses them for every subsequent call. ", "@lhoestq @Rocketknight1 I worked on [this PoC](https://gist.github.com/sayakpaul/f7d5cc312cd01cb31098fad3fd9c6b59) that\r\n\r\n* Creates Feather files from a medium resolution dataset (`tf_flowers`).\r\n* Explores different options with TensorFlow IO to load the Feather files. \r\n\r\nI haven't benchmarked those different options yet. There's also a gotcha that I have noted in the PoC. I hope it gets us started but I'm sorry if this is redundant. ", "Cool thanks ! If I understand correctly in your PoC you store the flattened array of pixels in the feather file. This will take a lot of disk space.\r\n\r\nMaybe we could just save the encoded bytes and let users apply a `map` to decode/transform them into the format they need for training ? 
Users can use tf.image to do so for example", "@lhoestq this is what I tried:\r\n\r\n```py\r\ndef read_image(path):\r\n with open(path, \"rb\") as f:\r\n return f.read()\r\n\r\n\r\ntotal_images_written = 0\r\n\r\nfor step in tqdm.tnrange(int(math.ceil(len(image_paths) / batch_size))):\r\n batch_image_paths = image_paths[step * batch_size : (step + 1) * batch_size]\r\n batch_image_labels = all_integer_labels[step * batch_size : (step + 1) * batch_size]\r\n\r\n data = [read_image(path) for path in batch_image_paths]\r\n table = pa.Table.from_arrays([data, batch_image_labels], [\"data\", \"labels\"])\r\n write_feather(table, f\"/tmp/flowers_feather_{step}.feather\", chunksize=chunk_size)\r\n total_images_written += len(batch_image_paths)\r\n print(f\"Total images written: {total_images_written}.\")\r\n\r\n del data\r\n```\r\n\r\nI got the feather files done (no resizing required as you can see):\r\n\r\n```sh\r\nls -lh /tmp/*.feather\r\n\r\n-rw-r--r-- 1 sayakpaul wheel 64M Jun 24 09:28 /tmp/flowers_feather_0.feather\r\n-rw-r--r-- 1 sayakpaul wheel 59M Jun 24 09:28 /tmp/flowers_feather_1.feather\r\n-rw-r--r-- 1 sayakpaul wheel 51M Jun 24 09:28 /tmp/flowers_feather_2.feather\r\n-rw-r--r-- 1 sayakpaul wheel 45M Jun 24 09:28 /tmp/flowers_feather_3.feather\r\n```\r\n\r\nNow there seems to be a problem with `tfio.arrow`:\r\n\r\n```py\r\nimport tensorflow_io.arrow as arrow_io\r\n\r\n\r\ndataset = arrow_io.ArrowFeatherDataset(\r\n [\"/tmp/flowers_feather_0.feather\"],\r\n columns=(0, 1),\r\n output_types=(tf.string, tf.int64),\r\n output_shapes=([], []),\r\n batch_mode=\"auto\",\r\n)\r\n\r\nprint(dataset.element_spec) \r\n```\r\n\r\nPrints:\r\n\r\n```\r\n(TensorSpec(shape=(None,), dtype=tf.string, name=None),\r\n TensorSpec(shape=(None,), dtype=tf.int64, name=None))\r\n```\r\n\r\nBut when I do `sample = next(iter(dataset))` it goes into:\r\n\r\n```py\r\nInternalError Traceback (most recent call last)\r\nInput In [30], in <cell line: 1>()\r\n----> 1 sample = next(iter(dataset))\r\n\r\nFile ~/.local/bin/.virtualenvs/jax/lib/python3.8/site-packages/tensorflow/python/data/ops/iterator_ops.py:766, in OwnedIterator.__next__(self)\r\n 764 def __next__(self):\r\n 765 try:\r\n--> 766 return self._next_internal()\r\n 767 except errors.OutOfRangeError:\r\n 768 raise StopIteration\r\n\r\nFile ~/.local/bin/.virtualenvs/jax/lib/python3.8/site-packages/tensorflow/python/data/ops/iterator_ops.py:749, in OwnedIterator._next_internal(self)\r\n 746 # TODO(b/77291417): This runs in sync mode as iterators use an error status\r\n 747 # to communicate that there is no more data to iterate over.\r\n 748 with context.execution_mode(context.SYNC):\r\n--> 749 ret = gen_dataset_ops.iterator_get_next(\r\n 750 self._iterator_resource,\r\n 751 output_types=self._flat_output_types,\r\n 752 output_shapes=self._flat_output_shapes)\r\n 754 try:\r\n 755 # Fast path for the case `self._structure` is not a nested structure.\r\n 756 return self._element_spec._from_compatible_tensor_list(ret) # pylint: disable=protected-access\r\n\r\nFile ~/.local/bin/.virtualenvs/jax/lib/python3.8/site-packages/tensorflow/python/ops/gen_dataset_ops.py:3017, in iterator_get_next(iterator, output_types, output_shapes, name)\r\n 3015 return _result\r\n 3016 except _core._NotOkStatusException as e:\r\n-> 3017 _ops.raise_from_not_ok_status(e, name)\r\n 3018 except _core._FallbackException:\r\n 3019 pass\r\n\r\nFile ~/.local/bin/.virtualenvs/jax/lib/python3.8/site-packages/tensorflow/python/framework/ops.py:7164, in raise_from_not_ok_status(e, name)\r\n 7162 def 
raise_from_not_ok_status(e, name):\r\n 7163 e.message += (\" name: \" + name if name is not None else \"\")\r\n-> 7164 raise core._status_to_exception(e) from None\r\n\r\nInternalError: Invalid: INVALID_ARGUMENT: arrow data type 0x7ff9899d8038 is not supported: Type error: Arrow data type is not supported [Op:IteratorGetNext]\r\n```\r\n\r\nSome additional notes:\r\n\r\n* I can actually decode an image encoded with `read_image()` (shown earlier):\r\n\r\n ```py\r\n sample_image_path = image_paths[0]\r\n encoded_image = read_image(sample_image_path)\r\n image = tf.image.decode_png(encoded_image, 3)\r\n print(image.shape)\r\n ```\r\n\r\n* If the above `tf.data.Dataset` object would have succeeded my plan was to just map the decoder like so:\r\n\r\n ```py\r\n autotune = tf.data.AUTOTUNE\r\n dataset = dataset.map(lambda x, y: (tf.image.decode_png(x, 3), y), num_parallel_calls=autotune)\r\n ```", "@lhoestq I think I was able to make it work in the way you were envisioning. Here's the PoC:\r\nhttps://gist.github.com/sayakpaul/f7d5cc312cd01cb31098fad3fd9c6b59#file-feather-tf-poc-bytes-ipynb\r\n\r\nSome details:\r\n\r\n* I am currently serializing the images as strings with `base64`). In comparison to the flattened arrays as before, the size of the individual feather files has reduced (144 MB -> 85 MB, largest).\r\n* When decoding, I am first decoding the base64 string and then decoding that string (with `tf.io.decode_base64`) as an image with `tf.image.decode_png()`. \r\n* The entire workflow (from generating the Feather files to loading them and preparing the batched `tf.data` pipeline) involves the following libraries: `pyarraow`, `tensorflow-io`, and `tensorflow`. \r\n\r\nCc: @Rocketknight1 @gante ", "Cool thanks ! Too bad the Arrow binary type doesn't seem to be supported in `arrow_io.ArrowFeatherDataset` :/ We would also need it to support Arrow struct type. Indeed images in `datasets` are represented using an Arrow type\r\n```python\r\npa.struct({\"path\": pa.string(), \"bytes\": pa.binary()})\r\n```\r\nnot sure yet how hard it is to support this though.\r\n\r\nChanging the typing on our side would create concerning breaking changes, that's why it would be awesome if it could work using these types", "If the ArrowFeatherDataset doesn't yet support it, I guess our hands are a bit tied at the moment. \r\n\r\nIIUC, in my [latest PoC notebook](https://gist.github.com/sayakpaul/f7d5cc312cd01cb31098fad3fd9c6b59#file-feather-tf-poc-bytes-ipynb), you wanted to see each entry in the feather file to be represented like so?\r\n\r\n```\r\npa.struct({\"path\": pa.string(), \"bytes\": pa.binary()})\r\n``` \r\n\r\nIn that case, `pa.binary()` isn't yet supported.", "> IIUC, in my [latest PoC notebook](https://gist.github.com/sayakpaul/f7d5cc312cd01cb31098fad3fd9c6b59#file-feather-tf-poc-bytes-ipynb), you wanted to see each entry in the feather file to be represented like so?\r\n> \r\n> pa.struct({\"path\": pa.string(), \"bytes\": pa.binary()})\r\n\r\nYea because that's the data format we're using. If we were to use base64, then we would have to process the full dataset to convert it, which can take some time. Converting to TFRecords would be simpler than converting to base64 in Feather files.\r\n\r\nMaybe it would take too much time to be worth exploring, but according to https://github.com/tensorflow/io/issues/1361#issuecomment-819029002 it's possible to add support for binary type in ArrowFeatherDataset. What do you think ? 
Any other alternative in mind ?", "> Maybe it would take too much time to be worth exploring, but according to https://github.com/tensorflow/io/issues/1361#issuecomment-819029002 it's possible to add support for binary type in ArrowFeatherDataset.\r\n\r\nShould be possible as per the comment but there hasn't been any progress and it's been more than a year. \r\n\r\n> If we were to use base64, then we would have to process the full dataset to convert it, which can take some time.\r\n\r\nI don't understand this. I would think TFRecords would also need something similar but I need the context you're coming from. \r\n\r\n> What do you think ? Any other alternative in mind ?\r\n\r\nTFRecords since the TensorFlow ecosystem has developed good support for it over the years. ", "> I don't understand this. I would think TFRecords would also need something similar but I need the context you're coming from.\r\n\r\nUsers already have a copy of the dataset in Arrow format (we can change this to Feather). So to load the Arrow/feather files to a TF dataset we need TF IO or something like that. Otherwise the user has to convert all the files from Arrow to TFRecords to use TF data efficiently. But the conversion needs resources: CPU, disk, time. Converting the images to base64 require the same sort of resources.\r\n\r\nSo the issue we're trying to tackle is how to load the Arrow data in TF without having to convert anything ^^", "Yeah, it looks like in its current state the tfio support for `Feather` is incomplete, so we'd end up having to write a lot of it, or do a conversion that defeats the whole point (because if we're going to convert the whole dataset we might as well convert to `TFRecord`).", "Understood @lhoestq. Thanks for explaining!\r\n\r\nAgreed with @Rocketknight1. ", "@lhoestq Although I think this is a dead-end for now unfortunately, because of the limitations at TF's end, we could still explore automatic conversion to TFRecord, or I could dive into refining `to_tf_dataset()` to yield unbatched samples and/or load samples with multiprocessing to improve throughput. Do you have any preferences there?", "> @lhoestq Although I think this is a dead-end for now unfortunately, because of the limitations at TF's end, we could still explore automatic conversion to TFRecord, or I could dive into refining `to_tf_dataset()` to yield unbatched samples and/or load samples with multiprocessing to improve throughput. Do you have any preferences there?\r\n\r\nHappy to take part there @Rocketknight1.", "If `to_tf_dataset` can be unbatched, then it should be fairly easy for users to convert the TF dataset to TFRecords right ?", "@lhoestq why one would convert to TFRecords after unbatching? ", "> If to_tf_dataset can be unbatched, then it should be fairly easy for users to convert the TF dataset to TFRecords right ?\r\n\r\nSort of! A `tf.data.Dataset` is more like an iterator, and does not support sample indexing. `to_tf_dataset()` creates an iterator, but to convert that to `TFRecord`, the user would have to iterate over the whole thing and manually save the stream of samples to files. ", "Someone would like to try to dive into tfio to fix this ? 
Sounds like a good opportunity to learn what are the best ways to load a dataset for TF, and also the connections between Arrow and TF.\r\n\r\nIf we can at least have the Arrow `binary` type working for TF that would be awesome already (issue https://github.com/tensorflow/io/issues/1361)\r\n\r\nalso cc @nateraw in case you'd be interested ;)", "> Sounds like a good opportunity to learn what are the best ways to load a dataset for TF\r\n\r\nThe recommended way would likely be a combination of TFRecords and `tf.data`. \r\n\r\nExploring the connection between Arrow and TensorFlow is definitely worth pursuing though. But I am not sure about the implications of storing images in a format supported by Arrow. I guess we'll know more once we have at least figured out the support for `binary` type for TFIO. I will spend some time on it and keep this thread updated. ", "I am currently working on a fine-tuning notebook for the TFSegFormer model (Semantic Segmentation). The resolution is high for both the input images and the labels - (512, 512, 3). Here's the [Colab Notebook](https://colab.research.google.com/drive/1jAtR7Z0lYX6m6JsDI5VByh5vFaNhHIbP?usp=sharing) (it's a WIP so please bear that in mind).\r\n\r\nI think the current implementation of `to_tf_dataset()` does create a bottleneck here since the GPU utilization is quite low. ", "Here's a notebook showing the performance difference: https://colab.research.google.com/gist/sayakpaul/d7ca67c90beb47e354942c9d8c0bd8ef/scratchpad.ipynb. \r\n\r\nNote that I acknowledge that it's not an apples-to-apples comparison in many aspects (the dataset isn't the same, data serialization format isn't the same, etc.) but this is the best I could do. ", "Thanks ! I think the speed difference can be partly explained: you use ds.shuffle in your dataset, which is an exact shuffling (compared to TFDS which does buffer shuffling): it slows down query time by 2x to 10x since it has to play with data that are not contiguous.\r\n\r\nThe rest of the speed difference seems to be caused by image decoding (from 330Β΅s/image to 30ms/image)", "Fair enough. Can do one without shuffling too. But it's an important one to consider I guess. " ]
2022-06-22T14:42:00
2022-10-11T08:45:45
null
To have better performance in TensorFlow, it is important to provide lists of data files in supported formats. For example sharded TFRecords datasets are extremely performant. This is because tf.data can better leverage parallelism in this case, and load one file at a time in memory. It seems that using `tensorflow_io` we could have something similar for `to_tf_dataset` if we provide sharded Feather files: https://www.tensorflow.org/io/api_docs/python/tfio/arrow/ArrowFeatherDataset Feather is a format almost equivalent to the Arrow IPC Stream format we're using in `datasets`: Feather V2 is equivalent to Arrow IPC File format, which is an extension of the stream format (it has an extra footer). Therefore we could store datasets as Feather instead of Arrow IPC Stream format without breaking the whole library. Here are a few points to explore - [ ] check the performance of ArrowFeatherDataset in tf.data - [ ] check what would change if we were to switch to Feather if needed, in particular check that those are fine: memory mapping, typing, writing, reading to python objects, etc. We would also need to implement sharding when loading a dataset (this will be done anyway for #546) cc @Rocketknight1 @gante feel free to comment in case I missed anything ! I'll share some files and scripts, so that we can benchmark performance of Feather files with tf.data
lhoestq
https://github.com/huggingface/datasets/issues/4542
null
false
1,280,161,436
4,541
Fix timestamp conversion from Pandas to Python datetime in streaming mode
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "CI failures are unrelated to this PR, merging" ]
2022-06-22T13:40:01
2022-06-22T16:39:27
2022-06-22T16:29:09
Arrow accepts both pd.Timestamp and datetime.datetime objects to create timestamp arrays. However, a timestamp array is always converted to datetime.datetime objects. This created an inconsistency between streaming and non-streaming: e.g. the `ett` dataset outputs datetime.datetime objects in non-streaming but pd.Timestamp in streaming. I fixed this by always converting pd.Timestamp to datetime.datetime during the example encoding step. I fixed the same issue for pd.Timedelta as well. Finally, I added an extra step of conversion for Series and DataFrame, in case such data are passed in that form. Fix https://github.com/huggingface/datasets/issues/4533 Related to https://github.com/huggingface/datasets-server/issues/397
lhoestq
https://github.com/huggingface/datasets/pull/4541
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4541", "html_url": "https://github.com/huggingface/datasets/pull/4541", "diff_url": "https://github.com/huggingface/datasets/pull/4541.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4541.patch", "merged_at": "2022-06-22T16:29:09" }
true
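As a side note on the fix described in the PR body above, a small sketch (illustrative only, not the actual patch) of the pandas-to-Python conversions it relies on; both `pd.Timestamp` and `pd.Timedelta` expose explicit converters to the standard-library types:

```python
import datetime
import pandas as pd

ts = pd.Timestamp("2016-07-01 00:00:00")
dt = ts.to_pydatetime()  # datetime.datetime(2016, 7, 1, 0, 0)
assert isinstance(dt, datetime.datetime)

td = pd.Timedelta("1 hour")
pytd = td.to_pytimedelta()  # datetime.timedelta(seconds=3600)
assert isinstance(pytd, datetime.timedelta)
```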
1,280,142,942
4,540
Avoid splitting by `.py` for the file.
closed
[ "Hi @espoirMur, thanks for reporting.\r\n\r\nYou are right: that code line could be improved and made more generically valid.\r\n\r\nOn the other hand, I would suggest using `os.path.splitext` instead.\r\n\r\nAre you willing to open a PR? :)", "I will have a look.. \r\n\r\nThis weekend .. ", "@albertvillanova , Can you have a look at #4590. \r\n\r\nThanks ", "#self-assign" ]
2022-06-22T13:26:55
2022-07-07T13:17:44
2022-07-07T13:17:44
https://github.com/huggingface/datasets/blob/90b3a98065556fc66380cafd780af9b1814b9426/src/datasets/load.py#L272 Hello, Thank you for this library. I was using it and I hit one edge case: my home folder name ends with `.py` (it is `/home/espoir.py`), so anytime I run the code to load a local module, this line fails because, after splitting, it tries to save the code to my home directory. Steps to reproduce: - Have a home folder whose name ends with `.py` - Load a module from a local folder: `qa_dataset = load_dataset("src/data/build_qa_dataset.py")`; it fails. A possible workaround would be to use pathlib at the mentioned line, e.g. `meta_path = Path(importable_local_file).parent.joinpath("metadata.json")`; this can alleviate the issue. Let me know what your thoughts are on this, and I can try to fix it with a PR.
espoirMur
https://github.com/huggingface/datasets/issues/4540
null
false
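A short illustration of the edge case reported above (the path is hypothetical, and the snippet is not taken verbatim from `load.py`): splitting on the `".py"` substring also matches a parent directory that ends in `.py`, whereas `os.path.splitext` or `pathlib` only touch the trailing extension:

```python
import os
from pathlib import Path

# Hypothetical script living under a home directory named "/home/espoir.py"
importable_local_file = "/home/espoir.py/src/data/build_qa_dataset.py"

# Brittle: the first ".py" match is the home directory name
print(importable_local_file.split(".py")[0])  # /home/espoir

# Robust: only the trailing extension is stripped
root, _ext = os.path.splitext(importable_local_file)
print(root)  # /home/espoir.py/src/data/build_qa_dataset

# Or build the sibling path without touching extensions at all
meta_path = Path(importable_local_file).parent / "metadata.json"
print(meta_path)  # /home/espoir.py/src/data/metadata.json
```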
1,279,779,829
4,539
Replace deprecated logging.warn with logging.warning
closed
[]
2022-06-22T08:32:29
2022-06-22T13:43:23
2022-06-22T12:51:51
Replace `logging.warn` (deprecated in [Python 2.7, 2011](https://github.com/python/cpython/commit/04d5bc00a219860c69ea17eaa633d3ab9917409f)) with `logging.warning` (added in [Python 2.3, 2003](https://github.com/python/cpython/commit/6fa635df7aa88ae9fd8b41ae42743341316c90f7)). * https://docs.python.org/3/library/logging.html#logging.Logger.warning * https://github.com/python/cpython/issues/57444
hugovk
https://github.com/huggingface/datasets/pull/4539
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4539", "html_url": "https://github.com/huggingface/datasets/pull/4539", "diff_url": "https://github.com/huggingface/datasets/pull/4539.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4539.patch", "merged_at": "2022-06-22T12:51:51" }
true
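For reference, the rename in the PR above is a one-to-one substitution on the standard-library logger (the logger name below is illustrative):

```python
import logging

logger = logging.getLogger("datasets.example")

# Deprecated alias:
# logger.warn("message")

# Preferred spelling:
logger.warning("message")
```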
1,279,409,786
4,538
Dataset Viewer issue for Pile of Law
closed
[ "Hi @Breakend, yes – we'll propose a solution today", "Thanks so much, I appreciate it!", "Thanks so much for adding the docs. I was able to successfully hide the viewer using the \r\n```\r\nviewer: false\r\n```\r\nflag in the README.md of the dataset. I'm closing the issue because this is resolved. Thanks again!", "Awesome! Thanks for confirming. cc @severo ", "Just for the record:\r\n\r\n- the doc\r\n \r\n<img width=\"1430\" alt=\"Capture d’écran 2022-06-27 aΜ€ 09 29 27\" src=\"https://user-images.githubusercontent.com/1676121/175884089-bca6c0d5-6387-473e-98ca-86a910ede4bd.png\">\r\n\r\n- the dataset main page\r\n\r\n<img width=\"1134\" alt=\"Capture d’écran 2022-06-27 aΜ€ 09 29 05\" src=\"https://user-images.githubusercontent.com/1676121/175884152-5f285bf0-3471-45de-927a-e141b00ebb33.png\">\r\n\r\n- the dataset viewer page\r\n\r\n<img width=\"567\" alt=\"Capture d’écran 2022-06-27 aΜ€ 09 29 16\" src=\"https://user-images.githubusercontent.com/1676121/175884191-ab6a297b-1c11-417e-bbde-0b7623278a79.png\">\r\n" ]
2022-06-22T02:48:40
2022-06-27T07:30:23
2022-06-26T22:26:22
### Link https://huggingface.co/datasets/pile-of-law/pile-of-law ### Description Hi, I would like to turn off the dataset viewer for our dataset without enabling access requests. To comply with upstream dataset creator requests/licenses, we would like to make sure that the data is not indexed by search engines and so would like to turn off dataset previews. But we do not want to collect user emails because it would violate single blind review, allowing us to deduce potential reviewers' identities. Is there a way that we can turn off the dataset viewer without collecting identity information? Thanks so much! ### Owner Yes
Breakend
https://github.com/huggingface/datasets/issues/4538
null
false
1,279,144,310
4,537
Fix WMT dataset loading issue and docs update
closed
[ "The PR branch now has some commits unrelated to the changes, probably due to rebasing. Can you please close this PR and open a new one from a new branch? You can use `git cherry-pick` to preserve the relevant changes:\r\n```bash\r\ngit checkout master\r\ngit remote add upstream git@github.com:huggingface/datasets.git\r\ngit pull --ff-only upstream master\r\ngit checkout -b wmt-datasets-fix2\r\ngit cherry-pick f2d6c995d5153131168f64fc60fe33a7813739a4 a9fdead5f435aeb88c237600be28eb8d4fde4c55\r\n```", "Closing this PR due to unwanted commit changes. Will be opening new PR for the same issue." ]
2022-06-21T21:48:02
2022-06-24T07:05:43
2022-06-24T07:05:10
This PR is a fix for #4354. Changes are made for `wmt14`, `wmt15`, `wmt16`, `wmt17`, `wmt18`, `wmt19` and `wmt_t2t`, and READMEs are updated for the corresponding datasets. As I am on an M1 Mac, I am not able to create a virtual `dev` environment using `pip install -e ".[dev]"`. The issue is that `tensorflow-text` is not supported on M1s and there is no supporting repo by Apple or Google. So, if I were required to perform local testing, I would not be able to do that. Let me know if any additional changes are required. Thanks
khushmeeet
https://github.com/huggingface/datasets/pull/4537
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4537", "html_url": "https://github.com/huggingface/datasets/pull/4537", "diff_url": "https://github.com/huggingface/datasets/pull/4537.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4537.patch", "merged_at": null }
true
1,278,734,727
4,536
Properly raise FileNotFound even if the dataset is private
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-06-21T17:05:50
2022-06-28T10:46:51
2022-06-28T10:36:10
`tests/test_load.py::test_load_streaming_private_dataset` was failing because the hub now returns 401 when getting the HfApi.dataset_info of a dataset without authentication. `load_dataset` was raising ConnectionError, while it should be FileNotFoundError since it first checks for local files before checking the Hub. Moreover, when use_auth_token is not set (default is False), we should not pass `token=None` to HfApi.dataset_info, or it will use the local token by default - instead it should use no token. It's currently not possible to ask for no token to be used, so as a workaround I simply set token="no-token"
lhoestq
https://github.com/huggingface/datasets/pull/4536
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4536", "html_url": "https://github.com/huggingface/datasets/pull/4536", "diff_url": "https://github.com/huggingface/datasets/pull/4536.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4536.patch", "merged_at": "2022-06-28T10:36:10" }
true
1,278,365,039
4,535
Add `batch_size` parameter when calling `add_faiss_index` and `add_faiss_index_from_external_arrays`
closed
[ "Also, I had a doubt while checking the code related to the indices... \r\n\r\n@lhoestq, there's a value in `config.py` named `DATASET_INDICES_FILENAME` which has the arrow extension (which I assume it should be `indices.faiss`, as the Elastic Search indices are not stored in a file, but not sure), and it's just used before actually saving an `ArrowDataset` in disk, but since those indices are never stored AFAIK, is that actually required?\r\n\r\nhttps://github.com/huggingface/datasets/blob/aec86ea4b790ccccc9b2e0376a496728b1c914cc/src/datasets/config.py#L183\r\n\r\nhttps://github.com/huggingface/datasets/blob/aec86ea4b790ccccc9b2e0376a496728b1c914cc/src/datasets/arrow_dataset.py#L1079-L1092\r\n\r\nSo should I also remove that?\r\n\r\nP.S. I also edited the following code comment which I found misleading as it's not actually storing the indices.\r\n\r\nhttps://github.com/huggingface/datasets/blob/8ddc4bbeb1e2bd307b21f5d21f884649aa2bf640/src/datasets/arrow_dataset.py#L1122", "_The documentation is not available anymore as the PR was closed or merged._", "> @lhoestq, there's a value in config.py named DATASET_INDICES_FILENAME which has the arrow extension (which I assume it should be indices.faiss, as the Elastic Search indices are not stored in a file, but not sure), and it's just used before actually saving an ArrowDataset in disk, but since those indices are never stored AFAIK, is that actually required?\r\n\r\nThe arrow file is used to store an indices mapping (when you shuffle the dataset for example) - not for a faiss index ;)", "Ok cool thanks a lot for the explanation @lhoestq I was not sure about that :+1: I'll also add it there as you suggested!", "CI failures are unrelated to this PR and fixed on master, merging" ]
2022-06-21T12:18:49
2022-06-27T16:25:09
2022-06-27T16:14:36
Currently, even though the `batch_size` used when adding vectors to the FAISS index can be tweaked in `FaissIndex.add_vectors()`, the function `ArrowDataset.add_faiss_index` has neither a `batch_size` parameter to be propagated to the nested `FaissIndex.add_vectors` call nor `*args, **kwargs`. So this PR adds the `batch_size` parameter to both `ArrowDataset.add_faiss_index` and `ArrowDataset.add_faiss_index_from_external_arrays`. This is useful for tweaking the `batch_size` according to the VM specifications.
alvarobartt
https://github.com/huggingface/datasets/pull/4535
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4535", "html_url": "https://github.com/huggingface/datasets/pull/4535", "diff_url": "https://github.com/huggingface/datasets/pull/4535.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4535.patch", "merged_at": "2022-06-27T16:14:36" }
true
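A usage sketch of the new `batch_size` parameter (the toy data and the `embeddings` column are placeholders; building the index requires `faiss` to be installed):

```python
import numpy as np
from datasets import Dataset

# Toy dataset with a hypothetical "embeddings" column of float32 vectors
ds = Dataset.from_dict({"text": ["a", "b", "c", "d"]})
ds = ds.map(lambda _: {"embeddings": np.random.rand(128).astype(np.float32)})

# batch_size controls how many vectors are pushed to the FAISS index at a time
ds.add_faiss_index(column="embeddings", batch_size=2)

query = np.random.rand(128).astype(np.float32)
scores, samples = ds.get_nearest_examples("embeddings", query, k=2)
```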
1,277,897,197
4,534
Add `tldr_news` dataset
closed
[ "Hey @lhoestq, \r\nSorry for opening a PR, I was following the guide [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md)! Thanks for the review anyway, I will follow the instructions you sent πŸ˜ƒ ", "Thanks, we will update the guide ;)" ]
2022-06-21T05:02:43
2022-06-23T14:33:54
2022-06-21T14:21:11
This PR aims at adding support for a news dataset: `tldr news`. This dataset is based on the daily [tldr tech newsletter](https://tldr.tech/newsletter) and contains a `headline` as well as a `content` for every piece of news contained in a newsletter.
JulesBelveze
https://github.com/huggingface/datasets/pull/4534
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4534", "html_url": "https://github.com/huggingface/datasets/pull/4534", "diff_url": "https://github.com/huggingface/datasets/pull/4534.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4534.patch", "merged_at": null }
true
1,277,211,490
4,533
Timestamp not returned as datetime objects in streaming mode
closed
[]
2022-06-20T17:28:47
2022-06-22T16:29:09
2022-06-22T16:29:09
As reported in (internal) https://github.com/huggingface/datasets-server/issues/397 ```python >>> from datasets import load_dataset >>> dataset = load_dataset("ett", name="h2", split="test", streaming=True) >>> d = next(iter(dataset)) >>> d['start'] Timestamp('2016-07-01 00:00:00') ``` while loading in non-streaming mode it returns `datetime.datetime(2016, 7, 1, 0, 0)`
lhoestq
https://github.com/huggingface/datasets/issues/4533
null
false
1,277,167,129
4,532
Add Video feature
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4532). All of your documentation changes will be reflected on that endpoint.", "@nateraw do you have any plans to continue this pr? Or should I write a custom loader script to use my video dataset in the hub?", "@fcakyon I think we still want this feature in here, but my solution here isn't the right one, I'm afraid. Using my (very hacky) library is not the right move. Let's move to an issue to discuss the feature/workarounds for now. " ]
2022-06-20T16:36:41
2022-11-10T16:59:51
2022-11-10T16:59:51
The following adds a `Video` feature for encoding/decoding videos on the fly from in memory bytes. It uses my own `encoded-video` library which is basically `pytorchvideo`'s encoded video but with all the `torch` specific stuff stripped out. Because of that, and because the tool I used under the hood is not very mature, I leave this as a draft idea that we can use to build off of.
nateraw
https://github.com/huggingface/datasets/pull/4532
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4532", "html_url": "https://github.com/huggingface/datasets/pull/4532", "diff_url": "https://github.com/huggingface/datasets/pull/4532.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4532.patch", "merged_at": null }
true
1,277,054,172
4,531
Dataset Viewer issue for CSV datasets
closed
[ "this should now be fixed", "Confirmed, it's fixed now. Thanks for reporting, and thanks @coyotte508 for fixing it\r\n\r\n<img width=\"1123\" alt=\"Capture d’écran 2022-06-21 aΜ€ 10 28 05\" src=\"https://user-images.githubusercontent.com/1676121/174753833-1b453a5a-6a90-4717-bca1-1b5fc6b75e4a.png\">\r\n" ]
2022-06-20T14:56:24
2022-06-21T08:28:46
2022-06-21T08:28:27
### Link https://huggingface.co/datasets/scikit-learn/breast-cancer-wisconsin ### Description I'm populating CSV datasets [here](https://huggingface.co/scikit-learn), but the viewer is not enabled and it looks for a dataset loading script; the datasets aren't in the queue either. You can replicate the problem by simply uploading any CSV dataset. ### Owner Yes
merveenoyan
https://github.com/huggingface/datasets/issues/4531
null
false
1,276,884,962
4,530
Add AudioFolder packaged loader
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "@lhoestq @mariosasko I don't know what to do with the test, do you have any ideas? :)", "also it's passed in `pyarrow_latest_WIN`", "If the error only happens on 3.6, maybe #4460 can help ^^' It seems to work in 3.7 on the windows CI\r\n\r\n> inferring labels is not the default behavior (drop_labels is set to True in config)\r\n\r\nI think it a missed opportunity to have a consistent API between imagefolder and audiofolder, since they do everything the same way. Can you give more details why you think we should drop the labels by default ?", "Considering audio classification in audio is not as common as image classification in image, I'm ok with having different config defaults as long as they are properly documented (check [Papers With Code](https://paperswithcode.com/datasets) for stats and compare the classification numbers to the other tasks, do this for both modalities)\r\n\r\nAlso, WDYT about creating a generic folder loader that ImageFolder and AudioFolder then subclass to avoid having to update both of them when there is something to update/fix?", "@lhoestq I think it doesn't change the API itself, it just doesn't infer labels by default, but you can **still** set `drop_labels=False` to `load_dataset` and the labels will be inferred. \r\nSuppose that one has data structured as follows:\r\n```\r\ndata/\r\n train/\r\n audio/\r\n file1.wav\r\n file2.wav\r\n file3.wav\r\n metadata.jsonl\r\n test/\r\n audio/\r\n file1.wav\r\n file2.wav\r\n file3.wav\r\n metadata.jsonl\r\n```\r\nIf users load this dataset with `load_dataset(\"audiofolder\", data_dir=\"data\")` (the most native way), they will get a `label` feature that will always be equal to 0 (= \"audio\"). To mitigate this, they will have to always specify `load_dataset(\"audiofolder\", data_dir=\"data\", drop_labels=True)` explicitly and I believe it's not convenient. \r\n\r\nAt the same time, `label` column can be added just as easy as adding one argument:` load_dataset(\"audiofolder\", data_dir=\"data\", drop_labels=False)`. As classification task is not as common, I think it should require more symbols to be added to the code :D \r\n\r\nBut this is definitely should be explained in the docs, which I've forgotten to update... I'll add this section soon.\r\n\r\nAlso +to the generic loader, will work on it. \r\n\r\n", "If a metadata.jsonl file is present, then it doesn't have to infer the labels I agree. Note that this is already the case for imagefolder ;) in your case `load_dataset(\"audiofolder\", data_dir=\"data\")` won't return labels !\r\n\r\nLabels are only inferred if there are no metadata.jsonl", "Feel free to merge the `main` branch into yours after updating your fork of `datasets`: https://github.com/huggingface/datasets/issues/4629\r\n\r\nThis should fix some errors in the CI", "@mariosasko could you please review this PR again? :)\r\n\r\nmost of the tests for AutoFolder (base class for AudioFolder and ImageFolder) are now basically copied from Image/AudioFolder (their tests are also almost identical too) and adapted to test other methods. it should be refactored but i think this is not that important for now and might be done in the future PR, wdyt?", "@mariosasko thank you for the review! I'm sorry I accidentally asked for the review again, ignore it." ]
2022-06-20T12:54:02
2022-08-22T14:36:49
2022-08-22T14:20:40
will close #3964 AudioFolder is almost identical to ImageFolder except for inferring labels is not the default behavior (`drop_labels` is set to True in config), the option of inferring them is preserved though. The weird thing is happening with the `test_data_files_with_metadata_and_archives` when `streaming` is `True`. Here is the log from the CI: ``` ../.pyenv/versions/3.6.15/lib/python3.6/site-packages/datasets/features/audio.py:237: in _decode_non_mp3_path_like array, sampling_rate = librosa.load(f, sr=self.sampling_rate, mono=self.mono) ../.pyenv/versions/3.6.15/lib/python3.6/site-packages/librosa/util/decorators.py:88: in inner_f return f(*args, **kwargs) ../.pyenv/versions/3.6.15/lib/python3.6/site-packages/librosa/core/audio.py:176: in load raise (exc) ../.pyenv/versions/3.6.15/lib/python3.6/site-packages/librosa/core/audio.py:155: in load context = sf.SoundFile(path) ../.pyenv/versions/3.6.15/lib/python3.6/site-packages/soundfile.py:629: in __init__ self._file = self._open(file, mode_int, closefd) ../.pyenv/versions/3.6.15/lib/python3.6/site-packages/soundfile.py:1184: in _open "Error opening {0!r}: ".format(self.name)) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ err = 72 prefix = "Error opening <zipfile.ZipExtFile name='audio_file.wav' mode='r' compress_type=deflate>: " def _error_check(err, prefix=""): """Pretty-print a numerical error code if there is an error.""" if err != 0: err_str = _snd.sf_error_number(err) > raise RuntimeError(prefix + _ffi.string(err_str).decode('utf-8', 'replace')) E RuntimeError: Error opening <zipfile.ZipExtFile name='audio_file.wav' mode='r' compress_type=deflate>: Error in WAV file. No 'data' chunk marker. ``` I hadn't been able to reproduce this locally until I created the same test environment (I mean with `pip install .[tests]`) with python3.6. The same env but with python3.8 passes the test! I didn't manage to figure out what's wrong, I also tried simply to replace the test wav file and still got the same error. Versions of `soundfile`, `librosa` and `libsndfile` are identical. Might it be something with zip compression? Sounds weird but I don't have any other ideas... TODO: - [x] align with #4622 - [x] documentation - [x] tests for AutoFolder?
polinaeterna
https://github.com/huggingface/datasets/pull/4530
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4530", "html_url": "https://github.com/huggingface/datasets/pull/4530", "diff_url": "https://github.com/huggingface/datasets/pull/4530.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4530.patch", "merged_at": "2022-08-22T14:20:40" }
true
1,276,729,303
4,529
Ecoset
closed
[ "Hi! Very cool dataset! I answered your questions on the forum. Also, feel free to comment `#self-assign` on this issue to self-assign it.", "The dataset lives on the Hub [here](https://huggingface.co/datasets/kietzmannlab/ecoset), so I'm closing this issue.", "Hey There, thanks for closing πŸ€— \r\n\r\nForgot the issue existed, so I didn't close it after implementing the downloader :)" ]
2022-06-20T10:39:34
2023-10-26T09:12:32
2023-10-04T18:19:52
## Adding a Dataset - **Name:** *Ecoset* - **Description:** *https://www.kietzmannlab.org/ecoset/* - **Paper:** *https://doi.org/10.1073/pnas.2011417118* - **Data:** *https://codeocean.com/capsule/9570390/tree/v1* - **Motivation:** **Ecoset** was created as a clean and ecologically valid alternative to **Imagenet**. It is a large image recognition dataset, similar to Imagenet in size and structure. However, the authors of ecoset claim several improvements over Imagenet, like: - more ecologically valid classes (e.g. not over-focussed on distinguishing different dog breeds) - less NSFW content - 'pre-packed image recognition models' that come with the dataset and can be used for validation of other models. I am working for one of the authors of the paper with the aim of bringing Ecoset to huggingface datasets. Therefore I can work on this issue personally, but could use some help from devs and experienced users if the dataset is of interest to them. I phrased some of my questions on [discuss.huggingface](https://discuss.huggingface.co/t/handling-large-image-datasets/19373).
DiGyt
https://github.com/huggingface/datasets/issues/4529
null
false
1,276,679,155
4,528
Memory leak when iterating a Dataset
closed
[ "Is someone assigned to this issue?", "The same issue is being debugged here: https://github.com/huggingface/datasets/issues/4883\r\n", "Here is a modified repro example that makes it easier to see the leak:\r\n\r\n```\r\n$ cat ds2.py\r\nimport gc, sys\r\nimport time\r\nfrom datasets import load_dataset\r\nimport os, psutil\r\n\r\nprocess = psutil.Process(os.getpid())\r\n\r\nprint(process.memory_info().rss/2**20)\r\n\r\ncorpus = load_dataset(\"BeIR/msmarco\", 'corpus', keep_in_memory=False, streaming=False)['corpus']\r\ncorpus = corpus.select(range(200000))\r\n\r\nprint(process.memory_info().rss/2**20)\r\n\r\nbatch = None\r\n\r\nmem_before_start = psutil.Process(os.getpid()).memory_info().rss / 2**20\r\n\r\nstep = 20000\r\nfor i in range(0, 10*step, step):\r\n mem_before = psutil.Process(os.getpid()).memory_info().rss / 2**20\r\n batch = corpus[i:i+step]\r\n import objgraph\r\n #objgraph.show_refs([batch])\r\n #objgraph.show_refs([corpus])\r\n #sys.exit()\r\n gc.collect()\r\n\r\n mem_after = psutil.Process(os.getpid()).memory_info().rss / 2**20\r\n print(f\"{i:6d} {mem_after - mem_before:12.4f} {mem_after - mem_before_start:12.4f}\")\r\n\r\n```\r\n\r\nLet's run:\r\n\r\n```\r\n$ python ds2.py\r\n 0 36.5391 36.5391\r\n 20000 10.4609 47.0000\r\n 40000 5.9766 52.9766\r\n 60000 7.8906 60.8672\r\n 80000 6.0586 66.9258\r\n100000 8.4453 75.3711\r\n120000 6.7422 82.1133\r\n140000 8.5664 90.6797\r\n160000 5.7344 96.4141\r\n180000 8.3398 104.7539\r\n```\r\n\r\nYou can see the last column of total RSS memory keeps on growing in MBs. The mid column is by how much it was grown during a single iteration of the repro script (20000 items)", "@NouamaneTazi, please check my analysis here https://github.com/huggingface/datasets/issues/4883#issuecomment-1242599722 so if you agree with my research this Issue can be closed as well.\r\n\r\nI also made a suggestion at how to proceed to hunt for a real leak here https://github.com/huggingface/datasets/issues/4883#issuecomment-1242600626\r\n\r\nyou may find this one to be useful as well https://github.com/huggingface/datasets/issues/4883#issuecomment-1242597966", "Amazing job! Thanks for taking time to debug this πŸ€—\r\n\r\nFor my side, I tried to do some more research as well, but to no avail. https://github.com/huggingface/datasets/issues/4883#issuecomment-1243415957" ]
2022-06-20T10:03:14
2022-09-12T08:51:39
2022-09-12T08:51:39
e## Describe the bug It seems that memory never gets freed after iterating a `Dataset` (using `.map()` or a simple `for` loop) ## Steps to reproduce the bug ```python import gc import logging import time import pyarrow from datasets import load_dataset from tqdm import trange import os, psutil logging.basicConfig(level=logging.INFO) logger = logging.getLogger(__name__) process = psutil.Process(os.getpid()) print(process.memory_info().rss) # output: 633507840 bytes corpus = load_dataset("BeIR/msmarco", 'corpus', keep_in_memory=False, streaming=False)['corpus'] # or "BeIR/trec-covid" for a smaller dataset print(process.memory_info().rss) # output: 698601472 bytes logger.info("Applying method to all examples in all splits") for i in trange(0, len(corpus), 1000): batch = corpus[i:i+1000] data = pyarrow.total_allocated_bytes() if data > 0: logger.info(f"{i}/{len(corpus)}: {data}") print(process.memory_info().rss) # output: 3788247040 bytes del batch gc.collect() print(process.memory_info().rss) # output: 3788247040 bytes logger.info("Done...") time.sleep(100) ``` ## Expected results Limited memory usage, and memory to be freed after processing ## Actual results Memory leak ![test](https://user-images.githubusercontent.com/29777165/174578276-f2c37e6c-b5d8-4985-b4d8-8413eb2b3241.png) You can see how the memory allocation keeps increasing until it reaches a steady state when we hit the `time.sleep(100)`, which showcases that even the garbage collector couldn't free the allocated memory ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.3.2 - Platform: Linux-5.4.0-90-generic-x86_64-with-glibc2.31 - Python version: 3.9.7 - PyArrow version: 8.0.0 - Pandas version: 1.4.2
NouamaneTazi
https://github.com/huggingface/datasets/issues/4528
null
false
1,276,583,536
4,527
Dataset Viewer issue for vadis/sv-ident
closed
[ "Fixed, thanks!\r\n![Uploading Capture d’écran 2022-06-21 aΜ€ 18.42.40.png…]()\r\n\r\n" ]
2022-06-20T08:47:42
2022-06-21T16:42:46
2022-06-21T16:42:45
### Link https://huggingface.co/datasets/vadis/sv-ident ### Description The dataset preview does not work: ``` Server Error Status code: 400 Exception: Status400Error Message: The dataset does not exist. ``` However, the dataset is streamable and works locally: ```python In [1]: from datasets import load_dataset; ds = load_dataset("sv-ident.py", split="train", streaming=True); item = next(iter(ds)); item Using custom data configuration default Out[1]: {'sentence': 'Our point, however, is that so long as downward (favorable) comparisons overwhelm the potential for unfavorable comparisons, system justification should be a likely outcome amongst the disadvantaged.', 'is_variable': 1, 'variable': ['exploredata-ZA5400_VarV66', 'exploredata-ZA5400_VarV53'], 'research_data': ['ZA5400'], 'doc_id': '73106', 'uuid': 'b9fbb80f-3492-4b42-b9d5-0254cc33ac10', 'lang': 'en'} ``` CC: @e-tornike ### Owner No
albertvillanova
https://github.com/huggingface/datasets/issues/4527
null
false
1,276,580,185
4,526
split cache used when processing different split
open
[ "I was not able to reproduce this behavior (I tried without using pytorch lightning though, since I don't know what code you ran in pytorch lightning to get this).\r\n\r\nIf you can provide a MWE that would be perfect ! :)", "Hi, I think the issue happened because I was loading datasets under an `if` ... `else` statement and the condition would change the dataset I would need to load but instead the cached one was always returned. However, I believe that is expected behaviour, if so I'll close the issue.\r\n\r\nOtherwise I will try to provide a MWE" ]
2022-06-20T08:44:58
2022-06-28T14:04:58
null
## Describe the bug ``` ds1 = load_dataset('squad', split='validation') ds2 = load_dataset('squad', split='train') ds1 = ds1.map(some_function) ds2 = ds2.map(some_function) assert ds1 == ds2 ``` This happens when ds1 and ds2 are created in `pytorch_lightning.DataModule` through ``` class myDataModule: def train_dataloader(self): ds = load_dataset('squad', split='train') ds = ds.map(some_function) return [ds] def val_dataloader(self): ds = load_dataset('squad', split="validation") ds = ds.map(some_function) return [ds] ``` I don't know if it depends on `pytorch_lightning` or `datasets`, but setting `ds.map(some_function, load_from_cache_file=False)` fixes the issue. If this is not enough to replicate it, I will try to provide an MWE; I don't have time now, so I thought I would open the issue first!
gpucce
https://github.com/huggingface/datasets/issues/4526
null
false
1,276,491,386
4,525
Out of memory error on workers while running Beam+Dataflow
closed
[ "Some naive ideas to cope with this:\r\n- enable more RAM on each worker\r\n- force the spanning of more workers\r\n- others?", "@albertvillanova We were finally able to process the full NQ dataset on our machines using 600 gb with 5 workers. Maybe these numbers will work for you as well.", "Thanks a lot for the hint, @seirasto.\r\n\r\nI have one question: what runner did you use? Direct, Apache Flink/Nemo/Samza/Spark, Google Dataflow...? Thank you.", "I asked my colleague who ran the code and he said apache beam.", "@albertvillanova Since we have already processed the NQ dataset on our machines can we upload it to datasets so the NQ PR can be merged?", "Maybe @lhoestq can give a more accurate answer as I am not sure about the authentication requirements to upload those files to our cloud bucket.\r\n\r\nAnyway I propose to continue this discussion on the dedicated PR for Natural questions dataset:\r\n- #4368", "> I asked my colleague who ran the code and he said apache beam.\r\n\r\nHe looked into it further and he just used DirectRunner. @albertvillanova ", "OK, thank you @seirasto for your hint.\r\n\r\nThat explains why you did not encounter the out of memory error: this only appears when the processing is distributed (on workers memory) and DirectRunner does not distribute the processing (all is done in a single machine). ", "@albertvillanova Doesn't DirectRunner offer distributed processing through?\r\n\r\nhttps://beam.apache.org/documentation/runners/direct/\r\n\r\n```\r\nSetting parallelism\r\n\r\nNumber of threads or subprocesses is defined by setting the direct_num_workers pipeline option. From 2.22.0, direct_num_workers = 0 is supported. When direct_num_workers is set to 0, it will set the number of threads/subprocess to the number of cores of the machine where the pipeline is running.\r\n\r\nSetting running mode\r\n\r\nIn Beam 2.19.0 and newer, you can use the direct_running_mode pipeline option to set the running mode. direct_running_mode can be one of ['in_memory', 'multi_threading', 'multi_processing'].\r\n\r\nin_memory: Runner and workers’ communication happens in memory (not through gRPC). This is a default mode.\r\n\r\nmulti_threading: Runner and workers communicate through gRPC and each worker runs in a thread.\r\n\r\nmulti_processing: Runner and workers communicate through gRPC and each worker runs in a subprocess.\r\n```", "Unrelated to the OOM issue, but we deprecated datasets with Beam scripts in #6474. I think we can close this issue" ]
2022-06-20T07:28:12
2024-10-09T16:09:50
2024-10-09T16:09:50
## Describe the bug While running the preprocessing of the natural_question dataset (see PR #4368), there is an issue for the "default" config (train+dev files). Previously we ran the preprocessing for the "dev" config (only dev files) with success. Train data files are larger than dev ones and apparently workers run out of memory while processing them. Any help/hint is welcome! Error message: ``` Data channel closed, unable to receive additional data from SDK sdk-0-0 ``` Info from the Diagnostics tab: ``` Out of memory: Killed process 1882 (python) total-vm:6041764kB, anon-rss:3290928kB, file-rss:0kB, shmem-rss:0kB, UID:0 pgtables:9520kB oom_score_adj:900 The worker VM had to shut down one or more processes due to lack of memory. ``` ## Additional information ### Stack trace ``` Traceback (most recent call last): File "/home/albert_huggingface_co/natural_questions/venv/bin/datasets-cli", line 8, in <module> sys.exit(main()) File "/home/albert_huggingface_co/natural_questions/venv/lib/python3.9/site-packages/datasets/commands/datasets_cli.py", line 39, in main service.run() File "/home/albert_huggingface_co/natural_questions/venv/lib/python3.9/site-packages/datasets/commands/run_beam.py", line 127, in run builder.download_and_prepare( File "/home/albert_huggingface_co/natural_questions/venv/lib/python3.9/site-packages/datasets/builder.py", line 704, in download_and_prepare self._download_and_prepare( File "/home/albert_huggingface_co/natural_questions/venv/lib/python3.9/site-packages/datasets/builder.py", line 1389, in _download_and_prepare pipeline_results.wait_until_finish() File "/home/albert_huggingface_co/natural_questions/venv/lib/python3.9/site-packages/apache_beam/runners/dataflow/dataflow_runner.py", line 1667, in wait_until_finish raise DataflowRuntimeException( apache_beam.runners.dataflow.dataflow_runner.DataflowRuntimeException: Dataflow pipeline failed. State: FAILED, Error: Data channel closed, unable to receive additional data from SDK sdk-0-0 ``` ### Logs ``` Error message from worker: Data channel closed, unable to receive additional data from SDK sdk-0-0 Workflow failed. Causes: S30:train/ReadAllFromText/ReadAllFiles/Reshard/ReshufflePerKey/GroupByKey/Read+train/ReadAllFromText/ReadAllFiles/Reshard/ReshufflePerKey/GroupByKey/GroupByWindow+train/ReadAllFromText/ReadAllFiles/Reshard/ReshufflePerKey/FlatMap(restore_timestamps)+train/ReadAllFromText/ReadAllFiles/Reshard/RemoveRandomKeys+train/ReadAllFromText/ReadAllFiles/ReadRange+train/Map(_parse_example)+train/Encode+train/Count N. Examples+train/Get values/Values+train/Save to parquet/Write/WriteImpl/WindowInto(WindowIntoFn)+train/Save to parquet/Write/WriteImpl/WriteBundles+train/Save to parquet/Write/WriteImpl/Pair+train/Save to parquet/Write/WriteImpl/GroupByKey/Write failed., The job failed because a work item has failed 4 times. Look in previous log entries for the cause of each one of the 4 failures. For more information, see https://cloud.google.com/dataflow/docs/guides/common-errors. The work item was attempted on these workers: beamapp-alberthuggingface-06170554-5p23-harness-t4v9 Root cause: Data channel closed, unable to receive additional data from SDK sdk-0-0, beamapp-alberthuggingface-06170554-5p23-harness-t4v9 Root cause: The worker lost contact with the service., beamapp-alberthuggingface-06170554-5p23-harness-bwsj Root cause: The worker lost contact with the service., beamapp-alberthuggingface-06170554-5p23-harness-5052 Root cause: The worker lost contact with the service. ```
albertvillanova
https://github.com/huggingface/datasets/issues/4525
null
false
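A rough sketch of the DirectRunner workaround discussed in the comments above: the Beam pipeline options quoted from the Beam docs can be passed to `load_dataset` via `beam_options` (the dataset name, worker count and running mode below are placeholder choices, not a tested configuration).

```python
from apache_beam.options.pipeline_options import PipelineOptions
from datasets import load_dataset

# Illustrative only: run the Beam-based preprocessing locally with the
# DirectRunner, using subprocess workers instead of a distributed runner.
beam_options = PipelineOptions(
    direct_num_workers=4,                    # 0 = one worker per CPU core
    direct_running_mode="multi_processing",  # workers run as subprocesses
)

ds = load_dataset(
    "natural_questions",
    beam_runner="DirectRunner",
    beam_options=beam_options,
)
```

Note that the DirectRunner keeps everything on a single machine, so the roughly 600 GB reported in the comments for the full Natural Questions data still has to fit on that one machine.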
1,275,909,186
4,524
Downloading via Apache Pipeline, client cancelled (org.apache.beam.vendor.grpc.v1p43p2.io.grpc.StatusRuntimeException)
open
[ "Hi @dan-the-meme-man, thanks for reporting.\r\n\r\nWe are investigating a similar issue but with Beam+Dataflow (instead of Beam+Flink): \r\n- #4525\r\n\r\nIn order to go deeper into the root cause, we need as much information as possible: logs from the main process + logs from the workers are very informative.\r\n\r\nIn the case of the issue with Beam+Dataflow, the logs from the workers report an out of memory issue.", "As I continued working on this today, I came to suspect that it is in fact an out of memory issue - I have a few more notebooks that I've left running, and if they produce the same error, I will try to get the logs. In the meantime, if there's any chance that there is a repo out there with those three languages already as .arrow files, or if you know about how much memory would be needed to actually download those sets, please let me know!" ]
2022-06-18T23:36:45
2022-06-21T00:38:20
null
## Describe the bug When downloading some `wikipedia` languages (in particular, I'm having a hard time with Spanish, Cebuano, and Russian) via FlinkRunner, I encounter the exception in the title. I have been playing with package versions a lot, because unfortunately, the different dependencies required by these packages seem to be incompatible in terms of versions (dill and requests, for instance). It should be noted that the following code runs for several hours without issue, executing the `load_dataset()` function, before the exception occurs. ## Steps to reproduce the bug ```python # bash commands !pip install datasets !pip install apache-beam[interactive] !pip install mwparserfromhell !pip install dill==0.3.5.1 !pip install requests==2.23.0 # imports import os from datasets import load_dataset import apache_beam as beam import mwparserfromhell from google.colab import drive import dill import requests # mount drive drive_dir = os.path.join(os.getcwd(), 'drive') drive.mount(drive_dir) # confirming the versions of these two packages are the ones that are suggested by the outputs from the bash commands print(dill.__version__) print(requests.__version__) lang = 'es' # or 'ru' or 'ceb' - these are the ones causing the issue lang_dir = os.path.join(drive_dir, 'path/to/my/folder', lang) if not os.path.exists(lang_dir): x = None x = load_dataset('wikipedia', '20220301.' + lang, beam_runner='Flink', split='train') x.save_to_disk(lang_dir) ``` ## Expected results Although some warnings are generally produced by this code (run in Colab Notebook), most languages I've tried have been successfully downloaded. It should simply go through without issue, but for these languages, I am continually encountering this error. ## Actual results Traceback below: ``` Exception in thread run_worker_3-1: Traceback (most recent call last): File "/usr/lib/python3.7/threading.py", line 926, in _bootstrap_inner self.run() File "/usr/lib/python3.7/threading.py", line 870, in run self._target(*self._args, **self._kwargs) File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 234, in run for work_request in self._control_stub.Control(get_responses()): File "/usr/local/lib/python3.7/dist-packages/grpc/_channel.py", line 426, in __next__ return self._next() File "/usr/local/lib/python3.7/dist-packages/grpc/_channel.py", line 826, in _next raise self grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with: status = StatusCode.UNAVAILABLE details = "Socket closed" debug_error_string = "{"created":"@1655593643.871830638","description":"Error received from peer ipv4:127.0.0.1:44441","file":"src/core/lib/surface/call.cc","file_line":952,"grpc_message":"Socket closed","grpc_status":14}" > Traceback (most recent call last): File "apache_beam/runners/common.py", line 1198, in apache_beam.runners.common.DoFnRunner.process File "apache_beam/runners/common.py", line 718, in apache_beam.runners.common.PerWindowInvoker.invoke_process File "apache_beam/runners/common.py", line 782, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/bundle_processor.py", line 426, in __getitem__ self._cache[target_window] = self._side_input_data.view_fn(raw_view) File "/usr/local/lib/python3.7/dist-packages/apache_beam/pvalue.py", line 391, in <lambda> lambda iterable: from_runtime_iterable(iterable, view_options)) File "/usr/local/lib/python3.7/dist-packages/apache_beam/pvalue.py", 
line 512, in _from_runtime_iterable head = list(itertools.islice(it, 2)) File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1228, in _lazy_iterator self._underlying.get_raw(state_key, continuation_token)) File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1019, in get_raw continuation_token=continuation_token))) File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1060, in _blocking_request raise RuntimeError(response.error) RuntimeError: Unknown process bundle instruction id '26' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 267, in _execute response = task() File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 340, in <lambda> lambda: self.create_worker().do_instruction(request), request) File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 581, in do_instruction getattr(request, request_type), request.instruction_id) File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 618, in process_bundle bundle_processor.process_bundle(instruction_id)) File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/bundle_processor.py", line 996, in process_bundle element.data) File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/bundle_processor.py", line 221, in process_encoded self.output(decoded_value) File "apache_beam/runners/worker/operations.py", line 346, in apache_beam.runners.worker.operations.Operation.output File "apache_beam/runners/worker/operations.py", line 348, in apache_beam.runners.worker.operations.Operation.output File "apache_beam/runners/worker/operations.py", line 215, in apache_beam.runners.worker.operations.SingletonConsumerSet.receive File "apache_beam/runners/worker/operations.py", line 707, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam/runners/worker/operations.py", line 708, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam/runners/common.py", line 1200, in apache_beam.runners.common.DoFnRunner.process File "apache_beam/runners/common.py", line 1281, in apache_beam.runners.common.DoFnRunner._reraise_augmented File "apache_beam/runners/common.py", line 1198, in apache_beam.runners.common.DoFnRunner.process File "apache_beam/runners/common.py", line 718, in apache_beam.runners.common.PerWindowInvoker.invoke_process File "apache_beam/runners/common.py", line 782, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/bundle_processor.py", line 426, in __getitem__ self._cache[target_window] = self._side_input_data.view_fn(raw_view) File "/usr/local/lib/python3.7/dist-packages/apache_beam/pvalue.py", line 391, in <lambda> lambda iterable: from_runtime_iterable(iterable, view_options)) File "/usr/local/lib/python3.7/dist-packages/apache_beam/pvalue.py", line 512, in _from_runtime_iterable head = list(itertools.islice(it, 2)) File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1228, in _lazy_iterator self._underlying.get_raw(state_key, continuation_token)) File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1019, in get_raw 
continuation_token=continuation_token))) File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1060, in _blocking_request raise RuntimeError(response.error) RuntimeError: Unknown process bundle instruction id '26' [while running 'train/Save to parquet/Write/WriteImpl/WriteBundles'] ERROR:apache_beam.runners.worker.sdk_worker:Error processing instruction 26. Original traceback is Traceback (most recent call last): File "apache_beam/runners/common.py", line 1198, in apache_beam.runners.common.DoFnRunner.process File "apache_beam/runners/common.py", line 718, in apache_beam.runners.common.PerWindowInvoker.invoke_process File "apache_beam/runners/common.py", line 782, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/bundle_processor.py", line 426, in __getitem__ self._cache[target_window] = self._side_input_data.view_fn(raw_view) File "/usr/local/lib/python3.7/dist-packages/apache_beam/pvalue.py", line 391, in <lambda> lambda iterable: from_runtime_iterable(iterable, view_options)) File "/usr/local/lib/python3.7/dist-packages/apache_beam/pvalue.py", line 512, in _from_runtime_iterable head = list(itertools.islice(it, 2)) File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1228, in _lazy_iterator self._underlying.get_raw(state_key, continuation_token)) File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1019, in get_raw continuation_token=continuation_token))) File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1060, in _blocking_request raise RuntimeError(response.error) RuntimeError: Unknown process bundle instruction id '26' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 267, in _execute response = task() File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 340, in <lambda> lambda: self.create_worker().do_instruction(request), request) File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 581, in do_instruction getattr(request, request_type), request.instruction_id) File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 618, in process_bundle bundle_processor.process_bundle(instruction_id)) File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/bundle_processor.py", line 996, in process_bundle element.data) File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/bundle_processor.py", line 221, in process_encoded self.output(decoded_value) File "apache_beam/runners/worker/operations.py", line 346, in apache_beam.runners.worker.operations.Operation.output File "apache_beam/runners/worker/operations.py", line 348, in apache_beam.runners.worker.operations.Operation.output File "apache_beam/runners/worker/operations.py", line 215, in apache_beam.runners.worker.operations.SingletonConsumerSet.receive File "apache_beam/runners/worker/operations.py", line 707, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam/runners/worker/operations.py", line 708, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam/runners/common.py", line 1200, in apache_beam.runners.common.DoFnRunner.process File 
"apache_beam/runners/common.py", line 1281, in apache_beam.runners.common.DoFnRunner._reraise_augmented File "apache_beam/runners/common.py", line 1198, in apache_beam.runners.common.DoFnRunner.process File "apache_beam/runners/common.py", line 718, in apache_beam.runners.common.PerWindowInvoker.invoke_process File "apache_beam/runners/common.py", line 782, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/bundle_processor.py", line 426, in __getitem__ self._cache[target_window] = self._side_input_data.view_fn(raw_view) File "/usr/local/lib/python3.7/dist-packages/apache_beam/pvalue.py", line 391, in <lambda> lambda iterable: from_runtime_iterable(iterable, view_options)) File "/usr/local/lib/python3.7/dist-packages/apache_beam/pvalue.py", line 512, in _from_runtime_iterable head = list(itertools.islice(it, 2)) File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1228, in _lazy_iterator self._underlying.get_raw(state_key, continuation_token)) File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1019, in get_raw continuation_token=continuation_token))) File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1060, in _blocking_request raise RuntimeError(response.error) RuntimeError: Unknown process bundle instruction id '26' [while running 'train/Save to parquet/Write/WriteImpl/WriteBundles'] ERROR:root:org.apache.beam.vendor.grpc.v1p43p2.io.grpc.StatusRuntimeException: CANCELLED: client cancelled ERROR:apache_beam.runners.worker.data_plane:Failed to read inputs in the data plane. Traceback (most recent call last): File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/data_plane.py", line 634, in _read_inputs for elements in elements_iterator: File "/usr/local/lib/python3.7/dist-packages/grpc/_channel.py", line 426, in __next__ return self._next() File "/usr/local/lib/python3.7/dist-packages/grpc/_channel.py", line 826, in _next raise self grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with: status = StatusCode.CANCELLED details = "Multiplexer hanging up" debug_error_string = "{"created":"@1655593654.436885887","description":"Error received from peer ipv4:127.0.0.1:43263","file":"src/core/lib/surface/call.cc","file_line":952,"grpc_message":"Multiplexer hanging up","grpc_status":1}" > Exception in thread read_grpc_client_inputs: Traceback (most recent call last): File "/usr/lib/python3.7/threading.py", line 926, in _bootstrap_inner self.run() File "/usr/lib/python3.7/threading.py", line 870, in run self._target(*self._args, **self._kwargs) File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/data_plane.py", line 651, in <lambda> target=lambda: self._read_inputs(elements_iterator), File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/data_plane.py", line 634, in _read_inputs for elements in elements_iterator: File "/usr/local/lib/python3.7/dist-packages/grpc/_channel.py", line 426, in __next__ return self._next() File "/usr/local/lib/python3.7/dist-packages/grpc/_channel.py", line 826, in _next raise self grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with: status = StatusCode.CANCELLED details = "Multiplexer hanging up" debug_error_string = "{"created":"@1655593654.436885887","description":"Error received from peer 
ipv4:127.0.0.1:43263","file":"src/core/lib/surface/call.cc","file_line":952,"grpc_message":"Multiplexer hanging up","grpc_status":1}" > --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) [/tmp/ipykernel_219/3869142325.py](https://localhost:8080/#) in <module> 18 x = None 19 x = load_dataset('wikipedia', '20220301.' + lang, beam_runner='Flink', ---> 20 split='train') 21 x.save_to_disk(lang_dir) 3 frames [/usr/local/lib/python3.7/dist-packages/apache_beam/runners/portability/portable_runner.py](https://localhost:8080/#) in wait_until_finish(self, duration) 604 605 if self._runtime_exception: --> 606 raise self._runtime_exception 607 608 return self._state RuntimeError: Pipeline BeamApp-root-0618220708-b3b59a0e_d8efcf67-9119-4f76-b013-70de7b29b54d failed in state FAILED: org.apache.beam.vendor.grpc.v1p43p2.io.grpc.StatusRuntimeException: CANCELLED: client cancelled ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.3.2 - Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - PyArrow version: 6.0.1 - Pandas version: 1.3.5
ddegenaro
https://github.com/huggingface/datasets/issues/4524
null
false
1,275,002,639
4,523
Update download url and improve card of `cats_vs_dogs` dataset
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-06-17T12:59:44
2022-06-21T14:23:26
2022-06-21T14:13:08
Improve the download URL (reported here: https://huggingface.co/datasets/cats_vs_dogs/discussions/1), remove the `image_file_path` column (not used in Transformers, so it should be safe) and add more info to the card.
mariosasko
https://github.com/huggingface/datasets/pull/4523
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4523", "html_url": "https://github.com/huggingface/datasets/pull/4523", "diff_url": "https://github.com/huggingface/datasets/pull/4523.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4523.patch", "merged_at": "2022-06-21T14:13:08" }
true
1,274,929,328
4,522
Try to reduce the number of datasets that require manual download
open
[]
2022-06-17T11:42:03
2022-06-17T11:52:48
null
> Currently, 41 canonical datasets require manual download. I checked their scripts and I'm pretty sure this number can be reduced to β‰ˆ 30 by not relying on bash scripts to download data, hosting data directly on the Hub when the license permits, etc. Then, we will mostly be left with datasets with restricted access, which we can ignore from https://github.com/huggingface/datasets-server/issues/12#issuecomment-1026920432
severo
https://github.com/huggingface/datasets/issues/4522
null
false
1,274,919,437
4,521
Datasets method `.map` not hashing
closed
[ "Fix posted: https://github.com/huggingface/datasets/issues/4506#issuecomment-1157417219", "Didn't realize it's a bug when I asked the question yesterday! Free free to post an answer if you are sure the cause has been addressed.\r\n\r\nhttps://stackoverflow.com/questions/72664827/can-pickle-dill-foo-but-not-lambda-x-foox", "Thank @nalzok . That works for me:\r\n\r\n`pip install \"dill<0.3.5\"`" ]
2022-06-17T11:31:10
2022-08-04T12:08:16
2022-06-28T13:23:05
## Describe the bug Datasets method `.map` not hashing, even with an empty no-op function ## Steps to reproduce the bug ```python from datasets import load_dataset # download 9MB dummy dataset ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean") def prepare_dataset(batch): return(batch) ds = ds.map( prepare_dataset, num_proc=1, desc="preprocess train dataset", ) ``` ## Expected results Hashed and cached dataset preprocessing ## Actual results Does not hash properly: ``` Parameter 'function'=<function prepare_dataset at 0x7fccb68e9280> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed. ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.3.3.dev0 - Platform: Linux-5.11.0-1028-gcp-x86_64-with-glibc2.31 - Python version: 3.9.12 - PyArrow version: 8.0.0 - Pandas version: 1.4.2 cc @lhoestq
sanchit-gandhi
https://github.com/huggingface/datasets/issues/4521
null
false
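As a quick way to see whether this hashing problem affects a given setup, the fingerprinting helper that `datasets` uses internally can be called directly (a minimal sketch; `prepare_dataset` below is just a stand-in for a real preprocessing function):

```python
from datasets.fingerprint import Hasher

def prepare_dataset(batch):
    # a no-op stand-in for a real preprocessing function
    return batch

# If fingerprinting works, this prints the same hex digest in every new
# Python session; with the broken dill/Python combination the digest changes
# from run to run, which is what breaks .map() caching.
print(Hasher.hash(prepare_dataset))
```

Until the fix is released, the workaround from the comments is to pin `dill<0.3.5`, e.g. `pip install datasets "dill<0.3.5"` in a clean environment.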
1,274,879,180
4,520
Failure to hash `dataclasses` - results in functions that cannot be hashed or cached in `.map`
closed
[ "I think this has been fixed by #4516, let me know if you encounter this again :)\r\n\r\nI re-ran your code in 3.7 and 3.9 and it works fine", "Thank you!" ]
2022-06-17T10:47:17
2022-06-28T14:47:17
2022-06-28T14:04:29
Dataclasses cannot be hashed. As a result, they cannot be hashed or cached if used in the `.map` method. Dataclasses are used extensively in Transformers examples scripts: (c.f. [CTC example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py)). Since dataclasses cannot be hashed, one has to define separate variables prior to passing dataclass attributes to the `.map` method: ```python phoneme_language = data_args.phoneme_language ``` in the example https://github.com/huggingface/transformers/blob/3c7e56fbb11f401de2528c1dcf0e282febc031cd/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py#L603-L630 ## Steps to reproduce the bug ```python from dataclasses import dataclass, field from datasets.fingerprint import Hasher @dataclass class DataTrainingArguments: """ Arguments pertaining to what data we are going to input our model for training and eval. """ phoneme_language: str = field( default=None, metadata={"help": "The name of the phoneme language to use."} ) data_args = DataTrainingArguments(phoneme_language ="foo") Hasher.hash(data_args) phoneme_language = data_args.phoneme_language Hasher.hash(phoneme_language) ``` ## Expected results A hash. ## Actual results <details> <summary> Traceback </summary> ``` --------------------------------------------------------------------------- KeyError Traceback (most recent call last) Input In [1], in <cell line: 16>() 10 phoneme_language: str = field( 11 default=None, metadata={"help": "The name of the phoneme language to use."} 12 ) 14 data_args = DataTrainingArguments(phoneme_language ="foo") ---> 16 Hasher.hash(data_args) 18 phoneme_language = data_args. phoneme_language 20 Hasher.hash(phoneme_language) File ~/datasets/src/datasets/fingerprint.py:237, in Hasher.hash(cls, value) 235 return cls.dispatch[type(value)](cls, value) 236 else: --> 237 return cls.hash_default(value) File ~/datasets/src/datasets/fingerprint.py:230, in Hasher.hash_default(cls, value) 228 @classmethod 229 def hash_default(cls, value: Any) -> str: --> 230 return cls.hash_bytes(dumps(value)) File ~/datasets/src/datasets/utils/py_utils.py:564, in dumps(obj) 562 file = StringIO() 563 with _no_cache_fields(obj): --> 564 dump(obj, file) 565 return file.getvalue() File ~/datasets/src/datasets/utils/py_utils.py:539, in dump(obj, file) 537 def dump(obj, file): 538 """pickle an object to a file""" --> 539 Pickler(file, recurse=True).dump(obj) 540 return File ~/hf/lib/python3.8/site-packages/dill/_dill.py:620, in Pickler.dump(self, obj) 618 raise PicklingError(msg) 619 else: --> 620 StockPickler.dump(self, obj) 621 return File /usr/lib/python3.8/pickle.py:487, in _Pickler.dump(self, obj) 485 if self.proto >= 4: 486 self.framer.start_framing() --> 487 self.save(obj) 488 self.write(STOP) 489 self.framer.end_framing() File /usr/lib/python3.8/pickle.py:603, in _Pickler.save(self, obj, save_persistent_id) 599 raise PicklingError("Tuple returned by %s must have " 600 "two to six elements" % reduce) 602 # Save the reduce() output and finally memoize the object --> 603 self.save_reduce(obj=obj, *rv) File /usr/lib/python3.8/pickle.py:687, in _Pickler.save_reduce(self, func, args, state, listitems, dictitems, state_setter, obj) 684 raise PicklingError( 685 "args[0] from __newobj__ args has the wrong class") 686 args = args[1:] --> 687 save(cls) 688 save(args) 689 write(NEWOBJ) File /usr/lib/python3.8/pickle.py:560, in _Pickler.save(self, obj, save_persistent_id) 558 f = self.dispatch.get(t) 559 if f is not None: 
--> 560 f(self, obj) # Call unbound method with explicit self 561 return 563 # Check private dispatch table if any, or else 564 # copyreg.dispatch_table File ~/hf/lib/python3.8/site-packages/dill/_dill.py:1838, in save_type(pickler, obj, postproc_list) 1836 postproc_list = [] 1837 postproc_list.append((setattr, (obj, '__qualname__', obj_name))) -> 1838 _save_with_postproc(pickler, (_create_type, ( 1839 type(obj), obj.__name__, obj.__bases__, _dict 1840 )), obj=obj, postproc_list=postproc_list) 1841 log.info("# %s" % _t) 1842 else: File ~/hf/lib/python3.8/site-packages/dill/_dill.py:1140, in _save_with_postproc(pickler, reduction, is_pickler_dill, obj, postproc_list) 1137 pickler._postproc[id(obj)] = postproc_list 1139 # TODO: Use state_setter in Python 3.8 to allow for faster cPickle implementations -> 1140 pickler.save_reduce(*reduction, obj=obj) 1142 if is_pickler_dill: 1143 # pickler.x -= 1 1144 # print(pickler.x*' ', 'pop', obj, id(obj)) 1145 postproc = pickler._postproc.pop(id(obj)) File /usr/lib/python3.8/pickle.py:692, in _Pickler.save_reduce(self, func, args, state, listitems, dictitems, state_setter, obj) 690 else: 691 save(func) --> 692 save(args) 693 write(REDUCE) 695 if obj is not None: 696 # If the object is already in the memo, this means it is 697 # recursive. In this case, throw away everything we put on the 698 # stack, and fetch the object back from the memo. File /usr/lib/python3.8/pickle.py:560, in _Pickler.save(self, obj, save_persistent_id) 558 f = self.dispatch.get(t) 559 if f is not None: --> 560 f(self, obj) # Call unbound method with explicit self 561 return 563 # Check private dispatch table if any, or else 564 # copyreg.dispatch_table File /usr/lib/python3.8/pickle.py:901, in _Pickler.save_tuple(self, obj) 899 write(MARK) 900 for element in obj: --> 901 save(element) 903 if id(obj) in memo: 904 # Subtle. d was not in memo when we entered save_tuple(), so 905 # the process of saving the tuple's elements must have saved (...) 909 # could have been done in the "for element" loop instead, but 910 # recursive tuples are a rare thing. 
911 get = self.get(memo[id(obj)][0]) File /usr/lib/python3.8/pickle.py:560, in _Pickler.save(self, obj, save_persistent_id) 558 f = self.dispatch.get(t) 559 if f is not None: --> 560 f(self, obj) # Call unbound method with explicit self 561 return 563 # Check private dispatch table if any, or else 564 # copyreg.dispatch_table File ~/hf/lib/python3.8/site-packages/dill/_dill.py:1251, in save_module_dict(pickler, obj) 1248 if is_dill(pickler, child=False) and pickler._session: 1249 # we only care about session the first pass thru 1250 pickler._first_pass = False -> 1251 StockPickler.save_dict(pickler, obj) 1252 log.info("# D2") 1253 return File /usr/lib/python3.8/pickle.py:971, in _Pickler.save_dict(self, obj) 968 self.write(MARK + DICT) 970 self.memoize(obj) --> 971 self._batch_setitems(obj.items()) File /usr/lib/python3.8/pickle.py:997, in _Pickler._batch_setitems(self, items) 995 for k, v in tmp: 996 save(k) --> 997 save(v) 998 write(SETITEMS) 999 elif n: File /usr/lib/python3.8/pickle.py:560, in _Pickler.save(self, obj, save_persistent_id) 558 f = self.dispatch.get(t) 559 if f is not None: --> 560 f(self, obj) # Call unbound method with explicit self 561 return 563 # Check private dispatch table if any, or else 564 # copyreg.dispatch_table File ~/datasets/src/datasets/utils/py_utils.py:862, in save_function(pickler, obj) 859 if state_dict: 860 state = state, state_dict --> 862 dill._dill._save_with_postproc( 863 pickler, 864 ( 865 dill._dill._create_function, 866 (obj.__code__, globs, obj.__name__, obj.__defaults__, closure), 867 state, 868 ), 869 obj=obj, 870 postproc_list=postproc_list, 871 ) 872 else: 873 closure = obj.func_closure File ~/hf/lib/python3.8/site-packages/dill/_dill.py:1153, in _save_with_postproc(pickler, reduction, is_pickler_dill, obj, postproc_list) 1151 dest, source = reduction[1] 1152 if source: -> 1153 pickler.write(pickler.get(pickler.memo[id(dest)][0])) 1154 pickler._batch_setitems(iter(source.items())) 1155 else: 1156 # Updating with an empty dictionary. Same as doing nothing. KeyError: 140434581781568 ``` </details> ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.3.3.dev0 - Platform: Linux-5.11.0-1028-gcp-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 8.0.0 - Pandas version: 1.4.2 cc @lhoestq
sanchit-gandhi
https://github.com/huggingface/datasets/issues/4520
null
false
1,274,110,623
4,519
Create new sections for audio and vision in guides
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "Ready for review!\r\n\r\nThe `toctree` is a bit longer now with the sections. I think if we keep the audio/vision/text/dataset repository sections collapsed by default, and keep the general usage expanded, it may look a little cleaner and not as overwhelming. Let me know what you think! πŸ˜„ " ]
2022-06-16T21:38:24
2022-07-07T15:36:37
2022-07-07T15:24:58
This PR creates separate sections in the guides for audio, vision, text, and general usage so it is easier for users to find loading, processing, or sharing guides specific to the dataset type they're working with. It'll also allow us to scale the docs to additional dataset types - like time series, tabular, etc. - while keeping our docs information architecture. Some other changes include: - ~Experimented with decorating text with some CSS to highlight guides specific to each modality. Hopefully, it'll be easier for users to find and realize that these different docs exist!~ Will experiment with this in a different PR. - Added deprecation warning for Metrics and redirect to Evaluate. - Updated `set_format` section to recommend using the new `to_tf_dataset` function if you need to convert to a TensorFlow dataset. - Reorganized `toctree` to nest general usage, audio, vision, and text sections under the how-to guides. - A quick review and edit to the Load and Process docs for clarity.
stevhliu
https://github.com/huggingface/datasets/pull/4519
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4519", "html_url": "https://github.com/huggingface/datasets/pull/4519", "diff_url": "https://github.com/huggingface/datasets/pull/4519.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4519.patch", "merged_at": "2022-07-07T15:24:58" }
true
1,274,010,628
4,518
Patch tests for hfh v0.8.0
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-06-16T19:45:32
2022-06-17T16:15:57
2022-06-17T16:06:07
This PR patches testing utilities that would otherwise fail with hfh v0.8.0.
LysandreJik
https://github.com/huggingface/datasets/pull/4518
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4518", "html_url": "https://github.com/huggingface/datasets/pull/4518", "diff_url": "https://github.com/huggingface/datasets/pull/4518.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4518.patch", "merged_at": "2022-06-17T16:06:07" }
true
1,273,960,476
4,517
Add tags for task_ids:summarization-* and task_categories:summarization*
closed
[ "Associated community discussion is [here](https://huggingface.co/datasets/aeslc/discussions/1).\r\nPaper referenced in the `dataset_infos.json` is [here](https://arxiv.org/pdf/1906.03497.pdf). It mentions the _email-subject-generation_ task, which is not a tag mentioned in any other dataset so it was not added in this pull request. The _summarization_ task is mentioned as a related task.", "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-06-16T18:52:25
2022-07-08T15:14:23
2022-07-08T15:02:31
The YAML header at the top of the README.md file was edited to add task tags because I couldn't find the existing tags in the JSON; a separate Pull Request will modify dataset_infos.json to add these tags. The Enron dataset (dataset id aeslc) is only tagged with: arxiv:1906.03497 languages:en pretty_name:AESLC. Using the email subject_line field as a label or target variable, it is possible to create models for the following task_ids (in order of relevance): 'task_ids:summarization' 'task_ids:summarization-other-conversations-summarization' "task_ids:other-other-query-based-multi-document-summarization" 'task_ids:summarization-other-aspect-based-summarization' 'task_ids:summarization--other-headline-generation' The subject might also be used for the task_category "task_categories:summarization". E-mail chains might be used for the task category "task_categories:dialogue-system".
hobson
https://github.com/huggingface/datasets/pull/4517
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4517", "html_url": "https://github.com/huggingface/datasets/pull/4517", "diff_url": "https://github.com/huggingface/datasets/pull/4517.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4517.patch", "merged_at": "2022-07-08T15:02:31" }
true
1,273,825,640
4,516
Fix hashing for python 3.9
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "What do you think @albertvillanova ?" ]
2022-06-16T16:42:31
2022-06-28T13:33:46
2022-06-28T13:23:06
In python 3.9, pickle hashes the `glob_ids` dictionary in addition to the `globs` of a function. Therefore the test at `tests/test_fingerprint.py::RecurseDumpTest::test_recurse_dump_for_function_with_shuffled_globals` is currently failing for python 3.9 To make hashing deterministic when the globals are not in the same order, we also need to make the order of `glob_ids` deterministic. Right now we don't have a CI to test python 3.9 but we should definitely have one. For this PR in particular I ran the tests locally using python 3.9 and they're passing now. Fix https://github.com/huggingface/datasets/issues/4506
lhoestq
https://github.com/huggingface/datasets/pull/4516
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4516", "html_url": "https://github.com/huggingface/datasets/pull/4516", "diff_url": "https://github.com/huggingface/datasets/pull/4516.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4516.patch", "merged_at": "2022-06-28T13:23:05" }
true
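The root cause is easiest to see with plain `pickle`: dictionaries serialize in insertion order, so two functions whose globals are collected in different orders get different bytes and therefore different fingerprints. A minimal illustration of the idea behind the fix (not the actual `datasets` code):

```python
import pickle

# Two dicts with the same items but different insertion order: pickle
# serializes entries in iteration order, so the bytes (and any hash
# computed from them) differ.
d1 = {"a": 1, "b": 2}
d2 = {"b": 2, "a": 1}
assert pickle.dumps(d1) != pickle.dumps(d2)

# Sorting the keys first gives a canonical serialization, which is the
# idea behind making the order of the globs/glob_ids deterministic.
def canonical(d):
    return dict(sorted(d.items()))

assert pickle.dumps(canonical(d1)) == pickle.dumps(canonical(d2))
```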
1,273,626,131
4,515
Add uppercased versions of image file extensions for automatic module inference
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-06-16T14:14:49
2022-06-16T17:21:53
2022-06-16T17:11:41
Adds the uppercased versions of the image file extensions to the supported extensions. Another approach would be to call `.lower()` on extensions while resolving data files, but uppercased extensions are not something we want to encourage out of the box IMO unless they are commonly used (as they are in the vision domain) Note that there is a slight discrepancy between the image file resolution and `imagefolder` as the latter calls `.lower()` on file extensions leading to some image file extensions being ignored by the resolution but not by the loader (e.g. `pNg`). Such extensions should also be discouraged, so I'm ignoring that case too. Fix #4514.
mariosasko
https://github.com/huggingface/datasets/pull/4515
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4515", "html_url": "https://github.com/huggingface/datasets/pull/4515", "diff_url": "https://github.com/huggingface/datasets/pull/4515.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4515.patch", "merged_at": "2022-06-16T17:11:40" }
true
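For illustration, the two options weighed in this PR for matching files such as `example_img.JPEG` can be sketched as follows (this is not the library's data-file resolution code, just the idea):

```python
import os

SUPPORTED_EXTENSIONS = {".jpg", ".jpeg", ".png"}

# Option considered but not taken: normalize each file's extension before
# comparing, which would also accept mixed-case oddities such as ".pNg".
def is_supported_lowercasing(filename: str) -> bool:
    return os.path.splitext(filename)[1].lower() in SUPPORTED_EXTENSIONS

# Option taken in this PR: explicitly register the uppercased variants,
# so ".JPEG"/".JPG" resolve while unusual casings stay unsupported.
SUPPORTED_EXTENSIONS |= {ext.upper() for ext in SUPPORTED_EXTENSIONS}

def is_supported_registered(filename: str) -> bool:
    return os.path.splitext(filename)[1] in SUPPORTED_EXTENSIONS
```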
1,273,505,230
4,514
Allow .JPEG as a file extension
closed
[ "Hi, thanks for reporting! I've opened a PR with the fix.", "Wow, that was quick! Thank you very much πŸ™ " ]
2022-06-16T12:36:20
2022-06-20T08:18:46
2022-06-16T17:11:40
## Describe the bug When loading image data, HF datasets seems to recognize `.jpg` and `.jpeg` file extensions, but not e.g. .JPEG. As the naming convention .JPEG is used in important datasets such as imagenet, I would welcome if according extensions like .JPEG or .JPG would be allowed. ## Steps to reproduce the bug ```python # use bash to create 2 sham datasets with jpeg and JPEG ext !mkdir dataset_a !mkdir dataset_b !wget https://upload.wikimedia.org/wikipedia/commons/7/71/Dsc_%28179253513%29.jpeg -O example_img.jpeg !cp example_img.jpeg ./dataset_a/ !mv example_img.jpeg ./dataset_b/example_img.JPEG from datasets import load_dataset # working df1 = load_dataset("./dataset_a", ignore_verifications=True) #not working df2 = load_dataset("./dataset_b", ignore_verifications=True) # show print(df1, df2) ``` ## Expected results ``` DatasetDict({ train: Dataset({ features: ['image', 'label'], num_rows: 1 }) }) DatasetDict({ train: Dataset({ features: ['image', 'label'], num_rows: 1 }) }) ``` ## Actual results ``` FileNotFoundError: Unable to resolve any data file that matches '['**']' at /..PATH../dataset_b with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'zip'] ``` I know that it can be annoying to allow seemingly arbitrary numbers of file extensions. But I think this one would be really welcome.
DiGyt
https://github.com/huggingface/datasets/issues/4514
null
false
1,273,450,338
4,513
Update Google Cloud Storage documentation and add Azure Blob Storage example
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hi @stevhliu, I've kept the `>>>` before all the in-line code comments as it was done like that in the default S3 example that was already there, I assume that it's done like that just for readiness, let me know whether we should remove the `>>>` in the Python blocks before the in-line code comments or keep them.\r\n\r\n![image](https://user-images.githubusercontent.com/36760800/174254663-b68d28d2-eae1-40f3-8695-dc4b0c3b479a.png)\r\n", "Comments are ignored by doctest, so I think we can remove the `>>>` :)", "Cool I'll remove those now πŸ‘πŸ»", "Sure @lhoestq, I just kept that structure as that was the more similar one to the one that was already there, but we can go with that approach, just let me know whether I should change the headers so as to leave all those providers in the same level (`h2`). Thanks!" ]
2022-06-16T11:46:09
2022-06-23T17:05:11
2022-06-23T16:54:59
While I was going through the 🤗 Datasets documentation of the Cloud storage filesystems at https://huggingface.co/docs/datasets/filesystems, I realized that the Google Cloud Storage documentation could be improved: e.g. a bullet point says "Load your dataset" when the actual call was to "Save your dataset", an in-line code comment mentioned "s3 bucket" instead of "gcs bucket", and some more in-line comments could be included. Also, I think that mixing the Google Cloud Storage documentation with AWS S3's was a little bit confusing, so I moved all of that to the end of the document under an h2 heading named "Other filesystems", with an h3 for "Google Cloud Storage". Besides that, I was currently working with Azure Blob Storage and found out that the [adlfs](https://github.com/fsspec/adlfs) implementation is shared by both Azure Blob Storage and Azure DataLake Storage; I also updated its URL, which was outdated even though the redirect still worked. So I decided to group those under the same row in the column of supported filesystems. I also took the chance to add a small documentation entry for Azure Blob Storage, mirroring the one for Google Cloud Storage, as I assume that AWS S3, GCP Cloud Storage, and Azure Blob Storage are the most used cloud storage providers. Let me know if you're OK with these changes, or whether you want me to roll back some of those! :hugs:
alvarobartt
https://github.com/huggingface/datasets/pull/4513
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4513", "html_url": "https://github.com/huggingface/datasets/pull/4513", "diff_url": "https://github.com/huggingface/datasets/pull/4513.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4513.patch", "merged_at": "2022-06-23T16:54:59" }
true
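The pattern described in the updated docs, saving and reloading a dataset through an fsspec filesystem, looks roughly like this for Google Cloud Storage (a sketch only: the project, bucket and dataset names are placeholders, `gcsfs` has to be installed, and `adlfs` plugs in the same way for Azure Blob Storage):

```python
import gcsfs
from datasets import load_dataset, load_from_disk

# Placeholder project/bucket names; any fsspec filesystem (s3fs, gcsfs,
# adlfs, ...) can be passed in the same way.
fs = gcsfs.GCSFileSystem(project="my-gcp-project")

ds = load_dataset("imdb", split="train")
ds.save_to_disk("gcs://my-bucket/imdb/train", fs=fs)

reloaded = load_from_disk("gcs://my-bucket/imdb/train", fs=fs)
```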
1,273,378,129
4,512
Add links to vision tasks scripts in ADD_NEW_DATASET template
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "The CI failure is unrelated to the PR's changes. Merging." ]
2022-06-16T10:35:35
2022-07-08T14:07:50
2022-07-08T13:56:23
Add links to vision dataset scripts in the ADD_NEW_DATASET template.
mariosasko
https://github.com/huggingface/datasets/pull/4512
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4512", "html_url": "https://github.com/huggingface/datasets/pull/4512", "diff_url": "https://github.com/huggingface/datasets/pull/4512.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4512.patch", "merged_at": "2022-07-08T13:56:23" }
true
1,273,336,874
4,511
Support all negative values in ClassLabel
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks for this fix! I'm not sure what the release timeline is, but FYI #4508 is a breaking issue for transformer token classification using Trainer and PyTorch. PyTorch defaults to -100 as the ignored label for [negative log loss](https://pytorch.org/docs/stable/generated/torch.nn.NLLLoss.html?highlight=nllloss#torch.nn.NLLLoss), so switching labels to -1 leads to index errors using Trainer defaults.\r\n\r\nAs a workaround, I'm using master branch directly (`pip install git+https://github.com/huggingface/datasets.git@master` for anyone who needs to do the same) until this gets released.", "The new release `2.4` fixes the issue, feel free to update `datasets` :) \r\n```\r\npip install -U datasets\r\n```", "@lhoestq I hope it's OK to ping you here. I've noticed that `encode_example` does only work with -1. I already created #7645 to fix the documentation, but then I stumbled across your original changes to the docs text in this PR.\r\n\r\nI am talking about this part in `ClassLabel -> encode_example`:\r\n\r\nhttps://github.com/huggingface/datasets/blob/e71b0b19d79c7531f9b9bea7c09916b5f6157f42/src/datasets/features/features.py#L1126-L1129" ]
2022-06-16T09:59:39
2025-07-23T18:38:15
2022-06-16T13:54:07
We usually use -1 to represent a missing label, but we should also support any negative values (some users use -100 for example). This is a regression from `datasets` 2.3 Fix https://github.com/huggingface/datasets/issues/4508
lhoestq
https://github.com/huggingface/datasets/pull/4511
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4511", "html_url": "https://github.com/huggingface/datasets/pull/4511", "diff_url": "https://github.com/huggingface/datasets/pull/4511.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4511.patch", "merged_at": "2022-06-16T13:54:07" }
true
1,273,260,396
4,510
Add regression test for `ArrowWriter.write_batch` when batch is empty
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "As mentioned by @lhoestq, the current behavior is correct and we should not expect batches with different columns, in that case, the if should fail, as the values of the batch can be empty, but not the actual `batch_examples` value." ]
2022-06-16T08:53:51
2022-06-16T12:38:02
2022-06-16T12:28:19
As spotted by @cccntu in #4502, there's a logic bug in `ArrowWriter.write_batch` as the if-statement to handle the empty batches as detailed in the docstrings of the function ("Ignores the batch if it appears to be empty, preventing a potential schema update of unknown types."), the current if-statement is not handling properly `writer.write_batch({})` as an error is triggered. Also, if we add a regression test in `test_arrow_writer.py::test_write_batch` before applying the fix, the test will fail as when trying to write an empty batch as follows: ``` =================================================================================== short test summary info =================================================================================== FAILED tests/test_arrow_writer.py::test_write_batch[None-None] - ValueError: Schema and number of arrays unequal FAILED tests/test_arrow_writer.py::test_write_batch[None-1] - ValueError: Schema and number of arrays unequal FAILED tests/test_arrow_writer.py::test_write_batch[None-10] - ValueError: Schema and number of arrays unequal FAILED tests/test_arrow_writer.py::test_write_batch[fields1-None] - ValueError: Schema and number of arrays unequal FAILED tests/test_arrow_writer.py::test_write_batch[fields1-1] - ValueError: Schema and number of arrays unequal FAILED tests/test_arrow_writer.py::test_write_batch[fields1-10] - ValueError: Schema and number of arrays unequal FAILED tests/test_arrow_writer.py::test_write_batch[fields2-None] - ValueError: Schema and number of arrays unequal FAILED tests/test_arrow_writer.py::test_write_batch[fields2-1] - ValueError: Schema and number of arrays unequal FAILED tests/test_arrow_writer.py::test_write_batch[fields2-10] - ValueError: Schema and number of arrays unequal ======================================================================== 9 failed, 73 deselected, 7 warnings in 0.81s ========================================================================= ``` So the batch is not ignored when empty, as `batch_examples={}` won't match the condition `if batch_examples: ...`.
alvarobartt
https://github.com/huggingface/datasets/pull/4510
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4510", "html_url": "https://github.com/huggingface/datasets/pull/4510", "diff_url": "https://github.com/huggingface/datasets/pull/4510.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4510.patch", "merged_at": "2022-06-16T12:28:19" }
true
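The regression case boils down to the following sketch (the output path and column are arbitrary, and `ArrowWriter` is the internal writer discussed above): writing an empty batch should be a no-op rather than raise `ValueError: Schema and number of arrays unequal`.

```python
from datasets.arrow_writer import ArrowWriter

# Arbitrary output path and toy column; the point is only the empty batch.
with ArrowWriter(path="tmp_regression.arrow") as writer:
    writer.write_batch({})                      # empty batch: must be ignored
    writer.write_batch({"col_1": ["a", "b"]})   # normal batch still written
    num_examples, num_bytes = writer.finalize()

assert num_examples == 2
```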
1,273,227,760
4,509
Support skipping Parquet to Arrow conversion when using Beam
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4509). All of your documentation changes will be reflected on that endpoint.", "When #4724 is merged, we can just pass `file_format=\"parquet\"` to `download_and_prepare` and it will output parquet fiels without converting to arrow", "I think we can close this one" ]
2022-06-16T08:25:38
2022-11-07T16:22:41
2022-11-07T16:22:41
null
albertvillanova
https://github.com/huggingface/datasets/pull/4509
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4509", "html_url": "https://github.com/huggingface/datasets/pull/4509", "diff_url": "https://github.com/huggingface/datasets/pull/4509.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4509.patch", "merged_at": null }
true
1,272,718,921
4,508
cast_storage method from datasets.features
closed
[ "Hi! We've recently added a check to the `ClassLabel` type to ensure the values are in the valid label range `-1, 0, ..., num_classes-1` (-1 is used for missing values). The error in your case happens only if the `labels` column is of type `Sequence(ClassLabel(...))` before the `map` call and can be avoided by calling `dataset = dataset.cast_column(\"labels\", Sequence(Value(\"int\")))` beforehand. The token-classification examples in Transformers introduce a new `labels` column, so their type is also `Sequence(Value(\"int\"))`, which doesn't lead to an error as this type unbounded. ", "I'm fine with re-adding support for all negative values for unknown/missing labels @mariosasko, wdyt ?" ]
2022-06-15T20:47:22
2022-06-16T13:54:07
2022-06-16T13:54:07
## Describe the bug A bug occurs when mapping a function to a dataset object. I ran the same code with the same data yesterday and it worked just fine. It works when i run locally on an old version of datasets. ## Steps to reproduce the bug Steps are: - load whatever datset - write a preprocessing function such as "tokenize_and_align_labels" written in https://huggingface.co/docs/transformers/tasks/token_classification - map the function on dataset and get "ValueError: Class label -100 less than -1" from cast_storage method from datasets.features # Sample code to reproduce the bug def tokenize_and_align_labels(examples): tokenized_inputs = tokenizer(examples["tokens"], truncation=True, is_split_into_words=True, max_length=38,padding="max_length") labels = [] for i, label in enumerate(examples[f"labels"]): word_ids = tokenized_inputs.word_ids(batch_index=i) # Map tokens to their respective word. previous_word_idx = None label_ids = [] for word_idx in word_ids: # Set the special tokens to -100. if word_idx is None: label_ids.append(-100) elif word_idx != previous_word_idx: # Only label the first token of a given word. label_ids.append(label[word_idx]) else: label_ids.append(-100) previous_word_idx = word_idx labels.append(label_ids) tokenized_inputs["labels"] = labels return tokenized_inputs tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") dt = dataset.map(tokenize_and_align_labels, batched=True) ## Expected results New dataset objects should load and do on older versions. ## Actual results "ValueError: Class label -100 less than -1" from cast_storage method from datasets.features ## Environment info everything works fine on older installations of datasets/transformers Issue arises when installing datasets on google collab under python3.7 I can't manage to find the exact output you're requirering but version printed is datasets-2.3.2
romainremyb
https://github.com/huggingface/datasets/issues/4508
null
false
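The workaround from the comments above, relaxing the label column so that values such as -100 are accepted, looks roughly like this (the dataset and column names are only an example, and `int64` is used because `Value` needs a concrete integer dtype):

```python
from datasets import Sequence, Value, load_dataset

# Example dataset whose "ner_tags" column is Sequence(ClassLabel(...)).
ds = load_dataset("conll2003", split="train")

# Cast the label column to plain integers so tokenize_and_align_labels can
# write -100 for special/subword tokens without tripping the ClassLabel check.
ds = ds.cast_column("ner_tags", Sequence(Value("int64")))
```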
1,272,615,932
4,507
How to let `load_dataset` return a `Dataset` instead of `DatasetDict` in customized loading script
closed
[ "Hi @liyucheng09.\r\n\r\nUsers can pass the `split` parameter to `load_dataset`. For example, if your split name is \"train\",\r\n```python\r\nds = load_dataset(\"dataset_name\", split=\"train\")\r\n```\r\nwill return a `Dataset` instance.", "@albertvillanova Thanks! I can't believe I didn't know this feature till now." ]
2022-06-15T18:56:34
2022-06-16T10:40:08
2022-06-16T10:40:08
If the dataset does not need splits (i.e., there is no training/validation split and it is more like a single table), how can I make the `load_dataset` function return a `Dataset` object directly rather than a `DatasetDict` object with only one key-value pair? Or, to paraphrase the question: how can I skip the `_split_generators` step in `DatasetBuilder` so that `as_dataset` gives a single `Dataset` rather than a list `[Dataset]`? Many thanks for any help.
liyucheng09
https://github.com/huggingface/datasets/issues/4507
null
false
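For reference, a short sketch of the answer given in the comments of the issue above (the dataset name is only an example): passing `split` to `load_dataset` returns a single `Dataset` instead of a `DatasetDict`.

```python
from datasets import load_dataset

dataset_dict = load_dataset("imdb")               # DatasetDict with one entry per split
train_only = load_dataset("imdb", split="train")  # a single Dataset instance
print(type(dataset_dict), type(train_only))
```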
1,272,516,895
4,506
Failure to hash (and cache) a `.map(...)` (almost always) - using this method can produce incorrect results
closed
[ "Important info:\r\n\r\nAs hashes are generated randomly for functions, it leads to **false identifying some results as already hashed** (mapping function is not executed after a method update) when there's a `pytorch_lightning.seed_everything(123)`", "@lhoestq\r\nseems like quite critical stuff for me, if I'm not making a mistake", "Hi ! Thanks for reporting. This bug seems to appear in python 3.9 using dill 3.5.1\r\n\r\nAs a workaround you can use an older version of dill:\r\n```\r\npip install \"dill<0.3.5\"\r\n```", "installing `dill<0.3.5` after installing `datasets` by pip results in dependency conflict with the version required for `multiprocess`. It can be solved by installing `pip install datasets \"dill<0.3.5\"` (simultaneously) on a clean environment", "This has been fixed in https://github.com/huggingface/datasets/pull/4516, we will do a new release soon to include the fix :)" ]
2022-06-15T17:11:31
2023-02-16T03:14:32
2022-06-28T13:23:05
## Describe the bug Sometimes I get messages about not being able to hash a method: `Parameter 'function'=<function StupidDataModule._separate_speaker_id_from_dialogue at 0x7f1b27180d30> of the transform datasets.arrow_dataset.Dataset. _map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.` Whilst the function looks like this: ```python @staticmethod def _separate_speaker_id_from_dialogue(example: arrow_dataset.Example): speaker_id, dialogue = tuple(zip(*(example["dialogue"]))) example["speaker_id"] = speaker_id example["dialogue"] = dialogue return example ``` This is the first step in my preprocessing pipeline, but sometimes the message about failure to hash is not appearing on the first step, but then appears on a later step. This error is sometimes causing a failure to use cached data, instead of re-running all steps again. ## Steps to reproduce the bug ```python import copy import datasets from datasets import arrow_dataset def main(): dataset = datasets.load_dataset("blended_skill_talk") res = dataset.map(method) print(res) def method(example: arrow_dataset.Example): example['previous_utterance_copy'] = copy.deepcopy(example['previous_utterance']) return example if __name__ == '__main__': main() ``` Run with: ``` python -m reproduce_error ``` ## Expected results Dataset is mapped and cached correctly. ## Actual results The code outputs this at some point: `Parameter 'function'=<function method at 0x7faa83d2a160> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: Ubuntu 20.04.3 - Python version: 3.9.12 - PyArrow version: 8.0.0 - Datasets version: 2.3.1
DrMatters
https://github.com/huggingface/datasets/issues/4506
null
false
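A small sketch for checking whether a transform hashes cleanly, which is what the caching in the issue above relies on. It assumes the internal `Hasher` helper in `datasets.fingerprint`; in the affected environment (Python 3.9 with dill 3.5.1, before the fix in #4516 or the `pip install "dill<0.3.5"` workaround), hashing the function fails and `map` falls back to a random fingerprint, so the cache is not reused.

```python
import copy

from datasets.fingerprint import Hasher  # internal helper used for fingerprinting


def method(example):
    example["previous_utterance_copy"] = copy.deepcopy(example["previous_utterance"])
    return example


# If this raises, datasets.map() will warn, use a random hash, and recompute
# instead of reusing cached results.
try:
    print(Hasher.hash(method))
except Exception as exc:
    print(f"function could not be hashed: {exc}")
```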
1,272,477,226
4,505
Fix double dots in data files
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "The CI fails are unrelated to this PR (apparently something related to `seqeval` on windows) - merging :)" ]
2022-06-15T16:31:04
2022-06-15T17:15:58
2022-06-15T17:05:53
As mentioned in https://github.com/huggingface/transformers/pull/17715 `data_files` can't find a file if the path contains double dots `/../`. This has been introduced in https://github.com/huggingface/datasets/pull/4412, by trying to ignore hidden files and directories (i.e. if they start with a dot) I fixed this and added a test cc @sgugger @ydshieh
lhoestq
https://github.com/huggingface/datasets/pull/4505
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4505", "html_url": "https://github.com/huggingface/datasets/pull/4505", "diff_url": "https://github.com/huggingface/datasets/pull/4505.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4505.patch", "merged_at": "2022-06-15T17:05:53" }
true
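Not the library's actual implementation, only a sketch of the pitfall this PR fixes: a naive "starts with a dot" filter for hidden files also drops the `..` segments of relative paths, so it has to special-case `.` and `..`.

```python
def is_inside_hidden_dir_naive(path: str) -> bool:
    # Treats "data/../train.csv" as hidden because ".." starts with a dot.
    return any(part.startswith(".") for part in path.split("/"))


def is_inside_hidden_dir_fixed(path: str) -> bool:
    # Special-case "." and ".." so relative paths with double dots are kept.
    return any(part.startswith(".") and part not in (".", "..") for part in path.split("/"))


print(is_inside_hidden_dir_naive("data/../train.csv"))  # True  (wrongly filtered out)
print(is_inside_hidden_dir_fixed("data/../train.csv"))  # False (kept)
```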
1,272,418,480
4,504
Can you please add the Stanford dog dataset?
closed
[ "would you like to give it a try, @dgrnd4? (maybe with the help of the dataset author?)", "@julien-c i am sorry but I have no idea about how it works: can I add the dataset by myself, following \"instructions to add a new dataset\"?\r\nCan I add a dataset even if it's not mine? (it's public in the link that I wrote on the post)\r\n", "Hi! The [ADD NEW DATASET](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md) instructions are indeed the best place to start. It's also perfectly fine to add a dataset if it's public, even if it's not yours. Let me know if you need some additional pointers.", "If no one is working on this, I could take this up!", "@khushmeeet this is the [link](https://huggingface.co/datasets/dgrnd4/stanford_dog_dataset) where I added the dataset already. If you can I would ask you to do this:\r\n1) The dataset it's all in TRAINING SET: can you please divide it in Training,Test and Validation Set? If you can for each class, take the 80% for the Training set and the 10% for Test and 10% Validation\r\n2) The images has different size, can you please resize all the images in 224,224,3? Look even at the last dimension \"3\" because some images has dimension 4!\r\n\r\nThank you!!", "Hi @khushmeeet! Thanks for the interest. You can self-assign the issue by commenting `#self-assign` on it. \r\n\r\nAlso, I think we can skip @dgrnd4's steps as we try to avoid any custom processing on top of raw data. One can later copy the script and override `_post_process` in it to perform such processing on the generated dataset.", "Thanks @mariosasko \r\n\r\n@dgrnd4 As dataset is there on Hub, and preprocessing is not recommended. I am not sure if there is any other task to do. However, I can't seem to find relevant `.py` files for this dataset in GitHub repo.", "@khushmeeet @mariosasko The point is that the images must be processed and must have the same size in order to can be used for things for example \"Training\". ", "@dgrnd4 Yes, but this can be done after loading (`map` to resize images and `train_test_split` to create extra splits)\r\n\r\n@khushmeeet The linked version is implemented as a no-code dataset and is generated directly from the ZIP archive, but our \"GitHub\" datasets (these are datasets without a user/org namespace on the Hub) need a generation script, and you can find one [here](https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/image_classification/stanford_dogs.py). 
`datasets` started as a fork of TFDS, so we share similar script structure, which makes it trivial to adapt it.", "@mariosasko The point is that if I use something like this:\r\nx_train, x_test = train_test_split(dataset, test_size=0.1) \r\n\r\nto get Train 90% and Test 10%, and then to get the Validation Set (10% of the whole 100%):\r\n\r\n```\r\ntrain_ratio = 0.80\r\nvalidation_ratio = 0.10\r\ntest_ratio = 0.10\r\n\r\nx_train, x_test, y_train, y_test = train_test_split(dataX, dataY, test_size=1 - train_ratio)\r\nx_val, x_test, y_val, y_test = train_test_split(x_test, y_test, test_size=test_ratio/(test_ratio + validation_ratio)) \r\n\r\n```\r\n\r\nThe point is that the structure of the data is:\r\n```\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['image', 'label'],\r\n num_rows: 20580\r\n })\r\n})\r\n\r\n```\r\n\r\nSo how to extract images and labels?\r\n\r\nEDIT --> Split of the dataset in Train-Test-Validation:\r\n```\r\nimport datasets\r\nfrom datasets.dataset_dict import DatasetDict\r\nfrom datasets import Dataset\r\n\r\npercentage_divison_test = int(len(dataset['train'])/100 *10) # 10% --> 2058 \r\npercentage_divison_validation = int(len(dataset['train'])/100 *20) # 20% --> 4116\r\n\r\ndataset_ = datasets.DatasetDict({\"train\": Dataset.from_dict({\r\n\r\n 'image': dataset['train'][0 : len(dataset['train']) ]['image'], \r\n 'labels': dataset['train'][0 : len(dataset['train']) ]['label'] }), \r\n \r\n \"test\": Dataset.from_dict({ #20580-4116 (validation) ,20580-2058 (test)\r\n 'image': dataset['train'][len(dataset['train']) - percentage_divison_validation : len(dataset['train']) - percentage_divison_test]['image'], \r\n 'labels': dataset['train'][len(dataset['train']) - percentage_divison_validation : len(dataset['train']) - percentage_divison_test]['label'] }), \r\n \r\n \"validation\": Dataset.from_dict({ # 20580-2058 (test)\r\n 'image': dataset['train'][len(dataset['train']) - percentage_divison_test : len(dataset['train'])]['image'], \r\n 'labels': dataset['train'][len(dataset['train']) - percentage_divison_test : len(dataset['train'])]['label'] }), \r\n })\r\n```", "@mariosasko in order to resize images I'm trying this method: \r\n```\r\nfor i in range(0,len(dataset['train'])): #len(dataset['train'])\r\n\r\n ex = dataset['train'][i] #i\r\n image = ex['image']\r\n image = image.convert(\"RGB\") # <class 'PIL.Image.Image'> <PIL.Image.Image image mode=RGB size=500x333 at 0x7F84F1948150>\r\n image_resized = image.resize(size_to_resize) # <PIL.Image.Image image mode=RGB size=224x224 at 0x7F84F17885D0>\r\n\r\n dataset['train'][i]['image'] = image_resized \r\n```\r\n\r\nBecause the DatasetDict is formed by arrows that are immutable, the changing assignment in the last line of code, doesn't work!\r\nDo you have any idea in order to get a valid result?", "#self-assign", "I have raised PR for adding stanford-dog dataset. I have not added any data preprocessing code. Only dataset generation script is there. 
Let me know any changes required, or anything to add to README.", "Is this issue still open, i am new to open source thus want to take this one as my start.", "@zutarich This issue should have been closed since the dataset in question is available on the Hub [here](https://huggingface.co/datasets/dgrnd4/stanford_dog_dataset).", "I didn't know about this issue until now but i added my version of the dataset on the hub **with the bboxes** :\r\nhttps://huggingface.co/datasets/Alanox/stanford-dogs\r\n\r\nAlthough I could have made it cleaner and built the splits from the .txt files + put into the coco format.\r\nThere is a [stanford-dogs.py](https://huggingface.co/datasets/Alanox/stanford-dogs/blob/main/stanford-dogs.py) file if you want to help adding these missing metadatas.\r\nHope this helps" ]
2022-06-15T15:39:35
2024-12-09T15:44:11
2023-10-18T18:55:30
## Adding a Dataset - **Name:** *Stanford dog dataset* - **Description:** *The dataset contains 120 classes for a total of 20,580 images. You can find the dataset here http://vision.stanford.edu/aditya86/ImageNetDogs/* - **Paper:** *http://vision.stanford.edu/aditya86/ImageNetDogs/* - **Data:** *[link to the Github repository or current dataset location](http://vision.stanford.edu/aditya86/ImageNetDogs/)* - **Motivation:** *The dataset has been built using images and annotations from ImageNet for the task of fine-grained image categorization. It is useful for fine-grained purposes.* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
dgrnd4
https://github.com/huggingface/datasets/issues/4504
null
false
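A sketch of the post-processing suggested in the comments of the issue above (resizing via `map` and splitting via `train_test_split`), since rows of an Arrow-backed dataset cannot be mutated in place. The repository id and the `image`/`label` columns come from the linked Hub dataset; the exact split ratios are just the ones the reporter asked for.

```python
from datasets import load_dataset

ds = load_dataset("dgrnd4/stanford_dog_dataset", split="train")

# Resize/convert with map() instead of assigning into dataset[...] (Arrow tables are immutable).
ds = ds.map(lambda example: {"image": example["image"].convert("RGB").resize((224, 224))})

# 80/10/10 train/test/validation split.
train_rest = ds.train_test_split(test_size=0.2, seed=42)
test_valid = train_rest["test"].train_test_split(test_size=0.5, seed=42)
splits = {"train": train_rest["train"], "test": test_valid["train"], "validation": test_valid["test"]}
print({name: split.num_rows for name, split in splits.items()})
```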
1,272,367,055
4,503
Refactor and add metadata to fever dataset
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "But this is somehow fever v3 dataset (see this link https://fever.ai/ under the dropdown menu called Datasets). Our fever dataset already contains v1 and v2 configs. Then, I added this as if v3 config (but named feverous instead of v3 to align with the original naming by data owners).", "In any case, if you really think this should be a new dataset, then I would propose to create it on the Hub instead, as \"fever/feverous\".", "> In any case, if you really think this should be a new dataset, then I would propose to create it on the Hub instead, as \"fever/feverous\".\r\n\r\nYea makes sense ! thanks :) let's push more datasets on the hub rather than on github from now on", "I have added \"feverous\" dataset to the Hub: https://huggingface.co/datasets/fever/feverous\r\n\r\nI change the name of this PR accordingly, as now it only:\r\n- Refactors code and include for both Fever v1.0 and v2.0 specific:\r\n - Descriptions\r\n - Citations\r\n - Homepages\r\n- Updates documentation card aligned with above:\r\n - It was missing v2.0 description and citation.\r\n- Update metadata JSON" ]
2022-06-15T14:59:47
2022-07-06T11:54:15
2022-07-06T11:41:30
Related to: #4452 and #3792.
albertvillanova
https://github.com/huggingface/datasets/pull/4503
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4503", "html_url": "https://github.com/huggingface/datasets/pull/4503", "diff_url": "https://github.com/huggingface/datasets/pull/4503.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4503.patch", "merged_at": "2022-07-06T11:41:30" }
true
1,272,353,700
4,502
Logic bug in arrow_writer?
closed
[ "Hi @cccntu you're right, as when `batch_examples={}` the current if-statement won't be triggered as the condition won't be satisfied, I'll prepare a PR to address it as well as add the regression tests so that this issue is handled properly.", "Hi @alvarobartt ,\r\nThanks for answering. Do you know when and why an empty batch is passed to this function? This only happened to me when processing with multiple workers, while chunking examples, I think.", "> Hi @alvarobartt , Thanks for answering. Do you know when and why an empty batch is passed to this function? This only happened to me when processing with multiple workers, while chunking examples, I think.\r\n\r\nSo it depends on how you're actually chunking the data as if you're not handling empty chunks `batch_examples={}` or `batch_examples=None`, you may end up running into this issue. So you could check the chunks before you actually call `ArrowWriter.write_batch`, but anyway the fix you proposed I think improves the logic of `write_batch` to avoid running into these issues.", "Thanks, I added a if-print and I found it does return an empty examples in the chunking function that is passed to `.map()`.", "Hi ! We consider an empty batch to look like this:\r\n```python\r\nempty_batch = {\r\n \"column_1\": [],\r\n \"column_2\": [],\r\n ...\r\n}\r\n```\r\n\r\nWhile `{}` corresponds to a batch with no columns.\r\n\r\nTherefore calling this code should fail, because the two batches don't have the same columns:\r\n```python\r\nwriter.write_batch({\"a\": [1, 2, 3]})\r\nwriter.write_batch({})\r\n```\r\n\r\nIf you want to write an empty batch, you should do this instead:\r\n```python\r\nwriter.write_batch({\"a\": [1, 2, 3]})\r\nwriter.write_batch({\"a\": []})\r\n```", "Makes sense, then the if-statement should remain the same or is it better to handle both cases separately using `if not batch_examples or len(next(iter(batch_examples.values()))) == 0: ...`?\r\n\r\nUpdating the regressions tests with an empty batch formatted as `{\"col_1\": [], \"col_2\": []}` instead of `{}` works fine with the current if, and also with the one proposed by @cccntu.", "> Makes sense, then the if-statement should remain the same or is it better to handle both cases separately using if not batch_examples or len(next(iter(batch_examples.values()))) == 0: ...?\r\n\r\nThere's a check later in the code that makes sure that the columns are the right ones, so I don't think we need to check for `{}` here\r\n\r\nIn particular the check `if not batch_examples or len(next(iter(batch_examples.values()))) == 0:` doesn't raise an error while it should, that why the old `if` is fine IMO\r\n\r\n> Updating the regressions tests with an empty batch formatted as {\"col_1\": [], \"col_2\": []} instead of {} works fine with the current if, and also with the one proposed by @cccntu.\r\n\r\nCool ! If you want you can update your PR to add the regression tests, to make sure that `{\"col_1\": [], \"col_2\": []}` works but not `{}`", "Great thanks for the response! So I'll just add that regression test and remove the current if-statement.", "Hi @lhoestq ,\r\n\r\nThanks for your explanation. Now I get it that `{}` means the columns are different. 
But wouldn't it be nice if the code can ignore it, like it ignores `{\"a\": []}`?\r\n\r\n\r\n--- \r\nBTW, \r\n> There's a check later in the code that makes sure that the columns are the right ones, so I don't think we need to check for {} here\r\n\r\nI remember the error happens around here:\r\nhttps://github.com/huggingface/datasets/blob/88a902d6474fae8d793542d57a4f3b0d187f3c5b/src/datasets/arrow_writer.py#L506-L507\r\nThe error says something like `arrays` and `schema` doesn't have the same length. And it's not very clear I passed a `{}`.\r\n\r\nedit: actual error message\r\n```\r\nFile \"site-packages/datasets/arrow_writer.py\", line 595, in write_batch\r\n pa_table = pa.Table.from_arrays(arrays, schema=schema)\r\n File \"pyarrow/table.pxi\", line 3557, in pyarrow.lib.Table.from_arrays\r\n File \"pyarrow/table.pxi\", line 1401, in pyarrow.lib._sanitize_arrays\r\nValueError: Schema and number of arrays unequal\r\n```", "> But wouldn't it be nice if the code can ignore it, like it ignores {\"a\": []}?\r\n\r\nI think it would make things confusing because it doesn't follow our definition of a batch: \"the columns of a batch = the keys of the dict\". It would probably break certain behaviors as well. For example if you remove all the columns of a dataset (using `.remove_colums(...)` or `.map(..., remove_columns=...)`), the writer has to write 0 columns, and currently the only way to tell the writer to do so using `write_batch` is to pass `{}`.\r\n\r\n> The error says something like arrays and schema doesn't have the same length. And it's not very clear I passed a {}.\r\n\r\nYea the message can actually be improved indeed, it's definitely not clear. Maybe we can add a line right before the call `pa.Table.from_arrays` to make sure the keys of the batch match the field names of the schema" ]
2022-06-15T14:50:00
2022-06-18T15:15:51
2022-06-18T15:15:51
https://github.com/huggingface/datasets/blob/88a902d6474fae8d793542d57a4f3b0d187f3c5b/src/datasets/arrow_writer.py#L475-L488 I got some error, and I found it's caused by `batch_examples` being `{}`. I wonder if the code should be as follows: ``` - if batch_examples and len(next(iter(batch_examples.values()))) == 0: + if not batch_examples or len(next(iter(batch_examples.values()))) == 0: return ``` @lhoestq
changjonathanc
https://github.com/huggingface/datasets/issues/4502
null
false
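A small sketch of the batch semantics clarified in the discussion of the issue above, assuming the current `ArrowWriter` API: `{}` means a batch with zero columns (a schema mismatch after a non-empty batch has been written), while `{"a": []}` is an empty batch with the same column and is simply skipped.

```python
from datasets.arrow_writer import ArrowWriter

with ArrowWriter(path="tmp.arrow") as writer:
    writer.write_batch({"a": [1, 2, 3]})
    writer.write_batch({"a": []})   # empty batch with the same column: ignored, no error
    # writer.write_batch({})        # batch with zero columns: schema mismatch, should fail
    num_examples, num_bytes = writer.finalize()

print(num_examples)
```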
1,272,300,646
4,501
Corrected broken links in doc
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-06-15T14:12:17
2022-06-15T15:11:05
2022-06-15T15:00:56
null
clefourrier
https://github.com/huggingface/datasets/pull/4501
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4501", "html_url": "https://github.com/huggingface/datasets/pull/4501", "diff_url": "https://github.com/huggingface/datasets/pull/4501.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4501.patch", "merged_at": "2022-06-15T15:00:56" }
true
1,272,281,992
4,500
Add `concatenate_datasets` for iterable datasets
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks ! I addressed your comments :)\r\n\r\n> There is a slight difference in concatenate_datasets between the version for map-style datasets and the one for iterable datasets\r\n\r\nIndeed, here is what I did to fix this:\r\n\r\n- axis 0: fill missing columns with None.\r\n(I first iterate over the input datasets to infer their columns from the first examples, then I set the features of the resulting dataset to be the merged features)\r\nThis is consistent with non-streaming concatenation\r\n\r\n- axis 1: **fill the missing rows with None**, for consistency with axis 0\r\n(but let me know what you think, I can still revert this behavior and raise an error when one of the dataset runs out of examples)\r\nWe might have to align the non-streaming concatenation with this behavior though, for consistency. What do you think ?", "Added more comments as suggested, and some typing\r\n\r\nWhile factorizing _apply_features_types for both IterableDataset and TypedExamplesIterable, I fixed a missing `token_per_repo_id` that was not passed to TypedExamplesIteable\r\n\r\nLet me know what you think now @mariosasko " ]
2022-06-15T13:58:50
2022-06-28T21:25:39
2022-06-28T21:15:04
`concatenate_datasets` currently only supports lists of `datasets.Dataset`, not lists of `datasets.IterableDataset` like `interleave_datasets` Fix https://github.com/huggingface/datasets/issues/2564 I also moved `_interleave_map_style_datasets` from combine.py to arrow_dataset.py, since the logic depends a lot on the `Dataset` object internals And I moved `concatenate_datasets` from arrow_dataset.py to combine.py to have it with `interleave_datasets` (though it's also copied in arrow_dataset module for backward compatibility for now)
lhoestq
https://github.com/huggingface/datasets/pull/4500
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4500", "html_url": "https://github.com/huggingface/datasets/pull/4500", "diff_url": "https://github.com/huggingface/datasets/pull/4500.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4500.patch", "merged_at": "2022-06-28T21:15:04" }
true
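A minimal usage sketch of what this PR enables (the dataset names are only examples): concatenating two streaming `IterableDataset` objects along axis 0.

```python
from datasets import concatenate_datasets, load_dataset

en = load_dataset("oscar", "unshuffled_deduplicated_en", split="train", streaming=True)
fr = load_dataset("oscar", "unshuffled_deduplicated_fr", split="train", streaming=True)

# Both inputs are IterableDataset, so the result is an IterableDataset as well.
combined = concatenate_datasets([en, fr])
print(next(iter(combined)))
```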
1,272,118,162
4,499
fix ETT m1/m2 test/val dataset
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thansk for the fix ! Can you regenerate the datasets_infos.json please ? This way it will update the expected number of examples in the test and val splits", "ah yes!" ]
2022-06-15T11:51:02
2022-06-15T14:55:56
2022-06-15T14:45:13
https://huggingface.co/datasets/ett/discussions/1
kashif
https://github.com/huggingface/datasets/pull/4499
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4499", "html_url": "https://github.com/huggingface/datasets/pull/4499", "diff_url": "https://github.com/huggingface/datasets/pull/4499.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4499.patch", "merged_at": "2022-06-15T14:45:12" }
true
1,272,100,549
4,498
WER and CER > 1
closed
[ "WER can have values bigger than 1.0, this is expected when there are too many insertions\r\n\r\nFrom [wikipedia](https://en.wikipedia.org/wiki/Word_error_rate):\r\n> Note that since N is the number of words in the reference, the word error rate can be larger than 1.0" ]
2022-06-15T11:35:12
2022-06-15T16:38:05
2022-06-15T16:38:05
## Describe the bug It seems that in some cases in which the `prediction` is longer than the `reference` we may have word/character error rate higher than 1 which is a bit odd. If it's a real bug I think I can solve it with a PR changing [this](https://github.com/huggingface/datasets/blob/master/metrics/wer/wer.py#L105) line to ```python return min(incorrect / total, 1.0) ``` ## Steps to reproduce the bug ```python from datasets import load_metric wer = load_metric("wer") wer_value = wer.compute(predictions=["Hi World vka"], references=["Hello"]) print(wer_value) ``` ## Expected results ``` 1.0 ``` ## Actual results ``` 3.0 ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.3.0 - Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - PyArrow version: 6.0.1 - Pandas version: 1.3.5
sadrasabouri
https://github.com/huggingface/datasets/issues/4498
null
false
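For context, with the usual definition (also linked in the comment above) the metric is not bounded by 1, because insertions are counted against the reference length:

```latex
\mathrm{WER} = \frac{S + D + I}{N}
```

For the example in this issue, the reference "Hello" has N = 1 word and the prediction "Hi World vka" requires one substitution and two insertions, so WER = (1 + 0 + 2) / 1 = 3.0, which matches the reported output rather than indicating a bug.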
1,271,964,338
4,497
Re-add download_manager module in utils
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks for the fix.\r\n\r\nI'm wondering how this fixes backward compatibility...\r\n\r\nExecuting this code:\r\n```python\r\nfrom datasets.utils.download_manager import DownloadMode\r\n```\r\nwe will have\r\n```python\r\nDownloadMode = None\r\n```\r\n\r\nIf afterwards we use something like:\r\n```python\r\nif download_mode == DownloadMode.FORCE_REDOWNLOAD\r\n```\r\nthat will raise an exception.", "It works fine on my side:\r\n```python\r\n>>> from datasets.utils.download_manager import DownloadMode\r\n>>> DownloadMode is not None\r\nTrue\r\n```", "As reported in https://github.com/huggingface/evaluate/pull/143\r\n```python\r\nfrom datasets.utils import DownloadConfig\r\n```\r\nis also missing, I'm re-adding it", "Took the liberty of merging this one, to do a patch release soon. If we think of a better approach we can improve it later" ]
2022-06-15T09:44:33
2022-06-15T10:33:28
2022-06-15T10:23:44
https://github.com/huggingface/datasets/pull/4384 moved `datasets.utils.download_manager` to `datasets.download.download_manager` This breaks `evaluate` which imports `DownloadMode` from `datasets.utils.download_manager` This PR re-adds `datasets.utils.download_manager` without circular imports. We could also show a message that says that accessing it is deprecated, but I think we can do this in a subsequent PR, and just focus on doing a patch release for now
lhoestq
https://github.com/huggingface/datasets/pull/4497
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4497", "html_url": "https://github.com/huggingface/datasets/pull/4497", "diff_url": "https://github.com/huggingface/datasets/pull/4497.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4497.patch", "merged_at": "2022-06-15T10:23:44" }
true
1,271,945,704
4,496
Replace `assertEqual` with `assertTupleEqual` in unit tests for verbosity
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "FYI I used the following regex to look for the `assertEqual` statements where the assertion was being done over a Tuple: `self.assertEqual(.*, \\(.*,)(\\)\\))$`, hope this is useful!" ]
2022-06-15T09:29:16
2022-07-07T17:06:51
2022-07-07T16:55:48
As detailed in #4419 and as suggested by @mariosasko, we could replace the `assertEqual` assertions with `assertTupleEqual` when the assertion is between Tuples, in order to make the tests more verbose.
alvarobartt
https://github.com/huggingface/datasets/pull/4496
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4496", "html_url": "https://github.com/huggingface/datasets/pull/4496", "diff_url": "https://github.com/huggingface/datasets/pull/4496.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4496.patch", "merged_at": "2022-07-07T16:55:48" }
true
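A tiny illustrative test case (not taken from the PR) showing the pattern this change adopts: `assertTupleEqual` makes the intent explicit and additionally fails if the compared object is not actually a tuple.

```python
import unittest


class ShapeTest(unittest.TestCase):
    def test_dataset_shape(self):
        shape = (2, 3)
        # Fails with a clear element-wise message, and also if `shape` is not a tuple.
        self.assertTupleEqual(shape, (2, 3))


if __name__ == "__main__":
    unittest.main()
```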
1,271,851,025
4,495
Fix patching module that doesn't exist
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-06-15T08:17:50
2022-06-15T16:40:49
2022-06-15T08:54:09
Reported in https://github.com/huggingface/huggingface_hub/runs/6894703718?check_suite_focus=true When trying to patch `scipy.io.loadmat`: ```python ModuleNotFoundError: No module named 'scipy' ``` Instead it shouldn't raise an error and do nothing Bug introduced by #4375 Fix https://github.com/huggingface/datasets/issues/4494
lhoestq
https://github.com/huggingface/datasets/pull/4495
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4495", "html_url": "https://github.com/huggingface/datasets/pull/4495", "diff_url": "https://github.com/huggingface/datasets/pull/4495.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4495.patch", "merged_at": "2022-06-15T08:54:09" }
true
1,271,850,599
4,494
Patching fails for modules that are not installed or don't exist
closed
[]
2022-06-15T08:17:29
2022-06-15T08:54:09
2022-06-15T08:54:09
Reported in https://github.com/huggingface/huggingface_hub/runs/6894703718?check_suite_focus=true When trying to patch `scipy.io.loadmat`: ```python ModuleNotFoundError: No module named 'scipy' ``` Instead it shouldn't raise an error and do nothing We use patching to extend such functions to support remote URLs and work in streaming mode
lhoestq
https://github.com/huggingface/datasets/issues/4494
null
false
1,271,306,385
4,493
Add `@transmit_format` in `flatten`
closed
[ "@mariosasko please let me know whether we need to include some sort of tests to make sure that the decorator is working as expected. Thanks! πŸ€— ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4493). All of your documentation changes will be reflected on that endpoint.", "Hi, thanks for working on this! Yes, please add (simple) tests so we can avoid any unexpected behavior in the future.\r\n\r\n`@transmit_format` doesn't handle column renaming, so I removed it from `rename_column` and `rename_columns` and added a comment to explain this.", "Oops, I thought this PR was already merged and deleted from the source repository, I'll be creating a new branch out of `main` so as to re-create this PR... My bad :weary:" ]
2022-06-14T20:09:09
2022-09-27T11:37:25
2022-09-27T10:48:54
As suggested by @mariosasko in https://github.com/huggingface/datasets/pull/4411, we should include the `@transmit_format` decorator to `flatten`, `rename_column`, and `rename_columns` so as to ensure that the value of `_format_columns` in an `ArrowDataset` is properly updated. **Edit**: according to @mariosasko comment below, the decorator `@transmit_format` doesn't handle column renaming, so it's done manually for those instead.
alvarobartt
https://github.com/huggingface/datasets/pull/4493
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4493", "html_url": "https://github.com/huggingface/datasets/pull/4493", "diff_url": "https://github.com/huggingface/datasets/pull/4493.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4493.patch", "merged_at": null }
true
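A short sketch, on toy data, of the behaviour this PR is about: after `set_format`, the selected format should still apply to the dataset returned by `flatten`.

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [{"x": 1, "y": 2}], "b": [10]})
ds.set_format("numpy", columns=["b"])

flat = ds.flatten()  # with @transmit_format, the numpy format on "b" carries over
print(flat.format)
```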
1,271,112,497
4,492
Pin the revision in imagenet download links
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-06-14T17:15:17
2022-06-14T17:35:13
2022-06-14T17:25:45
Use the commit sha in the data file URLs of the imagenet-1k download script, in case we want to restructure the data files in the future. For example, we may split it into many more shards for better parallelism. cc @mariosasko
lhoestq
https://github.com/huggingface/datasets/pull/4492
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4492", "html_url": "https://github.com/huggingface/datasets/pull/4492", "diff_url": "https://github.com/huggingface/datasets/pull/4492.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4492.patch", "merged_at": "2022-06-14T17:25:45" }
true
1,270,803,822
4,491
Dataset Viewer issue for Pavithree/test
closed
[ "This issue can be resolved according to this post https://stackoverflow.com/questions/70566660/parquet-with-null-columns-on-pyarrow. It looks like first data entry in the json file must not have any null values as pyarrow uses this first file to infer schema for entire dataset." ]
2022-06-14T13:23:10
2022-06-14T14:37:21
2022-06-14T14:34:33
### Link https://huggingface.co/datasets/Pavithree/test ### Description I have extracted a subset of the original eli5 dataset found on Hugging Face. However, while loading the dataset it throws an `ArrowNotImplementedError: Unsupported cast from string to null using function cast_null` error. Is there anything missing on my end? Kindly help. ### Owner _No response_
Pavithree
https://github.com/huggingface/datasets/issues/4491
null
false
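A minimal pyarrow sketch of the failure mode described in the comment on the issue above (the field name is illustrative): when the first record only contains nulls, the field is inferred as the null type, and later string values cannot be cast to it, which yields the reported error.

```python
import pyarrow as pa

chunk_with_nulls = pa.Table.from_pylist([{"answer": None}])        # "answer" inferred as null type
chunk_with_text = pa.Table.from_pylist([{"answer": "some text"}])  # "answer" inferred as string

# Raises ArrowNotImplementedError: Unsupported cast from string to null using function cast_null
chunk_with_text.cast(chunk_with_nulls.schema)
```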