| column | dtype | min | max |
| --- | --- | --- | --- |
| id | int64 | 599M | 3.48B |
| number | int64 | 1 | 7.8k |
| title | string (length) | 1 | 290 |
| state | string (2 classes) | | |
| comments | list (length) | 0 | 30 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 | 2025-10-05 06:37:50 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 | 2025-10-05 10:32:43 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 | 2025-10-01 13:56:03 |
| body | string (length) | 0 | 228k |
| user | string (length) | 3 | 26 |
| html_url | string (length) | 46 | 51 |
| pull_request | dict | | |
| is_pull_request | bool (2 classes) | | |
682,573,232
523
Speed up Tokenization by optimizing cast_to_python_objects
closed
[]
2020-08-20T09:42:02
2020-08-24T08:54:15
2020-08-24T08:54:14
I changed how `cast_to_python_objects` works to make it faster. It is used to cast numpy/pytorch/tensorflow/pandas objects to python lists, and it works recursively. To avoid iterating over possibly long lists, it first checks if the first element that is not None has to be cast. If the first element needs to be...
lhoestq
https://github.com/huggingface/datasets/pull/523
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/523", "html_url": "https://github.com/huggingface/datasets/pull/523", "diff_url": "https://github.com/huggingface/datasets/pull/523.diff", "patch_url": "https://github.com/huggingface/datasets/pull/523.patch", "merged_at": "2020-08-24T08:54:14"...
true
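The early-exit idea behind PR 523 above, sketched minimally in plain Python (hypothetical helper names, numpy-only; the library's actual implementation also handles pytorch/tensorflow/pandas objects):

```python
import numpy as np

def _first_non_none(lst):
    """Return the first element that is not None, or None if all are None."""
    return next((x for x in lst if x is not None), None)

def cast_to_python_objects_sketch(obj):
    """Recursively convert numpy objects to plain Python, but avoid
    iterating over a long list whose first non-None element needs no cast."""
    if isinstance(obj, np.ndarray):
        return obj.tolist()
    if isinstance(obj, np.generic):
        return obj.item()
    if isinstance(obj, list):
        first = _first_non_none(obj)
        if cast_to_python_objects_sketch(first) is first:
            return obj  # early exit: nothing in the list needs casting
        return [cast_to_python_objects_sketch(x) for x in obj]
    return obj

print(cast_to_python_objects_sketch([np.int64(1), None, np.int64(3)]))  # [1, None, 3]
```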
682,478,833
522
dictionnary typo in docs
closed
[]
2020-08-20T07:11:05
2020-08-20T07:52:14
2020-08-20T07:52:13
In many places dictionary is spelled dictionnary; not sure if it's on purpose or not. Fixed in this PR: https://github.com/huggingface/nlp/pull/521
yonigottesman
https://github.com/huggingface/datasets/issues/522
null
false
682,477,648
521
Fix dictionnary (dictionary) typo
closed
[]
2020-08-20T07:09:02
2020-08-20T07:52:04
2020-08-20T07:52:04
This error happens many times; I'm thinking maybe it's spelled like this on purpose?
yonigottesman
https://github.com/huggingface/datasets/pull/521
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/521", "html_url": "https://github.com/huggingface/datasets/pull/521", "diff_url": "https://github.com/huggingface/datasets/pull/521.diff", "patch_url": "https://github.com/huggingface/datasets/pull/521.patch", "merged_at": "2020-08-20T07:52:04"...
true
682,264,839
520
Transform references for sacrebleu
closed
[]
2020-08-20T00:26:55
2020-08-20T09:30:54
2020-08-20T09:30:53
Currently it is impossible to use sacrebleu when `len(predictions)` != the number of references per prediction (very uncommon), due to a strange format expected by sacrebleu. If one passes in the data to `nlp.metric.compute()` in sacrebleu format, `nlp` throws an error due to mismatching lengths between predictions and r...
jbragg
https://github.com/huggingface/datasets/pull/520
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/520", "html_url": "https://github.com/huggingface/datasets/pull/520", "diff_url": "https://github.com/huggingface/datasets/pull/520.diff", "patch_url": "https://github.com/huggingface/datasets/pull/520.patch", "merged_at": "2020-08-20T09:30:53"...
true
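For context on PR 520 above: sacrebleu's corpus-level API expects references transposed into one list per reference position. A minimal sketch of such a transform (my own helper, assuming an equal number of references per prediction):

```python
def to_sacrebleu_format(references):
    """Transpose per-prediction reference lists, e.g.
    [[ref_a1, ref_a2], [ref_b1, ref_b2]] -> [[ref_a1, ref_b1], [ref_a2, ref_b2]],
    so each inner list lines up with the predictions."""
    return [list(refs) for refs in zip(*references)]

predictions = ["hello there", "general kenobi"]
references = [["hello there", "hi there"], ["general kenobi", "general grievous"]]
print(to_sacrebleu_format(references))
# [['hello there', 'general kenobi'], ['hi there', 'general grievous']]
```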
682,193,882
519
[BUG] Metrics throwing new error on master since 0.4.0
closed
[]
2020-08-19T21:29:15
2022-06-02T16:41:01
2020-08-19T22:04:40
The following error occurs when passing in references of type `List[List[str]]` to metrics like bleu. Wasn't happening on 0.4.0 but happening now on master. ``` File "/usr/local/lib/python3.7/site-packages/nlp/metric.py", line 226, in compute self.add_batch(predictions=predictions, references=references) ...
jbragg
https://github.com/huggingface/datasets/issues/519
null
false
682,131,165
518
[METRICS, breaking] Refactor caching behavior, pickle/cloudpickle metrics and dataset, add tests on metrics
closed
[]
2020-08-19T19:43:08
2020-08-24T16:01:40
2020-08-24T16:01:39
Move the acquisition of the filelock at a later stage during metrics processing so it can be pickled/cloudpickled after instantiation. Also add some tests on pickling, concurrent but separate metric instances and concurrent and distributed metric instances. Changes significantly the caching behavior for the metri...
thomwolf
https://github.com/huggingface/datasets/pull/518
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/518", "html_url": "https://github.com/huggingface/datasets/pull/518", "diff_url": "https://github.com/huggingface/datasets/pull/518.diff", "patch_url": "https://github.com/huggingface/datasets/pull/518.patch", "merged_at": "2020-08-24T16:01:39"...
true
681,896,944
517
add MLDoc dataset
open
[]
2020-08-19T14:41:59
2021-08-03T05:59:33
null
Hi, I am recommending that someone add MLDoc, a multilingual news topic classification dataset. - Here's a link to the Github: https://github.com/facebookresearch/MLDoc - and the paper: http://www.lrec-conf.org/proceedings/lrec2018/pdf/658.pdf Looks like the dataset contains news stories in multiple languages...
jxmorris12
https://github.com/huggingface/datasets/issues/517
null
false
681,846,032
516
[Breaking] Rename formated to formatted
closed
[]
2020-08-19T13:35:23
2020-08-20T08:41:17
2020-08-20T08:41:16
`formated` is not correct but `formatted` is
lhoestq
https://github.com/huggingface/datasets/pull/516
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/516", "html_url": "https://github.com/huggingface/datasets/pull/516", "diff_url": "https://github.com/huggingface/datasets/pull/516.diff", "patch_url": "https://github.com/huggingface/datasets/pull/516.patch", "merged_at": "2020-08-20T08:41:16"...
true
681,845,619
515
Fix batched map for formatted dataset
closed
[]
2020-08-19T13:34:50
2020-08-20T20:30:43
2020-08-20T20:30:42
If you had a dataset formatted as numpy for example, and tried to do a batched map, then it would crash because one of the elements from the inputs was missing for unchanged columns (ex: batch of length 999 instead of 1000). This happened during the creation of the `pa.Table`, since columns had different lengths.
lhoestq
https://github.com/huggingface/datasets/pull/515
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/515", "html_url": "https://github.com/huggingface/datasets/pull/515", "diff_url": "https://github.com/huggingface/datasets/pull/515.diff", "patch_url": "https://github.com/huggingface/datasets/pull/515.patch", "merged_at": "2020-08-20T20:30:42"...
true
681,256,348
514
dataset.shuffle(keep_in_memory=True) is never allowed
closed
[]
2020-08-18T18:47:40
2022-10-10T12:21:58
2022-10-10T12:21:58
As of commit ef4aac2, the usage of the parameter `keep_in_memory=True` is never possible: `dataset.select(keep_in_memory=True)` The commit added the lines ```python # lines 994-996 in src/nlp/arrow_dataset.py assert ( not keep_in_memory or cache_file_name is None ), "Please use either...
vegarab
https://github.com/huggingface/datasets/issues/514
null
false
681,215,612
513
[speedup] Use indices mappings instead of deepcopy for all the samples reordering methods
closed
[]
2020-08-18T17:36:02
2020-08-28T08:41:51
2020-08-28T08:41:50
Use an indices mapping instead of rewriting the dataset for all the samples re-ordering/selection methods (`select`, `sort`, `shuffle`, `shard`, `train_test_split`). Added a `flatten_indices` method which copies the dataset to a new table to remove the indices mapping, with tests. All the samples re-ordering/selecti...
thomwolf
https://github.com/huggingface/datasets/pull/513
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/513", "html_url": "https://github.com/huggingface/datasets/pull/513", "diff_url": "https://github.com/huggingface/datasets/pull/513.diff", "patch_url": "https://github.com/huggingface/datasets/pull/513.patch", "merged_at": "2020-08-28T08:41:50"...
true
681,137,164
512
Delete CONTRIBUTING.md
closed
[]
2020-08-18T15:33:25
2020-08-18T15:48:21
2020-08-18T15:39:07
ChenZehong13
https://github.com/huggingface/datasets/pull/512
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/512", "html_url": "https://github.com/huggingface/datasets/pull/512", "diff_url": "https://github.com/huggingface/datasets/pull/512.diff", "patch_url": "https://github.com/huggingface/datasets/pull/512.patch", "merged_at": null }
true
681,055,553
511
dataset.shuffle() and select() resets format. Intended?
closed
[]
2020-08-18T13:46:01
2020-09-14T08:45:38
2020-09-14T08:45:38
Calling `dataset.shuffle()` or `dataset.select()` on a dataset resets its format set by `dataset.set_format()`. Is this intended or an oversight? When working on quite large datasets that require a lot of preprocessing I find it convenient to save the processed dataset to file using `torch.save("dataset.pt")`. Later...
vegarab
https://github.com/huggingface/datasets/issues/511
null
false
680,823,644
510
Version of numpy to use the library
closed
[]
2020-08-18T08:59:13
2020-08-19T18:35:56
2020-08-19T18:35:56
Thank you so much for your excellent work! I would like to use the nlp library in my project. While importing nlp, I am receiving the following error: `AttributeError: module 'numpy.random' has no attribute 'Generator'` The numpy version in my project is 1.16.0. May I learn which numpy version is used for the nlp library? Th...
isspek
https://github.com/huggingface/datasets/issues/510
null
false
679,711,585
509
Converting TensorFlow dataset example
closed
[]
2020-08-16T08:05:20
2021-08-03T06:01:18
2021-08-03T06:01:17
Hi, I want to use TensorFlow datasets with this repo. I noticed you made a conversion script; can you give a simple example of using it? Thanks
saareliad
https://github.com/huggingface/datasets/issues/509
null
false
679,705,734
508
TypeError: Receiver() takes no arguments
closed
[]
2020-08-16T07:18:16
2020-09-01T14:53:33
2020-09-01T14:49:03
I am trying to load a wikipedia data set ``` import nlp from nlp import load_dataset dataset = load_dataset("wikipedia", "20200501.en", split="train", cache_dir=data_path, beam_runner='DirectRunner') #dataset = load_dataset('wikipedia', '20200501.sv', cache_dir=data_path, beam_runner='DirectRunner') ``` Th...
sebastiantomac
https://github.com/huggingface/datasets/issues/508
null
false
679,400,683
507
Errors when I use
closed
[]
2020-08-14T21:03:57
2020-08-14T21:39:10
2020-08-14T21:39:10
I tried the following example code from https://huggingface.co/deepset/roberta-base-squad2 and got errors. I am using **transformers 3.0.2**. Code: from transformers.pipelines import pipeline from transformers.modeling_auto import AutoModelForQuestionAnswering from transformers.tokenization_auto import AutoToke...
mchari
https://github.com/huggingface/datasets/issues/507
null
false
679,164,788
506
fix dataset.map for function without outputs
closed
[]
2020-08-14T13:40:22
2020-08-17T11:24:39
2020-08-17T11:24:38
As noticed in #505 , giving a function that doesn't return anything in `.map` raises an error because of an unreferenced variable. I fixed that and added tests. Thanks @avloss for reporting
lhoestq
https://github.com/huggingface/datasets/pull/506
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/506", "html_url": "https://github.com/huggingface/datasets/pull/506", "diff_url": "https://github.com/huggingface/datasets/pull/506.diff", "patch_url": "https://github.com/huggingface/datasets/pull/506.patch", "merged_at": "2020-08-17T11:24:38"...
true
678,791,400
505
tmp_file referenced before assignment
closed
[]
2020-08-13T23:27:33
2020-08-14T13:42:46
2020-08-14T13:42:46
Just learning about this library - so I might not have set up all the flags correctly, but I was getting this error about "tmp_file".
avloss
https://github.com/huggingface/datasets/pull/505
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/505", "html_url": "https://github.com/huggingface/datasets/pull/505", "diff_url": "https://github.com/huggingface/datasets/pull/505.diff", "patch_url": "https://github.com/huggingface/datasets/pull/505.patch", "merged_at": null }
true
678,756,211
504
Added downloading to Hyperpartisan news detection
closed
[]
2020-08-13T21:53:46
2020-08-27T08:18:41
2020-08-27T08:18:41
Following the discussion on Slack and #349, I've updated the hyperpartisan dataset to pull directly from Zenodo rather than manual install, which should make this dataset much more accessible. Many thanks to @johanneskiesel ! Currently doesn't pass `test_load_real_dataset` - I'm using `self.config.name` which is `de...
ghomasHudson
https://github.com/huggingface/datasets/pull/504
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/504", "html_url": "https://github.com/huggingface/datasets/pull/504", "diff_url": "https://github.com/huggingface/datasets/pull/504.diff", "patch_url": "https://github.com/huggingface/datasets/pull/504.patch", "merged_at": "2020-08-27T08:18:41"...
true
678,726,538
503
CompGuessWhat?! 0.2.0
closed
[]
2020-08-13T20:51:26
2020-10-21T06:54:29
2020-10-21T06:54:29
We updated some metadata information associated with the dataset. In addition, we've updated the `create_dummy_data.py` script to generate data samples for the dataset.
aleSuglia
https://github.com/huggingface/datasets/pull/503
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/503", "html_url": "https://github.com/huggingface/datasets/pull/503", "diff_url": "https://github.com/huggingface/datasets/pull/503.diff", "patch_url": "https://github.com/huggingface/datasets/pull/503.patch", "merged_at": null }
true
678,546,070
502
Fix tokenizers caching
closed
[]
2020-08-13T15:53:37
2020-08-19T13:37:19
2020-08-19T13:37:18
I've found some cases where the caching didn't work properly for tokenizers: 1. if a tokenizer has a regex pattern, then the caching would be inconsistent across sessions 2. if a tokenizer has a cache attribute that changes after some calls, then the caching would not work after cache updates 3. if a tokenizer is u...
lhoestq
https://github.com/huggingface/datasets/pull/502
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/502", "html_url": "https://github.com/huggingface/datasets/pull/502", "diff_url": "https://github.com/huggingface/datasets/pull/502.diff", "patch_url": "https://github.com/huggingface/datasets/pull/502.patch", "merged_at": "2020-08-19T13:37:17"...
true
677,952,893
501
Caching doesn't work for map (non-deterministic)
closed
[]
2020-08-12T20:20:07
2022-08-08T11:02:23
2020-08-24T16:34:35
The caching functionality doesn't work reliably when tokenizing a dataset. Here's a small example to reproduce it. ```python import nlp import transformers def main(): ds = nlp.load_dataset("reddit", split="train[:500]") tokenizer = transformers.AutoTokenizer.from_pretrained("gpt2") def conv...
wulu473
https://github.com/huggingface/datasets/issues/501
null
false
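Issue 501 above hinges on the map function's fingerprint: caching keys off a hash of the pickled function, so a tokenizer whose internal state mutates between calls hashes differently on the next run. A toy illustration of that failure mode (the hashing here is a stand-in, not the library's actual mechanism):

```python
import hashlib
import pickle

def fingerprint(obj):
    """Stand-in for the pickle-based hashing that decides cache hits."""
    return hashlib.md5(pickle.dumps(obj)).hexdigest()

class ToyTokenizer:
    """A callable with an internal cache, mimicking a tokenizer whose
    state mutates across calls and therefore pickles differently."""
    def __init__(self):
        self.cache = {}
    def __call__(self, text):
        self.cache.setdefault(text, text.split())
        return self.cache[text]

tok = ToyTokenizer()
before = fingerprint(tok)
tok("some text")  # mutates the internal cache
after = fingerprint(tok)
print(before == after)  # False: the same map function no longer hashes the same
```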
677,841,708
500
Use hnsw in wiki_dpr
closed
[]
2020-08-12T16:58:07
2020-08-20T07:59:19
2020-08-20T07:59:18
The HNSW faiss index is much faster than the regular Flat index.
lhoestq
https://github.com/huggingface/datasets/pull/500
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/500", "html_url": "https://github.com/huggingface/datasets/pull/500", "diff_url": "https://github.com/huggingface/datasets/pull/500.diff", "patch_url": "https://github.com/huggingface/datasets/pull/500.patch", "merged_at": "2020-08-20T07:59:18"...
true
677,709,938
499
Narrativeqa (with full text)
closed
[]
2020-08-12T13:49:43
2020-12-09T11:21:02
2020-12-09T11:21:02
Following the uploading of the full text data in #309, I've added the full text to the narrativeqa dataset. Few notes: - Had some encoding issues using the default `open` so am using `open(encoding="latin-1"...` which seems to fix it. Looks fine. - Can't get the dummy data to work. Currently putting stuff at: ...
ghomasHudson
https://github.com/huggingface/datasets/pull/499
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/499", "html_url": "https://github.com/huggingface/datasets/pull/499", "diff_url": "https://github.com/huggingface/datasets/pull/499.diff", "patch_url": "https://github.com/huggingface/datasets/pull/499.patch", "merged_at": null }
true
677,597,479
498
dont use beam fs to save info for local cache dir
closed
[]
2020-08-12T11:00:00
2020-08-14T13:17:21
2020-08-14T13:17:20
If the cache dir is local, then we shouldn't use beam's filesystem to save the dataset info. Fix #490
lhoestq
https://github.com/huggingface/datasets/pull/498
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/498", "html_url": "https://github.com/huggingface/datasets/pull/498", "diff_url": "https://github.com/huggingface/datasets/pull/498.diff", "patch_url": "https://github.com/huggingface/datasets/pull/498.patch", "merged_at": "2020-08-14T13:17:20"...
true
677,057,116
497
skip header in PAWS-X
closed
[]
2020-08-11T17:26:25
2020-08-19T09:50:02
2020-08-19T09:50:01
This should fix #485 I also updated the `dataset_infos.json` file that is used to verify the integrity of the generated splits (the number of examples was reduced by one). Note that there are new fields in `dataset_infos.json` introduced in the latest release 0.4.0 corresponding to post processing info. I remove...
lhoestq
https://github.com/huggingface/datasets/pull/497
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/497", "html_url": "https://github.com/huggingface/datasets/pull/497", "diff_url": "https://github.com/huggingface/datasets/pull/497.diff", "patch_url": "https://github.com/huggingface/datasets/pull/497.patch", "merged_at": "2020-08-19T09:50:01"...
true
677,016,998
496
fix bad type in overflow check
closed
[]
2020-08-11T16:24:58
2020-08-14T13:29:35
2020-08-14T13:29:34
When writing an arrow file and inferring the features, the overflow check could fail if the first example had a `null` field. This is because we were not using the inferred features to do this check, and we could end up with arrays that don't match because of a type mismatch (`null` vs `string` for example). This s...
lhoestq
https://github.com/huggingface/datasets/pull/496
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/496", "html_url": "https://github.com/huggingface/datasets/pull/496", "diff_url": "https://github.com/huggingface/datasets/pull/496.diff", "patch_url": "https://github.com/huggingface/datasets/pull/496.patch", "merged_at": "2020-08-14T13:29:34"...
true
676,959,289
495
stack vectors in pytorch and tensorflow
closed
[]
2020-08-11T15:12:53
2020-08-12T09:30:49
2020-08-12T09:30:48
When the format of a dataset is set to pytorch or tensorflow, and if the dataset has vectors in it, they were not stacked together as tensors when calling `dataset[i:i + batch_size][column]` or `dataset[column]`. I added support for stacked tensors for both pytorch and tensorflow. For ragged tensors, they are stack...
lhoestq
https://github.com/huggingface/datasets/pull/495
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/495", "html_url": "https://github.com/huggingface/datasets/pull/495", "diff_url": "https://github.com/huggingface/datasets/pull/495.diff", "patch_url": "https://github.com/huggingface/datasets/pull/495.patch", "merged_at": "2020-08-12T09:30:48"...
true
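What "stacked" means in PR 495 above, in a few lines of pytorch (assuming equal-length vectors; ragged inputs need the separate handling the PR mentions):

```python
import torch

batch = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]  # e.g. dataset[i:i + 3]["vec"]
unstacked = [torch.tensor(v) for v in batch]   # old behavior: list of 1-D tensors
stacked = torch.stack(unstacked)               # new behavior: one 2-D tensor
print(stacked.shape)  # torch.Size([3, 2])
```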
676,886,955
494
Fix numpy stacking
closed
[]
2020-08-11T13:40:30
2020-08-11T14:56:50
2020-08-11T13:49:52
When getting items using a column name as a key, numpy arrays were not stacked. I fixed that and added some tests. There is another issue that still needs to be fixed though: when getting items using a column name as a key, pytorch tensors are not stacked (it outputs a list of tensors). This PR should help with the...
lhoestq
https://github.com/huggingface/datasets/pull/494
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/494", "html_url": "https://github.com/huggingface/datasets/pull/494", "diff_url": "https://github.com/huggingface/datasets/pull/494.diff", "patch_url": "https://github.com/huggingface/datasets/pull/494.patch", "merged_at": "2020-08-11T13:49:52"...
true
676,527,351
493
Fix wmt zh-en url
closed
[]
2020-08-11T02:14:52
2020-08-11T02:22:28
2020-08-11T02:22:12
I verified that ``` wget https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-zh.tar.gz.00 ``` runs in 2 minutes.
sshleifer
https://github.com/huggingface/datasets/pull/493
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/493", "html_url": "https://github.com/huggingface/datasets/pull/493", "diff_url": "https://github.com/huggingface/datasets/pull/493.diff", "patch_url": "https://github.com/huggingface/datasets/pull/493.patch", "merged_at": null }
true
676,495,064
492
nlp.Features does not distinguish between nullable and non-nullable types in PyArrow schema
closed
[]
2020-08-11T00:27:46
2020-08-26T16:17:19
2020-08-26T16:17:19
Here's the code I'm trying to run: ```python dset_wikipedia = nlp.load_dataset("wikipedia", "20200501.en", split="train", cache_dir=args.cache_dir) dset_wikipedia.drop(columns=["title"]) dset_wikipedia.features.pop("title") dset_books = nlp.load_dataset("bookcorpus", split="train", cache_dir=args.cache_dir) dse...
jarednielsen
https://github.com/huggingface/datasets/issues/492
null
false
676,486,275
491
No 0.4.0 release on GitHub
closed
[]
2020-08-10T23:59:57
2020-08-11T16:50:07
2020-08-11T16:50:07
0.4.0 was released on PyPI, but not on GitHub. This means [the documentation](https://huggingface.co/nlp/) still shows 0.3.0, and that there's no tag to easily clone the 0.4.0 version of the repo.
jarednielsen
https://github.com/huggingface/datasets/issues/491
null
false
676,482,242
490
Loading preprocessed Wikipedia dataset requires apache_beam
closed
[]
2020-08-10T23:46:50
2020-08-14T13:17:20
2020-08-14T13:17:20
Running `nlp.load_dataset("wikipedia", "20200501.en", split="train", dir="/tmp/wikipedia")` gives an error if apache_beam is not installed, stemming from https://github.com/huggingface/nlp/blob/38eb2413de54ee804b0be81781bd65ac4a748ced/src/nlp/builder.py#L981-L988 This succeeded without the dependency in ve...
jarednielsen
https://github.com/huggingface/datasets/issues/490
null
false
676,456,257
489
ug
closed
[]
2020-08-10T22:33:03
2020-08-10T22:55:14
2020-08-10T22:33:40
timothyjlaurent
https://github.com/huggingface/datasets/issues/489
null
false
676,299,993
488
issues with downloading datasets for wmt16 and wmt19
closed
[]
2020-08-10T17:32:51
2022-10-04T17:46:59
2022-10-04T17:46:58
I have encountered multiple issues while trying to: ``` import nlp dataset = nlp.load_dataset('wmt16', 'ru-en') metric = nlp.load_metric('wmt16') ``` 1. I had to do `pip install -e ".[dev]" ` on master; the currently released nlp didn't work (sorry, didn't save the error) - I went back to the released version and no...
stas00
https://github.com/huggingface/datasets/issues/488
null
false
676,143,029
487
Fix elasticsearch result ids returning as strings
closed
[]
2020-08-10T13:37:11
2020-08-31T10:42:46
2020-08-31T10:42:46
I am using the latest elasticsearch binary and master of nlp. For me elasticsearch searches failed because the resultant "id_" values returned for searches are strings, but our library assumes them to be integers.
sai-prasanna
https://github.com/huggingface/datasets/pull/487
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/487", "html_url": "https://github.com/huggingface/datasets/pull/487", "diff_url": "https://github.com/huggingface/datasets/pull/487.diff", "patch_url": "https://github.com/huggingface/datasets/pull/487.patch", "merged_at": "2020-08-31T10:42:46"...
true
675,649,034
486
Bookcorpus data contains pretokenized text
closed
[]
2020-08-09T06:53:24
2022-10-04T17:44:33
2022-10-04T17:44:33
It seems that the bookcorpus data downloaded through the library was pretokenized with NLTK's Treebank tokenizer, which changes the text in ways incompatible with how, for instance, BERT's wordpiece tokenizer works. For example, "didn't" becomes "did" + "n't", and double quotes are changed to `` and '' for start and end q...
orsharir
https://github.com/huggingface/datasets/issues/486
null
false
675,595,393
485
PAWS dataset first item is header
closed
[]
2020-08-08T22:05:25
2020-08-19T09:50:01
2020-08-19T09:50:01
``` import nlp dataset = nlp.load_dataset('xtreme', 'PAWS-X.en') dataset['test'][0] ``` prints the following ``` {'label': 'label', 'sentence1': 'sentence1', 'sentence2': 'sentence2'} ``` dataset['test'][0] should probably be the first item in the dataset, not just a dictionary mapping the column names t...
jxmorris12
https://github.com/huggingface/datasets/issues/485
null
false
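The fix for issue 485 above (PR 497, "skip header in PAWS-X") amounts to consuming the first row of the TSV before yielding examples. A minimal sketch with inline sample data (not the dataset script's actual code):

```python
import csv
import io

tsv = "sentence1\tsentence2\tlabel\nA man is eating.\tSomeone eats.\t1\n"
reader = csv.reader(io.StringIO(tsv), delimiter="\t")
next(reader)  # skip the header row instead of yielding it as example 0
for sentence1, sentence2, label in reader:
    print({"sentence1": sentence1, "sentence2": sentence2, "label": label})
```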
675,088,983
484
update mirror for RT dataset
closed
[]
2020-08-07T15:25:45
2020-08-24T13:33:37
2020-08-24T13:33:37
jxmorris12
https://github.com/huggingface/datasets/pull/484
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/484", "html_url": "https://github.com/huggingface/datasets/pull/484", "diff_url": "https://github.com/huggingface/datasets/pull/484.diff", "patch_url": "https://github.com/huggingface/datasets/pull/484.patch", "merged_at": "2020-08-24T13:33:37"...
true
675,080,694
483
rotten tomatoes movie review dataset taken down
closed
[]
2020-08-07T15:12:01
2020-09-08T09:36:34
2020-09-08T09:36:33
In an interesting twist of events, the individual who created the movie review dataset seems to have left Cornell, and their webpage has been removed, along with the movie review dataset (http://www.cs.cornell.edu/people/pabo/movie-review-data/rt-polaritydata.tar.gz). It's not downloadable anymore.
jxmorris12
https://github.com/huggingface/datasets/issues/483
null
false
674,851,147
482
Bugs : dataset.map() is frozen on ELI5
closed
[]
2020-08-07T08:23:35
2023-04-06T09:39:59
2020-08-11T23:55:15
Hi Huggingface Team! Thank you guys once again for this amazing repo. I have tried to prepare ELI5 to train with T5, based on [this wonderful notebook of Suraj Patil](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) However, when I run `dataset.map()` on ELI5 to prepare `input_text, ta...
ratthachat
https://github.com/huggingface/datasets/issues/482
null
false
674,567,389
481
Apply utf-8 encoding to all datasets
closed
[]
2020-08-06T20:02:09
2020-08-20T08:16:08
2020-08-20T08:16:08
## Description This PR applies utf-8 encoding for all instances of `with open(...) as f` to all Python files in `datasets/`. As suggested by @thomwolf in #468 , we use regular expressions and the following function ```python def apply_encoding_on_file_open(filepath: str): """Apply UTF-8 encoding for all insta...
lewtun
https://github.com/huggingface/datasets/pull/481
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/481", "html_url": "https://github.com/huggingface/datasets/pull/481", "diff_url": "https://github.com/huggingface/datasets/pull/481.diff", "patch_url": "https://github.com/huggingface/datasets/pull/481.patch", "merged_at": "2020-08-20T08:16:08"...
true
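The body of PR 481 above is truncated before the function's implementation; a plausible sketch of such a regex-based rewrite (my reconstruction, with a deliberately naive pattern; the PR's actual regex may differ):

```python
import re

def apply_encoding_on_file_open(filepath: str):
    """Rewrite `with open(...)` calls in a Python file to pass
    encoding="utf-8" when no encoding is already specified."""
    with open(filepath, encoding="utf-8") as f:
        source = f.read()

    def add_encoding(match):
        args = match.group("args")
        if "encoding" in args:
            return match.group(0)  # leave explicit encodings untouched
        return f'with open({args}, encoding="utf-8") as'

    new_source = re.sub(r'with open\((?P<args>[^)]*)\) as', add_encoding, source)
    with open(filepath, "w", encoding="utf-8") as f:
        f.write(new_source)
```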
674,245,959
480
Column indexing hotfix
closed
[]
2020-08-06T11:37:05
2023-09-24T09:49:33
2020-08-12T08:36:10
As observed for example in #469, currently `__getitem__` does not convert the data to the dataset format when indexing by column. This is a hotfix that imitates the functional 0.3.0 code. In the future it'd probably be nice to have a test there.
TevenLeScao
https://github.com/huggingface/datasets/pull/480
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/480", "html_url": "https://github.com/huggingface/datasets/pull/480", "diff_url": "https://github.com/huggingface/datasets/pull/480.diff", "patch_url": "https://github.com/huggingface/datasets/pull/480.patch", "merged_at": null }
true
673,905,407
479
add METEOR metric
closed
[]
2020-08-05T23:13:00
2020-08-19T13:39:09
2020-08-19T13:39:09
Added the METEOR metric. Can be used like this: ```python import nlp meteor = nlp.load_metric('metrics/meteor') meteor.compute(["some string", "some string"], ["some string", "some similar string"]) # {'meteor': 0.6411637931034483} meteor.add("some string", "some string") meteor.add('some string", "some simila...
vegarab
https://github.com/huggingface/datasets/pull/479
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/479", "html_url": "https://github.com/huggingface/datasets/pull/479", "diff_url": "https://github.com/huggingface/datasets/pull/479.diff", "patch_url": "https://github.com/huggingface/datasets/pull/479.patch", "merged_at": "2020-08-19T13:39:09"...
true
673,178,317
478
Export TFRecord to GCP bucket
closed
[]
2020-08-05T01:08:32
2020-08-05T01:21:37
2020-08-05T01:21:36
Previously, I was writing TFRecords manually to GCP bucket with : `with tf.io.TFRecordWriter('gs://my_bucket/x.tfrecord')` Since `0.4.0` is out with the `export()` function, I tried it. But it seems TFRecords cannot be directly written to GCP bucket. `dataset.export('local.tfrecord')` works fine, but `dataset....
astariul
https://github.com/huggingface/datasets/issues/478
null
false
673,142,143
477
Overview.ipynb throws exceptions with nlp 0.4.0
closed
[]
2020-08-04T23:18:15
2021-08-03T06:02:15
2021-08-03T06:02:15
with nlp 0.4.0, the TensorFlow example in Overview.ipynb throws the following exceptions: --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-5-48907f2ad433> in <module> ----> 1 features = {x: trai...
mandy-li
https://github.com/huggingface/datasets/issues/477
null
false
672,991,854
476
CheckList
closed
[]
2020-08-04T18:32:05
2022-10-03T09:43:37
2022-10-03T09:43:37
Sorry for the large pull request. - Added checklists as datasets. I can't run `test_load_real_dataset` (see #474), but I can load the datasets successfully as shown in the example notebook - Added a checklist wrapper
marcotcr
https://github.com/huggingface/datasets/pull/476
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/476", "html_url": "https://github.com/huggingface/datasets/pull/476", "diff_url": "https://github.com/huggingface/datasets/pull/476.diff", "patch_url": "https://github.com/huggingface/datasets/pull/476.patch", "merged_at": null }
true
672,884,595
475
misc. bugs and quality of life
closed
[]
2020-08-04T15:32:29
2020-08-17T21:14:08
2020-08-17T21:14:07
A few misc. bugs and QOL improvements that I've come across in using the library. Let me know if you don't like any of them and I can adjust/remove them. 1. Printing datasets without a description field throws an error when formatting the `single_line_description`. This fixes that, and also adds some formatting to t...
joeddav
https://github.com/huggingface/datasets/pull/475
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/475", "html_url": "https://github.com/huggingface/datasets/pull/475", "diff_url": "https://github.com/huggingface/datasets/pull/475.diff", "patch_url": "https://github.com/huggingface/datasets/pull/475.patch", "merged_at": "2020-08-17T21:14:07"...
true
672,407,330
474
test_load_real_dataset when config has BUILDER_CONFIGS that matter
closed
[]
2020-08-03T23:46:36
2020-09-07T14:53:13
2020-09-07T14:53:13
If a dataset has custom `BUILDER_CONFIGS` with non-keyword arguments (or keyword arguments with non-default values), the config is not loaded during the test and causes an error. I think the problem is that `test_load_real_dataset` calls `load_dataset` with `data_dir=temp_data_dir` ([here](https://github.com/huggingfa...
marcotcr
https://github.com/huggingface/datasets/issues/474
null
false
672,007,247
473
add DoQA dataset (ACL 2020)
closed
[]
2020-08-03T11:26:52
2020-09-10T17:19:11
2020-09-03T11:44:15
add DoQA dataset (ACL 2020) http://ixa.eus/node/12931
mariamabarham
https://github.com/huggingface/datasets/pull/473
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/473", "html_url": "https://github.com/huggingface/datasets/pull/473", "diff_url": "https://github.com/huggingface/datasets/pull/473.diff", "patch_url": "https://github.com/huggingface/datasets/pull/473.patch", "merged_at": "2020-09-03T11:44:14"...
true
672,000,745
472
add crd3 dataset
closed
[]
2020-08-03T11:15:02
2020-08-03T11:22:10
2020-08-03T11:22:09
opening new PR for CRD3 dataset (ACL2020) to fix the circle CI problems
mariamabarham
https://github.com/huggingface/datasets/pull/472
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/472", "html_url": "https://github.com/huggingface/datasets/pull/472", "diff_url": "https://github.com/huggingface/datasets/pull/472.diff", "patch_url": "https://github.com/huggingface/datasets/pull/472.patch", "merged_at": "2020-08-03T11:22:09"...
true
671,996,423
471
add reuters21578 dataset
closed
[]
2020-08-03T11:07:14
2022-08-04T08:39:11
2020-09-03T09:58:50
New PR to add the reuters21578 dataset and fix the Circle CI problems. Partially fixes: #353. Subsequent PR after: #449
mariamabarham
https://github.com/huggingface/datasets/pull/471
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/471", "html_url": "https://github.com/huggingface/datasets/pull/471", "diff_url": "https://github.com/huggingface/datasets/pull/471.diff", "patch_url": "https://github.com/huggingface/datasets/pull/471.patch", "merged_at": "2020-09-03T09:58:50"...
true
671,952,276
470
Adding IWSLT 2017 dataset.
closed
[]
2020-08-03T09:52:39
2020-09-07T12:33:30
2020-09-07T12:33:30
Created a [IWSLT 2017](https://sites.google.com/site/iwsltevaluation2017/TED-tasks) dataset script for the *multilingual data*. ``` Bilingual data: {Arabic, German, French, Japanese, Korean, Chinese} <-> English Multilingual data: German, English, Italian, Dutch, Romanian. (Any pair) ``` I'm unsure how to h...
Narsil
https://github.com/huggingface/datasets/pull/470
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/470", "html_url": "https://github.com/huggingface/datasets/pull/470", "diff_url": "https://github.com/huggingface/datasets/pull/470.diff", "patch_url": "https://github.com/huggingface/datasets/pull/470.patch", "merged_at": "2020-09-07T12:33:30"...
true
671,876,963
469
invalid data type 'str' at _convert_outputs in arrow_dataset.py
closed
[]
2020-08-03T07:48:29
2023-07-20T15:54:17
2023-07-20T15:54:17
I'm trying to build a multi-label text classifier model using the Transformers lib. I'm using Transformers NLP to load the data set; while calling the trainer.train() method, it throws the following error File "C:\***\arrow_dataset.py", line 343, in _convert_outputs v = command(v) TypeError: new(): invalid data type ...
Murgates
https://github.com/huggingface/datasets/issues/469
null
false
671,622,441
468
UnicodeDecodeError while loading PAN-X task of XTREME dataset
closed
[]
2020-08-02T14:05:10
2020-08-20T08:16:08
2020-08-20T08:16:08
Hi 🤗 team! ## Description of the problem I'm running into a `UnicodeDecodeError` while trying to load the PAN-X subset the XTREME dataset: ``` --------------------------------------------------------------------------- UnicodeDecodeError Traceback (most recent call last) <ipython-inp...
lewtun
https://github.com/huggingface/datasets/issues/468
null
false
671,580,010
467
DOCS: Fix typo
closed
[]
2020-08-02T08:59:37
2020-08-02T13:52:27
2020-08-02T09:18:54
Fix typo from dictionnary -> dictionary
bharatr21
https://github.com/huggingface/datasets/pull/467
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/467", "html_url": "https://github.com/huggingface/datasets/pull/467", "diff_url": "https://github.com/huggingface/datasets/pull/467.diff", "patch_url": "https://github.com/huggingface/datasets/pull/467.patch", "merged_at": "2020-08-02T09:18:54"...
true
670,766,891
466
[METRICS] Various improvements on metrics
closed
[]
2020-08-01T11:03:45
2020-08-17T15:15:00
2020-08-17T15:14:59
- Disallow the use of positional arguments to avoid `predictions` vs `references` mistakes - Allow to directly feed numpy/pytorch/tensorflow/pandas objects in metrics
thomwolf
https://github.com/huggingface/datasets/pull/466
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/466", "html_url": "https://github.com/huggingface/datasets/pull/466", "diff_url": "https://github.com/huggingface/datasets/pull/466.diff", "patch_url": "https://github.com/huggingface/datasets/pull/466.patch", "merged_at": "2020-08-17T15:14:59"...
true
669,889,779
465
Keep features after transform
closed
[]
2020-07-31T14:43:21
2020-07-31T18:27:33
2020-07-31T18:27:32
When applying a transform like `map`, some features were lost (and inferred features were used). It was the case for ClassLabel, Translation, etc. To fix that, I did some modifications in the `ArrowWriter`: - added the `update_features` parameter. When it's `True`, then the features specified by the user (if any...
lhoestq
https://github.com/huggingface/datasets/pull/465
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/465", "html_url": "https://github.com/huggingface/datasets/pull/465", "diff_url": "https://github.com/huggingface/datasets/pull/465.diff", "patch_url": "https://github.com/huggingface/datasets/pull/465.patch", "merged_at": "2020-07-31T18:27:32"...
true
669,767,381
464
Add rename, remove and cast in-place operations
closed
[]
2020-07-31T12:30:21
2020-07-31T15:50:02
2020-07-31T15:50:00
Add a bunch of in-place operations leveraging the Arrow back-end to rename and remove columns and cast to new features without using the more expensive `map` method. These methods are added to `Dataset` as well as `DatasetDict`. Added tests for these new methods and added the methods to the doc. Naming follows th...
thomwolf
https://github.com/huggingface/datasets/pull/464
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/464", "html_url": "https://github.com/huggingface/datasets/pull/464", "diff_url": "https://github.com/huggingface/datasets/pull/464.diff", "patch_url": "https://github.com/huggingface/datasets/pull/464.patch", "merged_at": "2020-07-31T15:50:00"...
true
669,735,455
463
Add dataset/mlsum
closed
[]
2020-07-31T11:50:52
2020-08-24T14:54:42
2020-08-24T14:54:42
New pull request that should correct the previous errors. The load_real_data test still fails because it is looking for a default dataset URL that does not exist; this does not happen when loading the dataset with load_dataset.
RachelKer
https://github.com/huggingface/datasets/pull/463
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/463", "html_url": "https://github.com/huggingface/datasets/pull/463", "diff_url": "https://github.com/huggingface/datasets/pull/463.diff", "patch_url": "https://github.com/huggingface/datasets/pull/463.patch", "merged_at": null }
true
669,715,547
462
add DoQA (ACL 2020) dataset
closed
[]
2020-07-31T11:25:56
2023-09-24T09:48:42
2020-08-03T11:28:27
adds DoQA (ACL 2020) dataset
mariamabarham
https://github.com/huggingface/datasets/pull/462
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/462", "html_url": "https://github.com/huggingface/datasets/pull/462", "diff_url": "https://github.com/huggingface/datasets/pull/462.diff", "patch_url": "https://github.com/huggingface/datasets/pull/462.patch", "merged_at": null }
true
669,703,508
461
Doqa
closed
[]
2020-07-31T11:11:12
2023-09-24T09:48:40
2020-07-31T11:13:15
add DoQA (ACL 2020) dataset
mariamabarham
https://github.com/huggingface/datasets/pull/461
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/461", "html_url": "https://github.com/huggingface/datasets/pull/461", "diff_url": "https://github.com/huggingface/datasets/pull/461.diff", "patch_url": "https://github.com/huggingface/datasets/pull/461.patch", "merged_at": null }
true
669,585,256
460
Fix KeyboardInterrupt in map and bad indices in select
closed
[]
2020-07-31T08:57:15
2020-07-31T11:32:19
2020-07-31T11:32:18
If you interrupted a map function while it was writing, the cached file was not discarded. Therefore the next time you called map, it was loading an incomplete arrow file. We had the same issue with select if there was a bad index at one point. To fix that I used temporary files that are renamed once everything...
lhoestq
https://github.com/huggingface/datasets/pull/460
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/460", "html_url": "https://github.com/huggingface/datasets/pull/460", "diff_url": "https://github.com/huggingface/datasets/pull/460.diff", "patch_url": "https://github.com/huggingface/datasets/pull/460.patch", "merged_at": "2020-07-31T11:32:18"...
true
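The write-then-rename pattern from PR 460 above, as a standalone sketch (hypothetical helper; the PR applies the same idea to arrow cache files):

```python
import os
import tempfile

def atomic_write(path: str, data: bytes):
    """Write to a temp file in the target directory, then rename into place.
    An interrupt mid-write leaves the target path untouched."""
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
        os.replace(tmp_path, path)  # atomic rename
    except BaseException:  # includes KeyboardInterrupt
        os.remove(tmp_path)  # discard the incomplete file
        raise
```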
669,545,437
459
[Breaking] Update Dataset and DatasetDict API
closed
[]
2020-07-31T08:11:33
2020-08-26T08:28:36
2020-08-26T08:28:35
This PR contains a few breaking changes so it's probably good to keep it for the next (major) release: - rename the `flatten`, `drop` and `dictionary_encode_column` methods in `flatten_`, `drop_` and `dictionary_encode_column_` to indicate that these methods have in-place effects as discussed in #166. From now on we s...
thomwolf
https://github.com/huggingface/datasets/pull/459
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/459", "html_url": "https://github.com/huggingface/datasets/pull/459", "diff_url": "https://github.com/huggingface/datasets/pull/459.diff", "patch_url": "https://github.com/huggingface/datasets/pull/459.patch", "merged_at": "2020-08-26T08:28:35"...
true
668,972,666
458
Install CoVal metric from github
closed
[]
2020-07-30T16:59:25
2020-07-31T13:56:33
2020-07-31T13:56:33
Changed the import statements in `coval.py` to direct the user to install the original package from github if it's not already installed (the warning will only display properly after merging [PR455](https://github.com/huggingface/nlp/pull/455)) Also changed the function call to use named rather than positional argum...
yjernite
https://github.com/huggingface/datasets/pull/458
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/458", "html_url": "https://github.com/huggingface/datasets/pull/458", "diff_url": "https://github.com/huggingface/datasets/pull/458.diff", "patch_url": "https://github.com/huggingface/datasets/pull/458.patch", "merged_at": "2020-07-31T13:56:33"...
true
668,898,386
457
add set_format to DatasetDict + tests
closed
[]
2020-07-30T15:53:20
2020-07-30T17:34:36
2020-07-30T17:34:34
Add the `set_format`, `formated_as` and `reset_format` methods to `DatasetDict`. Add tests for these for `Dataset` and `DatasetDict`. Fix some bugs uncovered by the tests for `pandas` formatting.
thomwolf
https://github.com/huggingface/datasets/pull/457
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/457", "html_url": "https://github.com/huggingface/datasets/pull/457", "diff_url": "https://github.com/huggingface/datasets/pull/457.diff", "patch_url": "https://github.com/huggingface/datasets/pull/457.patch", "merged_at": "2020-07-30T17:34:34"...
true
668,723,785
456
add crd3(ACL 2020) dataset
closed
[]
2020-07-30T13:28:35
2023-09-24T09:48:47
2020-08-03T11:28:52
This PR adds the **Critical Role Dungeons and Dragons Dataset** published at ACL 2020
mariamabarham
https://github.com/huggingface/datasets/pull/456
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/456", "html_url": "https://github.com/huggingface/datasets/pull/456", "diff_url": "https://github.com/huggingface/datasets/pull/456.diff", "patch_url": "https://github.com/huggingface/datasets/pull/456.patch", "merged_at": null }
true
668,037,965
455
Add bleurt
closed
[]
2020-07-29T18:08:32
2020-07-31T13:56:14
2020-07-31T13:56:14
This PR adds the BLEURT metric to the library. The BLEURT `Metric` downloads a TF checkpoint corresponding to its `config_name` at creation (in the `_info` function). Default is set to `bleurt-base-128`. Note that the default in the original package is `bleurt-tiny-128`, but they throw a warning and recommend usi...
yjernite
https://github.com/huggingface/datasets/pull/455
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/455", "html_url": "https://github.com/huggingface/datasets/pull/455", "diff_url": "https://github.com/huggingface/datasets/pull/455.diff", "patch_url": "https://github.com/huggingface/datasets/pull/455.patch", "merged_at": "2020-07-31T13:56:14"...
true
668,011,577
454
Create SECURITY.md
closed
[]
2020-07-29T17:23:34
2020-07-29T21:45:52
2020-07-29T21:45:52
ChenZehong13
https://github.com/huggingface/datasets/pull/454
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/454", "html_url": "https://github.com/huggingface/datasets/pull/454", "diff_url": "https://github.com/huggingface/datasets/pull/454.diff", "patch_url": "https://github.com/huggingface/datasets/pull/454.patch", "merged_at": null }
true
667,728,247
453
add builder tests
closed
[]
2020-07-29T10:22:07
2020-07-29T11:14:06
2020-07-29T11:14:05
I added `as_dataset` and `download_and_prepare` to the tests
lhoestq
https://github.com/huggingface/datasets/pull/453
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/453", "html_url": "https://github.com/huggingface/datasets/pull/453", "diff_url": "https://github.com/huggingface/datasets/pull/453.diff", "patch_url": "https://github.com/huggingface/datasets/pull/453.patch", "merged_at": "2020-07-29T11:14:05"...
true
667,498,295
452
Guardian authorship dataset
closed
[]
2020-07-29T02:23:57
2020-08-20T15:09:57
2020-08-20T15:07:56
A new dataset: Guardian news articles for authorship attribution **tests passed:** python nlp-cli dummy_data datasets/guardian_authorship --save_infos --all_configs RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_guardian_authorship **Tests failed:** Real data:...
malikaltakrori
https://github.com/huggingface/datasets/pull/452
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/452", "html_url": "https://github.com/huggingface/datasets/pull/452", "diff_url": "https://github.com/huggingface/datasets/pull/452.diff", "patch_url": "https://github.com/huggingface/datasets/pull/452.patch", "merged_at": "2020-08-20T15:07:55"...
true
667,210,468
451
Fix csv/json/txt cache dir
closed
[]
2020-07-28T16:30:51
2020-07-29T13:57:23
2020-07-29T13:57:22
The cache dir for csv/json/txt datasets was always the same. This is an issue because it should be different depending on the data files provided by the user. To fix that, I added a line that uses the hash of the data files provided by the user to define the cache dir. This should fix #444
lhoestq
https://github.com/huggingface/datasets/pull/451
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/451", "html_url": "https://github.com/huggingface/datasets/pull/451", "diff_url": "https://github.com/huggingface/datasets/pull/451.diff", "patch_url": "https://github.com/huggingface/datasets/pull/451.patch", "merged_at": "2020-07-29T13:57:22"...
true
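The idea behind PR 451 above, sketched: derive the cache directory from a hash of the user-provided data files so different inputs never collide (hypothetical helper; the library's actual key may mix in more parameters):

```python
import hashlib
import os

def cache_dir_for(data_files, root="~/.cache/nlp/csv"):
    """Build a per-input cache dir keyed on the list of data files."""
    key = hashlib.sha256(",".join(sorted(data_files)).encode()).hexdigest()[:16]
    return os.path.join(os.path.expanduser(root), key)

print(cache_dir_for(["./a.csv"]))
print(cache_dir_for(["./b.csv"]))  # different dir, so no stale 'a.csv' reuse
```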
667,074,120
450
add sogou_news
closed
[]
2020-07-28T13:29:10
2020-07-29T13:30:18
2020-07-29T13:30:17
This PR adds the sogou news dataset #353
mariamabarham
https://github.com/huggingface/datasets/pull/450
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/450", "html_url": "https://github.com/huggingface/datasets/pull/450", "diff_url": "https://github.com/huggingface/datasets/pull/450.diff", "patch_url": "https://github.com/huggingface/datasets/pull/450.patch", "merged_at": "2020-07-29T13:30:17"...
true
666,898,923
449
add reuters21578 dataset
closed
[]
2020-07-28T08:58:12
2023-09-24T09:49:28
2020-08-03T11:10:31
This PR adds the `Reuters_21578` dataset https://kdd.ics.uci.edu/databases/reuters21578/reuters21578.html #353 The dataset is a list of `.sgm` files, which are a bit different from xml files; indeed `xml.etree` couldn't be used to read the files. I consider them as text files (to avoid using an external library) and read ...
mariamabarham
https://github.com/huggingface/datasets/pull/449
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/449", "html_url": "https://github.com/huggingface/datasets/pull/449", "diff_url": "https://github.com/huggingface/datasets/pull/449.diff", "patch_url": "https://github.com/huggingface/datasets/pull/449.patch", "merged_at": null }
true
666,893,443
448
add aws load metric test
closed
[]
2020-07-28T08:50:22
2020-07-28T15:02:27
2020-07-28T15:02:27
Following issue #445, I added a test to recognize import errors of all metrics.
idoh
https://github.com/huggingface/datasets/pull/448
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/448", "html_url": "https://github.com/huggingface/datasets/pull/448", "diff_url": "https://github.com/huggingface/datasets/pull/448.diff", "patch_url": "https://github.com/huggingface/datasets/pull/448.patch", "merged_at": "2020-07-28T15:02:26"...
true
666,842,115
447
[BugFix] fix wrong import of DEFAULT_TOKENIZER
closed
[]
2020-07-28T07:41:10
2020-07-28T12:58:01
2020-07-28T12:52:05
Fixed the path to `DEFAULT_TOKENIZER` #445
idoh
https://github.com/huggingface/datasets/pull/447
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/447", "html_url": "https://github.com/huggingface/datasets/pull/447", "diff_url": "https://github.com/huggingface/datasets/pull/447.diff", "patch_url": "https://github.com/huggingface/datasets/pull/447.patch", "merged_at": "2020-07-28T12:52:05"...
true
666,837,351
446
[BugFix] fix wrong import of DEFAULT_TOKENIZER
closed
[]
2020-07-28T07:32:47
2020-07-28T07:34:46
2020-07-28T07:33:59
Fixed the path to `DEFAULT_TOKENIZER` #445
idoh
https://github.com/huggingface/datasets/pull/446
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/446", "html_url": "https://github.com/huggingface/datasets/pull/446", "diff_url": "https://github.com/huggingface/datasets/pull/446.diff", "patch_url": "https://github.com/huggingface/datasets/pull/446.patch", "merged_at": null }
true
666,836,658
445
DEFAULT_TOKENIZER import error in sacrebleu
closed
[]
2020-07-28T07:31:30
2020-07-28T12:58:56
2020-07-28T12:58:56
Latest version: 0.3.0. When loading the metric "sacrebleu", there is an import error due to the wrong path ![image](https://user-images.githubusercontent.com/5303103/88633063-2c5e5f00-d0bd-11ea-8ca8-4704dc975433.png)
idoh
https://github.com/huggingface/datasets/issues/445
null
false
666,280,842
444
Keep loading old file even I specify a new file in load_dataset
closed
[]
2020-07-27T13:08:06
2020-07-29T13:57:22
2020-07-29T13:57:22
I loaded a file called 'a.csv' by ``` dataset = load_dataset('csv', data_file='./a.csv') ``` And after a while, I tried to load another csv called 'b.csv' ``` dataset = load_dataset('csv', data_file='./b.csv') ``` However, the new dataset seems to retain the old 'a.csv' and not load the new csv file. Even...
joshhu
https://github.com/huggingface/datasets/issues/444
null
false
666,246,716
443
Cannot unpickle saved .pt dataset with torch.save()/load()
closed
[]
2020-07-27T12:13:37
2020-07-27T13:05:11
2020-07-27T13:05:11
Saving a formatted torch dataset to file using `torch.save()`. Loading the same file fails during unpickling: ```python >>> import torch >>> import nlp >>> squad = nlp.load_dataset("squad.py", split="train") >>> squad Dataset(features: {'source_text': Value(dtype='string', id=None), 'target_text': Value(dtype...
vegarab
https://github.com/huggingface/datasets/issues/443
null
false
666,201,810
442
[Suggestion] Glue Diagnostic Data with Labels
open
[]
2020-07-27T10:59:58
2020-08-24T15:13:20
null
Hello! First of all, thanks for setting up this useful project! I've just realised you provide the [Glue Diagnostics Data](https://huggingface.co/nlp/viewer/?dataset=glue&config=ax) without labels, indicating in the `GlueConfig` that you have only a test set. Yet, the data with labels is available, too (see als...
ggbetz
https://github.com/huggingface/datasets/issues/442
null
false
666,148,413
441
Add features parameter in load dataset
closed
[]
2020-07-27T09:50:01
2020-07-30T12:51:17
2020-07-30T12:51:16
Added `features` argument in `nlp.load_dataset`. If they don't match the data type, it raises a `ValueError`. It's a draft PR because #440 needs to be merged first.
lhoestq
https://github.com/huggingface/datasets/pull/441
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/441", "html_url": "https://github.com/huggingface/datasets/pull/441", "diff_url": "https://github.com/huggingface/datasets/pull/441.diff", "patch_url": "https://github.com/huggingface/datasets/pull/441.patch", "merged_at": "2020-07-30T12:51:16"...
true
666,116,823
440
Fix user specified features in map
closed
[]
2020-07-27T09:04:26
2020-07-28T09:25:23
2020-07-28T09:25:22
`.map` didn't keep the user specified features because of an issue in the writer. The writer used to overwrite the user specified features with inferred features. I also added tests to make sure it doesn't happen again.
lhoestq
https://github.com/huggingface/datasets/pull/440
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/440", "html_url": "https://github.com/huggingface/datasets/pull/440", "diff_url": "https://github.com/huggingface/datasets/pull/440.diff", "patch_url": "https://github.com/huggingface/datasets/pull/440.patch", "merged_at": "2020-07-28T09:25:22"...
true
665,964,673
439
Issues: Adding a FAISS or Elastic Search index to a Dataset
closed
[]
2020-07-27T04:25:17
2020-10-28T01:46:24
2020-10-28T01:46:24
It seems the DPRContextEncoder, DPRContextEncoderTokenizer cited [in this documentation](https://huggingface.co/nlp/faiss_and_ea.html) are not implemented? It did not work with the standard nlp installation. Also, I couldn't find or use it with the latest nlp install from github in Colab. Is there any dependency on t...
nsankar
https://github.com/huggingface/datasets/issues/439
null
false
665,865,490
438
New Datasets: IWSLT15+, ITTB
open
[]
2020-07-26T21:43:04
2020-08-24T15:12:15
null
**Links:** [iwslt](https://pytorchnlp.readthedocs.io/en/latest/_modules/torchnlp/datasets/iwslt.html) Don't know if that link is up to date. [ittb](http://www.cfilt.iitb.ac.in/iitb_parallel/) **Motivation**: replicate mbart finetuning results (table below) ![image](https://user-images.githubusercontent.com/60450...
sshleifer
https://github.com/huggingface/datasets/issues/438
null
false
665,597,176
437
Fix XTREME PAN-X loading
closed
[]
2020-07-25T14:44:57
2020-07-30T08:28:15
2020-07-30T08:28:15
Hi 🤗 In response to the discussion in #425 @lewtun and I made some fixes to the repo. In the original XTREME implementation the PAN-X dataset for named entity recognition loaded each word/tag pair as a single row and the sentence relation was lost. With the fix each row contains the list of all words in a single sen...
lvwerra
https://github.com/huggingface/datasets/pull/437
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/437", "html_url": "https://github.com/huggingface/datasets/pull/437", "diff_url": "https://github.com/huggingface/datasets/pull/437.diff", "patch_url": "https://github.com/huggingface/datasets/pull/437.patch", "merged_at": "2020-07-30T08:28:15"...
true
665,582,167
436
Google Colab - load_dataset - PyArrow exception
closed
[]
2020-07-25T13:05:20
2020-08-20T08:08:18
2020-08-20T08:08:18
With the latest PyArrow 1.0.0 installed, I get the following exception. Restarting colab has the same issue. ImportWarning: To use `nlp`, the module `pyarrow>=0.16.0` is required, and the current version of `pyarrow` doesn't match this condition. If you are running this in a Google Colab, you should probably just rest...
nsankar
https://github.com/huggingface/datasets/issues/436
null
false
665,507,141
435
ImportWarning for pyarrow 1.0.0
closed
[]
2020-07-25T03:44:39
2020-09-08T17:57:15
2020-08-03T16:37:32
The following PR raised an ImportWarning at `pyarrow==1.0.0`: https://github.com/huggingface/nlp/pull/265/files
HanGuo97
https://github.com/huggingface/datasets/issues/435
null
false
665,477,638
434
Fixed check for pyarrow
closed
[]
2020-07-25T00:16:53
2020-07-25T06:36:34
2020-07-25T06:36:34
Fix the check for pyarrow in `__init__.py`. Previously it would raise an error for pyarrow >= 1.0.0
nadahlberg
https://github.com/huggingface/datasets/pull/434
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/434", "html_url": "https://github.com/huggingface/datasets/pull/434", "diff_url": "https://github.com/huggingface/datasets/pull/434.diff", "patch_url": "https://github.com/huggingface/datasets/pull/434.patch", "merged_at": "2020-07-25T06:36:34"...
true
665,311,025
433
How to reuse functionality of a (generic) dataset?
closed
[]
2020-07-24T17:27:37
2022-10-04T17:59:34
2022-10-04T17:59:33
I have written a generic dataset for corpora created with the Brat annotation tool ([specification](https://brat.nlplab.org/standoff.html), [dataset code](https://github.com/ArneBinder/nlp/blob/brat/datasets/brat/brat.py)). Now I wonder how to use that to create specific dataset instances. What's the recommended way to...
ArneBinder
https://github.com/huggingface/datasets/issues/433
null
false
665,234,340
432
Fix handling of config files while loading datasets from multiple processes
closed
[]
2020-07-24T15:10:57
2020-08-01T17:11:42
2020-07-30T08:25:28
When loading shards on several processes, each process upon loading the dataset will overwrite dataset_infos.json in <package path>/datasets/<dataset name>/<hash>/dataset_infos.json. It does so every time, even when the target file already exists and is identical. Because multiple processes rewrite the same file in par...
orsharir
https://github.com/huggingface/datasets/pull/432
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/432", "html_url": "https://github.com/huggingface/datasets/pull/432", "diff_url": "https://github.com/huggingface/datasets/pull/432.diff", "patch_url": "https://github.com/huggingface/datasets/pull/432.patch", "merged_at": "2020-07-30T08:25:28"...
true
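One mitigation in the spirit of PR 432 above: skip the rewrite when the target file already holds identical content, so concurrent loaders stop truncating and rewriting the same dataset_infos.json (a sketch of the idea, not the PR's exact diff):

```python
import os

def write_if_changed(path: str, content: str) -> bool:
    """Rewrite the file only when its content actually differs."""
    if os.path.exists(path):
        with open(path, encoding="utf-8") as f:
            if f.read() == content:
                return False  # identical: leave the file alone
    with open(path, "w", encoding="utf-8") as f:
        f.write(content)
    return True
```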
665,044,416
431
Specify split post processing + Add post processing resources downloading
closed
[]
2020-07-24T09:29:19
2020-07-31T09:05:04
2020-07-31T09:05:03
Previously if you tried to do ```python from nlp import load_dataset wiki = load_dataset("wiki_dpr", "psgs_w100_with_nq_embeddings", split="train[:100]", with_index=True) ``` Then you'd get an error `Index size should match Dataset size...` This was because it was trying to use the full index (21M elements). ...
lhoestq
https://github.com/huggingface/datasets/pull/431
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/431", "html_url": "https://github.com/huggingface/datasets/pull/431", "diff_url": "https://github.com/huggingface/datasets/pull/431.diff", "patch_url": "https://github.com/huggingface/datasets/pull/431.patch", "merged_at": "2020-07-31T09:05:03"...
true
664,583,837
430
add DatasetDict
closed
[]
2020-07-23T15:43:49
2020-08-04T01:01:53
2020-07-29T09:06:22
## Add DatasetDict ### Overview When you call `load_dataset` it can return a dictionary of datasets if there are several splits (train/test for example). If you wanted to apply dataset transforms you had to iterate over each split and apply the transform. Instead of returning a dict, it now returns a `nlp.Dat...
lhoestq
https://github.com/huggingface/datasets/pull/430
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/430", "html_url": "https://github.com/huggingface/datasets/pull/430", "diff_url": "https://github.com/huggingface/datasets/pull/430.diff", "patch_url": "https://github.com/huggingface/datasets/pull/430.patch", "merged_at": "2020-07-29T09:06:22"...
true
664,412,137
429
mlsum
closed
[]
2020-07-23T11:52:39
2020-07-31T11:46:20
2020-07-31T11:46:20
Hello, The tests for load_real_data fail: as there is no default language subset to download, it looks for a file that does not exist. This bug does not happen when using the load_dataset function, as it asks you to specify a language if you do not, so I submit this PR anyway. The dataset is available at: https...
RachelKer
https://github.com/huggingface/datasets/pull/429
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/429", "html_url": "https://github.com/huggingface/datasets/pull/429", "diff_url": "https://github.com/huggingface/datasets/pull/429.diff", "patch_url": "https://github.com/huggingface/datasets/pull/429.patch", "merged_at": null }
true
664,367,086
428
fix concatenate_datasets
closed
[]
2020-07-23T10:30:59
2020-07-23T10:35:00
2020-07-23T10:34:58
`concatenate_datasets` used to test that the different `nlp.Dataset.schema` match, but this attribute was removed in #423
lhoestq
https://github.com/huggingface/datasets/pull/428
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/428", "html_url": "https://github.com/huggingface/datasets/pull/428", "diff_url": "https://github.com/huggingface/datasets/pull/428.diff", "patch_url": "https://github.com/huggingface/datasets/pull/428.patch", "merged_at": "2020-07-23T10:34:58"...
true
664,341,623
427
Allow sequence features for beam + add processed Natural Questions
closed
[]
2020-07-23T09:52:41
2020-07-23T13:09:30
2020-07-23T13:09:29
## Allow Sequence features for Beam Datasets + add Natural Questions ### The issue The steps of beam datasets processing is the following: - download the source files and send them in a remote storage (gcs) - process the files using a beam runner (dataflow) - save output in remote storage (gcs) - convert outp...
lhoestq
https://github.com/huggingface/datasets/pull/427
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/427", "html_url": "https://github.com/huggingface/datasets/pull/427", "diff_url": "https://github.com/huggingface/datasets/pull/427.diff", "patch_url": "https://github.com/huggingface/datasets/pull/427.patch", "merged_at": "2020-07-23T13:09:29"...
true
664,203,897
426
[FEATURE REQUEST] Multiprocessing for dataset.map, dataset.filter
closed
[]
2020-07-23T05:00:41
2021-03-12T09:34:12
2020-09-07T14:48:04
It would be nice to be able to speed up `dataset.map` or `dataset.filter`. Perhaps this is as easy as sharding the dataset, sending each shard to a process/thread/dask pool, and using the new `nlp.concatenate_datasets()` function to join them all together?
timothyjlaurent
https://github.com/huggingface/datasets/issues/426
null
false
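Issue 426's suggestion above, made concrete as a sketch. It assumes each worker can reload the dataset from the local cache and that the mapped shards pickle back to the parent process; the `datasets` library later added a built-in `num_proc` argument to `map` that supersedes this kind of workaround.

```python
import multiprocessing

import nlp

NUM_SHARDS = 4

def lowercase(example):
    return {"text": example["text"].lower()}

def process_shard(index):
    # Each worker reloads from the local cache instead of pickling the dataset.
    dataset = nlp.load_dataset("imdb", split="train")
    shard = dataset.shard(num_shards=NUM_SHARDS, index=index)
    return shard.map(lowercase)

if __name__ == "__main__":
    with multiprocessing.Pool(NUM_SHARDS) as pool:
        shards = pool.map(process_shard, range(NUM_SHARDS))
    full = nlp.concatenate_datasets(shards)
    print(len(full))
```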
664,029,848
425
Correct data structure for PAN-X task in XTREME dataset?
closed
[]
2020-07-22T20:29:20
2020-08-02T13:30:34
2020-08-02T13:30:34
Hi 🤗 team! ## Description of the problem Thanks to the fix from #416 I am now able to load the NER task in the XTREME dataset as follows: ```python from nlp import load_dataset # AmazonPhotos.zip is located in data/ dataset = load_dataset("xtreme", "PAN-X.en", data_dir='./data') dataset_train = dataset['tr...
lewtun
https://github.com/huggingface/datasets/issues/425
null
false
663,858,552
424
Web of science
closed
[]
2020-07-22T15:38:31
2020-07-23T14:27:58
2020-07-23T14:27:56
This PR adds the WebofScience dataset #353
mariamabarham
https://github.com/huggingface/datasets/pull/424
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/424", "html_url": "https://github.com/huggingface/datasets/pull/424", "diff_url": "https://github.com/huggingface/datasets/pull/424.diff", "patch_url": "https://github.com/huggingface/datasets/pull/424.patch", "merged_at": "2020-07-23T14:27:56"...
true