# Preprocess

[[open-in-colab]]

Before you can train a model on a dataset, it needs to be preprocessed into the expected model input format. Whether your data is text, images, or audio, it needs to be converted and assembled into batches of tensors. 🤗 Transformers provides a set of preprocessing classes to help prepare your data for the model. In this tutorial, you'll learn that for:
- Text, use a Tokenizer to convert text into a sequence of tokens, create a numerical representation of the tokens, and assemble them into tensors.
- Speech and audio, use a Feature extractor to extract sequential features from audio waveforms and convert them into tensors.
- Image inputs, use an ImageProcessor to convert images into tensors.
- Multimodal inputs, use a Processor to combine a tokenizer and a feature extractor or image processor.
AutoProcessor always works and automatically chooses the correct class for the model you're using, whether you're using a tokenizer, image processor, feature extractor, or processor.
Before you begin, install 🤗 Datasets so you can load some datasets to experiment with:

```bash
pip install datasets
```

## Natural Language Processing

The main tool for preprocessing textual data is a tokenizer. A tokenizer splits text into tokens according to a set of rules. The tokens are converted into numbers and then tensors, which become the model inputs. Any additional inputs required by the model are added by the tokenizer.
If you plan on using a pretrained model, it's important to use the associated pretrained tokenizer. This ensures the text is split the same way as the pretraining corpus, and uses the same tokens-to-index mapping (usually referred to as the vocab) that was used during pretraining.
Get started by loading a pretrained tokenizer with the [AutoTokenizer.from_pretrained] method. This downloads the vocab a model was pretrained with:
```py
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")
```
Then pass your text to the tokenizer:
```py
encoded_input = tokenizer("Do not meddle in the affairs of wizards, for they are subtle and quick to anger.")
print(encoded_input)
{'input_ids': [101, 2079, 2025, 19960, 10362, 1999, 1996, 3821, 1997, 16657, 1010, 2005, 2027, 2024, 11259, 1998, 4248, 2000, 4963, 1012, 102],
 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
```
The tokenizer returns a dictionary with three important items:
- input_ids are the indices corresponding to each token in the sentence.
- attention_mask indicates whether a token should be attended to or not.
- token_type_ids identifies which sequence a token belongs to when there is more than one sequence, as the sketch below shows.
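As a minimal sketch of that last item, here is a sentence pair reusing sentences from later in this tutorial; the printed output is what a BERT-style tokenizer is expected to produce:

```py
# Passing two sentences produces one sequence with segment ids:
# 0 for tokens from the first sentence (including [CLS] and its [SEP]),
# 1 for tokens from the second sentence.
pair_input = tokenizer("But what about second breakfast?", "What about elevensies?")
print(pair_input["token_type_ids"])
# [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
```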
Return your input by decoding the input_ids:

```py
tokenizer.decode(encoded_input["input_ids"])
'[CLS] Do not meddle in the affairs of wizards, for they are subtle and quick to anger. [SEP]'
```
As you can see, the tokenizer added two special tokens - CLS and SEP (classifier and separator) - to the sentence. Not all models need special tokens, but if they do, the tokenizer automatically adds them for you.
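If you want the encoding without them, a minimal sketch follows; add_special_tokens is a standard argument of tokenizer calls, though this example is not from the original tutorial:

```py
# add_special_tokens=False skips the automatic [CLS]/[SEP] insertion.
raw_input = tokenizer("Do not meddle in the affairs of wizards.", add_special_tokens=False)
print(tokenizer.decode(raw_input["input_ids"]))
# 'Do not meddle in the affairs of wizards.'
```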
If there are several sentences you want to preprocess, pass them as a list to the tokenizer:
```py
batch_sentences = [
    "But what about second breakfast?",
    "Don't think he knows about second breakfast, Pip.",
    "What about elevensies?",
]
encoded_inputs = tokenizer(batch_sentences)
print(encoded_inputs)
{'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102],
               [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102],
               [101, 1327, 1164, 5450, 23434, 136, 102]],
 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0],
                    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                    [0, 0, 0, 0, 0, 0, 0]],
 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1],
                    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
                    [1, 1, 1, 1, 1, 1, 1]]}
```
### Pad
Sentences aren't always the same length, which can be an issue because tensors, the model inputs, need to have a uniform shape. Padding is a strategy for ensuring tensors are rectangular by adding a special padding token to shorter sentences.

Set the padding parameter to True to pad the shorter sequences in the batch to match the longest sequence:
```py
batch_sentences = [
    "But what about second breakfast?",
    "Don't think he knows about second breakfast, Pip.",
    "What about elevensies?",
]
encoded_input = tokenizer(batch_sentences, padding=True)
print(encoded_input)
{'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0],
               [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102],
               [101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]],
 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
                    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
                    [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]]}
```
The first and third sentences are now padded with 0's because they are shorter.
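Padding can also target a fixed length rather than the longest sequence in the batch; here is a minimal sketch, where the max_length value of 20 is an arbitrary choice for illustration:

```py
# padding="max_length" pads every sequence to max_length instead of the
# longest member of the batch.
fixed_input = tokenizer(batch_sentences, padding="max_length", max_length=20)
print(len(fixed_input["input_ids"][0]))  # 20
```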
### Truncation

On the other end of the spectrum, sometimes a sequence may be too long for a model to handle. In this case, you'll need to truncate the sequence to a shorter length.

Set the truncation parameter to True to truncate a sequence to the maximum length accepted by the model:
```py
batch_sentences = [
    "But what about second breakfast?",
    "Don't think he knows about second breakfast, Pip.",
    "What about elevensies?",
]
encoded_input = tokenizer(batch_sentences, padding=True, truncation=True)
print(encoded_input)
{'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0],
               [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102],
               [101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]],
 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
                    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
                    [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]]}
```

Check out the Padding and truncation concept guide to learn more about the different padding and truncation arguments.

### Build tensors

Finally, you want the tokenizer to return the actual tensors that get fed to the model.

Set the return_tensors parameter to either pt for PyTorch, or tf for TensorFlow:
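A minimal sketch of the PyTorch case, assuming torch is installed; the shape comment follows from the padded batch shown above:

```py
# return_tensors="pt" makes the tokenizer return PyTorch tensors instead
# of Python lists; padding keeps the batch rectangular so a single
# tensor can be built.
encoded_input = tokenizer(batch_sentences, padding=True, truncation=True, return_tensors="pt")
print(encoded_input["input_ids"].shape)
# torch.Size([3, 15]) - 3 sentences, each padded to the longest (15 tokens)
```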