Understanding byte-pair encoding tokenization for Greek characters

I am trying to train a new tokenizer on Greek text so that I can later add the new tokens to the Llama 3.1 tokenizer using

tokenizer.add_tokens(list(new_tokens)).

However, after training the byte-pair encoding (BPE) tokenizer on Greek and Spanish text, the resulting tokens look something like this:

['Translate', 'Ġfrom', 'ĠGreek', 'Ġto', 'ĠSpanish', ':', 'ĠÎĿα', 'ĠÎŃÏĥι', 'Ġογί', 'ĠγÎŃÏģοÏħ']

When I extend the tokenizer's vocabulary with these tokens, they seem to be added literally rather than as encodings of Greek characters, and the tokenizer never uses them when encoding a sentence. However, when the new tokens are hardcoded with the same method, as in

extender_tokenizer.add_tokens(['Αυτό', 'είναι'])

it does work.

I assume this is an encoding issue or something related to the inner workings of BPE. Why are the Greek characters shown that way? Is it related to encoding, to BPE, or to both? And how can I obtain a list of Greek tokens that can be added to the tokenizer?

Reference code:

from tokenizers import Tokenizer, models, trainers, pre_tokenizers

# Train a small BPE tokenizer with a byte-level pre-tokenizer on an
# iterator of Greek/Spanish text (training_corpus).
tokenizer = Tokenizer(models.BPE())
tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel()
trainer = trainers.BpeTrainer(vocab_size=2000, min_frequency=3, show_progress=True)
tokenizer.train_from_iterator(training_corpus, trainer=trainer)
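
To make the failure concrete, a minimal reproduction sketch (extender_tokenizer here stands for the Llama 3.1 tokenizer loaded with transformers; the checkpoint name is only illustrative):

from transformers import AutoTokenizer

extender_tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B")

# Adding one of the trained strings verbatim stores the literal character
# sequence 'ÎĿα', which never occurs in real Greek text, so the added
# token is never used when encoding a sentence.
extender_tokenizer.add_tokens(['ÎĿα'])
print(extender_tokenizer.tokenize('Να είσαι'))

# Adding the actual Greek strings works, because they match the input text.
extender_tokenizer.add_tokens(['Αυτό', 'είναι'])
print(extender_tokenizer.tokenize('Αυτό είναι'))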

asked Nov 19, 2024 at 8:31 by Anmova
  • Use tokenizers.UnicodeScripts instead of ByteLevel. Disrupting the single bytes of a multibyte character will create chaos. (Recognizing Unicode, UTF-16 or more likely UTF-8 is another subject.) – Joop Eggen, Nov 19, 2024 at 8:40
  • This solved the tokenization problem and the characters could be added to the base tokenizer. Thank you! – Anmova, Nov 19, 2024 at 12:03
  • If this solved everything, write your own answer and accept it so the question gets closed. Thanks. Love NLP. – Joop Eggen, Nov 19, 2024 at 15:49

1 Answer

As Joop Eggen noted in the comments, using the tokenizers.UnicodeScripts pre-tokenizer instead of ByteLevel solved the issue: the trained vocabulary then contains readable Greek tokens that can be added to the base tokenizer.
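
Some background on why the trained tokens look like 'ĠÎĿα': the ByteLevel pre-tokenizer first converts the input to raw UTF-8 bytes and renders each byte as a printable substitute character ('Ġ' stands for a leading space byte), so every multi-byte Greek character shows up as two or more such substitutes. The strings are not corrupted; they are the byte-level view of the text, and the matching ByteLevel decoder maps them back. A minimal round-trip sketch (the example sentence is mine, not from the original corpus):

from tokenizers import pre_tokenizers, decoders

# Byte-level view: every UTF-8 byte of the input is replaced by a
# printable substitute character before BPE ever sees it.
pre = pre_tokenizers.ByteLevel(add_prefix_space=False)
pieces = [piece for piece, _ in pre.pre_tokenize_str("Να είσαι")]
print(pieces)  # substitute-character strings, similar to 'ÎĿα' in the question

# The matching decoder reverses the substitution and re-decodes the
# bytes as UTF-8, recovering the readable Greek text.
print(decoders.ByteLevel().decode(pieces))  # -> 'Να είσαι'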

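A sketch of the adjusted setup is below. A few details are assumptions rather than part of the original post: UnicodeScripts is combined with a Whitespace split so merges never cross word or script boundaries, a [UNK] token is added (character-level BPE needs one, unlike byte-level BPE), and the Llama 3.1 checkpoint name is only illustrative; training_corpus is the same text iterator as in the question.

from tokenizers import Tokenizer, models, trainers, pre_tokenizers
from transformers import AutoTokenizer

# Character-level BPE keeps characters intact, so it needs an explicit
# unknown token for anything outside the training alphabet.
tokenizer = Tokenizer(models.BPE(unk_token="[UNK]"))

# Split on script boundaries (Greek vs. Latin) and on whitespace.
tokenizer.pre_tokenizer = pre_tokenizers.Sequence(
    [pre_tokenizers.UnicodeScripts(), pre_tokenizers.Whitespace()]
)

trainer = trainers.BpeTrainer(
    vocab_size=2000,
    min_frequency=3,
    show_progress=True,
    special_tokens=["[UNK]"],
)
tokenizer.train_from_iterator(training_corpus, trainer=trainer)

# The learned vocabulary now holds plain Greek strings, so they can be
# added directly to the base tokenizer (checkpoint name is illustrative).
base = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B")
new_tokens = [tok for tok in tokenizer.get_vocab() if tok not in base.get_vocab()]
base.add_tokens(new_tokens)

If the extended tokenizer is then used with a model, remember to resize the embedding matrix as well, e.g. model.resize_token_embeddings(len(base)), so the newly added ids have embedding rows.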