Understanding byte-pair encoding tokenization for Greek characters
I am trying to train a new tokenizer on Greek text so that I can later add the new tokens to the Llama 3.1 tokenizer using
tokenizer.add_tokens(list(new_tokens)).
However, upon training the byte-pair encoding tokenizer on Greek and Spanish text, the result looks something like this:
['Translate', 'Ġfrom', 'ĠGreek', 'Ġto', 'ĠSpanish', ':', 'ĠÎĿα', 'ĠÎŃÏĥι', 'Ġογί', 'ĠγÎŃÏģοÏħ']
When extending the tokenizer's vocabulary, those tokens seem to be passed literally rather than as encodings of Greek characters, and the tokenizer does not recognize them when encoding a sentence. However, when the same method is used with hardcoded tokens, such as
extender_tokenizer.add_tokens(['Αυτό', 'είναι'])
it does work.
I assume this is either an encoding issue or something related to BPE's inner workings. Why are the Greek characters shown that way? Is it related to the encoding, to BPE, or to both? And how can I obtain a list of Greek tokens that can be added to the tokenizer?
Reference code:
from tokenizers import Tokenizer, models, trainers, pre_tokenizers

# training_corpus is an iterator over the Greek/Spanish training text (defined elsewhere)
tokenizer = Tokenizer(models.BPE())
tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel()
trainer = trainers.BpeTrainer(vocab_size=2000, min_frequency=3, show_progress=True)
tokenizer.train_from_iterator(training_corpus, trainer=trainer)
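For context on why the tokens look like that: the ByteLevel pre-tokenizer works on raw UTF-8 bytes and maps every byte to a printable stand-in character (the GPT-2 byte-to-unicode alphabet), so each multi-byte Greek character surfaces as a pair of Latin-looking symbols. A minimal sketch of mapping the learned tokens back to readable strings, assuming the trained tokenizer from the reference code above and the library's ByteLevel decoder:

from tokenizers import decoders

# The ByteLevel decoder reverses the byte-to-unicode mapping used by the
# ByteLevel pre-tokenizer, turning a byte-level token back into ordinary text.
byte_decoder = decoders.ByteLevel()

readable_tokens = []
for token in tokenizer.get_vocab():        # vocabulary of the trained BPE tokenizer
    text = byte_decoder.decode([token])    # stand-in characters -> UTF-8 text
    if "\ufffd" not in text:               # skip tokens that cut a character in half
        readable_tokens.append(text)

Tokens that end in the middle of a multi-byte character cannot be decoded on their own, which is one reason the answer below switches to a pre-tokenizer that never splits inside a character.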
asked Nov 19, 2024 at 8:31 by Anmova
- Use tokenizers.UnicodeScripts instead of ByteLevel. Disrupting the single bytes of a multi-byte character will create chaos. (Recognizing Unicode, UTF-16 or, more likely, UTF-8 is another subject.) – Joop Eggen, Nov 19, 2024 at 8:40
- This solved the tokenization problem and the characters could be added to the base tokenizer. Thank you! – Anmova, Nov 19, 2024 at 12:03
- If this solved everything, write your own answer and accept it so the question gets closed. Thanks. Love NLP. – Joop Eggen, Nov 19, 2024 at 15:49
1 Answer
As Joop Eggen noted in the comments, using the tokenizers.UnicodeScripts pre-tokenizer instead of ByteLevel solved the issue.
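A minimal sketch of that change, keeping the rest of the question's training setup and assuming extender_tokenizer is the Llama 3.1 tokenizer loaded through transformers:

from tokenizers import Tokenizer, models, trainers, pre_tokenizers

tokenizer = Tokenizer(models.BPE())
# Split on Unicode script boundaries instead of working on raw bytes, so the
# learned merges stay whole Greek (or Spanish) character sequences. Depending
# on the corpus you may want to combine this with a whitespace splitter, e.g.
# pre_tokenizers.Sequence([pre_tokenizers.UnicodeScripts(), pre_tokenizers.Whitespace()]).
tokenizer.pre_tokenizer = pre_tokenizers.UnicodeScripts()

trainer = trainers.BpeTrainer(vocab_size=2000, min_frequency=3, show_progress=True)
tokenizer.train_from_iterator(training_corpus, trainer=trainer)

# The vocabulary now contains plain, readable tokens; only the ones the base
# tokenizer does not already know are passed to add_tokens.
new_tokens = set(tokenizer.get_vocab()) - set(extender_tokenizer.get_vocab())
extender_tokenizer.add_tokens(list(new_tokens))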