I want to download a model from Hugging Face to be used with Unsloth for training:

from unsloth import FastLanguageModel

max_seq_length = 16384
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-1B-Instruct",
    max_seq_length=max_seq_length,
    load_in_4bit=False,
)
However, this method doesn't seem to allow any sort of local caching, it downloads the whole model from hugging face every time.
My question: how can I load an Unsloth model from the local hard drive?
Asked Nov 18, 2024 at 20:33 by Matt

1 Answer
Turns out it is actually really simple: point from_pretrained at the local directory instead of a Hub model name:

from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    "/content/model"
)
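For completeness, one way to get the model onto disk in the first place is to download it once and snapshot it locally. This is a sketch, not Unsloth-specific documentation: the `save_pretrained` calls come from the underlying Hugging Face transformers API (which Unsloth's model and tokenizer objects expose), and `/content/model` is just an example path. Note also that Hub downloads are normally cached under `~/.cache/huggingface` (configurable via the `HF_HOME` environment variable), so repeated runs should not re-download unless that cache is unavailable.

```python
from unsloth import FastLanguageModel

max_seq_length = 16384
local_dir = "/content/model"  # any writable directory

# First run: download from the Hub as usual.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-1B-Instruct",
    max_seq_length=max_seq_length,
    load_in_4bit=False,
)

# Snapshot the weights and tokenizer files to a known directory.
model.save_pretrained(local_dir)
tokenizer.save_pretrained(local_dir)

# Later runs: load entirely from disk, no network access needed.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name=local_dir,
    max_seq_length=max_seq_length,
    load_in_4bit=False,
)
```

Pinning the model to an explicit directory like this also makes it easy to copy the model between machines or mount it into an offline environment.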