I want to download a model from Hugging Face to be used with unsloth for training:

from unsloth import FastLanguageModel

max_seq_length = 16384
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-1B-Instruct",
    max_seq_length=max_seq_length,
    load_in_4bit=False,
)

However, this method doesn't seem to allow any sort of local caching; it downloads the whole model from Hugging Face every time.

My question: how can I load an unsloth model from the local hard drive?

asked Nov 18, 2024 at 20:33 by Matt

1 Answer

It turns out to be quite simple: pass the local directory path instead of the Hugging Face model name:

from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    "/content/model"
)
