
Oh, my apologies! I didn't notice that we had already solved this issue back in late June 2025, and we never notified any of you. So sorry!

Gemma-3 now works as expected, but you need to update Unsloth or rerun the Gemma-3 notebooks, e.g. our Gemma 3 270M Chess example or our Gemma 3 4B finetuning example.

To update Unsloth, please do:

pip install --upgrade --force-reinstall --no-cache-dir --no-deps unsloth unsloth_zoo
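(The --no-deps flag skips reinstalling dependencies such as torch, so only the unsloth and unsloth_zoo packages themselves are refreshed; --no-cache-dir makes pip fetch fresh wheels instead of reusing cached ones.)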

To enable full finetuning on Gemma-3, do:

from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name = "unsloth/gemma-3-270m-it",
    max_seq_length = max_seq_length,  # choose any value for long context!
    load_in_4bit = False,  # set True for 4-bit quantization to reduce memory
    load_in_8bit = False,  # …
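Since the snippet above is truncated, here is a minimal sketch of what the complete call can look like. It assumes Unsloth's full_finetuning flag (the option the current Gemma-3 notebooks use to switch from LoRA to full finetuning) and a placeholder max_seq_length value:

from unsloth import FastModel

max_seq_length = 2048  # placeholder; pick any length your GPU memory allows

model, tokenizer = FastModel.from_pretrained(
    model_name = "unsloth/gemma-3-270m-it",
    max_seq_length = max_seq_length,
    load_in_4bit = False,    # quantization must stay off for full finetuning
    load_in_8bit = False,
    full_finetuning = True,  # update all weights instead of training LoRA adapters
)

With full_finetuning = True you would then skip the usual get_peft_model LoRA step and pass the model straight to your trainer.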
