llama-3-2-3b-instruct-new-atlas-dataset

Fine-tuned version of meta-llama/Llama-3.2-3B-Instruct.

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("Canfield/llama-3-2-3b-instruct-new-atlas-dataset")
tokenizer = AutoTokenizer.from_pretrained("Canfield/llama-3-2-3b-instruct-new-atlas-dataset")

# Generate text
messages = [
    {"role": "user", "content": "Hello, how are you?"}
]
# add_generation_prompt=True appends the assistant header so the model
# answers the message instead of continuing the user turn
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=100)
# Decode only the newly generated tokens, skipping the echoed prompt
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
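Under the hood, `apply_chat_template` renders the message list into Llama 3's prompt format using special header and end-of-turn tokens. A minimal sketch of that rendering (illustrative only, not the tokenizer's actual implementation; the helper name `format_llama3_chat` is made up here):

```python
def format_llama3_chat(messages):
    """Render chat messages in the Llama 3 prompt format (illustrative sketch)."""
    parts = ["<|begin_of_text|>"]
    for m in messages:
        # Each turn is wrapped in role headers and terminated with <|eot_id|>
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n{m['content']}<|eot_id|>"
        )
    # Open the assistant turn so the model generates a reply
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = format_llama3_chat([{"role": "user", "content": "Hello, how are you?"}])
```

This is why `add_generation_prompt=True` matters: it is what appends the trailing assistant header.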

Model Details

  • Base Model: meta-llama/Llama-3.2-3B-Instruct
  • Fine-tuning Method: LoRA/SFT
  • Task: Instruction following and chat