# llama-3-2-3b-instruct-new-atlas-dataset
Fine-tuned version of meta-llama/Llama-3.2-3B-Instruct.
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("Canfield/llama-3-2-3b-instruct-new-atlas-dataset")
tokenizer = AutoTokenizer.from_pretrained("Canfield/llama-3-2-3b-instruct-new-atlas-dataset")

# Generate text
messages = [
    {"role": "user", "content": "Hello, how are you?"}
]
# add_generation_prompt=True appends the assistant turn header so the model
# replies instead of continuing the user message
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
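Note that `skip_special_tokens=True` strips the chat-template control tokens from the decoded output; drop it if you want to inspect the raw template structure.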
## Model Details
- Base Model: meta-llama/Llama-3.2-3B-Instruct
- Fine-tuning Method: LoRA/SFT (see the adapter-loading sketch below)
- Task: Instruction following and chat
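
If this repository ships standalone LoRA adapter weights rather than a merged checkpoint, the adapter can be attached to the base model with the `peft` library. A minimal sketch, assuming the repo ID above points at adapter weights and `peft` is installed:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model, then attach the fine-tuned LoRA adapter
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-3B-Instruct")
model = PeftModel.from_pretrained(base, "Canfield/llama-3-2-3b-instruct-new-atlas-dataset")

# Optionally fold the adapter into the base weights for faster inference
model = model.merge_and_unload()
```

If the repo instead hosts fully merged weights, the `from_pretrained` call in the Usage section above is all that is needed.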