---
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.2
tags:
- lora
- chat
- turkish
- instruct
---

Repository contents:

- `adapter_model.safetensors`
- `adapter_config.json`
- `tokenizer.json` (optional)
- `README.md`

# ChatGPT-style LoRA Model

This repository contains only the **LoRA adapter weights**. The base model must be downloaded separately.

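The adapter is small compared with the base model because LoRA trains only two low-rank matrices per adapted weight. A rough sketch of the arithmetic (the rank `r = 16` and the matrix shape are illustrative assumptions, not values read from this adapter's config):

```python
# Rough LoRA parameter arithmetic (illustrative numbers, not this adapter's
# actual config): a rank-r LoRA adapter on a d_out x d_in weight matrix
# trains r * (d_in + d_out) parameters instead of the full d_in * d_out.
d_in = d_out = 4096   # Mistral-7B hidden size
r = 16                # hypothetical LoRA rank

full = d_in * d_out          # parameters in the full weight matrix
lora = r * (d_in + d_out)    # parameters the adapter actually trains

print(full, lora, lora / full)  # → 16777216 131072 0.0078125
```

At these assumed settings the adapter trains well under 1% of the parameters of each adapted matrix, which is why the base model still has to be fetched separately.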
## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = "mistralai/Mistral-7B-Instruct-v0.2"
lora_model = "KULLANICI_ADI/MODEL_ADI"  # your adapter repo on the Hub

# Load the base model first, then attach the LoRA adapter on top of it.
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)
model = PeftModel.from_pretrained(model, lora_model)

prompt = "Merhaba, nasılsın?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
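Mistral-7B-Instruct expects prompts wrapped in its `[INST] … [/INST]` chat format; `tokenizer.apply_chat_template` produces this automatically from a message list, but a minimal manual sketch of the single-turn template (an illustrative helper, not part of this repo) looks like:

```python
def build_mistral_prompt(user_message: str) -> str:
    # Single-turn Mistral-Instruct template. The tokenizer prepends the
    # <s> BOS token itself, so we only wrap the message in [INST] tags.
    return f"[INST] {user_message} [/INST]"

prompt = build_mistral_prompt("Merhaba, nasılsın?")
print(prompt)  # → [INST] Merhaba, nasılsın? [/INST]
```

Feeding the raw text without this wrapper usually still generates output, but the instruction-tuned behavior is only reliable when the template matches what the model saw during fine-tuning.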