# Teuken-7B-instruct-v0.6-SP-GGUF
Teuken-7.5B-BF16-CM.gguf is an exact BF16 clone of Teuken 7B for the llama.cpp-with-sentencepiece inference engine.
Teuken-7.5B-Q4_K_M.gguf is a Q4_K_M 4-bit quantized version of Teuken-7.5B-BF16-CM.gguf for the same engine.
llama.cpp with sentencepiece is a fork of llama.cpp that uses the SentencePiece tokenizer as an external library and additionally adds support for Teuken 7B.
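As a usage sketch, the quantized file can be run with the fork's command-line interface. The binary name and flags below follow upstream llama.cpp (`llama-cli` with `-m` for the model path, `-p` for the prompt, `-n` for the number of tokens to generate); the model path is an assumption and should point to wherever you downloaded the GGUF file.

```shell
# Sketch: run the 4-bit quantized model interactively.
# Assumes the fork was built and llama-cli is on the current path;
# the model path is illustrative.
./llama-cli \
  -m ./Teuken-7.5B-Q4_K_M.gguf \
  -p "Wie heißt die Hauptstadt von Deutschland?" \
  -n 128
```

The Q4_K_M file trades some accuracy for a much smaller memory footprint than the BF16 file, so it is the usual choice for consumer hardware.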
## Model tree for andy300/Teuken-7B-instruct-v0.6-SP-GGUF

- Base model: openGPT-X/Teuken-7B-base-v0.6
- Finetuned: openGPT-X/Teuken-7B-instruct-v0.6