Chatterbox TTS
09/04 🔥 Introducing Chatterbox Multilingual in 23 Languages!
We're excited to introduce Chatterbox and Chatterbox Multilingual, Resemble AI's production-grade open source TTS models. Chatterbox Multilingual supports Arabic, Danish, German, Greek, English, Spanish, Finnish, French, Hebrew, Hindi, Italian, Japanese, Korean, Malay, Dutch, Norwegian, Polish, Portuguese, Russian, Swedish, Swahili, Turkish, and Chinese out of the box. Licensed under MIT, Chatterbox has been benchmarked against leading closed-source systems like ElevenLabs, and is consistently preferred in side-by-side evaluations.
Whether you're working on memes, videos, games, or AI agents, Chatterbox brings your content to life. It's also the first open source TTS model to support emotion exaggeration control, a powerful feature that makes your voices stand out. Try it now on our Hugging Face Gradio app.
If you like the model but need to scale or tune it for higher accuracy, check out our competitively priced TTS service (link). It delivers reliable performance with ultra-low, sub-200ms latency, ideal for production use in agents, applications, or interactive media.
Key Details
- Multilingual, zero-shot TTS supporting 23 languages
- SoTA zero-shot English TTS
- 0.5B Llama backbone
- Unique exaggeration/intensity control
- Ultra-stable with alignment-informed inference
- Trained on 0.5M hours of cleaned data
- Watermarked outputs
- Easy voice conversion script (see the sketch after this list)
- Outperforms ElevenLabs
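The voice conversion script mentioned above can also be called from Python. A minimal sketch, assuming the ChatterboxVC class in chatterbox.vc and placeholder audio paths (input.wav, target_voice.wav):

import torchaudio as ta
from chatterbox.vc import ChatterboxVC

model = ChatterboxVC.from_pretrained(device="cuda")

# Convert the speech in input.wav so it sounds like the speaker in target_voice.wav
# (both paths are placeholders).
wav = model.generate("input.wav", target_voice_path="target_voice.wav")
ta.save("test-vc.wav", wav, model.sr)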
Tips
General Use (TTS and Voice Agents):
- The default settings (exaggeration=0.5, cfg_weight=0.5) work well for most prompts.
- If the reference speaker has a fast speaking style, lowering cfg_weight to around 0.3 can improve pacing.

Expressive or Dramatic Speech:
- Try lower cfg_weight values (e.g. ~0.3) and increase exaggeration to around 0.7 or higher.
- Higher exaggeration tends to speed up speech; reducing cfg_weight helps compensate with slower, more deliberate pacing.
Note: Ensure that the reference clip matches the specified language tag; otherwise, language-transfer outputs may inherit the accent of the reference clip's language. To mitigate this, set cfg_weight to 0.
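To make these settings concrete, here is a minimal sketch of an expressive generation; the prompt text and the your_reference.wav clip are placeholders:

import torchaudio as ta
from chatterbox.tts import ChatterboxTTS

model = ChatterboxTTS.from_pretrained(device="cuda")

# Lower cfg_weight slows the pacing; higher exaggeration makes delivery more intense.
wav = model.generate(
    "You have got to be kidding me, this is the third time this week!",
    audio_prompt_path="your_reference.wav",  # placeholder reference clip
    exaggeration=0.7,
    cfg_weight=0.3,
)
ta.save("expressive.wav", wav, model.sr)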
Installation
pip install chatterbox-tts
Usage
import torchaudio as ta
from chatterbox.tts import ChatterboxTTS
model = ChatterboxTTS.from_pretrained(device="cuda")
text = "Ezreal and Jinx teamed up with Ahri, Yasuo, and Teemo to take down the enemy's Nexus in an epic late-game pentakill."
wav = model.generate(text)
ta.save("test-1.wav", wav, model.sr)
# If you want to synthesize with a different voice, specify the audio prompt
AUDIO_PROMPT_PATH="YOUR_FILE.wav"
wav = model.generate(text, audio_prompt_path=AUDIO_PROMPT_PATH)
ta.save("test-2.wav", wav, model.sr)
Multilingual Quickstart
import torchaudio as ta
from chatterbox.mtl_tts import ChatterboxMultilingualTTS
multilingual_model = ChatterboxMultilingualTTS.from_pretrained(device="cuda")
french_text = "Bonjour, comment ça va? Ceci est le modèle de synthèse vocale multilingue Chatterbox, il prend en charge 23 langues."
wav_french = multilingual_model.generate(french_text, language_id="fr")
ta.save("test-french.wav", wav_french, model.sr)
chinese_text = "你好,今天天气真不错,希望你有一个愉快的周末。"
wav_chinese = multilingual_model.generate(chinese_text, language_id="zh")
ta.save("test-chinese.wav", wav_chinese, model.sr)
See example_tts.py for more examples.
Built-in PerTh Watermarking for Responsible AI
Every audio file generated by Chatterbox includes Resemble AI's Perth (Perceptual Threshold) Watermarker - imperceptible neural watermarks that survive MP3 compression, audio editing, and common manipulations while maintaining nearly 100% detection accuracy.
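To check for the watermark in generated audio, here is a minimal sketch assuming the perth package's PerthImplicitWatermarker API and librosa for loading (the file path is a placeholder):

import perth
import librosa

# Load a generated (watermarked) audio file; the path is a placeholder.
watermarked_audio, sr = librosa.load("test-1.wav", sr=None)

# Initialize the same watermarker used at generation time.
watermarker = perth.PerthImplicitWatermarker()

# get_watermark returns a confidence score: roughly 0.0 = absent, 1.0 = watermarked.
watermark = watermarker.get_watermark(watermarked_audio, sample_rate=sr)
print(f"Extracted watermark: {watermark}")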
Disclaimer
Don't use this model to do bad things. Prompts are sourced from freely available data on the internet.
Use EN (English model on macOS)
uv init --python 3.11
uv sync
source .venv/bin/activate
git clone https://github.com/resemble-ai/chatterbox.git
cd chatterbox
# remove "gradio==5.44.1" and russian-text-stresser from pyproject.toml
python -m pip install -e .
# on macOS, run the bundled example:
python example_for_mac.py
Use FA (Persian/Farsi fine-tune)
# example_fa.py
from chatterbox.mtl_tts import ChatterboxMultilingualTTS
import torch
import torchaudio as ta
from safetensors.torch import load_file as load_safetensors
from huggingface_hub import hf_hub_download, login
import os
# Detect device (Mac with M1/M2/M3/M4)
device = "mps" if torch.backends.mps.is_available() else "cpu"
map_location = torch.device(device)
# Patch torch.load so checkpoints saved on CUDA are mapped onto the detected device
torch_load_original = torch.load
def patched_torch_load(*args, **kwargs):
    if 'map_location' not in kwargs:
        kwargs['map_location'] = map_location
    return torch_load_original(*args, **kwargs)
torch.load = patched_torch_load
# Load the multilingual TTS model on the detected device
multilingual_model = ChatterboxMultilingualTTS.from_pretrained(device)
# Log in to the Hugging Face Hub with your access token
token = "YOUR_TOKEN"
login(token)
# Define the model repo and file path
model_repo = "Thomcles/Chatterbox-TTS-Persian-Farsi"
file_name = "t3_fa.safetensors"
# Define the cache directory (your custom local folder)
cache_dir = "./cacheModel"
# Create the cache directory if it doesn't exist
os.makedirs(cache_dir, exist_ok=True)
# Download the model weights to the specified cache directory
file_path = hf_hub_download(repo_id=model_repo, filename=file_name, cache_dir=cache_dir)
print(f"Model weights downloaded to: {file_path}")
# Load the Persian T3 weights from the downloaded safetensors file (onto CPU first)
t3_state = load_safetensors(file_path, device='cpu')

# Swap the Persian weights into the multilingual model's T3 module
multilingual_model.t3.load_state_dict(t3_state)
multilingual_model.t3.to(device).eval()  # move to the detected device and set eval mode
# Define the Persian text you want to convert to speech
persian_text = "سلام! به آزمایش تبدیل متن به گفتار خوش آمدید."
# Generate the speech for the provided Persian text
AUDIO_PROMPT_PATH = "target_voice.wav"
wav_persian = multilingual_model.generate(
    persian_text,
    language_id=None,
    audio_prompt_path=AUDIO_PROMPT_PATH,
    exaggeration=0.5,
    cfg_weight=0.5
)
# Save the generated speech to a WAV file
ta.save("test-fa.wav", wav_persian, multilingual_model.sr)
print("Speech synthesis complete, saved as 'test-fa.wav'")