Luau Qwen3 4B FIM v0.1
A specialized fine tune of Qwen/Qwen3-4B-Instruct-2507 trained specifically for FIM Luau code based on "Efficient Training of Language Models to Fill in the Middle" by Mohammad Bavarian et al., 2022. Instead of being a chatbot, it performs Luau autocomplete.
Expected format
<|repo_name|> and <|file_sep|> are technically optional, but you will get better responses when they are included.
If using a chat API:
[
{ "role": "system", "content": "You are a code completion assistant." },
{ "role": "user", "content": f"<|repo_name|>{reponame}<|file_sep|>{filename}<|fim_suffix|>{suffix}<|fim_prefix|>{prefix}<|fim_middle|>" }
]
If using a completions API, you'll need to essentially bake the chat template in:
prompt = f"<|im_start|>system\nYou are a code completion assistant.<|im_end|>\n<|im_start|>user\n<|repo_name|>{reponame}<|file_sep|>{filename}<|fim_suffix|>{suffix}<|fim_prefix|>{prefix}<|fim_middle|><|im_end|>\n<|im_start|>assistant\n"
Here is an example config.yaml for using this with Continue.dev for autocomplete in VSCode backed by LM Studio:
name: Local Autocomplete
version: 1.0.0
schema: v1
models:
- name: Luau Qwen3 4B FIM v0.1
provider: lmstudio
apiBase: http://localhost:1234/v1
model: luau-qwen3-4b-fim-v0.1
roles:
- autocomplete
defaultCompletionOptions:
stop: [
"<|im_end|>",
"</s>",
"<|repo_name|>",
"<|file_sep|>",
"```"
]
promptTemplates:
autocomplete: "<|im_start|>system\nYou are a code completion assistant.<|im_end|>\n<|im_start|>user\n<|repo_name|>{{{reponame}}}<|file_sep|>{{{filename}}}<|fim_suffix|>{{{suffix}}}<|fim_prefix|>{{{prefix}}}<|fim_middle|><|im_end|>\n<|im_start|>assistant\n"
Model Information
- Developer: Zack Williams (boatbomber)
- Sponsor: Torpedo Software LLC
- Base Model: Qwen/Qwen3-4B-Instruct-2507
- Training Method: SFT (Supervised Finetuning)
Training Methodology
Dataset
Source: TorpedoSoftware/the-luau-stack
- 500,000 FIM-formatted Luau code snippets
- Completion to end of line, end of block, next few lines, etc
- Varied between Suffix Prefix Middle order and Prefix Suffix Middle order
Training Process
- ~140 GPU hours on RTX 3090
- Rank stabilized LoRA adapter with rank=128
- 250,000 steps with batch size 2
- Full precision training
- Final merge to BF16 model
Training Progress
Quantization
Dynamic GGUF quantizations provided at sizes ranging approximately from 2 to 4 GB.
- Downloads last month
- 260
Inference Providers
NEW
This model isn't deployed by any Inference Provider.
🙋
Ask for provider support





