# CodeV-GGUF

The CodeV models (CodeV-SFT and CodeV-RL, from RenlyH) are 7B vision-language models fine-tuned from Qwen/Qwen2.5-VL-7B-Instruct for faithful visual reasoning. They are trained in a two-stage pipeline: supervised fine-tuning (SFT) followed by reinforcement learning (RL) with Tool-Aware Policy Optimization (TAPO), which represents visual tools as executable Python code and assigns step-wise rewards based on the alignment between the question and the tool's output, ensuring evidence-consistent tool use without reward hacking. CodeV-SFT serves as the cold-start initialization, trained on high-quality trajectories rich in tool-invocation patterns; CodeV-RL then applies TAPO to boost performance, gaining 1-3 points over zero-shot RL and 6-8 points over SFT baselines on visual search benchmarks, with substantially higher faithful tool-use rates and strong results on multimodal reasoning and math tasks. The approach targets unfaithful reasoning in agentic VLMs, where high accuracy can mask irrelevant tool calls, by explicitly supervising intermediate behaviors for trustworthy image-based problem solving.
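To make the "tools as executable Python code" idea concrete, here is a minimal, hypothetical sketch of the pattern: the model emits a tool call as code (here a toy `crop`), the environment executes it, and a step-wise reward scores how well the inspected region aligns with the region the question is about. All names and the IoU-style reward are illustrative assumptions, not the CodeV/TAPO API.

```python
# Toy sketch of TAPO-style tool-as-code (all names are illustrative).

def crop(image, x0, y0, x1, y1):
    """Toy visual tool: crop a region from an image stored as a 2D list."""
    return [row[x0:x1] for row in image[y0:y1]]

def stepwise_reward(question_region, tool_region):
    """Toy alignment reward: IoU between the region the question asks about
    and the region the tool actually inspected (1.0 = perfectly faithful)."""
    qx0, qy0, qx1, qy1 = question_region
    tx0, ty0, tx1, ty1 = tool_region
    ix = max(0, min(qx1, tx1) - max(qx0, tx0))
    iy = max(0, min(qy1, ty1) - max(qy0, ty0))
    inter = ix * iy
    union = (qx1 - qx0) * (qy1 - qy0) + (tx1 - tx0) * (ty1 - ty0) - inter
    return inter / union if union else 0.0

image = [[(x, y) for x in range(8)] for y in range(8)]
patch = crop(image, 2, 2, 6, 6)                       # model-issued tool call
print(len(patch), len(patch[0]))                      # 4 4
print(stepwise_reward((2, 2, 6, 6), (2, 2, 6, 6)))    # 1.0 -- faithful tool use
print(stepwise_reward((2, 2, 6, 6), (0, 0, 2, 2)))    # 0.0 -- irrelevant call penalized
```

The second reward call illustrates the reward-hacking case the paragraph above describes: an answer could still be correct, but the tool call inspected an irrelevant region, so the step-wise reward flags it.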

## CodeV-RL [GGUF]

| File Name | Quant Type | File Size | File Link |
|---|---|---|---|
| CodeV-RL.BF16.gguf | BF16 | 15.2 GB | Download |
| CodeV-RL.F16.gguf | F16 | 15.2 GB | Download |
| CodeV-RL.Q8_0.gguf | Q8_0 | 8.1 GB | Download |
| CodeV-RL.mmproj-bf16.gguf | mmproj-bf16 | 1.36 GB | Download |
| CodeV-RL.mmproj-f16.gguf | mmproj-f16 | 1.35 GB | Download |
| CodeV-RL.mmproj-q8_0.gguf | mmproj-q8_0 | 856 MB | Download |

## CodeV-SFT [GGUF]

| File Name | Quant Type | File Size | File Link |
|---|---|---|---|
| CodeV-SFT.BF16.gguf | BF16 | 15.2 GB | Download |
| CodeV-SFT.F16.gguf | F16 | 15.2 GB | Download |
| CodeV-SFT.Q8_0.gguf | Q8_0 | 8.1 GB | Download |
| CodeV-SFT.mmproj-bf16.gguf | mmproj-bf16 | 1.36 GB | Download |
| CodeV-SFT.mmproj-f16.gguf | mmproj-f16 | 1.35 GB | Download |
| CodeV-SFT.mmproj-q8_0.gguf | mmproj-q8_0 | 856 MB | Download |

## Quants Usage

(sorted by size, not necessarily by quality; IQ-quants are often preferable to similarly sized non-IQ quants)

A handy graph by ikawrakow comparing some lower-quality quant types is available (lower is better); the image is not reproduced here.
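The mmproj files pair with the main model weights to enable vision input. A minimal usage sketch, assuming a llama.cpp build with multimodal support (the CLI name and flags below match current llama.cpp releases but may differ across versions; the image path and prompt are placeholders):

```shell
# Fetch the Q8_0 weights and matching projector (files from the tables above).
huggingface-cli download prithivMLmods/CodeV-GGUF CodeV-RL.Q8_0.gguf --local-dir .
huggingface-cli download prithivMLmods/CodeV-GGUF CodeV-RL.mmproj-q8_0.gguf --local-dir .

# Run a single image + prompt through llama.cpp's multimodal CLI.
llama-mtmd-cli \
  -m CodeV-RL.Q8_0.gguf \
  --mmproj CodeV-RL.mmproj-q8_0.gguf \
  --image input.png \
  -p "Describe what is shown in this image."
```

The same pair of files can also be served with `llama-server -m ... --mmproj ...` to expose an OpenAI-compatible endpoint.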

GGUF · Model size: 8B params · Architecture: qwen2vl

## Model tree for prithivMLmods/CodeV-GGUF

These GGUF files are quantized from RenlyH/CodeV-RL, a finetune of Qwen/Qwen2.5-VL-7B-Instruct.