Model Card for Qwen3-0.6B-MNLP_mcqa_rl

This model is a fine-tuned version of andresnowak/Qwen3-0.6B-MNLP_mcqa_model_text. It has been trained using TRL.

Quick start

from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="andresnowak/Qwen3-0.6B-MNLP_mcqa_rl", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])

Training procedure

This model was trained with GRPO, a method introduced in DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models. Starting from andresnowak/Qwen3-0.6B-MNLP_mcqa_model_text, which had been trained to answer in the format [Letter]. [Answer], the model was trained again for 2 epochs on the same dataset, this time with RLVR: if [Letter]. is found in the output the reward is $1.0$, otherwise it is $-1.0$ (a very simple verifiable reward).
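
As a rough illustration, the reward described above could be written as a TRL-style reward function like the following (a minimal sketch; the answer column name and the exact matching rule are assumptions, not taken from the actual training code):

import re

def mcqa_letter_reward(completions, answer, **kwargs):
    """Give +1.0 if the gold letter followed by a period appears in the completion, else -1.0."""
    rewards = []
    for completion, gold_letter in zip(completions, answer):
        # Conversational completions are lists of messages; plain completions are strings.
        text = completion[0]["content"] if isinstance(completion, list) else completion
        pattern = rf"\b{re.escape(gold_letter)}\."  # e.g. matches "B."
        rewards.append(1.0 if re.search(pattern, text) else -1.0)
    return rewards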

The arguments used were:

environment:
  seed: 42

model:
  name: andresnowak/Qwen3-0.6B-MNLP_mcqa_model_text
  hub_model_id: andresnowak/Qwen3-0.6B-MNLP_mcqa_rl

dataset_train:
  - name: andresnowak/MNLP_MCQA_dataset
    config: train
    subset_name: math_qa
  - name: andresnowak/MNLP_MCQA_dataset
    subset_name: ScienceQA
    config: train
  - name: andresnowak/MNLP_MCQA_dataset
    subset_name: mmlu-auxiliary-train-auto-labelled
    config: train
  - name: andresnowak/MNLP_MCQA_dataset
    subset_name: ai2_arc_challenge
    config: train
  - name: andresnowak/MNLP_MCQA_dataset
    subset_name: ai2_arc_easy
    config: train
  - name: andresnowak/MNLP_MCQA_dataset
    subset_name: medmcqa
    config: train
  - name: andresnowak/MNLP_MCQA_dataset
    subset_name: openbookqa
    config: train
  - name: andresnowak/MNLP_MCQA_dataset
    subset_name: sciq
    config: train

dataset_validation:
  - name: andresnowak/MNLP_MCQA_dataset
    config: validation
    subset_name: math_qa
  - name: andresnowak/MNLP_MCQA_dataset
    subset_name: ScienceQA
    config: validation
  - name: andresnowak/MNLP_MCQA_dataset
    subset_name: mmlu
    config: validation
  - name: andresnowak/MNLP_MCQA_dataset
    subset_name: ai2_arc_challenge
    config: validation
  - name: andresnowak/MNLP_MCQA_dataset
    subset_name: ai2_arc_easy
    config: validation
  - name: andresnowak/MNLP_MCQA_dataset
    subset_name: medmcqa
    config: validation
  - name: andresnowak/MNLP_MCQA_dataset
    subset_name: openbookqa
    config: validation
  - name: andresnowak/MNLP_MCQA_dataset
    subset_name: sciq
    config: validation

dataset_mmlu:
  - name: cais/mmlu
    config: validation
    subjects: ["abstract_algebra", "anatomy", "astronomy", "college_biology", "college_chemistry", "college_computer_science", "college_mathematics", "college_physics", "computer_security", "conceptual_physics", "electrical_engineering", "elementary_mathematics", "high_school_biology",  "high_school_chemistry", "high_school_computer_science", "high_school_mathematics", "high_school_physics", "high_school_statistics", "machine_learning"]


training:
  output_dir: ./output
  logging_dir: ./logs
  resume_dir: None
  report_to: wandb
  learning_rate: 1e-5
  per_device_train_batch_size: 8
  per_device_eval_batch_size: 8
  gradient_accumulation_steps: 8 # to get effective 64
  num_train_epochs: 1
  weight_decay: 0.00
  warmup_ratio: 0.05
  max_grad_norm: 1.0
  num_generations: 4
  completion_length: 100
  beta: 0.2
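
For reference, the configuration above maps onto TRL's GRPOConfig and GRPOTrainer roughly as follows (a hedged sketch rather than the exact training script; the dataset loading, the subset names passed to load_dataset, and the reuse of the reward function from the training procedure section are assumptions):

from datasets import concatenate_datasets, load_dataset
from trl import GRPOConfig, GRPOTrainer

# Subsets mirroring the dataset_train block above (assumed to be dataset configs).
subsets = ["math_qa", "ScienceQA", "mmlu-auxiliary-train-auto-labelled",
           "ai2_arc_challenge", "ai2_arc_easy", "medmcqa", "openbookqa", "sciq"]
train_dataset = concatenate_datasets(
    [load_dataset("andresnowak/MNLP_MCQA_dataset", s, split="train") for s in subsets]
)

config = GRPOConfig(
    output_dir="./output",
    logging_dir="./logs",
    report_to="wandb",
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,   # effective batch size of 64
    num_train_epochs=1,
    weight_decay=0.0,
    warmup_ratio=0.05,
    max_grad_norm=1.0,
    num_generations=4,               # completions sampled per prompt
    max_completion_length=100,       # "completion_length" in the config above
    beta=0.2,                        # KL penalty coefficient
    seed=42,
)

trainer = GRPOTrainer(
    model="andresnowak/Qwen3-0.6B-MNLP_mcqa_model_text",
    reward_funcs=mcqa_letter_reward,  # the verifiable reward sketched earlier
    args=config,
    train_dataset=train_dataset,
)
trainer.train()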

Evaluation Results

The model was evaluated on a suite of Multiple Choice Question Answering (MCQA) benchmarks (on their respective validation and test sets); NLP4Education consists only of the roughly 1,000 questions and answers provided to us.

The performance on the MCQA benchmarks after RL fine-tuning is as follows (this model performs very well on Math QA):

First evaluation: the tests were done with this prompt (type 5):

This question assesses challenging STEM problems as found on graduate standardized tests. Carefully evaluate the options and select the correct answer.

---
[Insert Question Here]
---
[Insert Choices Here, e.g.:
A. Option 1
B. Option 2
C. Option 3
D. Option 4]
---

Your response should include the letter and the exact text of the correct choice.
Example: B. Entropy increases.
Answer:

Testing was done on the [Letter]. [Text answer] format.

| Benchmark | Accuracy (Acc) | Normalized Accuracy (Acc Norm) |
| --- | --- | --- |
| ARC Challenge | 64.3% | 63.6% |
| ARC Easy | 82.6% | 81.9% |
| GPQA | 33.0% | 31.7% |
| Math QA | 35.2% | 34.6% |
| MCQA Evals | 42.7% | 41.0% |
| MMLU | 49.4% | 49.4% |
| MMLU Pro | 15.1% | 14.7% |
| MuSR | 49.1% | 47.0% |
| NLP4Education | 47.2% | 45.8% |
| Overall | 46.5% | 45.5% |
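
For illustration only, building the type 5 prompt and checking a generation against the [Letter]. [Text answer] target could look like this (an assumed sketch; the actual evaluation harness and its accuracy / normalized accuracy computation are not reproduced here):

import re

# Hypothetical helper that fills in the type 5 prompt template shown above.
def build_prompt(question, choices):
    options = "\n".join(f"{letter}. {text}" for letter, text in zip("ABCD", choices))
    return (
        "This question assesses challenging STEM problems as found on graduate "
        "standardized tests. Carefully evaluate the options and select the correct answer.\n"
        f"---\n{question}\n---\n{options}\n---\n"
        "Your response should include the letter and the exact text of the correct choice.\n"
        "Example: B. Entropy increases.\n"
        "Answer:"
    )

# Checks whether a generated answer matches the expected "[Letter]. [Text answer]" form.
def is_correct(generated, gold_letter, gold_text):
    match = re.search(r"\b([A-D])\.\s*(.+)", generated)
    if not match:
        return False
    return match.group(1) == gold_letter and match.group(2).strip().startswith(gold_text.strip())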

Second evaluation: (type 0)

The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.

---
*[Insert Question Here]*
---
*[Insert Choices Here, e.g.:*
*A. Option 1*
*B. Option 2*
*C. Option 3*
*D. Option 4]*
---
Answer:

Testing was done on the [Letter]. [Text answer] format.

| Benchmark | Accuracy (Acc) | Normalized Accuracy (Acc Norm) |
| --- | --- | --- |
| ARC Challenge | 66.62% | 66.08% |
| ARC Easy | 84.25% | 82.04% |
| GPQA | 29.69% | 28.35% |
| Math QA | 34.86% | 33.50% |
| MCQA Evals | 44.42% | 40.52% |
| MMLU | 49.29% | 49.29% |
| MMLU Pro | 16.81% | 17.04% |
| MuSR | 49.07% | 46.96% |
| NLP4Education | 49.73% | 46.33% |
| Overall | 47.19% | 45.57% |

Third evaluation: (type 2)


This is part of an assessment on graduate-level science, technology, engineering, and mathematics (STEM) concepts. Each question is multiple-choice and requires a single correct answer.

---
*[Insert Question Here]*
---
*[Insert Choices Here, e.g.:*
*A. Option 1*
*B. Option 2*
*C. Option 3*
*D. Option 4]*
---
For grading purposes, respond with: [LETTER]. [VERBATIM TEXT]
Example: D. Planck constant
Your Response:

Testing was done on the [Letter]. [Text answer] format.

| Benchmark | Accuracy (Acc) | Normalized Accuracy (Acc Norm) |
| --- | --- | --- |
| ARC Challenge | 40.31% | 40.31% |
| ARC Easy | 56.21% | 56.21% |
| GPQA | 23.66% | 23.66% |
| Math QA | 25.92% | 25.92% |
| MCQA Evals | 33.12% | 33.12% |
| MMLU | 49.29% | 49.29% |
| MMLU Pro | 14.01% | 14.01% |
| MuSR | 49.21% | 49.21% |
| NLP4Education | 34.71% | 34.71% |
| Overall | 36.27% | 36.27% |

Evaluation with [Letter]-only scoring: (type 0)

The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.

---
*[Insert Question Here]*
---
*[Insert Choices Here, e.g.:*
*A. Option 1*
*B. Option 2*
*C. Option 3*
*D. Option 4]*
---
Answer:

Testing was done on the [Letter] format only.

| Benchmark | Accuracy (Acc) | Normalized Accuracy (Acc Norm) |
| --- | --- | --- |
| ARC Challenge | 66.62% | 66.62% |
| ARC Easy | 84.25% | 84.25% |
| GPQA | 27.23% | 27.23% |
| Math QA | 34.93% | 34.93% |
| MCQA Evals | 44.42% | 44.42% |
| MMLU | 49.29% | 49.29% |
| MMLU Pro | 17.26% | 17.26% |
| MuSR | 49.21% | 49.21% |
| NLP4Education | 50.08% | 50.08% |
| Overall | 47.03% | 47.03% |

Framework versions

  • TRL: 0.15.2
  • Transformers: 4.51.3
  • Pytorch: 2.5.1+cu121
  • Datasets: 3.6.0
  • Tokenizers: 0.21.0

Citations

Cite GRPO as:

@article{zhihong2024deepseekmath,
    title        = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
    author       = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
    year         = 2024,
    eprint       = {arXiv:2402.03300},
}

Cite TRL as:

@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}