MNLP_M3_mcqa_model - (Qwen3-0.6B-mcqa_model_2)

This model is a fine-tuned version of unsloth/Qwen3-0.6B-Base for multiple-choice question answering (MCQA), trained on the STEM-focused datasets listed under Training and evaluation data below.

Model description

A 0.6B-parameter causal language model fine-tuned to answer multiple-choice STEM questions by scoring the option-letter tokens (A, B, C, ...) directly, as described under Training procedure.

Intended uses & limitations

More information needed

Training and evaluation data

Training was done on the training splits of:

  • MedMCQA (34,000 random examples, seed 42)
  • MMLU STEM
  • MMLU STEM augmented to 10 choices
  • SciQ
  • AI2 ARC
  • MathQA
  • ScienceQA
  • OpenBookQA

Training procedure

Training accepts examples with any number of choices: within each batch, the options are padded to the largest number of options any example in that batch has. A forward pass is then run over the whole prompt (question with choices) and only the final logit is kept; cross-entropy loss is computed on that final logit restricted to the tokens of the option letters (e.g. A, B, C, and D), rather than over the whole vocabulary. A sketch of this objective follows.
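Below is a minimal sketch of that objective, assuming a standard Hugging Face causal LM. The prompt, the gold index, and the leading-space letter tokenization are illustrative assumptions, not the authors' exact training code:

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "unsloth/Qwen3-0.6B-Base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Question: What is 2 + 2?\nA. 3\nB. 4\nC. 5\nD. 6\nAnswer:"
option_letters = ["A", "B", "C", "D"]  # padded per batch in practice
gold_index = 1                         # hypothetical gold answer: "B"

inputs = tokenizer(prompt, return_tensors="pt")
logits = model(**inputs).logits        # (1, seq_len, vocab_size)
last_logit = logits[0, -1]             # next-token logits after "Answer:"

# Restrict the loss to the option-letter tokens instead of the full
# vocabulary: gather the logits for " A", " B", ... and treat them as
# a small classification problem. The leading space is a tokenizer
# assumption; adjust to match how the letters appear after the prompt.
letter_ids = [tokenizer.encode(" " + letter, add_special_tokens=False)[0]
              for letter in option_letters]
option_logits = last_logit[letter_ids].unsqueeze(0)  # (1, num_options)
loss = F.cross_entropy(option_logits, torch.tensor([gold_index]))
print(loss.item())
```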

We also template all the training examples with 7 random templates, to make the model robust to the different ways an MCQA question can be phrased; using different prompts can make the results vary a lot (see the sketch below).
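A sketch of this templating step; the format strings below are hypothetical stand-ins, since the 7 actual templates are not reproduced here:

```python
import random

# Hypothetical templates; the real training used 7 of these.
TEMPLATES = [
    "Question: {q}\n{opts}\nAnswer:",
    "Answer the following multiple-choice question.\n\n{q}\n{opts}\nAnswer:",
    "{q}\n\nChoices:\n{opts}\nThe correct option is",
]

def render(question: str, options: list[str]) -> str:
    # Label the options A, B, C, ... and pick a template at random.
    opts = "\n".join(f"{chr(65 + i)}. {o}" for i, o in enumerate(options))
    return random.choice(TEMPLATES).format(q=question, opts=opts)

print(render("What is 2 + 2?", ["3", "4", "5", "6"]))
```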

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • gradient_accumulation_steps: 32
  • total_train_batch_size: 64
  • optimizer: AdamW 8-bit (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.04
  • num_epochs: 2
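
For reference, a hedged TrainingArguments sketch matching these hyperparameters; the output path is hypothetical and the dataset/Trainer wiring is omitted:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="mcqa_model",         # hypothetical output path
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    gradient_accumulation_steps=32,  # 2 x 32 = 64 total train batch size
    optim="adamw_8bit",              # AdamW 8-bit, betas/epsilon as above
    lr_scheduler_type="linear",
    warmup_ratio=0.04,
    num_train_epochs=2,
)
```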

Evaluation Results

The model was evaluated on a suite of Multiple Choice Question Answering (MCQA) benchmarks (on the validation and test sets of each benchmark, respectively); NLP4Education consists only of the approximately 1,000 questions and answers given to us.

Important note on the MCQA evals benchmark: performance varies substantially with the evaluation prompt, as the three prompt types below show.

The performance on these benchmarks is as follows:

First evaluation: the tests were done with this prompt (type 5):

This question assesses challenging STEM problems as found on graduate standardized tests. Carefully evaluate the options and select the correct answer.

---
[Insert Question Here]
---
[Insert Choices Here, e.g.:
A. Option 1
B. Option 2
C. Option 3
D. Option 4]
---

Your response should include the letter and the exact text of the correct choice.
Example: B. Entropy increases.
Answer:

Scoring was done on the target "[Letter]. [Text answer]".

| Benchmark | Accuracy (Acc) | Normalized Accuracy (Acc Norm) |
| --- | --- | --- |
| ARC Challenge | 63.90% | 62.41% |
| ARC Easy | 81.64% | 77.87% |
| GPQA | 31.92% | 30.58% |
| Math QA | 31.84% | 31.11% |
| MCQA Evals | 42.60% | 38.44% |
| MMLU | 50.94% | 50.94% |
| MMLU Pro | 15.19% | 13.79% |
| MuSR | 53.04% | 51.19% |
| NLP4Education | 44.49% | 41.71% |
| Overall | 46.17% | 44.23% |
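
For the two metrics reported throughout, we assume the usual lm-evaluation-harness convention: Acc picks the choice whose continuation has the highest total log-likelihood, while Acc Norm first divides each log-likelihood by the byte length of the choice text. A toy sketch (the log-likelihood values are made up):

```python
def pick_answer(loglikelihoods, choice_texts, normalize=False):
    # Acc: argmax of raw log-likelihood. Acc Norm: argmax of
    # log-likelihood divided by the UTF-8 byte length of the choice.
    scores = [
        ll / len(text.encode("utf-8")) if normalize else ll
        for ll, text in zip(loglikelihoods, choice_texts)
    ]
    return max(range(len(scores)), key=scores.__getitem__)

lls = [-12.3, -9.8, -15.1, -11.0]   # made-up per-choice log-likelihoods
choices = ["3", "4", "5", "6"]
print(pick_answer(lls, choices))                   # Acc
print(pick_answer(lls, choices, normalize=True))   # Acc Norm
```

This also explains why Acc and Acc Norm coincide whenever all targets have the same length, as in the letter-only evaluation below.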

Second evaluation: (type 0)

The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.

---
[Insert Question Here]
---
[Insert Choices Here, e.g.:
A. Option 1
B. Option 2
C. Option 3
D. Option 4]
---
Answer:

Scoring was again done on the target "[Letter]. [Text answer]".

| Benchmark | Accuracy (Acc) | Normalized Accuracy (Acc Norm) |
| --- | --- | --- |
| ARC Challenge | 67.17% | 64.51% |
| ARC Easy | 83.71% | 79.57% |
| GPQA | 28.35% | 28.79% |
| Math QA | 36.38% | 34.66% |
| MCQA Evals | 45.06% | 38.31% |
| MMLU | 50.68% | 50.68% |
| MMLU Pro | 16.22% | 14.31% |
| MuSR | 53.04% | 51.19% |
| NLP4Education | 48.71% | 44.18% |
| Overall | 47.70% | 45.13% |

Third evaluation: (type 2)

This is part of an assessment on graduate-level science, technology, engineering, and mathematics (STEM) concepts. Each question is multiple-choice and requires a single correct answer.

---
[Insert Question Here]
---
[Insert Choices Here, e.g.:
A. Option 1
B. Option 2
C. Option 3
D. Option 4]
---
For grading purposes, respond with: [LETTER]. [VERBATIM TEXT]
Example: D. Planck constant
Your Response:

Scoring was again done on the target "[Letter]. [Text answer]".

| Benchmark | Accuracy (Acc) | Normalized Accuracy (Acc Norm) |
| --- | --- | --- |
| ARC Challenge | 49.97% | 46.02% |
| ARC Easy | 63.34% | 55.84% |
| GPQA | 17.41% | 20.09% |
| Math QA | 29.90% | 29.50% |
| MCQA Evals | 33.64% | 32.47% |
| MMLU | 50.94% | 50.94% |
| MMLU Pro | 14.09% | 11.21% |
| MuSR | 53.04% | 51.19% |
| NLP4Education | 38.47% | 37.06% |
| Overall | 38.98% | 37.15% |

Fourth evaluation: (type 0, letter-only scoring)

The same type 0 prompt as in the second evaluation was used, but scoring was done on the letter alone: [Letter]. Because every target is a single letter of the same length, Acc and Acc Norm coincide.

| Benchmark | Accuracy (Acc) | Normalized Accuracy (Acc Norm) |
| --- | --- | --- |
| ARC Challenge | 68.46% | 68.46% |
| ARC Easy | 84.11% | 84.11% |
| GPQA | 37.95% | 37.95% |
| Math QA | 39.31% | 39.31% |
| MCQA Evals | 45.06% | 45.06% |
| MMLU | 50.75% | 50.75% |
| MMLU Pro | 19.25% | 19.25% |
| MuSR | 51.72% | 51.72% |
| NLP4Education | 49.80% | 49.80% |
| Overall | 49.60% | 49.60% |

Framework versions

  • Transformers 4.52.4
  • PyTorch 2.7.0+cu126
  • Datasets 3.6.0
  • Tokenizers 0.21.0
Evaluation results