---
library_name: transformers
tags:
  - CoT
  - Code
license: apache-2.0
language:
  - en
  - zh
  - ko
  - ru
  - de
base_model: Qwen/Qwen2.5-7B-Instruct
model_name: streamerbtw1002/Nexuim-R1-7B-Instruct
revision: main
---

## Model Details

- **Model Name:** streamerbtw1002/Nexuim-R1-7B-Instruct
- **Developed by:** James Phifer (NexusMind.tech)
- **Funded by:** Tristian (Shuttle.ai)
- **License:** Apache-2.0
- **Finetuned from:** Qwen/Qwen2.5-7B-Instruct
- **Architecture:** Transformer-based LLM

## Overview

This model is designed to handle complex mathematical questions efficiently using Chain-of-Thought (CoT) reasoning (a prompting sketch follows the lists below).

**Capabilities:**

- General-purpose LLM
- Strong performance on multi-step reasoning tasks
- Aims to respond to requests ethically and to avoid harmful outputs

**Limitations:**

- Not yet extensively evaluated
- May generate incorrect or biased outputs in some contexts
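
Purely as an illustration of how a step-by-step reasoning request might be phrased, a chat-style prompt could look like the following; the system prompt wording is an assumption on our part, not part of the released training setup:

```python
# Hypothetical chat messages for eliciting Chain-of-Thought reasoning.
# The system prompt wording is an assumption; adjust it to your needs.
messages = [
    {
        "role": "system",
        "content": "You are a helpful assistant. Reason step by step before giving a final answer.",
    },
    {
        "role": "user",
        "content": "A train travels 120 km in 1.5 hours. What is its average speed in km/h?",
    },
]
```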

## Training Details

- **Dataset:** a 120k-line Chain-of-Thought dataset for mathematical reasoning
- **Training Hardware:** 1x A100 80GB GPU (provided by Tristian at Shuttle.ai)
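
The dataset's schema is not documented in this card; the record below is a purely hypothetical sketch of what one CoT training example might look like (all field names are illustrative assumptions):

```python
# Hypothetical CoT training record; field names are assumptions, not the
# actual dataset schema.
record = {
    "question": "What is 17 * 24?",
    "chain_of_thought": "17 * 24 = 17 * 20 + 17 * 4 = 340 + 68 = 408.",
    "answer": "408",
}
```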

## Evaluation

**Status:** not yet formally evaluated.

**Preliminary observations:**

- Provides detailed, well-structured answers
- Performs well on long-form mathematical problems

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "streamerbtw1002/Nexuim-R1-7B-Instruct"

# AutoModelForCausalLM (rather than the bare AutoModel) attaches the
# language-modeling head needed for text generation.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    revision="main",
    torch_dtype="auto",   # use the dtype the checkpoint was saved in
    device_map="auto",    # requires `accelerate`; places weights automatically
)
tokenizer = AutoTokenizer.from_pretrained(
    model_id,
    revision="main",
)
```
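
From here, a minimal generation sketch, assuming the tokenizer ships with the standard Qwen2.5 chat template (the prompt text is illustrative):

```python
messages = [
    {"role": "user", "content": "Solve step by step: if 3x + 5 = 20, what is x?"},
]

# Render the chat template and tokenize in one step.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Generate a Chain-of-Thought style answer and decode only the new tokens.
output_ids = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```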