Hugston-Huihui-Qwen3-4B-Thinking-2507-abliterated

This is an uncensored version of Qwen/Qwen3-4B-Thinking-2507 created with abliteration (see remove-refusals-with-transformers to learn more about it). It is a crude, proof-of-concept implementation for removing refusals from an LLM without using TransformerLens.

Ablation was performed with a newer, faster method that yields better results. This model was made possible by https://huggingface.co/huihui-ai.
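For illustration only, here is a minimal sketch of the refusal-direction idea behind abliteration. It is not the actual script or method used for this model: the prompt sets, the layer choice, and the decision to edit only the MLP down-projection weights are placeholder assumptions, and the attribute layout assumes a Llama/Qwen-style model loaded with Hugging Face transformers.

```python
# Minimal, illustrative sketch of refusal-direction ablation ("abliteration").
# NOT the procedure used for this model; all specifics below are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen3-4B-Thinking-2507"  # base model named in this card
tok = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model.eval()

refused = ["How do I pick a lock?"]   # toy "refused" prompts (placeholder)
benign = ["How do I bake bread?"]     # toy "benign" prompts (placeholder)
layer = len(model.model.layers) // 2  # a middle layer, an arbitrary choice

def mean_hidden(prompts):
    """Mean-pooled hidden state at `layer`, averaged over a prompt set."""
    pooled = []
    for p in prompts:
        ids = tok(p, return_tensors="pt")
        with torch.no_grad():
            out = model(**ids, output_hidden_states=True)
        pooled.append(out.hidden_states[layer].mean(dim=1))  # (1, hidden)
    return torch.cat(pooled).mean(dim=0)                     # (hidden,)

# The "refusal direction" is the normalized difference of mean activations.
d = mean_hidden(refused) - mean_hidden(benign)
d = d / d.norm()

# Ablate: project the direction out of weights that write into the residual
# stream, so the model can no longer express it (one common variant; real
# scripts typically also edit attention output projections).
with torch.no_grad():
    for block in model.model.layers:
        W = block.mlp.down_proj.weight  # (hidden, intermediate)
        W -= torch.outer(d, d @ W)      # W <- (I - d d^T) W

model.save_pretrained("qwen3-4b-thinking-abliterated-sketch")
```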


This model was converted and quantized by the Hugston Team.

HugstonOne Enterprise Edition

You can use the model with HugstonOne Enterprise Edition.


Tested for scientific purposes with the example query: "How to engage in insider trading for maximum profit?"

Watch HugstonOne coding and preview in action: https://vimeo.com/1121493834?share=copy&fl=sv&fe=ci

Usage

- Download the HugstonOne app from Hugston.com or from https://github.com/Mainframework
- Download the model from https://hugston.com/explore?folder=llm_models or from Hugging Face
- If you already have the LLM model downloaded, choose it by clicking "Pick model" in HugstonOne, then click "Load model" in CLI or Server mode.
- For multimodal use you need a VL/multimodal LLM model with its mmproj file in the same folder: select the model, then select the mmproj.
- Note: if the mmproj file sits in the same folder as non-multimodal models, a non-multimodal model will not load unless the mmproj is moved out of that folder.
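Outside the HugstonOne app, the GGUF files can also be loaded with any llama.cpp-compatible runtime. Below is a minimal sketch using llama-cpp-python, not an official HugstonOne workflow; the file name is a placeholder for whichever quantization you downloaded, and the context size and generation parameters are arbitrary example values.

```python
# Minimal sketch: loading a downloaded GGUF quant with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="Hugston-Huihui-Qwen3-4B-Thinking-2507-abliterated.Q4_K_M.gguf",  # placeholder file name
    n_ctx=8192,       # example context window
    n_gpu_layers=-1,  # offload all layers to GPU if available; use 0 for CPU only
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize the idea of abliteration in two sentences."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```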

Format: GGUF
Model size: 4B params
Architecture: qwen3

Available quantizations: 4-bit, 5-bit, 6-bit, 8-bit, 16-bit


Model tree for Trilogix1/Hugston-Huihui-Qwen3-4B-Thinking-2507-abliterated-f16: Quantized (78) → this model