How to correctly use the “balanced dataloader (ID + proximal OoD)” fine-tuning in M-HOOD?
Hello authors,
First, thanks for releasing the code and the paper—great work!
I’m applying your method to a custom one-class detector, and I’ve already finished the proximal OoD data preparation step. In my first attempt, I merged the selected proximal OoD images into my original YOLOv8 dataset with empty label files (so they are treated as pure background images) and fine-tuned the model. This already improves hallucination suppression.
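For reference, this is roughly what my first approach looks like. The paths and file layout below are placeholders for my own dataset, not anything from your repo:

```python
# Sketch of my first approach: copy proximal OoD images into the ID training
# split and give each one an empty label file so YOLOv8 sees it as background.
# All paths are placeholders for my own dataset layout.
import shutil
from pathlib import Path

ID_ROOT = Path("datasets/my_yolo_dataset")          # existing YOLOv8 dataset
OOD_IMAGES = Path("datasets/proximal_ood/images")   # selected proximal OoD images

img_dst = ID_ROOT / "images" / "train"
lbl_dst = ID_ROOT / "labels" / "train"

for img_path in OOD_IMAGES.glob("*.jpg"):
    # Copy the OoD image into the ID training images.
    shutil.copy(img_path, img_dst / img_path.name)
    # Empty label file -> no boxes, i.e. a pure background sample.
    (lbl_dst / f"{img_path.stem}.txt").write_text("")
```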
However, I see in your code/docs that there seems to be a second way to train: “using a balanced dataloader with ID and proxy OoD data”. I would like to confirm how to use this path properly.
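To make sure I understand what “balanced” means here, below is my guess at the idea: each batch draws a fixed fraction of samples from the ID set and the rest from the proxy OoD set. The dataset objects, the 50/50 ratio, and the helper name are my own placeholders, not taken from your implementation — please correct me if your dataloader works differently (e.g. a fixed per-batch split instead of weighted sampling).

```python
# My guess at a "balanced dataloader (ID + proxy OoD)": weighted sampling over
# the concatenation of the two datasets so that, in expectation, `ood_fraction`
# of every batch is proxy OoD. Names and ratio are placeholders, not M-HOOD code.
import torch
from torch.utils.data import ConcatDataset, DataLoader, WeightedRandomSampler

def balanced_loader(id_dataset, ood_dataset, batch_size=16, ood_fraction=0.5):
    """Return a DataLoader mixing ID and proxy-OoD samples at a target ratio."""
    combined = ConcatDataset([id_dataset, ood_dataset])

    # Per-sample weights: each split contributes its target fraction overall.
    id_weight = (1.0 - ood_fraction) / len(id_dataset)
    ood_weight = ood_fraction / len(ood_dataset)
    weights = torch.tensor(
        [id_weight] * len(id_dataset) + [ood_weight] * len(ood_dataset)
    )

    sampler = WeightedRandomSampler(weights, num_samples=len(combined), replacement=True)
    return DataLoader(combined, batch_size=batch_size, sampler=sampler)
```

Concretely, my questions are: is this the intended behavior of the balanced dataloader path, and is it expected to work better than simply merging the OoD images as background into the ID dataset (my first attempt)? Should the two approaches be combined, or are they alternatives?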