Pseudo-labeling trains a model on a small amount of labeled data and then uses that model's predictions on unlabeled data as labels for further training, thereby reducing the required labeling effort. In this paper, we investigate the effects of pseudo-labeling using the YOLOv8 object detector. We establish a naive baseline pseudo-labeling approach and explore three improvements: object-ratio-based dynamic thresholds to mitigate class imbalance, confidence scaling to retain more information from predictions with low inter-class confusion, and model ensembling to leverage the performance benefits of combining multiple detectors. On the VOC2012 dataset, pseudo-labeling does not yield significant overall gains and mostly decreases performance; however, all three proposed methods improve on the naive baseline.
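To make the core idea concrete, the following is a minimal sketch of the pseudo-label selection step with an optional object-ratio-based dynamic threshold. All names here are hypothetical illustrations, not the paper's actual pipeline, which operates on YOLOv8 detections; the `0.5 + ratio` scaling rule is an assumed example, not the paper's formula.

```python
def select_pseudo_labels(predictions, base_threshold=0.5, class_ratios=None):
    """Keep detections whose confidence exceeds a (possibly per-class) threshold.

    predictions:   list of (class_id, confidence, box) tuples from a detector
    class_ratios:  optional dict mapping class_id -> fraction of that class in
                   the labeled set; rarer classes get a lower threshold, which
                   is one way to counter class imbalance in the pseudo-labels.
    """
    selected = []
    for class_id, conf, box in predictions:
        threshold = base_threshold
        if class_ratios is not None:
            # Hypothetical scaling: under-represented classes (small ratio)
            # get a lower bar, so more of their predictions survive.
            threshold = base_threshold * (0.5 + class_ratios.get(class_id, 1.0))
        if conf >= threshold:
            selected.append((class_id, conf, box))
    return selected
```

The surviving detections would then be written out as training labels for the next round of training; in the static-threshold case (`class_ratios=None`) this reduces to the naive baseline described above.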