Optical flow estimation is a core task in computer vision, yet many existing models struggle with lighting-induced appearance changes that are common in real-world scenarios. This work presents a focused evaluation of recent deep learning-based optical flow models under controlled lighting variations, using a custom dataset of indoor and outdoor scenes recorded with a static camera. Scenarios include glare, moving shadows, intensity shifts, and outdoor shadows, with ground-truth flow defined as zero to isolate the effect of illumination changes. Four models—RAFT, GMFlow, SEA-RAFT, and FlowDiffuser—are benchmarked using standard metrics (EPE and F1-all). The results reveal that even in the absence of physical motion, several models produce substantial spurious flow, particularly under shadow and intensity variation. SEA-RAFT and RAFT show greater robustness, while GMFlow and FlowDiffuser are more sensitive to lighting artifacts. These findings highlight a critical gap in current model generalization and underscore the need for lighting-aware architectures and training strategies.
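The evaluation setup above can be made concrete with a short sketch. The snippet below computes EPE (mean per-pixel end-point error) and a KITTI-style F1-all outlier rate (pixels with EPE > 3 px and > 5% of the ground-truth magnitude); with the all-zero ground truth used here, any nonzero predicted flow above 3 px counts as an outlier. The array shapes and the example prediction values are illustrative, not taken from the paper's data.

```python
import numpy as np

def epe_map(pred, gt):
    # Per-pixel end-point error: L2 distance between predicted and GT flow vectors.
    return np.linalg.norm(pred - gt, axis=-1)

def f1_all(pred, gt):
    # KITTI-style outlier rate (percent): EPE > 3 px AND EPE > 5% of GT magnitude.
    err = epe_map(pred, gt)
    mag = np.linalg.norm(gt, axis=-1)
    outliers = (err > 3.0) & (err > 0.05 * mag)
    return 100.0 * outliers.mean()

# Zero ground truth isolates illumination-induced spurious flow:
gt = np.zeros((4, 4, 2))
pred = np.full((4, 4, 2), 3.0)  # hypothetical spurious flow, ~4.24 px per pixel

print(round(epe_map(pred, gt).mean(), 2))  # 4.24
print(f1_all(pred, gt))                    # 100.0
```

Because the ground truth is identically zero, the magnitude-relative condition is satisfied by any nonzero error, so F1-all reduces to the fraction of pixels exceeding the 3 px absolute threshold.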