Abstract

Knowing when a trained segmentation model is encountering data that differs from its training data is important. Understanding and mitigating the effects of this distributional shift matter for both performance and assurance, and are a safety concern in applications such as autonomous vehicles. This article presents a segmentation network that can detect errors caused by challenging test domains, without any additional annotation, in a single forward pass. As annotation costs limit the diversity of labeled datasets, we use easy-to-obtain, uncurated, and unlabeled data to learn to perform uncertainty estimation by selectively enforcing consistency over data augmentation. To this end, a novel segmentation benchmark based on the sense-assess-eXplain (SAX) project is used, which includes labeled test data spanning three autonomous-driving domains, ranging in appearance from dense urban to off-road. The proposed method, named γ-SSL, consistently outperforms uncertainty estimation and out-of-distribution (OoD) detection techniques on this difficult benchmark: by up to 10.7% in area under the receiver operating characteristic curve and 19.2% in area under the precision-recall curve in the most challenging of the three scenarios.
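To make the core idea concrete, the following PyTorch-style sketch illustrates how per-pixel prediction disagreement under data augmentation can serve as an uncertainty signal, and how misclassification-detection AUROC/AUPR (the metrics quoted above) can be computed from it. This is a sketch of the principle only, not the γ-SSL implementation: in the paper, consistency is enforced selectively during training so that test-time uncertainty comes from a single forward pass, whereas this illustration runs two passes. The model, augmentation, and helper names here are assumptions for illustration.

# Sketch only -- illustrates augmentation-consistency uncertainty, not the
# authors' Gamma-SSL training pipeline. `model` and `augment` are assumed.
import torch
import torch.nn.functional as F
from sklearn.metrics import roc_auc_score, average_precision_score

def augmentation_consistency_uncertainty(model, image, augment):
    """Per-pixel uncertainty from prediction disagreement under augmentation.

    model:   segmentation net mapping (B, 3, H, W) -> (B, C, H, W) logits
    image:   input batch of shape (B, 3, H, W)
    augment: photometric augmentation (e.g., color jitter), chosen so the
             two predictions remain pixel-aligned
    """
    model.eval()
    with torch.no_grad():
        p = F.softmax(model(image), dim=1)           # clean prediction
        q = F.softmax(model(augment(image)), dim=1)  # augmented prediction
    log_p = p.clamp_min(1e-8).log()
    log_q = q.clamp_min(1e-8).log()
    # Symmetrized KL divergence over the class dimension: pixels where the
    # two predictive distributions disagree score as more uncertain.
    return 0.5 * ((p * (log_p - log_q)).sum(1) + (q * (log_q - log_p)).sum(1))

def misclassification_detection_scores(uncertainty, pred, label, ignore_index=255):
    """AUROC / AUPR for ranking erroneous pixels by uncertainty.

    pred:  predicted class map (B, H, W), e.g. logits.argmax(dim=1)
    label: ground-truth class map (B, H, W)
    """
    valid = (label != ignore_index).flatten()
    errors = (pred != label).flatten()[valid].cpu().numpy().astype(int)
    scores = uncertainty.flatten()[valid].cpu().numpy()
    return roc_auc_score(errors, scores), average_precision_score(errors, scores)

A photometric augmentation is assumed so that the clean and augmented predictions stay pixel-aligned; a geometric augmentation would require warping one prediction into the other's frame before comparison.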

Paper: https://ieeexplore.ieee.org/document/10530426

BibTeX

@ARTICLE{williams2024,
  author={Williams, David S. W. and De Martini, Daniele and Gadd, Matthew and Newman, Paul},
  journal={IEEE Transactions on Robotics},
  title={Mitigating Distributional Shift in Semantic Segmentation via Uncertainty Estimation From Unlabeled Data},
  year={2024},
  volume={40},
  pages={3146-3165},
  keywords={Uncertainty;Estimation;Robots;Data models;Training;Task analysis;Training data;Autonomous vehicle (AV) navigation;deep learning in robotics and automation;introspection;out-of-distribution (OoD) detection;performance assessment;semantic scene understanding;uncertainty estimation},
  doi={10.1109/TRO.2024.3401020}}