Published January 2024
Conference Paper

RGB-X Object Detection via Scene-Specific Fusion Modules

Abstract

Multimodal deep sensor fusion has the potential to enable autonomous vehicles to visually understand their surrounding environments in all weather conditions. However, existing deep sensor fusion methods usually employ convoluted architectures with intermingled multimodal features, requiring large coregistered multimodal datasets for training. In this work, we present an efficient and modular RGB-X fusion network that can leverage and fuse pre-trained single-modal models via scene-specific fusion modules, thereby enabling joint input-adaptive network architectures to be created using small, coregistered multimodal datasets. Our experiments demonstrate the superiority of our method compared to existing works on RGB-thermal and RGB-gated datasets, performing fusion using only a small number of additional parameters.
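The core idea described above — keeping pre-trained single-modal backbones frozen and training only a small fusion module on a small coregistered dataset — can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's actual architecture: the "backbones" are stand-in fixed random projections, and the fusion module is a simple learned sigmoid gate; all names and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen single-modal "backbones": fixed random projections
# standing in for pre-trained RGB and X-modality (e.g. thermal or gated)
# feature extractors. These weights are NOT trained.
W_rgb = rng.standard_normal((64, 32))
W_x = rng.standard_normal((64, 32))

def rgb_features(batch):
    # Frozen RGB encoder (stand-in).
    return batch @ W_rgb

def x_features(batch):
    # Frozen X-modality encoder (stand-in).
    return batch @ W_x

class FusionModule:
    """Tiny scene-specific fusion gate. Only these parameters would be
    trained on the small coregistered multimodal dataset."""

    def __init__(self, dim):
        self.w = np.zeros(2 * dim)  # gate weights (trainable)
        self.b = 0.0                # gate bias (trainable)

    def __call__(self, f_rgb, f_x):
        # Per-sample sigmoid gate mixing the two frozen feature maps.
        logits = np.concatenate([f_rgb, f_x], axis=-1) @ self.w + self.b
        gate = 1.0 / (1.0 + np.exp(-logits))
        return gate[..., None] * f_rgb + (1.0 - gate[..., None]) * f_x

# Batch of 4 flattened, coregistered input pairs (hypothetical shapes).
rgb = rng.standard_normal((4, 64))
x = rng.standard_normal((4, 64))

fuse = FusionModule(32)
fused = fuse(rgb_features(rgb), x_features(x))
print(fused.shape)       # (4, 32)
print(fuse.w.size + 1)   # 65 trainable fusion parameters
```

The point of the sketch is the parameter accounting: the frozen backbones carry all representational capacity, while the fusion module adds only a handful of trainable parameters, which is what makes training on small coregistered datasets feasible.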

Copyright and License

© 2024 IEEE.

Code Availability

Our code is available at https://github.com/dsriaditya999/RGBXFusion.

Acknowledgement

This work was funded by the Ford University Research Program.

Additional details

Created:
May 2, 2024
Modified:
May 2, 2024