Abstract
Anomaly detection in multimodal sensor environments is challenging when data quality varies across modalities due to environmental noise, sensor degradation, or communication disruptions. This paper proposes a Selective Modality Weighting (SMW) framework that dynamically adjusts the contribution of each sensory modality based on its reliability and information quality, enabling robust anomaly identification. The framework integrates an adaptive attention mechanism that learns modality-specific confidence scores during training and applies selective weighting during inference, suppressing unreliable data streams while amplifying trustworthy signals. Our methodology combines deep autoencoder architectures with cross-modal consistency validation to establish baseline normality patterns across heterogeneous sensor types. Experimental evaluations on industrial monitoring datasets demonstrate that SMW achieves higher anomaly detection accuracy than static fusion approaches, particularly under simulated sensor noise levels ranging from 10% to 40%. Results show a 12.7% F1-score improvement over baseline multimodal methods under high-noise conditions. The proposed framework offers a practical solution for deployment in real-world environments where sensor reliability cannot be guaranteed, contributing to more resilient anomaly detection systems for critical infrastructure monitoring, industrial quality control, and autonomous system safety.
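The confidence-weighted fusion idea summarized above can be illustrated with a minimal sketch. This is not the authors' implementation; all function names are hypothetical, and it assumes (as one plausible instantiation) that modality confidence is derived from per-modality autoencoder reconstruction error via a softmax, so that noisier modalities contribute less to the fused representation.

```python
import numpy as np

def selective_modality_weights(recon_errors, temperature=1.0):
    """Map per-modality reconstruction errors to confidence weights.

    Lower reconstruction error -> higher confidence; weights are a
    softmax over negative errors and sum to 1. (Hypothetical sketch,
    not the paper's learned attention mechanism.)
    """
    errors = np.asarray(recon_errors, dtype=float)
    logits = -errors / temperature
    logits -= logits.max()          # numerical stability
    w = np.exp(logits)
    return w / w.sum()

def weighted_fusion(features, weights):
    """Combine per-modality feature vectors using confidence weights."""
    features = np.asarray(features, dtype=float)
    return (weights[:, None] * features).sum(axis=0)

# Example: a clean modality (low error) outweighs a degraded one (high error).
w = selective_modality_weights([0.1, 0.9, 0.2])
fused = weighted_fusion(np.ones((3, 4)), w)
```

In this toy setting the modality with error 0.9 receives the smallest weight, mimicking the suppression of unreliable data streams described in the abstract; the `temperature` parameter controls how sharply weighting concentrates on the most reliable modality.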

This work is licensed under a Creative Commons Attribution 4.0 International License.
Copyright (c) 2026 Haolin Zheng, Mingrui Cao, Andrew Keller (Author)