Abstract
Anomaly detection in multivariate time series (MTS) is crucial in domains such as industrial monitoring, cybersecurity, healthcare, and autonomous driving. Deep learning approaches have improved detection accuracy but lack interpretability. We propose an explainable anomaly detection (XAD) framework built on a sparse non-linear vector autoregressive network (SNL-VAR-Net), which combines neural networks with vector autoregression to pair non-linear representation learning with an interpretable model. We employ regularization to enforce sparsity, enabling efficient handling of long-range dependencies, and use augmented Lagrange multiplier-based techniques for low-rank and sparse decomposition to reduce the impact of noise. Evaluation on publicly available datasets shows that SNL-VAR-Net achieves performance comparable to deep learning methods with better interpretability.