Toward human-centered explainability: Natural language explanations for anomaly detection

Published in Information Systems Frontiers (Springer), 2026

Recommended citation: Padín-Torrente, H., Carneiro-Diaz, V., & Ortega-Fernandez, I. (2026). Toward human-centered explainability: Natural language explanations for anomaly detection. Information Systems Frontiers, 1–17. https://doi.org/10.1007/s10796-026-10717-3

Anomaly-detection systems based on artificial intelligence are increasingly deployed in cybersecurity and industrial environments, but their outputs are often difficult for human analysts to interpret. This paper explores human-centered explainability for anomaly detection through the generation of natural-language explanations that help analysts understand model predictions and anomalous behaviors.

The work investigates methodologies for translating complex machine-learning outputs into interpretable, context-aware explanations that support decision-making in cybersecurity operations. By combining anomaly-detection techniques with natural-language explanation mechanisms, the proposed approach improves the transparency and usability of AI-assisted systems and strengthens analyst trust in them.

The results demonstrate the potential of natural-language explainability frameworks to bridge the gap between advanced anomaly-detection models and human-centered cybersecurity analysis.
