FAIR - Explainability in Neural Networks for Image Classification Tasks

Call: INCIBE - Spanish National Cybersecurity Institute

Dates: -

Project Website

Abstract

FAIR investigates explainability and transparency mechanisms for deep neural networks applied to image-classification tasks, with a particular focus on trustworthy artificial intelligence for cybersecurity and high-risk AI applications. The project aims to improve the interpretability, robustness, and accountability of modern AI systems by developing methodologies that provide deeper insight into neural-network decision-making.

Research activities include the analysis of explainable AI (XAI) techniques for deep learning, interpretability evaluation frameworks, trustworthy image-classification pipelines, and robustness analysis against adversarial attacks and other security threats. The main objective is to propose novel explainable methods for detecting manipulated online content, such as deepfakes in images and videos.
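To illustrate the kind of XAI technique studied in this line of work, the sketch below computes a vanilla gradient-based saliency map, which highlights the input pixels that most influence a classifier's decision. This is a minimal toy illustration, not the project's actual method: the linear "classifier", its random weights, the image dimensions, and the two-class setup (e.g. real vs. manipulated) are all hypothetical, and a real deep network would obtain the gradient via backpropagation.

```python
import numpy as np

# Toy setup: a flattened 8x8 "image" and a random linear classifier.
# All shapes and weights here are hypothetical illustrations.
rng = np.random.default_rng(0)

H, W = 8, 8                # toy image dimensions
n_classes = 2              # e.g. real vs. manipulated content

x = rng.random(H * W)                            # flattened toy image
weights = rng.standard_normal((n_classes, H * W))  # toy classifier weights

# Class scores (logits) of the linear model: s = W x
scores = weights @ x
c = int(np.argmax(scores))  # predicted class

# Vanilla gradient saliency: |d s_c / d x|. For a linear model this
# gradient is just the c-th weight row; a deep network would compute
# it with backpropagation.
saliency = np.abs(weights[c]).reshape(H, W)

# Normalise to [0, 1] so the map can be rendered as a heat map
saliency = (saliency - saliency.min()) / (saliency.max() - saliency.min())
```

The resulting `saliency` array can be overlaid on the input image to show which regions drove the prediction, which is the basic building block behind more elaborate attribution methods.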

Details

  • Funded under: INCIBE (CPP4 – CPP001/24)
  • Coordinator: GRADIANT
  • Consortium: Universidade de Vigo, CITMaGA
  • Role: Research team member
  • Research topics: Explainable AI, Trustworthy AI, Neural Networks, Deepfake Detection