Advancing Explainable AI: Testing and Enhancing Techniques Across Multidisciplinary Use-Cases
Presenter: Simone Scardapane
Introduction
The MUCCA project
MUCCA: Multi-disciplinary Use Cases for Convergent new Approaches to AI explainability. CHIST-ERA IV xAI H2020 EU grant, 2.2021-7.2024
The MUCCA consortium
Sapienza University of Rome (IT)
Departments of Physics, Physiology, and Information Engineering
HEP: data analysis, detectors, simulation; AI: ML/DL methods in basic/applied research and industry.
Istituto Nazionale Fisica Nucleare (IT)
Rome group
Fundamental research with cutting edge technologies and instruments, applications (HEP, medicine)
University of Sofia St. Kl. Ohridski (BG), Faculty of Physics
Extended expertise in detector development, firmware, and experiment software in HEP.
Polytechnic University of Bucharest (RO), Department of Hydraulics, Hydraulic Equipment and Environmental Engineering
Expertise in complex fluids and microfluidics: mucus/saliva rheology, reconstruction and simulation of respiratory airways, and AI applications for airflow prediction in the airways.
University of Liverpool (UK)
Department of Physics
Physics data analysis at hadron collider experiments, simulation, and ML/DL methods in HEP.
Medlea S.r.l.s (IT)
High-tech startup with an established track record in medical image analysis and high-performance simulation, and the capability to develop and deploy industry-standard software solutions.
Istituto Superiore di Sanità (IT)
Expertise in neural network modeling, cortical network dynamics, and theory-inspired data analysis.
AI for scientific discovery
Contents
Explainability (xAI) as the potential “bridge” between the AI expert and the scientist.
Research questions:
The Use Cases
WP1: HEP Physics
Application of AI methods to searches for New Physics at ATLAS @ LHC; xAI to improve transparency and to study the impact of systematic errors.
WP2: HEP detectors
Application of AI methods to calorimeter detectors (PADME); xAI to improve performance and the comprehension of systematics.
WP3: HEP real-time systems
Develop AI-based real-time selection algorithms for FPGAs at ATLAS; use xAI methods to understand complex systems.
WP4: Medical imaging
Develop an xAI pipeline for the segmentation of brain tumours in magnetic resonance imaging; use publicly available databases for xAI developments, focusing on the explainability of the training strategy.
WP5: Functional imaging
Test xAI methodology on respiratory systems; analyse complex systems (the passage of air and mucus) to derive models and test xAI.
WP6: Neuroscience
Test xAI techniques to uncover computational brain strategies and to select dynamical neural models.
WP7: xAI tools
Survey xAI methods relevant to the use cases; develop xAI usage pipelines and analyse the results.
MUCCA use cases
Real-time HEP triggers
Real-time Triggers in HEP
Goal: reconstruct the momentum and angle of a muon track from the RPC detector hit information in less than 400 ns.
Strategy: multi-stage AI model compression based on quantisation and knowledge transfer.
[Figure: RPC hit map, highlighting the pattern of a muon particle among noise hits]
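The compression strategy above, quantisation plus knowledge transfer, can be sketched as follows. This is a minimal NumPy illustration, not the actual trigger model: the student is trained against temperature-softened teacher outputs (knowledge distillation), and its weights are uniformly quantised to a low bit width, as one would for FPGA deployment. All names, shapes, and values here are illustrative.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; higher T yields softer targets.
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # KL(teacher || student) on softened distributions, scaled by T^2,
    # blended with the usual cross-entropy on the hard labels.
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kd = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    ce = -np.log(softmax(student_logits)[np.arange(len(labels)), labels] + 1e-12)
    return np.mean(alpha * (T ** 2) * kd + (1 - alpha) * ce)

def quantize_uniform(w, n_bits=4):
    # Symmetric uniform quantisation of weights to n_bits levels,
    # a stand-in for FPGA-friendly fixed-point weights.
    scale = np.abs(w).max() / (2 ** (n_bits - 1) - 1)
    return np.round(w / scale) * scale

# Toy example: two classes, a batch of three events.
teacher = np.array([[4.0, -1.0], [0.5, 0.3], [-2.0, 3.0]])
student = np.array([[2.0, 0.0], [0.2, 0.1], [-1.0, 1.5]])
labels = np.array([0, 0, 1])
loss = distillation_loss(student, teacher, labels)
w_q = quantize_uniform(np.array([0.07, -0.93, 0.41]), n_bits=4)
```

The student trained this way can match the teacher far better than a student trained on hard labels alone, which is what the "Student w/ teacher" vs "Student w/o teacher" comparison in the performance plots measures.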
The model
Performance
[Figures: inference time per event on the FPGA (Xilinx Ultrascale+ XCV13P); single-muon trigger efficiency curve for a nominal threshold of 10 GeV; FPGA resource occupation, comparing the Teacher, the Student w/o teacher, and the Student w/ teacher]
Strategy 1: saliency maps
Overabundance of (potentially conflicting) explanations!
Strategy 1: saliency maps
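As an illustration of what a saliency map computes: the relevance of each input feature is the magnitude of the model score's sensitivity to that feature. The toy model and the finite-difference gradient below are purely illustrative (real studies backpropagate through the trained network instead).

```python
import numpy as np

def score(x, w):
    # Toy "model": a fixed nonlinear scalar score, standing in for a
    # network's output for the predicted class.
    return np.tanh(w @ x)

def saliency(x, w, eps=1e-5):
    # Finite-difference approximation of |d score / d x_i| per feature.
    s = np.zeros_like(x)
    for i in range(len(x)):
        xp, xm = x.copy(), x.copy()
        xp[i] += eps
        xm[i] -= eps
        s[i] = abs(score(xp, w) - score(xm, w)) / (2 * eps)
    return s

w = np.array([2.0, 0.0, -0.5])   # feature 1 is irrelevant by construction
x = np.array([0.3, 1.0, 0.2])
sal = saliency(x, w)
# The feature with zero weight gets zero saliency; the others are
# ranked by how strongly the score reacts to them.
```

Different saliency methods (gradients, gradient x input, relevance propagation, occlusion, ...) apply different definitions of "sensitivity", which is one source of the conflicting explanations noted above.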
Strategy 2: soft decision trees
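A soft decision tree replaces hard splits with sigmoid gates, so every inner node routes an example to both children with complementary probabilities and each leaf holds a class distribution; the learned gates and leaves are then directly inspectable. A minimal depth-1 sketch, with made-up parameters:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def soft_tree_predict(x, w, b, leaf_left, leaf_right):
    # Inner node: the probability of routing right is a sigmoid of a
    # learned linear filter; the prediction is the gate-weighted
    # mixture of the two leaf class distributions.
    p_right = sigmoid(w @ x + b)
    return p_right * leaf_right + (1.0 - p_right) * leaf_left

# Illustrative parameters for a 2-class, 3-feature problem.
w = np.array([1.5, -2.0, 0.5])
b = 0.1
leaf_left = np.array([0.9, 0.1])    # mostly class 0
leaf_right = np.array([0.2, 0.8])   # mostly class 1
pred = soft_tree_predict(np.array([1.0, 0.0, 0.0]), w, b, leaf_left, leaf_right)
```

In the distillation setting, such a tree is trained to mimic the black-box network, trading some accuracy for a model whose decision path can be read off node by node.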
Strategy 3: data attribution
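Data attribution asks which training examples most influenced a given prediction. TracIn-style influence approximates this as a sum over training checkpoints of the learning rate times the dot product between the training example's loss gradient and the test example's loss gradient. A minimal sketch for logistic regression, with illustrative checkpoints and data:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_logloss(w, x, y):
    # Gradient of the logistic loss w.r.t. the weights for one example.
    return (sigmoid(w @ x) - y) * x

def tracin_influence(checkpoints, lrs, x_train, y_train, x_test, y_test):
    # TracIn: sum over checkpoints of lr * <grad(train), grad(test)>.
    # Positive influence = "proponent", negative = "opponent".
    return sum(
        lr * grad_logloss(w, x_train, y_train) @ grad_logloss(w, x_test, y_test)
        for w, lr in zip(checkpoints, lrs)
    )

checkpoints = [np.array([0.5, -0.2]), np.array([0.8, -0.4])]  # saved weights
lrs = [0.1, 0.1]                                              # per-checkpoint learning rates
x_test, y_test = np.array([1.0, 0.5]), 1
infl_similar = tracin_influence(checkpoints, lrs, np.array([1.1, 0.4]), 1, x_test, y_test)
infl_opposite = tracin_influence(checkpoints, lrs, np.array([1.0, 0.5]), 0, x_test, y_test)
```

A similar training example with the same label acts as a proponent (positive influence), while the same input with the opposite label acts as an opponent (negative influence), which is the vocabulary used for the ATLAS events later in the talk.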
MUCCA use cases
Search for new physics at ATLAS
Introduction
Goal: use two searches for new physics by the ATLAS Collaboration at CERN as demonstrators of the applicability of ML techniques and as a testbed for xAI.
Search 1 (SUSY): search for dark matter candidates resulting from the decay of new particles predicted by Supersymmetry.
Search 2 (DARK): search for “dark” photons, light particles belonging to a new hidden sector, not yet discovered because they interact too feebly with ordinary matter.
DARK- Dark photons search
The signal leaves a different signature in the detector with respect to the background (the signal signature is effectively unknown). The ML discriminator (a 3D-CNN) performs image classification, trained to distinguish background processes from signal by mapping clusters of hadrons (jets) into 3D coordinates.
ATLAS calorimeter system
The full pipeline
The ATLAS detector orthogonal view
3D image (sparse)
Graph representation (sparse)
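Since the calorimeter image is sparse, the hits can instead be encoded as a graph whose nodes are energy deposits and whose edges connect each hit to its k nearest neighbours in 3D. A minimal sketch of such a k-NN graph construction (the data here is made up; the real pipeline builds graphs from ATLAS calorimeter clusters):

```python
import numpy as np

def knn_graph(points, k=2):
    # Edge list connecting every node to its k nearest neighbours
    # (Euclidean distance in 3D), excluding self-loops.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nbrs = np.argsort(d, axis=1)[:, :k]
    return [(i, int(j)) for i in range(len(points)) for j in nbrs[i]]

# Five sparse "hits" in (x, y, z), e.g. calorimeter cell centres:
# two well-separated clusters of deposits.
hits = np.array([
    [0.0, 0.0, 0.0],
    [0.1, 0.0, 0.0],
    [0.0, 0.1, 0.1],
    [5.0, 5.0, 5.0],
    [5.1, 5.0, 5.0],
])
edges = knn_graph(hits, k=2)
```

The graph keeps only the occupied cells and their local neighbourhoods, avoiding the mostly-empty voxel grid that a dense 3D image representation would require.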
Ongoing research (unpublished)
Saliency maps
TracIn model
Opponent (signal event)
Takeaways
Conclusion
A new generation of xAI?
Post-hoc explainability
…
Transformer
“Lion”
Explainer (e.g., relevance)
“Intrinsic” interpretability
…
Transformer
“Lion”
Token selection
Discrete selection!
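Discrete token selection is what makes intrinsic interpretability hard to train: keeping only the top-k tokens is non-differentiable, so a common workaround is a straight-through estimator, where the forward pass uses a hard top-k mask while gradients flow through the soft score distribution. A minimal NumPy sketch of the forward side (names and sizes are illustrative):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def select_tokens(scores, k):
    # Hard top-k mask over per-token relevance scores. In a
    # straight-through setup the forward pass uses this mask, while
    # the backward pass differentiates through the soft scores.
    mask = np.zeros_like(scores)
    mask[np.argsort(scores)[-k:]] = 1.0
    return mask

scores = np.array([0.1, 2.3, -0.5, 1.7, 0.0])  # per-token relevance
probs = softmax(scores)                         # soft path (backward)
mask = select_tokens(scores, k=2)               # hard path (forward)
# Only the k selected tokens reach the classifier head, so the
# explanation *is* the set of kept tokens.
```

Because the model literally cannot see the discarded tokens, the selected subset is a faithful explanation by construction, unlike a post-hoc relevance map.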
A practical example
In-the-loop explainability (controllability)
…
Transformer
“Lion”
Token selection
Human evaluation
Closed-loop xAI
Thanks for listening
Simone Scardapane, Assistant Professor