1 of 28

Advancing Explainable AI: Testing and Enhancing Techniques Across Multidisciplinary Use-Cases

Presenter: Simone Scardapane

2 of 28

Introduction

The MUCCA project

3 of 28

MUCCA: Multi-disciplinary Use Cases for Convergent new Approaches to AI explainability
CHIST-ERA IV xAI H2020 EU grant, 2.2021-7.2024


4 of 28

The MUCCA consortium


Sapienza University of Rome (IT)

Departments of Physics, Physiology, and Information Engineering

HEP: data analysis, detectors, simulation; AI: ML/DL methods in basic and applied research and in industry.

Istituto Nazionale Fisica Nucleare (IT)

Rome group

Fundamental research with cutting-edge technologies and instruments; applications in HEP and medicine.

University of Sofia St. Kl. Ohridski (BG), Faculty of Physics

Extended expertise in detector development, firmware, and experiment software in HEP.

Polytechnic University of Bucharest (RO), Department of Hydraulics, Hydraulic Equipment and Environmental Engineering

Complex fluids and microfluidics expertise: mucus/saliva rheology, reconstruction and simulation of respiratory airways, AI applications for airflow prediction in respiratory conduits.

University of Liverpool (UK)

Department of Physics

Physics data analysis at hadron collider experiments, simulation, and ML/DL methods in HEP.

Medlea S.r.l.s (IT)

High-tech startup with an established track record in medical image analysis and high-performance simulation, capable of developing and deploying industry-standard software solutions.

Istituto Superiore di Sanità (IT)

Expertise in neural network modeling, cortical network dynamics, and theory-inspired data analysis.

5 of 28

AI for scientific discovery

Black-box

6 of 28

Contents

Explainability (xAI) as the potential “bridge” between the AI expert and the scientist.

Research questions:

  1. How to select a “good” xAI algorithm? Which method among hundreds (saliency maps, data attribution, …)?
  2. How to combine multiple, potentially contradictory explanations (convergent explanation)?
  3. How do we “explain the explanation”?

[Diagram: interpretability as the bridge between AI models and the domain scientist]

7 of 28

The Use Cases

WP1: HEP Physics

Application of AI methods to searches for New Physics at ATLAS @ LHC. xAI to improve transparency and the understanding of the impact of systematic errors.

WP2: HEP detectors

Application of AI methods to calorimeter detectors (PADME). xAI to improve performance and the comprehension of systematics.

WP3: HEP real-time systems

Develop AI-based real-time selection algorithms for FPGAs at ATLAS. Use xAI methods to understand complex systems.

WP4: Medical Imaging

Develop an xAI pipeline for segmentation of brain tumours in magnetic resonance imaging. Use publicly available databases for xAI developments, focusing on the explainability of the training strategy.

WP6: Neuroscience

Test xAI techniques to uncover computational brain strategies and to select dynamical neural models.

WP5: Functional imaging

Test xAI methodology in respiratory systems. Analyse complex systems (passage of air and mucus) to derive models and test xAI.

WP7: xAI tools

Survey of xAI methods relevant for the use cases; develop xAI usage pipelines and analyse the results.


9 of 28

MUCCA use cases

Real-time HEP triggers

10 of 28

Real-time Triggers in HEP

Goal: reconstruct momentum and angle of a muon track from the RPC detector hit information in less than 400 ns.

Strategy: multi-stage AI model compression based on quantisation and knowledge transfer.

[Figure: RPC hit map showing the pattern of a muon particle among noise hits]
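A minimal sketch of the compression strategy under stated assumptions (hypothetical layer sizes and loss weighting; the actual trigger network, bit widths and training setup are not reproduced here): a large float32 teacher is trained first, then a much smaller student is trained to match both the labels and the teacher's outputs (knowledge transfer), before being quantised (e.g. to 4 bits) and deployed through hls4ml or a custom VHDL implementation.

```python
import torch
import torch.nn as nn

# Hypothetical sizes: the real trigger model, input encoding and bit widths differ.
N_HITS, N_OUT = 64, 2          # flattened RPC hit map -> (momentum, angle)

teacher = nn.Sequential(        # large float32 network, trained first
    nn.Linear(N_HITS, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, N_OUT),
)
student = nn.Sequential(        # small network, later quantised (e.g. 4-bit) for the FPGA
    nn.Linear(N_HITS, 32), nn.ReLU(),
    nn.Linear(32, N_OUT),
)

mse = nn.MSELoss()
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
alpha = 0.5                     # balance between ground truth and teacher guidance

def distillation_step(x, y_true):
    """One knowledge-transfer step: the student fits both the labels
    and the (frozen) teacher predictions."""
    with torch.no_grad():
        y_teacher = teacher(x)
    y_student = student(x)
    loss = alpha * mse(y_student, y_true) + (1 - alpha) * mse(y_student, y_teacher)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# toy usage with random data
x = torch.randn(128, N_HITS)
y = torch.randn(128, N_OUT)
print(distillation_step(x, y))
```

The distillation term lets the tiny student recover part of the accuracy that would otherwise be lost by shrinking and quantising the model.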

11 of 28

The model

12 of 28

Performance


Inference time per event on FPGA

Xilinx UltraScale+ XCVU13P

Single muon trigger efficiency curve for a nominal threshold of 10 GeV

  • Teacher fp32: 5 ms (Tesla V100 GPU)
  • Student 4 bit: 438 ns (hls4ml)
  • Student 4 bit: 84 ns (our VHDL implementation)

[Table: FPGA resource occupation for the teacher, the student trained without the teacher, and the student trained with the teacher]

13 of 28

Strategy 1: saliency maps

Overabundance of (potentially conflicting) explanations!
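For reference, a minimal gradient-based saliency sketch (plain input gradients on a throwaway model; the project compares this against several other attribution methods, which is exactly where the conflicts appear):

```python
import torch

def input_saliency(model, x, target_idx):
    """Gradient of the chosen output w.r.t. the input: large |gradient|
    marks the input cells (e.g. detector hits) the prediction is most sensitive to."""
    x = x.clone().detach().requires_grad_(True)
    out = model(x)
    out[..., target_idx].sum().backward()
    return x.grad.abs()

# toy usage with a throwaway model
model = torch.nn.Sequential(torch.nn.Linear(64, 16), torch.nn.ReLU(), torch.nn.Linear(16, 2))
sal = input_saliency(model, torch.randn(8, 64), target_idx=0)
print(sal.shape)  # (8, 64): one relevance value per input cell
```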

14 of 28

Strategy 1: saliency maps

15 of 28

Strategy 2: soft decision trees
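A minimal sketch of the soft decision tree idea (in the spirit of Frosst & Hinton's distilled soft trees; depth, training details and the MUCCA models are not reproduced here): each inner node is a learned sigmoid gate, each leaf a learned class distribution, and the prediction is a mixture of leaves weighted by path probabilities, which can be inspected as a hierarchical explanation.

```python
import torch
import torch.nn as nn

class SoftDecisionTree(nn.Module):
    """Minimal soft decision tree: inner nodes are sigmoid gates over the input,
    leaves hold class logits, and the output mixes the leaf distributions
    by their path probabilities."""
    def __init__(self, in_dim, n_classes, depth=3):
        super().__init__()
        self.depth = depth
        self.inner = nn.Linear(in_dim, 2 ** depth - 1)       # one gate per inner node (breadth-first)
        self.leaves = nn.Parameter(torch.zeros(2 ** depth, n_classes))

    def forward(self, x):
        p_right = torch.sigmoid(self.inner(x))               # (B, n_inner): P(go right) at each node
        probs = torch.ones(x.shape[0], 1, device=x.device)   # start at the root
        start = 0
        for level in range(self.depth):
            n_nodes = 2 ** level
            g = p_right[:, start:start + n_nodes]
            # each node splits its probability mass between its two children
            probs = torch.stack([probs * (1 - g), probs * g], dim=-1).flatten(1)
            start += n_nodes
        class_probs = probs @ torch.softmax(self.leaves, dim=-1)
        return class_probs, probs                             # probs = interpretable path/leaf weights

# toy usage: 64 input features, binary classification
tree = SoftDecisionTree(in_dim=64, n_classes=2, depth=3)
y_prob, path_prob = tree(torch.randn(8, 64))
print(y_prob.shape, path_prob.shape)                          # (8, 2) and (8, 8)
```

In practice such a tree is typically trained to mimic the neural network's outputs, so the learned gates give a human-readable approximation of the black-box decision.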

16 of 28

Strategy 3: data attribution
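A minimal single-checkpoint sketch of the TracIn idea behind data attribution (the original method sums these scores over several training checkpoints): the influence of a training example on a test example is approximated by the dot product of their loss gradients, with positive scores marking "proponents" and negative scores "opponents".

```python
import torch
import torch.nn as nn

def grad_vector(model, loss_fn, x, y):
    """Flattened gradient of the loss on a single example."""
    loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
    grads = torch.autograd.grad(loss, [p for p in model.parameters() if p.requires_grad])
    return torch.cat([g.flatten() for g in grads])

def tracin_score(model, loss_fn, x_train, y_train, x_test, y_test):
    """TracIn-style influence at one checkpoint: positive = proponent
    (the training example pushes the test prediction towards its label),
    negative = opponent."""
    g_tr = grad_vector(model, loss_fn, x_train, y_train)
    g_te = grad_vector(model, loss_fn, x_test, y_test)
    return torch.dot(g_tr, g_te).item()

# toy usage
model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 2))
loss_fn = nn.CrossEntropyLoss()
score = tracin_score(model, loss_fn,
                     torch.randn(16), torch.tensor(1),
                     torch.randn(16), torch.tensor(0))
print(score)
```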

17 of 28

MUCCA use cases

Search for new physics at ATLAS

18 of 28

Introduction

Goal: use two searches for new physics by the ATLAS Collaboration at CERN as demonstrators of the applicability of ML techniques and as a testbed for xAI.

Search 1 - SUSY: search for dark matter candidates resulting from the decay of new particles predicted by Supersymmetry.

Search 2 - DARK: search for “dark” photons, light particles belonging to a new hidden sector, not yet discovered because they interact too feebly with ordinary matter.

19 of 28

DARK- Dark photons search

The signal leaves a different signature in the detector with respect to the background (the signal signature is effectively unknown). The ML discriminator (a 3D CNN) is trained as an image classifier to distinguish background processes from signal, mapping clusters of hadrons (jets) into 3D coordinates.

ATLAS calorimeter system
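A minimal sketch of a 3D convolutional classifier of this kind (hypothetical grid size and channel counts; the actual DARK architecture is not reproduced here): calorimeter energy deposits are voxelised into a sparse 3D grid and classified as background vs. signal.

```python
import torch
import torch.nn as nn

# Hypothetical voxelisation: (depth, eta, phi) grid of calorimeter energy deposits.
GRID = (8, 16, 16)

model = nn.Sequential(
    nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool3d(2),
    nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1),
    nn.Flatten(),
    nn.Linear(32, 2),            # logits: background vs. dark-photon signal
)

x = torch.zeros(4, 1, *GRID)     # a batch of 4 sparse calorimeter "images"
x[:, :, 3, 7, 7] = 1.0           # toy energy deposit
print(model(x).shape)            # (4, 2)
```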

20 of 28

The full pipeline

The ATLAS detector (orthogonal view)

3D image (sparse)

Graph representation (sparse)
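The step from the sparse 3D image to the graph representation can be sketched as follows (a hypothetical k-nearest-neighbour construction over the fired cells; the project's actual graph-building choices may differ):

```python
import numpy as np
from scipy.spatial import cKDTree

def image_to_knn_graph(voxels, k=4):
    """Turn a sparse 3D calorimeter image into a graph:
    nodes = non-empty cells (position + energy), edges = k nearest neighbours."""
    idx = np.argwhere(voxels > 0)                      # (N, 3) coordinates of fired cells
    energy = voxels[voxels > 0]                        # (N,) node feature
    nodes = np.column_stack([idx, energy])             # (N, 4) node features
    tree = cKDTree(idx)
    _, nbrs = tree.query(idx, k=min(k + 1, len(idx)))  # +1: the closest hit is the cell itself
    edges = [(i, int(j)) for i, row in enumerate(nbrs) for j in np.atleast_1d(row)[1:]]
    return nodes, edges

# toy usage: an 8x16x16 grid with a handful of energy deposits
img = np.zeros((8, 16, 16))
img[2, 5, 5] = 1.3
img[2, 5, 6] = 0.7
img[3, 6, 5] = 0.4
img[5, 10, 10] = 2.1
nodes, edges = image_to_knn_graph(img, k=2)
print(nodes.shape, len(edges))
```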

21 of 28

Ongoing research (unpublished)

Saliency maps

  • Top 4 influential nodes
  • Top influential data from training dataset

TracIn model

Opponent (signal event)

22 of 28

Takeaways

  1. Novel AI techniques are highly effective (especially graph neural networks and compression algorithms).
  2. The many, often incompatible xAI techniques fail to provide easily digestible information to the scientists. Even for an AI expert, combining them is non-trivial.
  3. In the future, we will probably need a novel, explainable-by-design family of neural networks.

23 of 28

Conclusion

A new generation of xAI?

24 of 28

Post-hoc explainability

Transformer

“Lion”

Explainer (e.g., relevance)

25 of 28

“Intrinsic” interpretability

Transformer

“Lion”

Token selection

Discrete selection!
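A minimal sketch of what discrete token selection can look like in an explainable-by-design model (hypothetical scorer; a straight-through estimator, or alternatively a Gumbel-softmax relaxation, keeps the hard selection differentiable):

```python
import torch
import torch.nn as nn

class TokenSelector(nn.Module):
    """Scores each token and keeps a hard top-k subset, using a
    straight-through estimator so gradients still flow to the scorer."""
    def __init__(self, dim, k):
        super().__init__()
        self.scorer = nn.Linear(dim, 1)
        self.k = k

    def forward(self, tokens):                      # tokens: (B, N, D)
        scores = self.scorer(tokens).squeeze(-1)    # (B, N)
        soft = torch.sigmoid(scores)
        topk = scores.topk(self.k, dim=-1).indices
        hard = torch.zeros_like(soft).scatter(-1, topk, 1.0)
        mask = hard + soft - soft.detach()          # straight-through: hard forward, soft backward
        return tokens * mask.unsqueeze(-1), mask    # masked tokens + interpretable selection mask

# toy usage: keep 4 of 16 tokens
sel = TokenSelector(dim=32, k=4)
out, mask = sel(torch.randn(2, 16, 32))
print(out.shape, mask[0].round())
```

The selection mask is itself the explanation: the model can only use the tokens it explicitly kept.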

26 of 28

A practical example

27 of 28

In-the-loop explainability (controllability)

Transformer

“Lion”

Token selection

Human evaluation

Closed-loop xAI

28 of 28

Thanks for listening

Simone Scardapane, Assistant Professor