
Use of alternative hardware/Real Time Analysis

IWAPP discussion forum 2/5

Caterina Doglioni, Alessandro Lonardo, Filip Morawski, MP, Matteo Turisini


What is this about

With new technologies emerging (e.g., deep learning), scientific computing environments are becoming more and more heterogeneous. Several parallel computing devices (FPGAs, GPUs, etc.) can be exploited to accelerate traditional algorithms and to include deep learning components in the processing workflows. A wide set of dedicated computing devices (TPUs, IPUs, ...) targets specific use cases, e.g., convolutional neural networks, recurrent networks, graph networks. Neuromorphic computing opens the possibility of using spiking neural networks for signal processing.

In the longer term, quantum computing might offer interesting alternatives for solving large combinatorial problems. With private companies dictating the direction of innovation, big scientific collaborations might have to adapt their data processing to follow this trend, which could offer specific advantages.


Share your knowledge

HEP: moving towards a heterogeneous GPU/CPU environment. FPGAs are used in the trigger, but could also be a low-power option for parallel architectures. Custom solutions for specific environments (e.g., the L1 and HLT triggers) where we have full control over the hardware choice. People are looking at alternatives for specific tasks (e.g., IPUs for graph neural networks).

L1 triggers work in a data-flow model (no scheduling, etc.) -> custom hardware on custom electronics

HLT triggers work like computing centers (scheduling the distribution of events across cores). There, more standard techniques could be applied

GW: less data, on multiple channels (physics & noise, about 10k). Latency as low as possible, with no clear boundary. Triggers/signals to telescopes need to be generated within seconds/minutes

Main difference is in the philosophy behind data taking

  • Filter events, keeping interesting ones by trigger (HEP)
  • Keep everything, compressing if needed (Astro)

This dictates the computing and hardware needs (edge computing vs. cloud, which hardware, etc.)

Shift observed in HEP: experiments are increasingly considering real-time analysis workflows, and interest is shifting towards lossy compression rather than event filtering


Issues of interest

HEP:

Ad-hoc solutions for specific use cases (custom electronics, etc.)

For DL, special focus on custom architectures (e.g., graph networks) with specific hardware needs (e.g., sparse data handling)
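
To illustrate the sparse-data point, a minimal sketch of a graph network layer using PyTorch Geometric; the framework choice, node features and connectivity below are assumptions made purely for illustration:

    # Minimal sketch: graphs are stored sparsely, so hardware must handle irregular access patterns.
    # The framework (PyTorch Geometric), sizes and connectivity are illustrative assumptions.
    import torch
    from torch_geometric.data import Data
    from torch_geometric.nn import GCNConv

    # Connectivity is kept sparse: a 2 x num_edges index tensor instead of a dense adjacency matrix
    edge_index = torch.tensor([[0, 1, 1, 2],
                               [1, 0, 2, 1]], dtype=torch.long)
    x = torch.rand(3, 4)                      # 3 nodes with 4 features each
    data = Data(x=x, edge_index=edge_index)

    conv = GCNConv(in_channels=4, out_channels=8)
    out = conv(data.x, data.edge_index)       # message passing only along the stored edges
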

Ideal situation: a unified programming environment where one writes high-level code and it magically gets deployed on custom hardware (Alpaka, oneAPI, ...).

A change of mentality is required, which could be triggered by the challenges ahead

  • Current solutions don’t scale to the HL-LHC -> people might be interested in new workflows
  • Deep Learning is central to this, providing shortcuts that speed up analytic solutions by approximating them
  • Deep Learning is easier to integrate with technological improvements, since industry is investing massively in that direction


Ideas for (common) work

Edge computing (could HEP work with hls4ml be of interest to others? MP & Elena are discussing GW applications)
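
For edge deployment, the typical hls4ml flow converts a trained network into an HLS project for an FPGA. A minimal sketch follows; the model, FPGA part number and output directory are placeholders, not a validated configuration:

    # Minimal sketch of the hls4ml conversion flow; the model, FPGA part number and
    # output directory are placeholders, not a tested configuration.
    import hls4ml
    from tensorflow import keras

    model = keras.Sequential([
        keras.Input(shape=(16,)),
        keras.layers.Dense(32, activation="relu"),
        keras.layers.Dense(5, activation="softmax"),
    ])

    # Derive an HLS configuration (precision, reuse factors, ...) from the Keras model
    config = hls4ml.utils.config_from_keras_model(model, granularity="model")

    # Convert to an HLS project targeting a given FPGA part
    hls_model = hls4ml.converters.convert_from_keras_model(
        model,
        hls_config=config,
        output_dir="hls4ml_prj",
        part="xcvu9p-flga2104-2-e",
    )
    hls_model.compile()   # C simulation, to check agreement with the original model
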

Cloud real-time computing: CMS has a basic tool (https://github.com/fastmachinelearning/SonicCMS) that could be of interest to others. Already considered as a solution for DUNE. Possible use case: what if not all your centers have GPUs? Can you share them via the cloud?
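
On the client side, inference-as-a-service boils down to shipping tensors to a remote server and getting predictions back. A minimal sketch, assuming an NVIDIA Triton inference server as the serving backend (one option used with SONIC-style setups); the server URL and model/tensor names are placeholders:

    # Minimal sketch of a GPU-as-a-service client; assumes an NVIDIA Triton server is
    # running at the given URL, and the model/tensor names are placeholders.
    import numpy as np
    import tritonclient.http as httpclient

    client = httpclient.InferenceServerClient(url="localhost:8000")

    batch = np.random.rand(1, 16).astype(np.float32)              # stand-in for real event data
    inputs = [httpclient.InferInput("input_1", list(batch.shape), "FP32")]
    inputs[0].set_data_from_numpy(batch)
    outputs = [httpclient.InferRequestedOutput("output_1")]

    # The accelerator (GPU, IPU, ...) lives on the server; the client only moves tensors
    result = client.infer(model_name="my_model", inputs=inputs, outputs=outputs)
    scores = result.as_numpy("output_1")
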

Concrete possibility to work on anomaly detection, which seems to be a common problem of interest that could be adapted to different domains (astro/HEP/GW), different environments (L1 and HLT in HEP, GW interferometers for multi-messenger astro, etc.) and different hardware (custom electronics, heterogeneous computing at the edge, cloud) depending on latency requirements
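
One common baseline across these domains is reconstruction-based anomaly detection with an autoencoder. A minimal sketch, in which the architecture, training data and threshold are purely illustrative assumptions:

    # Minimal sketch of autoencoder-based anomaly detection; architecture, training data
    # and threshold are illustrative assumptions, not a recommended configuration.
    import numpy as np
    from tensorflow import keras

    n_features = 32
    autoencoder = keras.Sequential([
        keras.Input(shape=(n_features,)),
        keras.layers.Dense(16, activation="relu"),
        keras.layers.Dense(4, activation="relu"),       # bottleneck
        keras.layers.Dense(16, activation="relu"),
        keras.layers.Dense(n_features, activation="linear"),
    ])
    autoencoder.compile(optimizer="adam", loss="mse")

    x_normal = np.random.rand(10000, n_features).astype(np.float32)   # stand-in for "normal" events
    autoencoder.fit(x_normal, x_normal, epochs=5, batch_size=256, verbose=0)

    # Events the model reconstructs poorly are flagged as anomalous
    errors = np.mean((x_normal - autoencoder.predict(x_normal, verbose=0)) ** 2, axis=1)
    threshold = np.percentile(errors, 99)     # working point set by the acceptable false-alarm rate
    is_anomaly = errors > threshold
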

New hardware could play a role

  • Graph networks on IPU
  • Spiking neural networks for time series (see the sketch below)
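
For the last point, a minimal sketch of a leaky integrate-and-fire neuron turning a sampled time series into a spike train, written in plain NumPy to stay framework-agnostic; all constants are illustrative:

    # Minimal sketch of a leaky integrate-and-fire (LIF) neuron applied to a time series;
    # plain NumPy to stay framework-agnostic, all constants are illustrative.
    import numpy as np

    def lif_neuron(signal, beta=0.9, threshold=1.0):
        """Convert a sampled time series into a spike train."""
        v = 0.0
        spikes = np.zeros_like(signal)
        for t, x in enumerate(signal):
            v = beta * v + x           # leaky integration of the input
            if v >= threshold:         # fire when the membrane potential crosses threshold
                spikes[t] = 1.0
                v = 0.0                # reset after the spike
        return spikes

    spike_train = lif_neuron(np.abs(np.random.randn(1000)) * 0.2)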