1 of 8

LAPP at the CAF

24 November 2024


2 of 8

Team by January 2025


  • Composition of the team
    • 11 (~7.5 effective FTE) physicists with permanent positions / 7 post-docs / 7 PhD students
    • Post-docs funded by Labex, ANR, ERC, IN2P3
    • ~17 (~11 FTE) engineers in mechanics, electronics, online and Grid computing

  • Involvement of the team in computing
    • Physicist: S. Jezequel (10%, on spare time): site support + CAF
    • Engineers: site operation (share of the MUST platform allocated to ATLAS):
      • F. Chollet will retire in March 2026 and will be replaced by M. Gauthier-Lafaye as MUST technical lead
      • WLCG should avoid non-converging tools (latest example: tokens)

  • Involvement in software
    • LAr: online + firmware
      • O. Arnaez (CPJ USMB), M. Delmastro
      • F. Bellachia, S. Lafrasse, J. Jacquemier (arriving 01/12/24), E. F. Rasambatra (CDD CPJ), N. Chevillot
    • ML on the tracker (ANR, 1 post-doc + 1 PhD student) will finish by end of 2025

3 of 8

LAPP-T2 infrastructure for the 2024-2025 pledges


  • A 40 Gb/s external connection was put into production in 2024

    • In time for the 2024 Data Challenge
    • Next target is a 100 Gb/s connection, hopefully in time for the 2026 data challenge (see the rough throughput estimate below)

  • "Grid" resources (2025 pledge purchased and being deployed)

    • storage = 2025 pledge
      • Deployed: 7.5 PB (-0.8 + 1.3 = +0.5 PB)
      • Objective: 10 PB for HL-LHC (by 2028?)

    • computing = 65 k HEP-SPEC06 in 2025 (+10 k)
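To give a rough sense of what these link speeds mean for a data challenge, the sketch below converts a fully used link into an approximate daily transfer volume; this is illustrative arithmetic under the assumption of continuous, uncontended use of the link, not a measured rate.

```python
# Rough conversion from link speed to sustained daily transfer volume,
# assuming the link is used continuously and without contention
# (illustrative only, not measured Data Challenge rates).

def daily_volume_tb(link_gbps: float) -> float:
    """Approximate volume (TB) moved in 24 h by a link of link_gbps Gb/s."""
    bytes_per_second = link_gbps * 1e9 / 8        # Gb/s -> bytes/s
    return bytes_per_second * 24 * 3600 / 1e12    # bytes/day -> TB/day

for link in (40, 100):
    print(f"{link} Gb/s -> ~{daily_volume_tb(link):.0f} TB/day")
# 40 Gb/s -> ~432 TB/day, 100 Gb/s -> ~1080 TB/day
```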

4 of 8

LAPP-T2 non-pledged infrastructure in 2024-2025


  • Other "grid" resources

    • storage = 150 TB LAPP_LOCALGROUPDISK (recycled out-of-warranty Grid storage)
      • Possibility to increase by 300 TB each year
    • computing = 0

  • Other local (lab, university) resources

    • Local batch resources, also shared with the Grid
      • Very efficient for starting jobs
    • A few interactive machines shared within the laboratory

5 of 8

Analysis activity/requirements (1)


Multiboson analysis (VBS WZ and Zγ)

Team: O. Arnaez, L. Di Ciaccio, I. Koletsou, E. Sauvan, A. Carneli (Post-doc CPJ 2024-2026), L. Boudet (PhD 2023-2025), M. Dubau (PhD 2023-2026), P. Ziakas (PhD 2024-2027)

  • Use DAODs stored on sps (small format) and LOCALGROUPDISK (larger format)
    • Centralised management of the sps space for the SM group
    • Stuck with non-increasing sps space over the last years: 70 TB
    • Recurring minimal requirement of 90 TB
  • In 2024, processed on the CC-IN2P3 batch farm (choice between LAPP and CC, adapting to actual availability and reliability); a minimal submission sketch follows this list
  • 2025: start Run-3 analysis
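As an illustration of how such DAOD processing can be dispatched to a batch farm, here is a minimal submission sketch assuming a Slurm-based system like the one at CC-IN2P3; the sps path, resource requests and analysis driver are hypothetical placeholders, not the group's actual production setup.

```python
# Minimal sketch: submit one Slurm job per DAOD file staged on sps.
# Paths, resources and the analysis driver are hypothetical placeholders.
import subprocess
from pathlib import Path

SPS_INPUT = Path("/sps/atlas/<group>/daod")   # hypothetical sps area
ANALYSIS_EXE = "run_vbs_analysis.py"          # hypothetical analysis driver

def submit(daod_file: Path) -> None:
    """Wrap one DAOD file into a Slurm job script and submit it via sbatch."""
    script = f"""#!/bin/bash
#SBATCH --job-name=vbs_{daod_file.stem}
#SBATCH --ntasks=1
#SBATCH --mem=4G
#SBATCH --time=04:00:00
python {ANALYSIS_EXE} --input {daod_file}
"""
    # sbatch accepts the job script on standard input when no file is given.
    subprocess.run(["sbatch"], input=script, text=True, check=True)

if __name__ == "__main__":
    for daod in sorted(SPS_INPUT.glob("*.root")):
        submit(daod)
```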

Global remark: Run-2 analyses still finishing / Run-2 + Run-3 analyses starting

→ More sps/LOCALGROUPDISK storage will be necessary to cope with both

6 of 8

Analysis activity/requirements (2)


Single Higgs / HH

Team: M. Delmastro, N. Berger, Z. Wu (Post-doc IN2P3), K. Oleksii (Post-doc IN2P3)

  • sps usage: 12 TB
  • In 2024: HH→γγbb partial Run-3 analysis, photon ID for di-Higgs
    • Most of the analysis done on lxplus / EOS (shared ntuples)
  • 2025:
    • HH→γγbb EFT interpretation (private samples to be generated, batch for fitting: CC was used for previous EFT interpretations, part of the workflow might move there)
    • Full Run-3 HH→γγbb analysis design (ML photon ID, use of the enlarged Run-3 dataset with EasyJet ntuples, usually on EOS; part of the workflow might migrate to CC / sps)

ML Tracking: ACTS

Team: Jessica L., F. Castillo (Post-doc ANR ATRAPP 2023-2025), J. Couthures (PhD)

  • sps usage: 33 TB

7 of 8

Analysis activity/requirements (3)


SM: Drell-Yan

Team: T. Hryn’ova (ERC DITTO) + her ERC team

    • Post-docs: D. Lewis, N. Brahimi, R. Balasubramanian
    • PhD students: T. Cavaliere (2022-2025), M. Zumbihl (2024-2027), T. Duong (2024-2027)

Subjects: Drell-Yan ee/mumu/tautau + b-jets

  • Tried to use IN2P3-CC_PHYS-SM for storage of ntuples + jobs on the CERN batch system
    • Pros:
      • Lots of space available (350 TB as of January 2023)
      • Replication capability (i.e. no need to download); see the Rucio sketch after this list
    • Cons:
      • Remote interactive access, while it could be considered a “pro”, is actually slow and very inefficient
      • The CERN batch system is always very busy and more prone to failed jobs with remote access
      • Ultimately, running all systematics/variables/selections was too inefficient and resulted in warnings from the CC-IN2P3 admins
  • Moved to the UChicago Analysis Facility for 2024 to profit from the available disk storage and local batch
  • sps usage currently marginal, but new request of 10 TB for Run-3 analyses
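Since the replication capability mentioned above is what avoids manual downloads, here is a minimal sketch of how such a replica could be requested, assuming the Rucio Python client's add_replication_rule call; the scope, dataset name and lifetime are hypothetical examples, and a working ATLAS Rucio environment (account, proxy or token) is assumed.

```python
# Minimal sketch: ask Rucio for one replica of an ntuple dataset on the
# IN2P3-CC_PHYS-SM endpoint, so jobs at CC read it locally instead of
# downloading.  Scope, dataset name and lifetime are hypothetical; a valid
# ATLAS Rucio environment (RUCIO_ACCOUNT, proxy/token) is assumed.
from rucio.client import Client

client = Client()

dids = [{"scope": "user.jdoe", "name": "user.jdoe.drellyan_ntuples_v1"}]

rule_ids = client.add_replication_rule(
    dids=dids,
    copies=1,
    rse_expression="IN2P3-CC_PHYS-SM",
    lifetime=30 * 24 * 3600,  # seconds: keep the replica for ~30 days
    comment="Drell-Yan ntuples for batch/interactive use at CC",
)
print("Created rule(s):", rule_ids)
```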

8 of 8

Calibration activity/requirements


E/gamma

Electrons

  • Team: N. Brahimi
  • PhD students: M. Zumbihl (2024-2027), T. Duong (2024-2027)
  • Use GPUs to optimise DNN algorithms for electron triggers: training the DNN requires 2 GPUs with approximately 100 GB of RAM and ~1 TB of inputs on sps (a minimal training sketch is given below)
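To make the scale of this training concrete, below is a minimal PyTorch sketch of a two-GPU training loop using DataParallel; the sps input path, tensor layout and network architecture are hypothetical placeholders rather than the actual trigger training code, and the ~1 TB of real inputs would in practice be read through a streaming dataset rather than a single file.

```python
# Minimal sketch of a two-GPU DNN training loop with PyTorch DataParallel.
# The sps input path, features/labels layout and network size are
# placeholders; the real electron-trigger training differs in detail.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical pre-processed inputs staged on sps (a streaming dataset
# would replace this single file for ~1 TB of real inputs).
data = torch.load("/sps/atlas/<user>/egamma_trigger_inputs.pt")
dataset = TensorDataset(data["features"], data["labels"])
loader = DataLoader(dataset, batch_size=4096, shuffle=True, num_workers=8)

model = nn.Sequential(
    nn.Linear(data["features"].shape[1], 256), nn.ReLU(),
    nn.Linear(256, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
# Spread each batch over the two available GPUs.
model = nn.DataParallel(model, device_ids=[0, 1]).cuda()

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(10):
    for features, labels in loader:
        features, labels = features.cuda(), labels.float().cuda()
        optimizer.zero_grad()
        loss = loss_fn(model(features).squeeze(1), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```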

Photons

  • Team: M. Delmastro + M2 intern (PhD in 2025-2028)
  • Explore ML approaches to correct shower shapes (in synergy with the electron studies), e.g. optimal transport; might need GPUs for training (a minimal 1D sketch is given below)
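As a toy illustration of the optimal-transport idea, the sketch below corrects a simulated shower-shape variable to the data distribution by quantile mapping, which is the one-dimensional optimal-transport map; the Gaussian inputs are synthetic stand-ins, not real shower shapes.

```python
# Minimal 1D optimal-transport correction of a simulated shower-shape
# variable: map each MC value through its own empirical CDF and the
# inverse data CDF (quantile mapping), i.e. the 1D optimal-transport map.
# The Gaussian toy inputs stand in for real data/MC shower shapes.
import numpy as np

rng = np.random.default_rng(0)
mc = rng.normal(loc=0.020, scale=0.012, size=100_000)    # toy simulated variable
data = rng.normal(loc=0.025, scale=0.010, size=80_000)   # toy measured variable

def ot_correct(mc_values: np.ndarray, data_values: np.ndarray) -> np.ndarray:
    """Quantile-map mc_values onto the distribution of data_values."""
    # Empirical CDF value (rank-based quantile) of each MC entry.
    ranks = np.argsort(np.argsort(mc_values))
    quantiles = (ranks + 0.5) / len(mc_values)
    # Inverse data CDF evaluated at those quantiles.
    return np.quantile(data_values, quantiles)

corrected = ot_correct(mc, data)
print("data      mean/std:", data.mean(), data.std())
print("corrected mean/std:", corrected.mean(), corrected.std())
```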

B-tagging

  • Team: Z. Wu, M. Delmastro
  • All b-tagging calibration is performed at CERN on lxplus, since Alma9 is needed and only CentOS7 was available when we began (migration to CC has been possible since September, but is not yet done)