1. Field legend (one record per software below):
   Software; Consortium; Partner; Contact (technical correspondent); To be benchmarked (2024); Comments; Repository; License; Languages; Parallelism; Data formats; Resilience; DevOps; PC/WP involvement (PC1-PC3, PC5; WP1-WP7); Methods; Applications; Interfaced with.
2. GeoS
   Consortium: LLNL, Stanford, TE (TotalEnergies)
   Partner: Inria BXSO
   Contact: stefano.frambati@totalenergies.fr
   To be benchmarked (2024): NOT YET
   Repository: https://github.com/orgs/GEOSX
   License: OSS:: LGPL v*
   Languages: C++
   Methods: finite-element / spectral-element / finite-volume discretization; multiphysics coupling; ROM: RB, POD, ...; PINN; deterministic PBI; iterative methods
   Applications: Geophysics, Energy; proxy-apps
3. FreeFem++
   Consortium: Sorbonne U
   Partner: Sorbonne U, Inria PARIS
   Contact: frederic.hecht@sorbonne-universite.fr, pierre-henri.tournier@sorbonne-universite.fr, pierre.jolivet@sorbonne-universite.fr
   To be benchmarked (2024): CPU
   Comments: DSL for finite elements; parallelism is based on MPI.
   Repository: https://github.com/FreeFem/FreeFem-sources
   License: OSS:: LGPL v*
   Languages: C++
   Parallelism: MPI
   Data formats: VTK, in-house format, HDF5, Gmsh and associated formats
   DevOps: Continuous integration; Test - Unit; Test - Validation; Container - Docker; Packages - Debian
   PC/WP: PC1 - ExaMA; PC2 - WP1 - High-level approaches for developing efficient and composable parallel software; PC3 - WP2 - I/O, storage
   Methods: cG, dG/hdG; mesh adaptation; unstructured meshes; multiphysics coupling; DDM; algebraic multiphysics coupling; multi-rhs, Krylov reuse; iterative methods
   Applications: Geophysics, Energy, Health, Environment, Aero; mini-apps
   Interfaced with: PETSc, MMG/ParMMG, Scotch, MUMPS, HPDDM
4. Feel++
   Consortium: Feel++ Consortium
   Partner: Unistra, Inria Grenoble, CNRS
   Contact: christophe.prudhomme@cemosis.fr, vincent.chabannes@cemosis.fr
   To be benchmarked (2024): CPU
   Repository: https://github.com/feelpp/feelpp (Feel++: Finite Element Embedded Language and Library in C++)
   License: OSS:: LGPL v*, OSS:: GPL v*
   Languages: C++17, Python
   Parallelism: MPI; C++17-and-later parallelism; task-based C++
   Data formats: JSON, YAML, HDF5, data-management system, VTK, in-house format, Ensight, Gmsh and associated formats
   Resilience: checkpoint restart (a generic sketch of the pattern follows this entry)
   DevOps: Continuous integration; Container - Docker; Container - Singularity; Packages - Debian, Ubuntu, Fedora, Spack; Test - Unit; Test - Verification; Test - Validation
   PC/WP: PC1 - ExaMA; PC2 - WP1 - High-level approaches for developing efficient and composable parallel software; PC2 - WP2 - Just-in-Time code optimization with continuous feedback loop; PC2 - WP3 - Runtime Systems; PC2 - WP4 - Portable, scalable numerical building blocks and software
   Methods: unstructured meshes; mesh adaptation; cG, dG/hdG; time parallelization; multiphysics coupling; multiscale coupling; in-house; interface; finite-element / spectral-element / finite-volume discretization; ROM-DA: GEIM, PBDW, ...; ROM: NIRB; ROM: RB, POD, ...; DDM; algebraic multiphysics coupling; DA-stochastic: Ensemble; iterative methods
   Applications: Health, Environment, Energy, Physics; mini-apps
   Interfaced with: OpenTURNS, MMG/ParMMG, PETSc, Salome, Dymola/OpenModelica/FMU, HPDDM
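Several entries in this inventory, Feel++ among them, list "checkpoint restart" as their resilience mechanism. The sketch below illustrates the generic pattern with HDF5 (one of the listed data formats) through h5py; it is a minimal illustration, not Feel++'s own API, and the file name and helper functions are hypothetical.

```python
# Hypothetical checkpoint/restart sketch using HDF5 via h5py.
# This is NOT Feel++'s API; it only shows the resilience pattern:
# periodically persist solver state, and resume from it after a crash.
import os
import numpy as np
import h5py

CKPT = "checkpoint.h5"  # hypothetical checkpoint file name

def save_checkpoint(step, t, u):
    """Write step counter, time, and solution vector atomically."""
    tmp = CKPT + ".tmp"
    with h5py.File(tmp, "w") as f:
        f.attrs["step"] = step
        f.attrs["t"] = t
        f.create_dataset("u", data=u)
    os.replace(tmp, CKPT)  # atomic rename: never a half-written file

def load_checkpoint():
    """Resume from the last checkpoint if one exists."""
    if not os.path.exists(CKPT):
        return 0, 0.0, None
    with h5py.File(CKPT, "r") as f:
        return int(f.attrs["step"]), float(f.attrs["t"]), f["u"][:]

step, t, u = load_checkpoint()
if u is None:
    u = np.zeros(1000)      # fresh run: initial condition
dt = 1e-3
while step < 10_000:
    u = u + dt              # placeholder for the real time step
    step, t = step + 1, t + dt
    if step % 500 == 0:     # checkpoint every 500 steps
        save_checkpoint(step, t, u)
```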
5. Scimba
   Consortium: UNISTRA, INRIA
   Partner: Unistra
   Contact: emmanuel.franck@inria.fr
   Languages: Python
   Parallelism: GPU
   Methods: NN/Autoencoder; PINN (a generic PINN sketch follows this entry)
   Interfaced with: pytorch
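Scimba is listed above with PINNs built on PyTorch. As a reference point, here is a generic physics-informed training loop in plain PyTorch (not Scimba's API; network size, point counts, and iteration count are arbitrary choices) for the 1D Poisson problem -u'' = pi^2 sin(pi x) on (0,1) with u(0) = u(1) = 0, whose exact solution is sin(pi x).

```python
# Generic PINN sketch in plain PyTorch (NOT Scimba's own API).
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(          # u_theta: R -> R
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for it in range(2000):
    x = torch.rand(128, 1, requires_grad=True)        # collocation points
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    f = torch.pi ** 2 * torch.sin(torch.pi * x)
    pde_loss = ((-d2u - f) ** 2).mean()               # PDE residual
    bc_loss = (net(torch.tensor([[0.0], [1.0]])) ** 2).mean()  # Dirichlet BCs
    loss = pde_loss + bc_loss
    opt.zero_grad(); loss.backward(); opt.step()
```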
6. MMG/ParMMG
   Consortium: Inria
   Partner: Inria BXSO, U Grenoble Alpes, Sorbonne U
   To be benchmarked (2024): CPU
   Languages: C, Fortran
   Parallelism: MPI
   Data formats: in-house format
   DevOps: Continuous integration
   Methods: unstructured meshes; mesh adaptation
7. CGAL
   Consortium: Inria
   Partner: Inria CA
   Contact: pierre.alliez@inria.fr
   To be benchmarked (2024): CPU
   Comments: No GPU code yet; some algorithms are parallel in shared memory (Intel TBB).
   Repository: https://github.com/CGAL
   License: OSS:: GPL v*, OSS:: LGPL v*
   Languages: C++
   Methods: unstructured meshes
8. Hawen
   Consortium: Inria
   Partner: Inria BXSO
   Contact: florian.faucher@inria.fr
   To be benchmarked (2024): Yes
   Repository: https://gitlab.com/ffaucher/hawen
   License: OSS:: GPL v*
   Languages: Fortran
   Parallelism: MPI, Multithread-OpenMP
   Data formats: in-house format, VTK, Gmsh and associated formats
   Methods: dG/hdG; multiphysics coupling; multi-rhs, Krylov reuse; deterministic PBI; iterative methods; sensitivity analysis
   Applications: Geophysics, Energy, Astrophysics; mini-apps
   Interfaced with: MUMPS
9. Uranie
   Consortium: CEA
   Partner: CEA
   Contact: rudy.chocat@cea.fr, jean-baptiste.blanchard@cea.fr
   To be benchmarked (2024): CPU
   Repository: https://sourceforge.net/projects/uranie/
   License: OSS:: LGPL v*
   Languages: C++, Python
   Parallelism: MPI, Multithread, GPU
   Data formats: JSON, ASCII, ROOT, SQL
   DevOps: Continuous integration; Test - Unit; Test - Verification; Test - Validation
   Methods: NN/Autoencoder; iterative methods; robust optimisation; metaheuristics; sensitivity analysis; uncertainty propagation
   Interfaced with: Salome
10. Salome
   Consortium: EDF, CEA
11. Dymola/OpenModelica/FMU
   Consortium: Modelica & FMI
   Languages: C++
   Methods: multiphysics coupling; multiscale coupling
12. PETSc
   Consortium: Argonne National Laboratory
   Partner: Sorbonne U
   To be benchmarked (2024): INDIRECT
   Comments: PETSc is benchmarked at least via Feel++ and FreeFem++. A minimal standalone solve is sketched after this entry.
   License: OSS:: 2-clause BSD
   Languages: C, C++, Fortran, Python, Julia
   Parallelism: MPI, Multithread-OpenMP, GPU
   Data formats: XML, YAML, HDF5, JSON, in-house format, VTK, Gmsh and associated formats, MED
   DevOps: Continuous integration; Packages - Debian, Ubuntu, Fedora, Spack, Other; Test - Unit; Test - Verification; Test - Validation
   PC/WP: PC1 - ExaMA; PC2 - WP4 - Portable, scalable numerical building blocks and software
   Methods: DDM; algebraic multiphysics coupling; multi-precision; tensor computation; multi-rhs, Krylov reuse; randomization; low-rank; interface
   Interfaced with: FreeFem++, Feel++, MMG/ParMMG, MUMPS, Scotch, PaStIX, HPDDM
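Since PETSc is exercised here only through Feel++ and FreeFem++, a minimal standalone Krylov solve may still be useful for orientation. The sketch below uses petsc4py (assuming it is installed) to assemble a 1D Laplacian and solve it with CG plus Jacobi preconditioning; problem size and tolerances are arbitrary.

```python
# Minimal petsc4py sketch: tridiagonal 1D Laplacian solved with CG/Jacobi.
# Run serially, or under mpiexec for a distributed matrix.
from petsc4py import PETSc

n = 100
A = PETSc.Mat().createAIJ([n, n], nnz=3)    # sparse AIJ, 3 entries/row
rstart, rend = A.getOwnershipRange()
for i in range(rstart, rend):
    A[i, i] = 2.0
    if i > 0:
        A[i, i - 1] = -1.0
    if i < n - 1:
        A[i, i + 1] = -1.0
A.assemble()

b = A.createVecLeft(); b.set(1.0)           # right-hand side
x = A.createVecRight()                      # solution vector

ksp = PETSc.KSP().create()
ksp.setOperators(A)
ksp.setType("cg")                           # Krylov method
ksp.getPC().setType("jacobi")               # preconditioner
ksp.setTolerances(rtol=1e-8)
ksp.solve(b, x)
PETSc.Sys.Print("iterations:", ksp.getIterationNumber())
```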
13. Scotch
   Partner: Inria BXSO
   To be benchmarked (2024): INDIRECT
   Comments: Benchmarked via Maphys++.
   Languages: C
14. StarPU
   Partner: Inria BXSO
   To be benchmarked (2024): INDIRECT
   Languages: C
15. MUMPS
   Consortium: Mumps Technologies
   Partner: Sorbonne U
   To be benchmarked (2024): INDIRECT
   Repository: http://mumps-solver.org/
   License: OSS: Cecill-*
   Languages: Fortran
   Parallelism: MPI, Multithread-OpenMP
   Methods: multi-precision; multi-rhs, Krylov reuse; low-rank
   Interfaced with: Scotch
16. PaStIX
   Consortium: Inria, Université de Bordeaux, Bordeaux INP
   Partner: Inria BXSO
   To be benchmarked (2024): INDIRECT
   License: OSS: Cecill-*
   Languages: C
   Parallelism: MPI, Multithread
   DevOps: Continuous integration; Packages - GUIX-HPC; Container - Singularity; Test - Verification
   PC/WP: PC2 - WP1 - High-level approaches for developing efficient and composable parallel software; PC2 - WP3 - Runtime Systems
   Methods: direct solver; low-rank
   Interfaced with: Scotch
17. qr_mumps
   Consortium: CNRS
   To be benchmarked (2024): INDIRECT
   Comments: Benchmarked via Maphys++, but will also be benchmarked standalone in PC2.
   Repository: https://gitlab.com/qr_mumps/qr_mumps
   License: OSS: Cecill-*
   Languages: Fortran
   Parallelism: Task based - Runtime
   DevOps: Continuous integration; Packages - GUIX-HPC; Container - Singularity; Test - Verification
   PC/WP: PC2 - WP1 - High-level approaches for developing efficient and composable parallel software; PC2 - WP3 - Runtime Systems
   Methods: SVD, eigensolver; low-rank
   Interfaced with: StarPU, Scotch
18. ScalFMM
   Partner: Inria BXSO
   Contact: olivier.coulaud@inria.fr
   To be benchmarked (2024): NOT YET
   Repository: https://gitlab.inria.fr/solverstack/ScalFMM
   License: OSS: Cecill-*
   Languages: C++
   Parallelism: Task based - Runtime, Multithread-OpenMP, GPU, MPI
   DevOps: Continuous integration; Packages - GUIX-HPC; Container - Singularity; Test - Verification
   PC/WP: PC2 - WP1 - High-level approaches for developing efficient and composable parallel software; PC2 - WP3 - Runtime Systems
   Methods: dense / H-matrix
   Interfaced with: StarPU
19. Fabulous
   Partner: Inria BXSO
   Contact: gilles.marait@inria.fr
   To be benchmarked (2024): INDIRECT
   Repository: https://gitlab.inria.fr/solverstack/fabulous
   License: OSS: Cecill-*
   Languages: C++
   DevOps: Continuous integration; Packages - GUIX-HPC; Container - Singularity; Test - Verification
   PC/WP: PC2 - WP1 - High-level approaches for developing efficient and composable parallel software; PC2 - WP3 - Runtime Systems
   Methods: Krylov solver; multi-rhs, Krylov reuse; multi-precision
20. Maphys++
   Partner: Inria BXSO
   Contact: gilles.marait@inria.fr
   To be benchmarked (2024): HYBRID
   Repository: https://gitlab.inria.fr/solverstack/maphys/maphyspp
   License: OSS: Cecill-*
   Languages: C++, C, Fortran
   Parallelism: MPI, Multithread, GPU
   DevOps: Continuous integration; Packages - GUIX-HPC; Container - Singularity; Test - Verification
   PC/WP: PC2 - WP1 - High-level approaches for developing efficient and composable parallel software; PC2 - WP3 - Runtime Systems
   Methods: direct solver; SVD, eigensolver; Krylov solver
   Interfaced with: PaStIX, qr_mumps, MUMPS, Scotch
21. HPDDM
   Partner: Sorbonne U, Inria PARIS
   Contact: pierre@joliv.et
   To be benchmarked (2024): CPU
   Comments: Standalone library written in C++/MPI; interfaced with PETSc, FreeFEM, Feel++, Code_Aster, ...
   Repository: https://github.com/hpddm/hpddm
   License: OSS:: LGPL v*
   Languages: C, C++, Fortran, Python
   Parallelism: MPI, Multithread-OpenMP
   Data formats: in-house format
   DevOps: Continuous integration; Test - Unit; Test - Verification
   PC/WP: PC1 - ExaMA; PC2 - WP4 - Portable, scalable numerical building blocks and software
   Methods: DDM; multi-precision; tensor computation; multi-rhs, Krylov reuse; randomization; low-rank; interface
   Interfaced with: FreeFem++, Feel++, PETSc, MUMPS, PaStIX
23. PROMISE
   Partner: Sorbonne U
   Contact: Fabienne.Jezequel@lip6.fr
   To be benchmarked (2024): NOT YET
   Repository: http://promise.lip6.fr/
   License: OSS:: LGPL v*
   Languages: Python
   DevOps: Continuous integration
   PC/WP: PC1 - ExaMA
   Methods: multi-precision
   Interfaced with: CADNA
24. CADNA
   Partner: Sorbonne U
   Contact: Fabienne.Jezequel@lip6.fr
   To be benchmarked (2024): NOT YET
   Repository: http://cadna.lip6.fr/
   License: OSS:: LGPL v*
   Languages: C++, Fortran
   Parallelism: MPI, Multithread-OpenMP, GPU
   DevOps: Continuous integration
   PC/WP: PC1 - ExaMA
   Methods: multi-precision
25. Samurai
   Consortium: IP Paris
   Partner: CEA, IPP
   Contact: Loic Gouarin
   Repository: https://github.com/hpc-maths/samurai
   License: OSS:: BSD
   Languages: C++ (C++14, C++17)
   Parallelism: MPI, Multithread
   Data formats: HDF5
   DevOps: Continuous integration; Test - Unit; Test - Verification; Test - Validation; Packages - Other
   PC/WP: PC1 - ExaMA; PC2 - WP1 - High-level approaches for developing efficient and composable parallel software; PC2 - WP3 - Runtime Systems; PC2 - WP4 - Portable, scalable numerical building blocks and software; PC3 - WP2 - I/O, storage
   Methods: mesh adaptation
   Interfaced with: PETSc
26. Zellij
   Consortium: Université de Lille
   Partner: Inria Lille
   Contact: el-ghazali.talbi@univ-lille.fr
   To be benchmarked (2024): GPU
   Repository: https://github.com/ThomasFirmin/zellij
   License: OSS: Cecill-*
   Languages: Python
   Parallelism: MPI
   Resilience: checkpoint restart
   Methods: metaheuristics; iterative methods
27. pBB
   Consortium: Université de Lille
   Partner: Inria Lille
   Contact: nouredine.melab@univ-lille.fr
   To be benchmarked (2024): HYBRID
   Repository: https://gitlab.inria.fr/jgmys/permutationbb
   License: OSS: Cecill-*
   Languages: C++
   Parallelism: MPI, GPU, Multithread, Chapel
   Resilience: checkpoint restart
   Methods: iterative methods
28. TRUST Platform
   Consortium: CEA
   Partner: CEA
   Contact: pierre.ledac@cea.fr
   To be benchmarked (2024): HYBRID
   Comments: CPU-only, GPU-only, and truly hybrid benchmarks are possible.
   Repository: https://github.com/cea-trust-platform
   License: OSS::
   Languages: C++
   Parallelism: MPI, GPU
   Data formats: HDF5, MED, VTK
   DevOps: Continuous integration; Test - Unit; Test - Verification
   PC/WP: PC1, PC2, PC3, PC5
   Methods: multiphysics coupling
29. MEDCoupling
   Consortium: CEA, EDF
   Partner: CEA
   To be benchmarked (2024): NOT YET
   Comments: Not in the first benchmark phase; to come later once coupling is in place.
   Repository: https://github.com/cea-trust-platform
   License: OSS::
   Languages: C++
   Parallelism: MPI
   Data formats: HDF5, MED
   DevOps: Continuous integration; Test - Unit; Test - Verification
   PC/WP: PC1, PC2, PC3, PC5
   Methods: multiphysics coupling
30. ICoCo coupling interface
   Consortium: CEA
   Partner: CEA
   To be benchmarked (2024): NOT YET
   Comments: Not in the first benchmark phase; to come later once coupling is in place.
   Repository: https://github.com/cea-trust-platform/icoco-coupling
   License: OSS::
   Languages: C++
   Parallelism: MPI
   Data formats: MED
   PC/WP: PC1, PC2, PC3, PC5
   Methods: multiphysics coupling
31. EUROPLEXUS
   Consortium: CEA, JRC European Commission, EDF, ONERA, Safran Tech
   Partner: CEA
   To be benchmarked (2024): No
   Comments: Not useful to benchmark on its own within ExaMA.
   Languages: Fortran, C++
   Parallelism: MPI
   Data formats: MED, VTK
   DevOps: Continuous integration; Test - Unit; Test - Verification
   PC/WP: PC1, PC2, PC3, PC5
   Methods: multiphysics coupling
32. MANTA
   Consortium: CEA + consortium in development (see EUROPLEXUS)
   Partner: CEA
   Contact: olivier.jamond@cea.fr
   To be benchmarked (2024): CPU
   Comments: Code and repository access to be arranged with the contact, Olivier Jamond (transitional solutions at CEA level at this stage).
   Languages: C++
   Parallelism: MPI
   Data formats: MED, VTK, Gmsh and associated formats, MFront
   DevOps: Continuous integration; Test - Unit; Test - Verification
   PC/WP: PC1, PC2, PC3, PC5
   Methods: multiphysics coupling
   Applications: Energy; mini-apps, proxy-apps
33. MFEM
   Partner: CEA
   To be benchmarked (2024): NOT YET
   Comments: To come with the AMR PhD thesis, possibly in 2025.
   Repository: https://github.com/mfem/mfem
   License: OSS::
   Languages: C++
   Parallelism: MPI, Multithread
   Data formats: VTK, Gmsh and associated formats
   DevOps: Continuous integration; Test - Unit; Test - Verification
   PC/WP: PC1 - ExaMA
   Methods: FE exascale framework; AMR; wave propagation in heterogeneous media; deterministic PBI
34. PLEIADES
   Consortium: CEA, EDF, Framatome
   Partner: CEA
   To be benchmarked (2024): NOT YET
   Comments: Later, once PLEIADES-HPC is ready.
   Languages: Fortran, Python, C++
   Parallelism: MPI, Multithread
   Data formats: MED, HDF5, XML, MFront
   DevOps: Continuous integration; Test - Unit; Test - Verification
   PC/WP: PC1 - ExaMA; PC5 - Demonstrator
   Methods: multiscale coupling; multiphysics coupling; algebraic multiphysics coupling
   Applications: mini-apps
35. tensorflow
   Comments: PyTorch is used by colleagues; TensorFlow is richer and more stable, but a more complex approach?
   Languages: C++ (C++17), Python
   Methods: NN/Autoencoder; GAN
36. scikit-learn
   Comments: Will rather be developed in PC3.
   Languages: C++
   Methods: NN/Autoencoder; GAN
37. pytorch
   Languages: C++
   Methods: NN/Autoencoder; GAN; stochastic PBI (a minimal autoencoder sketch follows this entry)
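Several rows above list "NN/Autoencoder" as a method. For concreteness, a minimal PyTorch autoencoder looks like the following; dimensions and training data are stand-ins, not taken from any of the listed codes.

```python
# Minimal PyTorch autoencoder sketch: compress 64-dimensional samples
# into an 8-dimensional latent code by minimizing reconstruction error.
import torch

enc = torch.nn.Sequential(torch.nn.Linear(64, 8), torch.nn.Tanh())
dec = torch.nn.Sequential(torch.nn.Linear(8, 64))
params = list(enc.parameters()) + list(dec.parameters())
opt = torch.optim.Adam(params, lr=1e-3)

X = torch.randn(1024, 64)                   # stand-in training data
for epoch in range(200):
    recon = dec(enc(X))                     # encode, then decode
    loss = torch.nn.functional.mse_loss(recon, X)
    opt.zero_grad(); loss.backward(); opt.step()
```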
38. melissa
   Consortium: INRIA
   Comments: Developed by Bruno Raffin, so it will be benchmarked in PC3; we will only be users.
   Repository: https://gitlab.inria.fr/melissa
   License: BSD3
   Languages: Python, C/C++, Fortran 90
   Methods: DA-stochastic: Ensemble (a toy ensemble analysis step is sketched after this entry)
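melissa's "DA-stochastic: Ensemble" entry refers to ensemble data assimilation. A toy stochastic (perturbed-observation) ensemble Kalman analysis step in NumPy looks like this; it is a textbook illustration, not melissa's API, and all dimensions are arbitrary.

```python
# Toy stochastic EnKF analysis step in NumPy (NOT melissa's API).
import numpy as np

rng = np.random.default_rng(0)
n, m, N = 40, 10, 50             # state dim, obs dim, ensemble size
H = np.zeros((m, n))
H[np.arange(m), np.arange(m) * 4] = 1.0   # observe every 4th variable
R = 0.1 * np.eye(m)              # observation error covariance

X = rng.normal(size=(n, N))      # forecast ensemble (columns = members)
y = rng.normal(size=m)           # observation vector

A = X - X.mean(axis=1, keepdims=True)     # ensemble anomalies
P = A @ A.T / (N - 1)                     # sample forecast covariance
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain

# Perturbed observations: each member assimilates a noisy copy of y.
Y = y[:, None] + rng.multivariate_normal(np.zeros(m), R, size=N).T
Xa = X + K @ (Y - H @ X)                  # analysis ensemble
```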
39. croco
   Consortium: Inria, IRD, CNRS, Univ Paul Sabatier, SHOM, Ifremer
   Contact: Laurent.Debreu@inria.fr
   Comments: Large codebase.
   Repository: https://gitlab.inria.fr/croco-ocean
   Languages: Fortran
   Parallelism: MPI, GPU, Multithread-OpenMP
   Data formats: netCDF, HDF5
   Resilience: checkpoint restart
   Methods: DA-deterministic: nD-VAR
   Applications: proxy-apps
40. OpenTURNS
   Consortium: Airbus, EDF, IMACS, ONERA, Phimeca
   Repository: https://openturns.github.io
   License: OSS:: LGPL v* (GNU LGPL version 3)
   Languages: C++, Python
   Parallelism: MPI
   Data formats: HDF5
   DevOps: Continuous integration; Test - Unit; Test - Validation
   PC/WP: PC1 - ExaMA
   Methods: iterative methods; sensitivity analysis; uncertainty propagation; robust optimisation (a minimal propagation sketch follows this entry)
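OpenTURNS is listed here for sensitivity analysis, uncertainty propagation, and robust optimisation. A minimal uncertainty-propagation run with the openturns Python module (recent versions) might look like the sketch below; the symbolic model g is a stand-in for a real simulation code.

```python
# Minimal OpenTURNS sketch: Monte Carlo propagation of two independent
# Gaussian inputs through a toy model, then output mean/std estimates.
import openturns as ot

g = ot.SymbolicFunction(["x1", "x2"], ["sin(x1) + 0.5 * x2^2"])
dist = ot.ComposedDistribution([ot.Normal(0.0, 1.0), ot.Normal(0.0, 1.0)])

X = dist.getSample(10000)        # Monte Carlo input design
Y = g(X)                         # propagate through the model
print("mean:", Y.computeMean()[0])
print("std :", Y.computeStandardDeviation()[0])
```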
41. Dakota
   Languages: C++
   Methods: sensitivity analysis; uncertainty propagation; robust optimisation
42. wavenet
   Consortium: DeepMind
   Repository: https://github.com/benmoseley/seismic-simulation-wavenet
   License: MIT
   Languages: Python, PyTorch, TensorFlow
   PC/WP: PC1
   Methods: stochastic PBI
   Applications: proxy-apps
43. deeponet
   Repository: https://github.com/lululxvi/deeponet
   License: CC BY-NC-SA 4.0
   Languages: Python, MATLAB, PyTorch, TensorFlow
   PC/WP: PC1
   Methods: stochastic PBI (a schematic DeepONet sketch follows this entry)
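A DeepONet approximates an operator with a branch network (fed the input function sampled at fixed sensors) and a trunk network (fed the query point), combined by a dot product. The PyTorch sketch below is schematic, not the code from the repository above; sensor count, widths, and data are placeholders.

```python
# Schematic DeepONet in PyTorch: G(u)(y) ~ <branch(u), trunk(y)>.
import torch

m, p = 100, 64                   # number of sensors, latent width
branch = torch.nn.Sequential(torch.nn.Linear(m, p), torch.nn.Tanh(),
                             torch.nn.Linear(p, p))
trunk = torch.nn.Sequential(torch.nn.Linear(1, p), torch.nn.Tanh(),
                            torch.nn.Linear(p, p))

def deeponet(u_sensors, y):
    """u_sensors: (batch, m) function samples; y: (batch, 1) query points."""
    return (branch(u_sensors) * trunk(y)).sum(dim=-1, keepdim=True)

u = torch.randn(32, m)           # stand-in sampled input functions
y = torch.rand(32, 1)            # stand-in query locations
G_uy = deeponet(u, y)            # predicted operator values, shape (32, 1)
```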
44. Arcane Framework
   Consortium: CEA, IFPEN
   Partner: CEA
   Contact: lydie.grospellier@cea.fr
   To be benchmarked (2024): HYBRID
   Comments: CPU-only and GPU-only benchmarks are possible with the same binary.
   Repository: https://github.com/arcaneframework/framework
   License: OSS: apache-2
   Languages: C++, C#
   Parallelism: MPI, Multithread, Multithread-TBB, GPU, Task based - Runtime
   Data formats: XML, HDF5, Ensight, JSON
   Resilience: checkpoint restart
   DevOps: Continuous integration; Test - Unit; Test - Verification
   Methods: unstructured meshes; mesh adaptation; AMR; finite-element / spectral-element / finite-volume discretization; multiphysics coupling
   Applications: Environment, Energy, Physics
45. MaHyCo
   Consortium: CEA
   Partner: CEA
   Contact: jean-philippe.perlat@cea.fr
   To be benchmarked (2024): HYBRID
   Comments: CPU-only and GPU-only benchmarks are possible with the same binary.
   Languages: C++
   Parallelism: GPU, MPI
   Data formats: XML
   Resilience: checkpoint restart
   DevOps: Continuous integration
   PC/WP: PC1 - ExaMA
   Methods: unstructured meshes; multiphysics coupling; exact methods
   Applications: Physics
   Interfaced with: Arcane Framework
46. Specx
   Consortium: Inria
   Partner: Inria
   Contact: berenger.bramas@inria.fr
   Repository: https://github.com/berenger-eu/specx
   License: OSS:: LGPL v*
   Languages: C++17
   Parallelism: Task based - Runtime
   DevOps: Continuous integration; Test - Unit; Test - Verification
   PC/WP: PC2 - WP3 - Runtime Systems
   Interfaced with: Feel++