# | Software | Consortium | Partner | Contact email (technical correspondent) | To be benchmarked (2024) | Comments | Repository | License | Language | Parallelism | Data | Resilience | DevOps | PC | WP1 | WP2 | WP3 | WP4 | WP5 | WP6 | WP7 | Interfaced with
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
2 | GeoS | LLNL, Stanford, TE | Inria BXSO | stefano.frambati@totalenergies.fr | NOT YET | https://github.com/orgs/GEOSX | OSS:: LGPL v* | C++ | FE/spectral element/FV discretization, multiphysics coupling | ROM: RB, POD, ..., PINN | PBI deterministic | iterative methods | Geophysics, Energy, proxy-apps |
3 | Freefem++ | Sorbonne U | Sorbonne U, Inria PARIS | frederic.hecht@sorbonne-universite.fr, pierre-henri.tournier@sorbonne-universite.fr, pierre.jolivet@sorbonne-universite.fr | CPU | DSL for finite elements. Parallelism is based on MPI. | https://github.com/FreeFem/FreeFem-sources | OSS:: LGPL v* | C++ | MPI | VTK, in-house format, HDF5, Gmsh and associated formats | Continuous integration, Test - Unit, Test - Validation, Container - Docker, Packages - Debian | PC1 - ExaMA, PC2 - WP1 - High-level approaches for developing efficient and composable parallel software, PC3 - WP2 - I/O and storage | cG, dG/hdG, mesh adaptation, unstructured mesh, multiphysics coupling | DDM, algebraic multiphysics coupling, multi-rhs, krylov reuse | iterative methods | Geophysics, Energy, Health, Environment, Aero, mini-apps | PETSc, MMG/ParMMG, Scotch, MUMPS, HPDDM |
4 | Feel++ | Feel++ Consortium | Unistra, Inria Grenoble, CNRS | christophe.prudhomme@cemosis.fr, vincent.chabannes@cemosis.fr | CPU | https://github.com/feelpp/feelpp (Feel++: Finite Element Embedded Language and Library in C++) | OSS:: LGPL v*, OSS:: GPL v* | C++17, Python | MPI, Parallelism - C++17 and after, Task based - C++ | Json, YAML, HDF5, Data-management system, in-house format, VTK, Ensight, Gmsh and associated formats | checkpoint restart | Continuous integration, Container - Docker, Container - Singularity, Packages - Debian, Packages - Ubuntu, Packages - Fedora, Packages - Spack, Test - Unit, Test - Verification, Test - Validation | PC1 - ExaMA, PC2 - WP1 - High-level approaches for developing efficient and composable parallel software, PC2 - WP2 - Just-in-Time code optimization with continuous feedback loop, PC2 - WP3 - Runtime Systems, PC2 - WP4 - Portable, scalable numerical building blocks and software | unstructured mesh, mesh adaptation, cG, dG/hdG, time parallelization, multiphysics coupling, multiscale coupling, in-house, interface, FE/spectral element/FV discretization | ROM-DA: GEIM PBDW, ..., ROM: NIRB, ROM: RB, POD, ... | DDM, algebraic multiphysics coupling | DA-stochastic: Ensemble | iterative methods | interface | Health, Environment, Energy, Physics, mini-apps | OpenTURNS, MMG/ParMMG, PETSc, Salome, Dymola/OpenModelica/FMU, HPDDM |
5 | Scimba | UNISTRA, INRIA | Unistra | emmanuel.franck@inria.fr | | Python | GPU | NN/Autoencoder, PINN | pytorch |
6 | MMG/ParMMG | Inria | Inria BXSO, U Grenoble Alpes, Sorbonne U | CPU | C, Fortran | MPI | in-house format | Continuous integration | unstructured mesh, mesh adaptation |
7 | CGAL | Inria | Inria CA | pierre.alliez@inria.fr | CPU | No GPU code yet; some parallel algorithms (shared memory, Intel TBB). | https://github.com/CGAL | OSS:: GPL v*, OSS:: LGPL v* | C++ | unstructured mesh |
8 | Hawen | Inria | Inria BXSO | florian.faucher@inria.fr | Yes | https://gitlab.com/ffaucher/hawen | OSS:: GPL v* | Fortran | MPI, Multithread-OpenMP | in-house format, VTK, Gmsh and associated formats | dG/hdG, multiphysics coupling | multi-rhs, krylov reuse | PBI deterministic | iterative methods | Sensitivity analysis | Geophysics, Energy, Astrophysics, mini-apps | MUMPS |
9 | Uranie | CEA | CEA | rudy.chocat@cea.fr, jean-baptiste.blanchard@cea.fr | CPU | https://sourceforge.net/projects/uranie/ | OSS:: LGPL v* | C++, Python | MPI, Multithread, GPU | Json, ASCII, ROOT, SQL | Continuous integration, Test - Unit, Test - Verification, Test - Validation | NN/Autoencoder | iterative methods, Robust optimisation, metaheuristic | Sensitivity analysis, U propagation, Robust optimisation | Salome | ||||||||||||||||||||||||
10 | Salome | EDF,CEA | | ||||||||||||||||||||||||||||||||||||
11 | Dymola/OpenModelica/FMU | Modelica & FMI | | C++ | multiphysics coupling, multiscale coupling |
12 | PETSc | Argonne National Laboratory | Sorbonne U | INDIRECT | PETSc is benchmarked at least via Feel++ and FreeFem++ | OSS:: 2-clause BSD | C, C++, Fortran, Python, Julia | MPI, Multithread-OpenMP, GPU | XML, YAML, HDF5, Json, in-house format, VTK, Gmsh and associated formats, MED | Continuous integration, Packages - Debian, Packages - Ubuntu, Packages - Fedora, Packages - Other, Packages - Spack, Test - Unit, Test - Verification, Test - Validation | PC1 - ExaMA, PC2 - WP4 - Portable, scalable numerical building blocks and software | DDM, algebraic multiphysics coupling, multi-precision, tensor computation, multi-rhs, krylov reuse, randomization, low-rank, interface | Freefem++, Feel++, MMG/ParMMG, MUMPS, Scotch, PaStIX, HPDDM |
13 | Scotch | Inria BXSO | INDIRECT | Benchmarked via Maphys++ | C |
14 | StarPU | Inria BXSO | INDIRECT | C | |||||||||||||||||||||||||||||||||||
15 | MUMPS | Mumps Technologies | Sorbonne U | INDIRECT | http://mumps-solver.org/ | OSS: Cecill-* | Fortran | MPI, Multithread-OpenMP | multi-precision, multi-rhs, krylov reuse, low-rank | Scotch | |||||||||||||||||||||||||||||
16 | PaStIX | Inria, Université Bordeaux, BINP | Inria BXSO | INDIRECT | OSS: Cecill-* | C | Multithread, MPI | Continuous integration, Packages - GUIX-HPC, Container - Singularity, Test - Verification | PC2 - WP3 - Runtime Systems, PC2 - WP1 - High-level approaches for developing efficient and composable parallel software | direct solver, low-rank | Scotch | ||||||||||||||||||||||||||||
17 | qr_mumps | CNRS | INDIRECT | Benchmarked via Maphys++, but will also be benchmarked stand-alone in PC2 | https://gitlab.com/qr_mumps/qr_mumps | OSS: Cecill-* | Fortran | Task based - Runtime | Continuous integration, Packages - GUIX-HPC, Container - Singularity, Test - Verification | PC2 - WP1 - High-level approaches for developing efficient and composable parallel software, PC2 - WP3 - Runtime Systems | svd, eigen solver, low-rank | StarPU, Scotch |
18 | ScalFMM | Inria BXSO | olivier.coulaud@inria.fr | NOT YET | https://gitlab.inria.fr/solverstack/ScalFMM | OSS: Cecill-* | C++ | Task based - Runtime, Multithread-OpenMP, GPU , MPI | Continuous integration, Packages - GUIX-HPC, Container - Singularity, Test - Verification | PC2 - WP1 - High-level approaches for developing efficient and composable parallel software, PC2 - WP3 - Runtime Systems | dense / H matrix | StarPU | |||||||||||||||||||||||||||
19 | Fabulous | Inria BXSO | gilles.marait@inria.fr | INDIRECT | https://gitlab.inria.fr/solverstack/fabulous | OSS: Cecill-* | C++ | Continuous integration, Packages - GUIX-HPC, Container - Singularity, Test - Verification | PC2 - WP1 - High-level approaches for developing efficient and composable parallel software, PC2 - WP3 - Runtime Systems | krylov solver, multi-rhs, krylov reuse, multi-precision | |||||||||||||||||||||||||||||
20 | Maphys++ | Inria BXSO | gilles.marait@inria.fr | HYBRID | https://gitlab.inria.fr/solverstack/maphys/maphyspp | OSS: Cecill-* | C++, C, Fortran | MPI, Multithread, GPU | Continuous integration, Packages - GUIX-HPC, Container - Singularity, Test - Verification | PC2 - WP3 - Runtime Systems, PC2 - WP1 - High-level approaches for developing efficient and composable parallel software | direct solver, svd, eigen solver, krylov solver | PaStIX, qr_mumps, MUMPS, Scotch | |||||||||||||||||||||||||||
21 | HPDDM | Sorbonne U, Inria PARIS | pierre@joliv.et | CPU | Standalone library written in C++/MPI; interfaced with PETSc, FreeFEM, Feel++, Code_Aster, ... | https://github.com/hpddm/hpddm | OSS:: LGPL v* | C, C++, Fortran, Python | MPI, Multithread-OpenMP | in-house format | Continuous integration, Test - Unit, Test - Verification | PC1 - ExaMA, PC2 - WP4 - Portable, scalable numerical building blocks and software | DDM, multi-precision, tensor computation, multi-rhs, krylov reuse, randomization, low-rank, interface | Freefem++, Feel++, PETSc, MUMPS, PaStIX |
23 | PROMISE | Sorbonne U | Fabienne.Jezequel@lip6.fr | NOT YET | http://promise.lip6.fr/ | OSS:: LGPL v* | Python | Continuous integration | PC1 - ExaMA | multi-precision | CADNA | ||||||||||||||||||||||||||||
24 | CADNA | Sorbonne U | Fabienne.Jezequel@lip6.fr | NOT YET | http://cadna.lip6.fr/ | OSS:: LGPL v* | C++, Fortran | MPI, Multithread-OpenMP, GPU | Continuous integration | PC1 - ExaMA | multi-precision | ||||||||||||||||||||||||||||
25 | Samurai | IP Paris | CEA, IPP | Loic Gouarin | | https://github.com/hpc-maths/samurai | OSS::BSD | C++, C++14, C++17 | MPI, Multithread | HDF5 | Continuous integration, Test - Unit, Test - Verification, Test - Validation, Packages - Other | PC1 - ExaMA, PC2 - WP1 - High-level approaches for developing efficient and composable parallel software, PC2 - WP3 - Runtime Systems, PC2 - WP4 - Portable, scalable numerical building blocks and software, PC3 - WP2 - I/O and storage | mesh adaptation | PETSc |
26 | Zellij | Université de Lille | Inria Lille | el-ghazali.talbi@univ-lille.fr | GPU | https://github.com/ThomasFirmin/zellij | OSS: Cecill-* | Python | MPI | checkpoint restart | metaheuristic, iterative methods | ||||||||||||||||||||||||||||
27 | pBB | Université de Lille | Inria Lille | nouredine.melab@univ-lille.fr | HYBRID | https://gitlab.inria.fr/jgmys/permutationbb | OSS: Cecill-* | C++ | MPI, GPU , Multithread, Chapel | checkpoint restart | iterative methods | ||||||||||||||||||||||||||||
28 | TRUST Platform | CEA | CEA | pierre.ledac@cea.fr | HYBRID | CPU-only, GPU-only, and true HYBRID benchmarks are possible | https://github.com/cea-trust-platform | OSS:: | C++ | MPI, GPU | HDF5, MED, VTK | Continuous integration, Test - Unit, Test - Verification | PC1, PC2, PC3, PC5 | Multiphysics coupling |
29 | MEDCoupling | CEA, EDF | CEA | NOT YET | Not in the first benchmarking phase; to come once the coupling is in place. | https://github.com/cea-trust-platform | OSS:: | C++ | MPI | HDF5, MED | Continuous integration, Test - Unit, Test - Verification | PC1, PC2, PC3, PC5 | Multiphysics coupling |
30 | ICoCo coupling interface | CEA | CEA | NOT YET | Not in the first benchmarking phase; to come once the coupling is in place. | https://github.com/cea-trust-platform/icoco-coupling | OSS:: | C++ | MPI | MED | PC1, PC2, PC3, PC5 | Multiphysics coupling |
31 | EUROPLEXUS | CEA, JRC European Commission, EDF, ONERA, Safran Tech | CEA | No | No need to benchmark it on its own within ExaMA. | Fortran, C++ | MPI | MED, VTK | Continuous integration, Test - Unit, Test - Verification | PC1, PC2, PC3, PC5 | Multiphysics coupling |
32 | MANTA | CEA + consortium in development (see EUROPLEXUS) | CEA | olivier.jamond@cea.fr | CPU | Code and repository access to be arranged with the contact, Olivier Jamond (transitional solutions at the CEA level at this stage) | C++ | MPI | MED, VTK, Gmsh and associated formats, MFront | Continuous integration, Test - Unit, Test - Verification | PC1, PC2, PC3, PC5 | Multiphysics coupling | Energy, mini-apps, proxy-apps |
33 | MFEM | CEA | NOT YET | To come with the AMR PhD thesis, possibly in 2025 | https://github.com/mfem/mfem | OSS:: | C++ | MPI, Multithread | VTK, Gmsh and associated formats | Continuous integration, Test - Unit, Test - Verification | PC1 - ExaMA | FE exascale framework, AMR | wave propagation in heterogeneous media, PBI deterministic |
34 | PLEIADES | CEA, EDF, Framatome | CEA | NOT YET | Later, once PLEIADES-HPC is ready | Fortran, Python, C++ | MPI, Multithread | MED, HDF5, XML, MFront | Continuous integration, Test - Unit, Test - Verification | PC1 - ExaMA, PC5 - Demonstrator | multiscale coupling, multiphysics coupling | Multiphysics coupling | mini-apps |
35 | tensorflow | | PyTorch is used by colleagues; richer and more stable, but a more complex approach? | C++, C++17, Python | NN/Autoencoder, GAN |
36 | scikit-learn | | To be developed in PC3 instead | C++ | NN/Autoencoder, GAN |
37 | pytorch | | C++ | NN/Autoencoder, GAN | PBI stochastic | ||||||||||||||||||||||||||||||||||
38 | melissa | INRIA | | Developed by Bruno Raffin => will be benchmarked in PC3; we will only be users. | https://gitlab.inria.fr/melissa | BSD3 | Python, C/C++, Fortran 90 | DA-stochastic: Ensemble |
39 | croco | Inria, IRD, CNRS, Univ Paul Sabatier, SHOM, Ifremer | Laurent.Debreu@inria.fr | | Large software packages | https://gitlab.inria.fr/croco-ocean | Fortran | MPI, GPU, Multithread-OpenMP | netcdf, HDF5 | checkpoint restart | DA-deterministic: ndVAR | proxy-apps |
40 | OpenTURNS | Airbus, EDF, IMACS, ONERA, Phimeca | | https://openturns.github.io | OSS:: GNU LGPL v3 | C++, Python | MPI | HDF5 | Continuous integration, Test - Unit, Test - Validation | PC1 - ExaMA | iterative methods | Sensitivity analysis, U propagation, Robust optimisation |
41 | Dakota | | C++ | Sensitivity analysis, U propagation, Robust optimisation | |||||||||||||||||||||||||||||||||||
42 | wavenet | DeepMind | | https://github.com/benmoseley/seismic-simulation-wavenet | MIT | Python, PyTorch, TensorFlow | PC1 | PBI stochastic | proxy-apps |
43 | deeponet | | https://github.com/lululxvi/deeponet | CC BY-NC-SA 4.0 | Python, MATLAB, PyTorch, TensorFlow | PC1 | PBI stochastic |
44 | Arcane Framework | CEA, IFPEN | CEA | lydie.grospellier@cea.fr | HYBRID | CPU-only and GPU-only benchmarks are possible with the same binary. | https://github.com/arcaneframework/framework | OSS: apache-2 | C++, C# | MPI, Multithread, Multithread-TBB, GPU, Task based - Runtime | XML, HDF5, Ensight, Json | checkpoint restart | Continuous integration, Test - Unit, Test - Verification | unstructured mesh, mesh adaptation, AMR, FE/spectral element/FV discretization, multiphysics coupling | Multiphysics coupling | Environment, Energy, Physics |
45 | MaHyCo | CEA | CEA | jean-philippe.perlat@cea.fr | HYBRID | CPU-only and GPU-only benchmarks are possible with the same binary. | C++ | GPU, MPI | XML | checkpoint restart | Continuous integration | PC1 - ExaMA | unstructured mesh | Multiphysics coupling | Exact methods | Physics | Arcane Framework |
46 | Specx | Inria | Inria | berenger.bramas@inria.fr | | https://github.com/berenger-eu/specx | OSS:: LGPL v* | C++17 | Task based - Runtime | Continuous integration, Test - Unit, Test - Verification | PC2 - WP3 - Runtime Systems | Feel++ | |||||||||||||||||||||||||||