COSMORPH
Julian Merten | Marie Skłodowska-Curie fellow | INAF-OAS Bologna
AstroFIt2 annual meeting | INAF-headquarters Roma | 23.10.2018
The standard model of cosmology
The standard model | Data representation and processing | Mass mapping | Learning structure
Big problems
Hikage et al. (2018); arXiv:1809.09148
Intermediate (cluster) problems
Small problems
What could it all mean
The era of galaxy redshift surveys
The data to improve our understanding of nonlinear structure is coming
on-going
soon (~2020)
future (~2025)
Together with Planck, JWST, eRosita (space); DESI, PFS (ground-spec); CMB stage 3.5 and 4
Data representation
Let's accept for a moment that all our reference theoretical models derive from numerical simulations.
Then you would probably not represent such data via e.g. 1-d functional forms of the density profile in the halo regime, or 2-pt correlation functions on larger scales.
The goal is then to find a more complete representation of the simulated data, ideally with minimal compression.
Learning simulations
data {x} -> mechanism {f(x)} -> labels {y}
image preparation -> image characterisation (representation, compression)
classification / regression: training {g(x,y)} (fitting, learning) -> prediction
Disclaimer: x does not have to be a mass map, but it will be the focus of this talk.
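The pipeline above can be run end-to-end on a toy problem. This is a minimal sketch, assuming a linear mechanism f(x) and a least-squares fit as the training step g(x, y); the names f and g follow the slide, everything else is illustrative.

```python
import numpy as np

# Toy end-to-end run of the pipeline: a mechanism f(x) produces labels y
# from data x; training fits g(x, y); prediction applies g to new data.
# The linear mechanism and least-squares fit are illustrative assumptions.

rng = np.random.default_rng(42)

def f(x):
    # the 'mechanism' that generated the labels (unknown in practice)
    return 3.0 * x + 1.0

x_train = rng.uniform(-1.0, 1.0, size=200)
y_train = f(x_train) + rng.normal(scale=0.1, size=200)   # noisy labels {y}

# training {g(x, y)}: least-squares fit of a linear model (the 'fitting' step)
design = np.vstack([x_train, np.ones_like(x_train)]).T
slope, intercept = np.linalg.lstsq(design, y_train, rcond=None)[0]

# prediction on unseen data
x_new = np.array([0.5])
y_pred = slope * x_new + intercept
```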
(Big) Data processing
This work is supported by an NVIDIA academic grant:
Exploiting many-core architectures
NVIDIA CUDA C programming guide
https://docs.nvidia.com/
My project
Image data
Cosmology
I Optimal shape catalogues for weak lensing
II Mapping all structure in the sky
III Understand how structure organises itself
Euclid PSF modelling
with Lance Miller, Chris Duncan and the Euclid PSF team
Measuring shapes with deep learning
Springer et al. (2018) | arXiv:1808.07491
Introduction to mass mapping
Clowe et al. (2006); arXiv:astro-ph/0608407
The conversion of observational measurements into an actual map of the distribution of matter in the sky.
Why mass maps?
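As a concrete baseline for this conversion, the classic flat-sky Kaiser-Squires (KS93) inversion can be sketched in a few lines of numpy; a toy Gaussian convergence peak is sheared forward and then recovered. The grid size and the toy peak are assumptions; the mesh-free method of this talk replaces this FFT inversion in practice.

```python
import numpy as np

# Flat-sky Kaiser-Squires (KS93): shear -> convergence via FFT.
# A toy Gaussian convergence peak is sheared forward and then recovered.

n = 128
k = np.fft.fftfreq(n)
k1, k2 = np.meshgrid(k, k, indexing="ij")
ksq = k1**2 + k2**2
ksq[0, 0] = 1.0                         # avoid division by zero at the DC mode

x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
kappa = np.exp(-(X**2 + Y**2) / 0.05)   # toy convergence peak

# forward model: convergence -> shear components in Fourier space
kap_hat = np.fft.fft2(kappa)
g1 = np.fft.ifft2((k1**2 - k2**2) / ksq * kap_hat).real
g2 = np.fft.ifft2(2.0 * k1 * k2 / ksq * kap_hat).real

# KS93 inversion: shear -> convergence, up to the unconstrained mean
kap_rec_hat = ((k1**2 - k2**2) * np.fft.fft2(g1)
               + 2.0 * k1 * k2 * np.fft.fft2(g2)) / ksq
kappa_rec = np.fft.ifft2(kap_rec_hat).real
kappa_rec += kappa.mean() - kappa_rec.mean()   # fix the mass-sheet degeneracy
```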
Wide fields - adaptive resolution - multi-tracer
A perfectly adaptive method for the Euclid era, which incorporates all data, wherever available, into a single reconstruction
Input data
5deg
Mass mapping challenges
Combination
Topology
Runtime
Inhomogeneous input
Real CLASH data field
Real KiDS data field
Mesh-free domain via RBFs
In reality this is slightly more complicated due to the polynomial support terms
JMM (2016) | arXiv:1412.5182
Fornberg & Flyer (2015) | A primer on radial basis
functions with applications to the geosciences
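A single RBF-FD stencil can be sketched as follows: on scattered nodes, solve A w = b, where A is the RBF interpolation matrix and b collects the Laplacians of the basis functions at the stencil centre. Gaussian RBFs and the node layout are assumptions for brevity; as the slide notes, production codes add polynomial support terms (see Fornberg & Flyer).

```python
import numpy as np

# One RBF-FD stencil on scattered nodes: solve A w = b with
# A_ij = phi(|x_i - x_j|) and b_j = Laplacian of phi(|. - x_j|) at the
# stencil centre. Gaussian RBFs are an illustrative choice.

EPS = 5.0   # shape parameter (assumption)

def gaussian(r):
    return np.exp(-(EPS * r) ** 2)

def gaussian_lap(r):
    # 2-d Laplacian of the Gaussian RBF as a function of distance r
    return (4.0 * EPS**4 * r**2 - 4.0 * EPS**2) * np.exp(-(EPS * r) ** 2)

rng = np.random.default_rng(1)
grid = np.linspace(-0.3, 0.3, 4)
nodes = np.array([(gx, gy) for gx in grid for gy in grid])
nodes += rng.uniform(-0.02, 0.02, size=nodes.shape)  # scatter the mesh
nodes[0] = 0.0                                       # stencil centre at origin

dist = np.linalg.norm(nodes[:, None] - nodes[None, :], axis=-1)
A = gaussian(dist)                                   # interpolation matrix
b = gaussian_lap(np.linalg.norm(nodes, axis=1))      # Laplacians at the centre
w = np.linalg.solve(A, b)                            # the RBF-FD weights

# sanity check: for data lying in the RBF space the stencil is exact,
# e.g. u(x) = phi(|x|) gives w @ u = lap phi at the origin = -4 * EPS**2
u = gaussian(np.linalg.norm(nodes, axis=1))
lap_u = w @ u
```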
Results -- cluster lensing
Tested via ray-tracing through a full hydro cluster simulation
JMM (2016) | arXiv:1412.5182
WL: 25 gal arcmin^-2
SL: 10 systems 1 < z < 3.6
Special thanks to:
M. Meneghetti, E. Rasia, S. Borgani
Results -- multi-tracer
With Korbinian Huber in Munich, as well as Celine Tchernin and Matthias Bartelmann in Heidelberg
Huber et al. (in prep.)
Results -- Runtime
SawLens2
ks93_raw
task | speed-up
Invert dense matrices | 5-10
Find optimal shapes for RBF-FD nodes | 500
Build B_lk = a_i*b_j*A_ij*B_il*C_jk | 120
Total | 200-400
This is actually significantly better than the naive ratio of raw throughputs would suggest: 40 GFLOPS (one Xeon core) vs. 6000 GFLOPS (NVIDIA Pascal).
G15: 10deg x 6deg, HH blind
Currently: 3 x 3 deg^2 with 1 arcmin mean resolution in O(1 sec)
Example Euclid: Full field in 30 mins; lots of bootstraps possible
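The contraction from the speed-up table, B'_lk = a_i b_j A_ij B_il C_jk, can be written as a single einsum call; expressing it this way lets optimized contraction paths (BLAS on the CPU, cuBLAS-style kernels on the GPU) fuse the loops, which is where speed-ups of this order come from. The array sizes here are illustrative.

```python
import numpy as np

# The tensor contraction from the table, as one einsum call,
# cross-checked against an explicit matrix factorisation.

rng = np.random.default_rng(0)
ni, nj, nl, nk = 40, 50, 30, 20
a = rng.normal(size=ni)
b = rng.normal(size=nj)
A = rng.normal(size=(ni, nj))
B = rng.normal(size=(ni, nl))
C = rng.normal(size=(nj, nk))

# B'_lk = sum_ij a_i b_j A_ij B_il C_jk
out = np.einsum("i,j,ij,il,jk->lk", a, b, A, B, C, optimize=True)

# same result via an explicit factorisation: contract i first, then j
M = a[:, None] * A * b[None, :]        # M_ij = a_i b_j A_ij
ref = B.T @ M @ C                      # (B^T M C)_lk
```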
Where to go from here
A worked example: Dustgrain-pathfinder
‘Observationally’ degenerate cosmological models of f(R) modified gravity with massive neutrinos.
Dustgrain pathfinder simulations
with M. Baldi, C. Giocoli
Light-cone realisations with MapSim deliver convergence maps for different source redshifts.
Giocoli et al. (2018), arXiv:1806.04681
Classic Dustgrain-pathfinder
Peel et al. (2018), arXiv:1805.05146
Semi-classic Dustgrain-pathfinder
Merten et al. (2018), to be submitted
Classification
Fully-connected neural network
Characterisation
99 ‘common’ features: 14 bins each for the power spectrum, the peak counts and the first three Minkowski functionals;
PDF: eleven percentiles, mean, variance, skewness, kurtosis
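A characterisation of this kind can be sketched with numpy alone: binned power spectrum, peak counts and PDF summaries stacked into one feature vector. The bin counts (14 power-spectrum bins, eleven percentiles) follow the slide; the exact binning and peak definition here are illustrative assumptions, and the Minkowski functionals are omitted for brevity.

```python
import numpy as np

# Sketch of the 'common features' characterisation of a convergence map:
# binned power spectrum, peak counts, PDF percentiles and moments,
# stacked into a single feature vector.

def features(kappa, n_ps_bins=14, n_pct=11):
    n = kappa.shape[0]
    # azimuthally binned power spectrum
    pk2d = np.abs(np.fft.fft2(kappa)) ** 2
    k = np.fft.fftfreq(n)
    kk = np.hypot(*np.meshgrid(k, k, indexing="ij")).ravel()
    bins = np.linspace(0.0, kk.max(), n_ps_bins + 1)
    idx = np.digitize(kk, bins) - 1
    ps = np.array([pk2d.ravel()[idx == i].mean() for i in range(n_ps_bins)])

    # peak count: pixels larger than all 8 neighbours
    c = kappa[1:-1, 1:-1]
    neigh = [kappa[1+dy:n-1+dy, 1+dx:n-1+dx]
             for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)]
    n_peaks = np.sum(np.all([c > m for m in neigh], axis=0))

    # PDF summary: percentiles plus the first four moments
    pct = np.percentile(kappa, np.linspace(0, 100, n_pct))
    mu, var = kappa.mean(), kappa.var()
    z = (kappa - mu) / np.sqrt(var)
    moments = [mu, var, (z**3).mean(), (z**4).mean()]

    return np.concatenate([ps, [n_peaks], pct, moments])

rng = np.random.default_rng(0)
vec = features(rng.normal(size=(64, 64)))   # 14 + 1 + 11 + 4 = 30 features
```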
Convolutional neural networks
AlexNet, Krizhevsky et al. (2012); NIPS 25, 1097-1105
Zeiler et al. (2013); arXiv:1311.2901
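The basic building blocks of one convnet layer (convolution, ReLU, max pooling) can be made explicit in a few lines of numpy; this is a pedagogical sketch of what AlexNet-style networks compute internally, not any specific framework's implementation. The edge-detection kernel is an illustrative assumption.

```python
import numpy as np

# Bare-bones convnet building blocks: valid 2-d convolution, ReLU
# nonlinearity and 2x2 max pooling, applied once to a random image.

def conv2d(img, kern):
    # 'valid' cross-correlation of a 2-d image with a 2-d kernel
    kh, kw = kern.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kern)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def maxpool2(x):
    # non-overlapping 2x2 max pooling (odd trailing rows/cols dropped)
    h, w = x.shape[0] // 2, x.shape[1] // 2
    return x[:2*h, :2*w].reshape(h, 2, w, 2).max(axis=(1, 3))

rng = np.random.default_rng(0)
img = rng.normal(size=(16, 16))
edge = np.array([[1.0, 0.0, -1.0]] * 3)     # vertical-edge kernel
fmap = maxpool2(relu(conv2d(img, edge)))    # one conv block: 16x16 -> 7x7
```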
Convnets in practice
high-level | Python | rigid
low-level | Python / CUDA | flexible
bit-level | libcuDNN / CUDA
Merten et al. (2018) submitted
Dissecting Dustgrain-pathfinder
Merten et al. (2018), to be submitted
Dreaming of mass maps
Merten et al. (2018) to be submitted
Where to go from here
Summary