1 of 102

Group Update Spring 2020

Jonathan Nikoleyczik


2 of 102

PdfMaker

  • Ibles asked me to turn PdfMaker into a CMake project
  • Should allow me to streamline some shape-varying NP studies
    • The XENON1T paper studied the effects of Ly and Qy on the B8 and low-mass WIMP limits
    • Want to be ready / able to handle shape-varying effects early in LZ
  • New method for shape-varying NPs: build up a bank of monoenergetic peak PDFs, then sum (sketch below)
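
A minimal stand-in for the peak-bank idea, in Python (toy Gaussian detector response on a 1D observable; all names and shapes here are illustrative, not PdfMaker's actual API):

```python
import numpy as np

obs = np.linspace(0.0, 100.0, 400)  # observable axis (e.g. reconstructed energy)

def make_peak_pdf(e_true, resolution=0.5):
    """Toy detector response to a monoenergetic line: Gaussian smearing."""
    sigma = resolution * np.sqrt(e_true)
    pdf = np.exp(-0.5 * ((obs - e_true) / sigma) ** 2)
    return pdf / pdf.sum()

energies = np.linspace(1.0, 50.0, 50)                  # keV grid of the bank
bank = np.array([make_peak_pdf(e) for e in energies])  # build the bank once

def spectrum_pdf(weights):
    """Any spectrum becomes a weighted sum of the precomputed peaks."""
    total = weights @ bank
    return total / total.sum()

wimp_like = spectrum_pdf(np.exp(-energies / 10.0))  # e.g. a falling recoil spectrum
```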


3 of 102

EFT Uncertainty

  • Built some Si WIMP spectra
    • Doing some literature searches to see whether they're correct
    • Working on getting the Si thresholds into the PLR
  • Some progress on the PLR front, but no new limits
    • Still building infrastructure


4 of 102

Previous Slides


5 of 102

Asymptotic Approximation to Discovery Significance

  • Cowan says that the asymptotic distribution of the background-only test statistic should follow ½(δ + χ²)
  • Found that this is not always the case; not sure why. Could it be because of limits on the parameter ranges?
  • Naively I would have expected this to be well approximated by the asymptotic distribution, but I can't get the approximation to match the observed TS distribution


6 of 102

Fitting the asymptotic approximation


Cowan's: ½(δ + χ²)

Floating normalization: a(δ + χ²)

Floating normalization and relative delta function: a(bδ + χ²)
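
As a cross-check, a minimal standalone sketch of fitting the most general form a(bδ + χ²), with synthetic TS values standing in for the real PLR toys (NumPy/SciPy assumed):

```python
import numpy as np
from scipy.stats import chi2
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
n_toys = 20000
# Synthetic background-only TS values for the textbook Cowan case (a = 1/2, b = 1):
# half of the toys at exactly zero, half following a chi-square with 1 dof.
ts = np.where(rng.random(n_toys) < 0.5, 0.0,
              chi2.rvs(df=1, size=n_toys, random_state=rng))

# Histogram the continuous (TS > 0) part; the delta spike is counted separately.
edges = np.linspace(0.0, 10.0, 41)
counts, _ = np.histogram(ts[ts > 0], bins=edges)
centers = 0.5 * (edges[:-1] + edges[1:])

def expected_counts(x, a):
    # Predicted counts per bin from the a * chi2(1 dof) continuous component.
    return n_toys * a * (chi2.cdf(edges[1:], df=1) - chi2.cdf(edges[:-1], df=1))

(a_fit,), _ = curve_fit(expected_counts, centers, counts, p0=[0.5])
b_fit = np.mean(ts == 0.0) / a_fit  # delta weight, since zero-fraction = a * b

print(f"a = {a_fit:.2f}, b = {b_fit:.2f}  (Cowan predicts a = 0.5, b = 1)")
```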

7 of 102

EFT Uncertainty - Energy only discrimination

  • For LXe TPCs our discrimination varies by more than an order of magnitude, so it should be applied as a function of energy
  • Other detectors can likely use a constant discrimination factor


8 of 102

EFT Uncertainty limit curves

  • Applying the efficiency and discrimination gives a limit very close to the full 2D PLR
  • Worse by a factor of ~5 at low masses, because the NR efficiency is higher than the ER-equivalent efficiency at low S1


9 of 102

Adding PLR to EFT Uncertainty code

  • Need to choose a calculator
    • Are we okay with asymptotic approximations? (I mean this in a slightly different sense than above)
    • This would use Asimov datasets instead of computing S+B medians
    • Should be much faster than computing a full set of toys
    • The event counts are all low
  • Could we get the same (or similar) result using Feldman-Cousins? Or is that simplifying too far?


10 of 102

Energy only PLR of LZ

  • Take the ER efficiency and the ER and NR background curves from the sensitivity paper
  • Apply a scaling to the NR backgrounds to quote electron-equivalent energies, E_ee = 0.173 × E_NR^1.07 (from the LUX discrimination paper); see the sketch below
  • Apply the ER efficiency to the energy spectra (the same efficiency for both ER and NR, since NR is scaled)
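
A minimal sketch of that rescaling, with a toy NR spectrum and a hypothetical threshold-like ER efficiency curve (both illustrative, not the sensitivity-paper inputs):

```python
import numpy as np

def nr_to_ee(e_nr):
    """LUX discrimination-paper scaling: keVnr -> keVee."""
    return 0.173 * np.power(e_nr, 1.07)

e_nr = np.linspace(1.0, 100.0, 200)  # keVnr grid
nr_spectrum = np.exp(-e_nr / 15.0)   # toy NR recoil spectrum

def er_efficiency(e_ee):
    """Hypothetical threshold curve standing in for the paper's ER efficiency."""
    return np.clip((e_ee - 1.0) / 4.0, 0.0, 1.0)

# The same ER efficiency applied to both species on the common keVee axis:
nr_observed = nr_spectrum * er_efficiency(nr_to_ee(e_nr))
```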


11 of 102

Some Cross Checks

  • LZ shifted NR efficiency (green) shows reasonable agreement with ER efficiency (red)
  • 6 GeV WIMP peak matches B8 background
  • Can run these through LZStats with no issues
  • Limits (5.6 tons, 1000 livedays):
    • 6 GeV: 11.08 events → 3.4×10⁻⁴⁴ cm²
    • 40 GeV: 50.89 events → 1.97×10⁻⁴⁷ cm²
  • Limits from the LZ sensitivity paper:
    • 6 GeV: 8.61 events → 1.03×10⁻⁴⁵ cm²
    • 40 GeV: 4.70 events → 1.27×10⁻⁴⁸ cm²


12 of 102

Limit Curve


Reasons for discrepancies:

Different threshold, especially important at low masses

Less discrimination between ER and NR (dominates the effects at high masses)

13 of 102

More automatic plots


Red: Fit to the S+B or B-only model

Blue: Box and whisker plot of all toy Unconditional fits to the S+B or B-only models

14 of 102

Contours are now actually 1, 2, and 3 σ


This was harder than I thought it would be... general multidimensional quantiles.

Note: the levels here are the 2D-Gaussian equivalent areas - not 68.3%, 95.4%, 99.7%, but 39.3%, 86.5%, 98.9%.
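
These levels follow from the containment of an n-sigma contour of a 2D Gaussian, 1 − exp(−n²/2); a quick check:

```python
import numpy as np

# Containment of an n-sigma contour for a 2D Gaussian: 1 - exp(-n^2 / 2)
for n in (1, 2, 3):
    print(f"{n} sigma -> {1 - np.exp(-n**2 / 2):.1%}")  # 39.3%, 86.5%, 98.9%
```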

15 of 102

Updated Pull Distributions to be more useful


Solid: Unconditional fits

Dashed: Conditional fits

Blue: B-only

Red: S+B

16 of 102

Next Steps

Last few plots to be added:

  • Update the pulls plots to be more informative
  • Straight residuals rather than pulls (unnormalized)
  • Plot the dataset where particular variables are binned
    • 1D case: overplot the data while cutting inside the binned range
    • Do 2D plots have to be GIFs? Different contour styles?

Integration into LZStats:

  • Want to merge code without any changes to LZStats proper first
  • Then rewrite scripts to use these functions
  • Integrate plotting features into LZStats

Documentation...


17 of 102

Current Plot Suite


Model split out by component

2D fit results (generated values on x-axis and fit value on y-axis)

18 of 102

Current Plot Suite


Full model with X and Y projections

Pie Charts in any given dimension

19 of 102

3D models (S1, logS2, time)


Still working on the formatting (doesn’t work 100% of the time)

20 of 102

Planned Plots - Knut’s Thesis


Profile LL vs POI (easy) along with 90% limit vs POI (harder)

Profiled NP vs POI along with toy MC uncertainty

21 of 102

LZStats features cont’d

  • Pie plots generated automatically (now in ROOT!)
  • Background-only p-values now included by default if min POI=0
  • Improved plotting scripts


22 of 102

NMM workspace - Xe 124 counts issue

  • Including Xe124 since the NMM spectrum extends to higher energies (the PDFs go out to 150 S1)
    • Need to know the Xe124 rate
    • The integral of the PDF is 0.00134, but it has no units
      • If it's in cts/kg/day, we'd expect ~7500 events in 1000 livedays
      • If it's in cts/ton/yr, we'd expect ~0.02 events in 1000 livedays (arithmetic below)
  • Daniel Niam calculated 2300 events in 1000 livedays
  • XENON1T had ~160 cts/ton/yr at 50-75 keV
  • This analysis cuts off at 150 S1 → ~23 keV (so we don't even see the primary peak at 60 keV)
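
For reference, the two unit hypotheses work out as follows (assuming the 5.6 t fiducial mass and 1000 livedays quoted elsewhere in these slides):

```python
mass_kg, live_days = 5600.0, 1000.0
integral = 0.00134

print(integral * mass_kg * live_days)                     # cts/kg/day: ~7500 events
print(integral * (mass_kg / 1e3) * (live_days / 365.25))  # cts/ton/yr: ~0.02 events
```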


23 of 102

New LZStats features

  • CI limit calculation (uses Gaussian workspace to save time)
  • Limit calculations can happen on the fly for any toy
    • Allows for tests of “what limit would we have set if the background only toy had been our dataset”
  • Plot limit curve vs mass in ~3 lines


24 of 102

-2 sigma issue - new problem

  • Our observed data line is almost always better than the median


Old method

New method

25 of 102

Checks

  • TS distribution
  • Toys


26 of 102

Toys


A random S+B toy at a high POI test value, unconditional fit

A random B-only toy at a high POI test value, unconditional fit

27 of 102

Toys


A random S+B toy at a high POI test value, conditional fit

A random B-only toy at a high POI test value, conditional fit

28 of 102

Observed datasets


Unconditional fit

Conditional fit

29 of 102

Simplifying LZStats

  • Current structure
    • Build workspace (and run checks of models before hand)
    • Run LZStats
    • Rerun analysis (merge results)
    • Make plots
  • Proposed / in-progress structure
    • Build workspace
    • Run LZStats in a testing mode
      • Checks the validity of the workspace and makes plots of every possible variable
    • Run LZStats in high-stats mode
      • Calls the correct analysis scripts internally
      • Options to run GoF, Yellin, F-C, etc. scripts from the YAML file
      • There should be no need to run extra analysis scripts, but they remain available if necessary


30 of 102


P-value vs POI

Pink and red: r_tot->GetLowerLimitDistribution()

31 of 102

Example plots


32 of 102

Yellin Method on real data

  • Ran my script on MDC3 data and saw some strange behavior
    • Sometimes all events have some signal probability and sometimes none do
    • Even the two B8 events sometimes have no signal probability
    • Using the RooDataSet::addColumn and RooDataSet::reduce methods
  • For the mass values that do work correctly I get limits (in 1D only) comparable to FC
  • With Cori down I haven't been able to finalize the Yellin 2D script (it needs more work than I was expecting)
    • Also need some speedup; currently there are some very deeply nested for loops


33 of 102

Implementing a fix to the -2σ problem

  • Aaron identified and answered the problem very clearly here: slides
  • Can compute p-values vs POI for every toy
  • Allows you to compute distributions of both upper and lower limits
    • This can give a different way to compute the median and bands


Lower Limits

Upper Limits

Median and ±1σ UL

34 of 102

Prepping LZStats pregeneration of plots for SR1

  • Want a few simple scripts which can generate the full suite of PLR plots
  • Needs improvement (or from scratch):
    • Data with model contours (or maybe pie charts) in all observable spaces
    • P-values versus WIMP mass (might want to include Look Elsewhere Effect)
    • Some GoF metric
    • Fit residuals
    • Feldman Cousins comparison
    • Plots like the previous slide:
      • distributions of ULs and LLs for each toy, with a comparison to the observed data


35 of 102

Yellin Max Gap method

  • Choose pairs of points and evaluate the signal CDF between them - this defines the gap size
  • Evaluate the Poisson probability of observing the given number of events inside
  • Find the point where the Poisson probability matches the observed probability
    • This is the limit (see the sketch below)
  • Ran on some test datasets
    • Limit (1D) S1: 41.8 events
    • Limit (1D) logS2: 6.93 events
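
A minimal 1D stand-in for the gap construction, assuming a callable signal CDF (illustrative only, not the LZStats implementation, and without the full optimum-interval machinery):

```python
import numpy as np

def max_gap(events, signal_cdf, mu):
    """Largest expected signal count in any event-free interval.

    events: observed positions in the 1D observable
    signal_cdf: CDF of the unit-normalized signal PDF
    mu: total expected number of signal events
    """
    u = np.sort(signal_cdf(np.asarray(events)))  # map events into CDF space
    u = np.concatenate(([0.0], u, [1.0]))        # include the boundaries
    x = mu * np.max(np.diff(u))                  # biggest empty gap, in events
    return x, np.exp(-x)                         # gap size and P(0 events in it)

# Example: uniform signal on [0, 1], three events, mu = 5 expected signal events
x, p0 = max_gap([0.1, 0.2, 0.9], lambda v: np.clip(v, 0.0, 1.0), 5.0)
```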


Number of signal events

Red - Poisson Probability of getting 0 events in a gap

Black - Observed probability of signals inside the “maximum” gap

Maximum Probability

36 of 102

Max Patch Method

  • The same idea as the gap method, but with rectangles
    • Pick pairs of points and evaluate the 2D CDF
  • Calculating the Poisson probability becomes a function of both the number of expected events and the patch size
    • Having some difficulty with this step, but I think I'm close...


37 of 102

LZStats Updates

  • Moved to CMake
    • Changes not merged yet (waiting on checks from Luke and Ibles)
    • Lots of useful feedback from Luke
  • Yellin Max Patch script in progress
    • Plan for this to be a “default” tool
    • Should be able to use the same (or similar) toys
  • Changes to make LZStats work with LUX PLR style workspaces
    • Used to generate or overwrite your observed data
  • Working on pregeneration of toys
    • Made much easier with CMake


38 of 102

Goals for the semester

  • Complete long term PLR goals
    • Additional quick checks and statistical tests computed automatically
    • Pregeneration / loading of Toys
    • Study ever more complicated models...
  • Prep for SR1


39 of 102

Recent PLR work

  • Time-dependent e-lifetime
    • Now have a time-dependent FV: when the e-lifetime is too poor we shouldn't use the full FV, so we limit z
    • Chris N handed off a function giving the valid depth versus e-lifetime


40 of 102

Run4 EFT workspaces in LZStats

  • Wrote a script that converts LUX-PLR workspaces into LZStats-compatible workspaces
  • Ran a 50 GeV O1 N workspace through LZStats and got a similar result to the LUX PLR


41 of 102

P-value distributions

  • Currently LZStats computes only one integral
  • But this integral could be computed for every ALT test statistic value
  • Allows for comparison of two different models
    • I thought this would solve the WDWM+NDNM → WDNM problem, but it does not


42 of 102

Combining p-values of two models

Remember:

Could try to do something like:

This is done on the right; only the median p-value is shown on the plot.

The real problem with this method: the datasets are generated from different models!


43 of 102

The real solution to the WDNM case

  • Need a separate generation model (B) from the fit model (A)
  • So the likelihood then becomes:
  • Opens its own can of worms... lots of fit errors (failed fits give negative TS values, which influence the limit)
  • Solution: throw out bad fits


44 of 102

LZStats Improvements

  • LZStats is currently built using make, but LZ coding practice and most modern software use CMake
    • Changing a Makefile project to a CMake project is not straightforward
    • Makefiles specify paths directly, but CMake "finds" the necessary paths for you
    • Makes upgrades of LzBuild versions less painful (going through that upgrade inside LZStats now)
  • Also somewhat selfish: I want to use a fancy IDE, and most of them require CMake projects
  • Comparing run speeds of different methods of creating a model (sum then multiply vs. multiply then sum, etc.)


45 of 102

Combining p-values of different runs of LZStats

  • Run LZStats with Model 1 and Model 2
    • There should be a way to test for statistical differences between Model 1 and Model 2
  • Looking to compare Yitong's No Model No Data case to the With Model With Data case; we should be able to combine those to say how the limit changes in the No Model With Data and With Model No Data cases

45

46 of 102

Benefits of adding time dependence to ER searches

  • Last week I noticed that there is not a huge benefit to adding time-dependent ER sources to an NR search
  • The real benefit would come in an ER search
    • Repeated the study for an NMM (ER) search
  • No time: 43.46
  • Time (flat): 43.82
  • Time (exponential): 40.53
  • All results are still well within the 1σ error bars, but there is an ~10% improvement in the median when the exponential decay is considered


47 of 102

Errors on limits based on small number of toys

  • The problem: running LZStats with variable POI ranges can give a different number of toys at each point
  • Solution: calculate the error on the p-value at every point (sketch below)
    • This error is non-trivial because there is an error on the median and an error on the p-value integral
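
A sketch of the simplest ingredient, the binomial error on a toy-based p-value (illustrative; it does not include the error on the median):

```python
import numpy as np

def p_value_error(k, n):
    """Binomial uncertainty on p_hat = k/n from n independent toys."""
    p = k / n
    return p, np.sqrt(p * (1.0 - p) / n)

print(p_value_error(10, 100))  # (0.10, 0.03)
```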


48 of 102

Morphing PDFs

Varying the electron lifetime as a function of time, from a 300 µs to a 900 µs lifetime. Equal numbers of 40 GeV WIMPs and Rn222 for comparison.


49 of 102

  • Behaves as expected
    • Beta and B8 rates stay mostly constant
    • Can see the decay (dashed green) pop out relative to the other backgrounds (sum shown in blue) as a function of time
  • Trying to get a limit as a function of the initial Ar37 concentration
    • Running into high computation times (~60 s/toy)
    • Might need to finally figure that one out


Increasing the initial concentration of Ar37

50 of 102

Results?

  • All of my limits are worse than Scott's (they should be directly comparable)
  • Very strong time dependence
    • High-activity toys take ~10 days to reach the same statistics as the other toys
  • Do I have enough toys? (see the sketch below)
    • Is there a way to know how many toys it takes to compare at a certain level? (How does the standard deviation of the mean depend on the number of toys?)
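
One way to answer that, sketched with synthetic toy limits: the standard error of the mean falls as σ/√N, and a bootstrap gives the analogous number for the median:

```python
import numpy as np

rng = np.random.default_rng(7)
limits = rng.lognormal(mean=0.0, sigma=0.5, size=500)  # stand-in toy limits

print("SE of mean  :", limits.std(ddof=1) / np.sqrt(limits.size))

# Bootstrap the median to see how tightly N toys pin it down:
medians = [np.median(rng.choice(limits, limits.size)) for _ in range(1000)]
print("SE of median:", np.std(medians))
```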


51 of 102

Time dependent workspaces

  • Time dependence is modeled as a histogram with a random amount of livetime
    • Using flat or exponential rates for now
    • Can easily expand to annual-modulation studies
  • The likelihood function is an (S1, logS2) part times a time part; everything is separable (sketch below)
  • Working on having the band morph with changing livedays
    • Scott would like to explore a time-dependent electron lifetime (which would impact the band width)
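
A minimal sketch of that separable structure, f(S1, logS2, t) = g(S1, logS2) · h(t), with an illustrative flat-or-exponential time factor:

```python
import numpy as np

def h_time(t, livetime, tau=None):
    """Time PDF on [0, livetime]: flat if tau is None, else exponential decay."""
    t = np.asarray(t, dtype=float)
    if tau is None:
        return np.full_like(t, 1.0 / livetime)
    norm = tau * (1.0 - np.exp(-livetime / tau))  # normalization on [0, livetime]
    return np.exp(-t / tau) / norm

def event_pdf(s1, logs2, t, g, livetime, tau=None):
    """Separable model: (S1, logS2) part g times the time part h."""
    return g(s1, logs2) * h_time(t, livetime, tau)
```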


52 of 102

An example workspace

Only B8 and Rn222


53 of 102

Running Inside LZStats

  • Runtime: ~0.16 s/toy


54 of 102

Time dependent workspaces

  • Integrating the number of events is very computationally expensive
  • Looking at caching this (the integral is actually a constant once final run time is defined)


55 of 102

How time dependence was done in LUX

  • Uses a histogram built from step functions
  • Continuous losses are accounted for by making the livetime a fraction (seconds/hour)


56 of 102

Analytical Band Modeling - Method

  • Slice up the ER and NR bands as a function of S1
  • Compute the mean, standard deviation, skew, and normalization of each slice
  • Create a fit function (sketch below)
  • Compute a global fit, using the computed variables as initial values
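
A per-slice sketch, assuming a skew-normal shape in logS2 (the slides don't specify the exact fit function):

```python
import numpy as np
from scipy.stats import skewnorm, skew

def fit_inputs(logs2_slice):
    """Moments of one S1 slice, used to seed the global fit."""
    return (len(logs2_slice),       # normalization
            np.mean(logs2_slice),   # mean
            np.std(logs2_slice),    # standard deviation
            skew(logs2_slice))      # skew

def band_slice_pdf(logs2, norm, loc, scale, alpha):
    """Skew-normal model for one S1 slice of the band."""
    return norm * skewnorm.pdf(logs2, a=alpha, loc=loc, scale=scale)
```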


Panels: standard deviation, mean, skew, and normalization as functions of S1

57 of 102

Analytical Band Modeling - Results


Black line - “True” band

Histogram - model

58 of 102

Shape Varying NPs

  • Need a “continuum” of bands as a function of NPs of interest
  • Can then compute efficiency PDF
  • Greg’s analytic computation of bands?


59 of 102

Tritium results


  • 26.4 events excluded at 90% CL (sorry, I don't have the conversion to a reference cross section)
  • Tritium does have an impact on an NMM limit
    • Some toys fit up to ~100 tritium events even with none injected
    • The majority of toys fit to zero tritium (note the log scale of the plot on the right)
    • Currently there is no constraint term on tritium, but we could explore that as a possibility

60 of 102

LZLAMA -> LZStats

  • Trying to build PDFs using LZLAMA
  • Haven't been able to compile on lzlogin
  • Got output on Cori; now trying to use ALPACA modules to convert it into a PDF
  • Very inefficient; need to retest with the latest release
    • Need to find ways to reduce the file size and computation time
    • An output of 10⁶ events is ~1 GB and took 4 hours to run


61 of 102

Pre-generation of toys

  • The low-E ER group, inspired by the XENON result, wants to explore adding tritium as a fit component but not as a generation component
  • LZStats is not set up to handle this
    • Asking for the toy generation model to differ from the fit model is strange
  • Have a workaround that gets the desired results, but I want to make it work in the more general case


62 of 102

Tritium impact on LZ sensitivity

  • XENON1T best fit: 159 cts/(t·yr)
  • Implies ~145 counts in a 60-day exposure of LZ (159 × 5.6 t × 60/365 yr ≈ 146); the dominant background?
    • Compare to 102 Rn222 background events for the same exposure
  • Not clear whether their number includes efficiency


Tritium PDF

63 of 102

Shape Varying NPs

  • Idea: create histograms representing the "weight" of the ER and NR bands given the current NPs
  • Create a set of these band weights with the NPs varied around the nominal band
  • Use the moment-morphing function described a few weeks ago to interpolate between NP points
  • Multiply the band by the ER and NR sources


Ex: MDC3 ER band divided by Projected detector band

64 of 102

LZStats QoL improvements

  • Updated output format (lower print level)
    • Doing a fixed scan in interval : 0.1 , 30
    • Time to perform limit scan =================================>] TOY# = 1808 AvgTime = 0.015 s/toy Remaining Time (est.) = 0.0 ss
    • Real time 0:00:27, CP time 27.260
  • Scripts to plot toy datasets
  • Improved Model Inspector


65 of 102

Understanding Global Observables

  • Made an LZStats version of the Frequentist calculator to test changes to the global observable generation
  • Lots more slides being added here: slides
  • Next steps:
    • Make the frequentist calculator fit to the external constraint rather than the observed data (to eliminate the dependence on the observed data for sensitivity studies)
    • Determine why the fit values become much tighter than the constraint function (look at the difference between the Poisson and constraint functions)


66 of 102

Projected Sensitivity vs Livetime


67 of 102

Sensitivity study issues

  • Significantly increased evaluation time compared to MDC3
    • Jobs over 400 livedays hit the wall-time limit
  • Lower event rates compared to the table in the sensitivity paper
    • Need to cross-check the Rn and Kr numbers that were input to the sensitivity paper
    • Complicated by changes in NEST


68 of 102

If we have time - Moment Morphing of PDFs

  • Input some reference PDFs
  • Create an object that interpolates between them (sketch below)
  • Easy to do with WIMPs because we have the functionality to produce PDFs as a function of mass
  • Next step: extend to shape-varying NPs (the same principle applies)
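
A plain-NumPy stand-in for the interpolation step (in RooFit this is the role of RooMomentMorph; true moment morphing also shifts and scales the reference shapes, so the linear mixing below is a simplification):

```python
import numpy as np

def morph(param, grid, ref_hists):
    """Interpolate reference histograms to an arbitrary parameter value.

    grid: sorted 1D array of parameter values (e.g. WIMP masses)
    ref_hists: array of shape (len(grid), n_bins), one normalized PDF per point
    """
    grid = np.asarray(grid)
    i = np.clip(np.searchsorted(grid, param) - 1, 0, len(grid) - 2)
    f = (param - grid[i]) / (grid[i + 1] - grid[i])
    return (1.0 - f) * ref_hists[i] + f * ref_hists[i + 1]
```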


69 of 102

LZStats Work

  • Experimenting with different PLR calculators
    • We use the Frequentist calculator, but there is also a Hybrid one
  • Hybrid calculator trade-offs
    • Global observables behave "correctly" - constant at the mean value
    • Runs much slower - not sure why (it appears to be doing less work); the only difference I see is that multigen can't be used because the nuisance parameters vary
    • Can't handle unconstrained variables
      • Needs a "prior" distribution for the nuisance parameters


5/27/20

70 of 102

Test workspace results - Hybrid Calculator


Global observables

Nuisance parameters

5/27/20

71 of 102

Test workspace results - Frequentist Calculator


Global observables

Nuisance parameters

5/27/20

72 of 102

LZStats work

  • Working on LZStats tutorial
    • Everyone should join, it’ll be fun!
    • This Friday 9 MT (10 CT)
  • Built a generic model to test LZStats behavior
    • Full set of slides here: slides


5/20/20

73 of 102

The Model

RooAddPdf::EventModel[ mu_sig * pdf_sig + mu_bg1 * pdf_bg1 + mu_bg2 * pdf_bg2 ]

SIG, BG1, and BG2 components shown; BG2 overlaps significantly with SIG, but BG1 is well resolved

5/20/20

74 of 102

Constraint functions

RooGaussian::constraint_bg1[ x=mu_bg1 mean=a_bg1 sigma=3 ]

BEFORE LZSTATS: mu_bg1 = 10 L(-100 - 100)

AFTER LZSTATS: mu_bg1 = 42.9237 +/- 2.93583 L(13.5654 - 72.282)

a_bg1 = 50 C L(-INF - +INF) - NOTE: unchanged by running LZStats

RooGaussian::constraint_bg2[ x=mu_bg2 mean=a_bg2 sigma=3 ]

BEFORE LZSTATS: mu_bg2 = 5 L(-100 - 100)

AFTER LZSTATS: mu_bg2 = 4.3437 +/- 1.53807 L(-11.037 - 19.7244)

a_bg2 = 1 C L(-INF - +INF) - NOTE: unchanged by running LZStats

5/20/20

75 of 102

Best fit results

The mean here is way off from the constraint function - it should be 50! Is it somehow using the observed data as the new mean?

5/20/20

76 of 102

Comparing bands between ALPACA and LZStats

  • Solid lines were generated by Greg using NEST
  • Dashed lines were generated by me, using LZNESTUTILS
  • Had thought these would be identical
    • Is the difference significant?
  • The script to make the bands will now be in WS, so it is easy to add FC-style cut-and-count scripts


5/13/20

77 of 102

Global Observables Cont’d

  • Keeping global observables positive has been a challenge
    • Tried with a Gaussian (red), a Poisson (blue), and a bifurcated Gaussian (black)
  • The problem comes from event generation at a low level of RooStats/RooFit
  • Methods for generating events in RooFit (from the RooFit documentation):
    • Accept/reject
    • Inversion
    • Hybrid
  • Somehow this requires an integral
    • Which is not well defined in all cases


78 of 102

More LZStats improvements

  • Loading of pregenerated toys in progress
    • Need to understand which variables need to be set as well
  • More plotting / verbose output
  • Global observables?
    • They're not constant, and their form as a function of mu is important


5/6/20

79 of 102

Globals vs best fits

This just feels wrong...


Chosen global observable for a toy

Best fit rate for a toy

5/6/20

80 of 102

Global Observables

  • Currently we allow global observables to go negative (Gaussians centered at the predicted value with some spread)
  • IF global observables are used to generate toys, then they can produce negative PDFs
  • Exploring ways to keep global observables positive:
    • Poisson
    • Log-normal
    • Bifurcated Gaussian


Gaussian constraint (red)

Poisson constraint (blue)

Mean value (green)

5/6/20

81 of 102

Bugfix in LZStats

  • We forgot to set the flag that turns on the Poisson (extended) term in LZStats
    • Didn't catch it before because we were using constrained parameters
  • To troubleshoot, I built some tools:
    • Save the best-fit values of all parameters to a TTree
    • Save the toy datasets
    • Plot all of these in a script
  • Next step: loading toys


82 of 102

PLR on MDC3 data

  • Weird things are happening with both the TS distribution and the p-value
  • Needed to make tools to visualize why this is happening
    • Added functionality to LZStats to save the nuisance-parameter fit results for both conditional and unconditional fits


83 of 102

Fit values

Looking at the best-fit values of nuisance parameters for many different toys, we see this:

Where does the structure come from? Why do so many toys get best fits near very extreme values of this parameter?

Because it's unconstrained? A deeper RooStats bug?


4/21/20

84 of 102

More PLR work

  • Troubleshooting failing jobs at low WIMP masses
    • Mostly fit errors, so trying other minimizers or changing the settings of the current ones
    • Investigating just stopping them if they take too long
  • Studying the background model applied to WS data
    • Talk from the PLR group meeting - slides


4/14/20

85 of 102


Unweighted

86 of 102


Weighted

87 of 102

PLR Updates (MDC3 focus)

  • Implemented a new (Gaussian) smoothing function for histograms
  • PdfMaker is now YAML-configurable
    • Not "final," as there are still some hard-coded values
    • NEST parameters are exported but not imported
  • Built PDFs for the MDC3 WIMP search
  • Working on confirming the models
    • Need to better understand the plot on the right
  • Almost ready to run a limit


4/8/20

MDC3 WS data + model (“Xenon1T” style)

Unconstrained fit to the data

88 of 102

Using hist2workspace in LZStats

  • Running it does not work in default LZStats
    • Need to make LZStats more workspace-agnostic
    • Remove hard-coded references to specific observables like S1 and S2
  • When the workspace does get loaded, fits are very slow (~10-30 s/toy)
    • Most minimization methods do not converge or give errors
    • Could be because of degeneracies or wrong limits / starting values
  • Testing other minimizers that could improve the fit time


4/1/20

89 of 102

Quantifying LZStats speed

  • Compared different minimizers with the hist2workspace and vanilla_wimp workspaces
  • Some minimizers just didn't work at all
  • It takes about 5 times longer to fit the hist2workspace version
    • Could try to optimize the workspace
  • All tests used ROOT 6.20.00


4/1/20

vanilla_wimp (normal NLL function):

Minimizer                    Avg. minimization time (s)   Fraction with fit errors   Comment
Minuit migrad                3.20 ± 1.00                  0%
Minuit migradimproved        11.08 ± 3.50                 100%                       Status 1?
Minuit2 migrad               2.24 ± 0.70                  0%
Minuit2 simplex              0.40 ± 0.12                  100%                       Status 1?
Minuit2 minimize             2.23 ± 0.70                  0%
GSLMultiMin bfgs             3.54 ± 1.12                  0%
GSLMultiMin bfgs2            2.42 ± 0.77                  0%
GSLMultiMin steepestdescent  7.24 ± 2.28                  0%

hist2workspace (normal NLL function):

Minimizer                    Avg. minimization time (s)   Fraction with fit errors   Comment
Minuit migrad                45.4 ± 18.9                  2%
Minuit migradimproved        42.5 ± 45.4                  100%                       All fits gave status 4000 (failed?)
Minuit2 migrad               11.0 ± 2.9                   5%
Minuit2 simplex              1.6 ± 0.2                    76%                        Most have status 5
Minuit2 minimize             11.3 ± 3.2                   6%
GSLMultiMin bfgs             15.8 ± 7.4                   0%
GSLMultiMin bfgs2            11.4 ± 2.3                   1%
GSLMultiMin steepestdescent  35.5 ± 2.3                   0%

90 of 102

flamedisx

  • "Finding Dark Matter Faster with Explicit Profile Likelihoods"
  • GPU-based analytical response model
  • Uses TensorFlow to optimize the tensor multiplication
    • Works in 6D (S1, S2, 3D position, and time)
  • Have it running on NERSC; looking into how to modify the code to work on LZ data and NEST-like models


4/1/20

91 of 102

LZStats Work

  • Reading the ROOT release notes, I saw the comment "Depending on the compiler, on the instruction set supported by the CPU and on what kind of PDFs are used, PDF evaluations will speed up 5x to 16x" for version 6.20.00 (LZ is running 6.16.00 by default)
    • Benchmarking this new version looks alright in most tests so far
    • Speed improves by about 15% with no changes to the code
  • The notes also mention a tool to make workspaces from a simple script
    • Similar to what I wanted to do with YAML files, but it already exists in ROOT by default
    • Exploring getting a vanilla_wimp workspace running that way
    • Complications from the fact that it's built for CMS
  • Pregeneration of toys is still on my mind


3/25/20

92 of 102

Foam Cutting, etc.

  • 11 Pieces of foam cut radially
    • Need 96 total
    • Each piece then needs 2 more cuts
  • Not allowed at PSL anymore
  • Getting ready for LZStats on MDC3


3/18/20

93 of 102

Foam


94 of 102

Foam cutting at PSL

  • Three templates made
    • None of them are exactly the same size
    • The templates they are cut from are imperfect (mostly because the acrylic is imperfect)
  • Fitting this to a jig tomorrow


3/11/20

95 of 102


96 of 102

Work On Site and LZStats

  • All sensor cables inside breakout box
    • All except gas lines and grid cables connected to the flanges
    • Will check them out today
  • Getting bottom bellows up to the breakout box
    • Need to install PMT standpipe

LZStats:

  • PdfMaker has YAML configuration working
    • Stuck on how to implement and save the NEST configuration options (NEST has lots of free parameters)
    • Will commit soon


2/26/20

97 of 102

Underground Work

  • Bottom breakout box assembled and ready for leak check
  • Sensor breakout leak check in progress; hope to pull cables into the breakout soon
    • Map below
  • Stay extended until at least the end of February


2/19/20

98 of 102

LZStats

  • YAML config merged into the master branch
  • Working on getting the YAML config into workspace generation
    • LZStats and workspace generation are separate packages, so some code might need to be duplicated
  • Some bugs to work out (all found by Yitong, thanks!)
    • Different branches of LZNESTUTILS cause issues
    • RooFit is setting low-error parameters to constant instead of floating them - solved?


99 of 102

PLR Work

  • Ibles has recently made some pretty big changes to the structure of LZStats
    • He removed the WIMP parts from the main package to make it more general
    • New PdfMaker (used to call NEST and turn the data into a PDF)
  • I implemented YAML configuration in LZStats
    • Currently it only sets the statistics options (NToys, POI ranges, etc.)
    • Working on adding YAML functionality to PdfMaker


2/12/20

100 of 102

On site work

  • All top array cables are connected and resistances are okay
    • Austin (Brown) is repeating the tests done on the surface starting today
    • Only real issue is the one skin PMT with a ground braid touching the ICV
  • This week working on building the bottom breakout box


2/12/20

101 of 102

Work on site

  • Sensor breakout box assembly nearly complete
    • The last section to go on is a large valve, which will need to be rigged in place
    • Also, a tap got stuck in a flange yesterday; need to get that out
    • Can begin leak checking soon
  • Getting ready to begin the top cable install
    • Set back by delays with the vacuum cross
    • The modified cross might be installed as early as tonight
    • That means we could begin testing and routing cables on Tuesday


1/23/20

102 of 102

LZStats

  • Pregeneration of toys
    • Have a temporary solution in place but would like to make sure it works in the general case
    • While working on that I found a convenient way to save the truth parameters from generation
  • Workspace visualizer
    • Wanted a way to see how changing parameters adjusts the model
    • Each nuisance parameter can be adjusted on the fly
    • Can refit the model to the data holding different parameters fixed


1/23/20