Fields per item: Item number; Reviewing person; Document; Question (Q) or Comment (C); Description; Project Responder; Project Response; Closed/Open (reviewer); Additional Reviewer Feedback.
Item 1 (GLC, Primary Report, Q): What is the thermal stability of the echelle grating / optical bench over 12 and 24 hours? Response (JW/AH): I just pulled up the last 24 hours of temperature data for the bench; the peak-to-valley temperature ranges vary from 6-9 mK over the last 24 hours, and about half of that is trend. Addendum: see plots at the link below showing the change in temperature of the echelle grating per day for all KPF data and for KPF Era 4. The 10 mK threshold in these plots is for reference and is not part of the operational strategy. https://keckobservatory.atlassian.net/wiki/spaces/KPF/pages/2569240622/KPF+Operations+Review+-+Plots+for+RIX+Responses
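As an illustration of the quoted numbers (not the actual KPF telemetry code; the function name and the synthetic data are assumptions), a minimal sketch of how a peak-to-valley range and its linear-trend component can be separated in a temperature series:

```python
import numpy as np

# Hypothetical sketch: decompose a 24 h bench temperature series into its
# peak-to-valley range and the part attributable to a linear trend.
def pv_and_trend(t_hours, temp_mK):
    pv = temp_mK.max() - temp_mK.min()                    # total P-V range
    slope, _ = np.polyfit(t_hours, temp_mK, 1)            # mK per hour
    trend = abs(slope) * (t_hours.max() - t_hours.min())  # drift over window
    return pv, trend

# Synthetic example: slow drift plus a diurnal-like wiggle and noise
rng = np.random.default_rng(0)
t = np.linspace(0, 24, 1440)
temp = 0.15 * t + 1.5 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.3, t.size)
pv, trend = pv_and_trend(t, temp)
print(f"P-V = {pv:.1f} mK, linear-trend share = {trend:.1f} mK")
```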
Item 2 (GLC, Primary Report, Q): Did the operating temperature of the spectrograph change after installation of the thermal enclosure? Response (AH): The temperature of the spectrometer bench is influenced by multiple sources and sinks: the cryostat cooling systems inside the vacuum chamber (CCR or LN2; the latter provided some cooling of the hallway), the hallway temperature, and the presence of the thermal enclosure (passive after SM3; actively controlled after SM4). Active thermal control in the enclosure was turned on just after SM4. The average temperature before this (KPF Era 3) was about 1 deg C lower than after (KPF Era 4, the current era). See the plot here: https://keckobservatory.atlassian.net/wiki/spaces/KPF/pages/2569240622/KPF+Operations+Review+-+Plots+for+RIX+Responses
Item 3 (GLC, Primary Report, Q): How are you controlling the temperature inside the enclosure? Are you always "in control", i.e., with heater power at neither 0% nor 100%? Response (JW): Now that control is enabled, we are controlling two sets of heating pads: one set on the ceiling and one on the walls (both near and far walls). The wall control has been active since installation; the near and far walls have different temperatures, and the control sensor is on the far wall (less influence from the hallway). That heater has stayed in range. The ceiling heater has saturated (hit 100%) a couple of times since we began using it. In fact, it saturated on the afternoon of Jan 17 and is still saturated as I write this.
Item 4 (GLC, Primary Report, C): Generally, when measuring RV standards, one tries to average out the stellar acoustic modes, either by using a long exposure time (~15 minutes) or by averaging several observations covering about the same time span. If I understand correctly, in some cases you have only single, few-second observations. These will carry (among other things) the imprint of these oscillations, so the direct comparison with data from other spectrographs might be misleading (certainly with HARPS & ESPRESSO, where acoustic modes are averaged out). Response (LW/HI): Thank you, noted! We are in the process of updating Table 2.2 and surrounding text to clarify this for each star and each facility.
In general, for standard stars, we try to either cover an oscillation mode or take several exposures to average over it. In Table 2.2 we list the KPF Era 2.0 RVs, which cover between 2 and 6 minutes of observation for every star. We bin the RVs nightly for the figures and RMS calculations. Figure 2.9 has a typo in the total exposure time. The captions of Figures 2.8-2.12 list exposure times and typical numbers of observations for all KPF eras. KPF Era 2.0 offers the best comparison to other instruments. The details of the observation sets for KPF Era 2 are listed below, with a sketch of the duration arithmetic after the list:

In Figure 2.8 (HD 185144), the exposure times per set of 3 observations are 16 s at standard readout: (16 + 47) * 2 = 126 seconds. Each observation set covers ~2 minutes.
In Figure 2.9 (HD 166620), the exposure time is incorrectly listed as 12 seconds. The correct details are a single observation with an exposure time of 315 seconds, with a few observations at 500 and 600 seconds.
In Figure 2.10 (HD 10700), the exposure times per set of 5 observations are 12 s at standard readout: (12 + 47) * 5 = 295 seconds. Each observation set covers ~5 minutes.
In Figure 2.11 (HD 55575), the exposure times are 11 seconds in sets of 6: (11 + 47) * 6 = 348 seconds, so about 6 minutes are covered.
In Figure 2.12 (HD 34411), the exposure times are typically 90 seconds in sets of 4-6; each observation set covers ~372 s, or about 6 minutes.
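A minimal sketch of the set-duration arithmetic used in the list above, assuming a fixed 47 s readout overhead counted once per exposure (the helper name is ours, not from the DRP):

```python
# Wall-clock duration of a back-to-back exposure set, counting a fixed
# readout overhead after every exposure (convention assumed here).
def set_duration(exp_s, n_exp, readout_s=47):
    return (exp_s + readout_s) * n_exp

# The HD 10700 sets from Figure 2.10: five 12 s exposures
print(set_duration(12, 5))  # 295 s, i.e. ~5 minutes per set
```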
Item 5 (GLC, Primary Report, C): Do I understand correctly that you do not monitor the fiber entrance? I understand that you have aligned it, but misalignments might occur due to mechanical or thermal factors; how do you detect and correct for them? In any case, closely monitoring the fiber entrance and the centering of the star on top of it is, in my opinion, strongly advisable. Response (JW): Because the Fiber Viewing Cameras (FVCs) have proven hard to interpret, we do not monitor the star-to-fiber alignment at all. The alignment check we do perform (rastering the star across the fiber in 2D and monitoring flux) is time consuming and requires both photometric and good seeing conditions.
Item 6 (GLC, Primary Report, Q): We see exquisite precision in KPF transit measurements, i.e., a single pointing. Do you have examples of RV variations in a single night with many different pointings on the same RV standard? Response (HI): Yes; on Sep 22nd 2024 we visited 3 standard stars with 5-6 visits each. Each visit had several observations to attempt to average over oscillation modes. The RMS ranged from 0.3 to 0.7 m/s. There is evidence for some structure in the RVs.
Item 7 (GLC, Primary Report, Q): There are 4 heating stages between the cold plate and the CCD; how do you make sure there is no interference / resonance between these stages that could cause oscillations in the CCD temperature? Response (JW): I'm not sure if/how the design addresses this, but we don't see this in practice. We do see it on the etalon, where control points interfere with one another.
Item 8 (GLC, Primary Report, Q): A drift of 200 m/s for a 0.5 K temperature change in the CCD, as reported on page 94, seems excessive. What is the fixed point of the CCD? The center? One corner? Response (SH): Our best measurement of the RV response to thermal fluctuations in the CCD shows a ~75 m/s drift for a ~4 K change in the detector temperature (~2 cm/s/mK). The 200 m/s thermal response shown in Figure 4.2.6 is from the spectrometer temperature varying by ~0.5 K (measured above the GREEN camera), not the CCD.
Item 9 (GLC, Primary Report, Q): What is the value of the detector CTE (Charge Transfer Efficiency)? Responder: SH (no response recorded).
Item 10 (GLC, Primary Report, Q): Do you measure the CCD RON routinely? Does it change with time? Response (JW/AH): Yes, read noise is an output of the DRP, and we can plot it over time. We see variations, mostly on GREEN under the current conditions. Addendum: see plots at the link below. https://keckobservatory.atlassian.net/wiki/spaces/KPF/pages/2569240622/KPF+Operations+Review+-+Plots+for+RIX+Responses
Item 11 (GLC, Primary Report, C): Are you using an active anti-vibration system? In case you upgrade to a Cryotel cooler, I advise including the anti-vibration system in the package. Response (AH): The CCDs are isolated from vibrations in two ways. First, the spectrometer bench sits on five Minus K 800CM-1 isolators inside the vacuum chamber; this system is tuned to suppress frequencies above 0.5 Hz. We can provide more information about the design if desired. Second, the cryostat cold chains include braided copper straps that significantly reduce vibrations transmitted through the cold head. We tested the CCRs (which are themselves designed for low vibrations, relative to other CCRs) using a test cryostat and a dummy CCD with a fast interferometer measurement system. This showed variable oscillations of 40 nm peak-to-peak amplitude perpendicular to the CCD face with a ~1 sec period. (We can provide the test report if desired.) These vertical displacements are expected to average down for longer exposures. We don't have measurements of horizontal displacements, but they are likely of a similar scale. For a replacement cooling system, it seems advisable to perform similar vibration tests. Gaspare -- could you provide information about the anti-vibration systems that you're familiar with? Reviewer feedback: A viable AVC (Active Vibration Cancellation) device is at this link. A report of a vibration test on HARPS can be found here: https://www.eso.org/sci/publications/messenger/archive/no.192-mar24/messenger-no192-38-40.pdf
Item 12 (GLC, Primary Report, C): Why do you need the agitator for the star light? You could potentially gain a little in throughput without it. Response (SH): We have not explicitly shown that the noise 'ceiling' of star light is improved when the agitator is on, but initial laboratory testing with broadband sources implies the agitator is helping at the sub-percent level. As multiple KPF science cases involve reaching a stellar spectral SNR of >1000, we believe the agitator ensures we can reach these levels without being dominated by speckle noise.
Item 13 (SL, Primary Report, Q): Etalon: Is there an existing routine maintenance and monitoring plan for the etalon subsystem, e.g., a schedule for SuperK PCF replacement, monitoring/alerts on the heater system, etc.? Response (JW): Yes; now that the etalon is properly telemetered, it is in the alarm system. There are alarms on each of the heater loops which indicate if the heater power goes to 0% or 100%. We do not replace the SuperK on a schedule, but instead wait for a failure or degradation. If we start seeing a pattern, we may change to a regularly scheduled refurbishment.
Item 14 (SL, Primary Report, Q): ThAr: Definitely a useful calibration source, especially when the LFC and etalon are offline or unreliable on a given day. However, given the known contamination issues with the Green Scientific ThAr lamps and the challenge of procuring uncontaminated lamps, has the team considered the viability of other HCLs? Response (BJ): Yes, KPF is equipped with a Uranium-Neon (U-Ne) hollow cathode lamp, and data are collected routinely. However, due to limited available development cycles within the DRP team, this source has not yet been fully characterized or incorporated as a primary wavelength calibration source in the pipeline.
Item 15 (XD, Primary Report, Q): For the KPF data on the ultra-short-period planets (TOI-6324b and TOI-6255b), what are the residual RMS values after the planet fit, and what are the stellar spectral types? What is the mean activity level of those stars (log R'HK)? Response (HI): Both TOI-6324 and TOI-6255 are M dwarfs, and both are most likely thick-disk members, so both are very quiet and old. We do not have R'HK for either system; I'm not even sure we turned the Ca HK spectrograph on for these two targets. The residual RMS is 1.4 m/s for TOI-6324 and 1.5 m/s for TOI-6255.
Item 16 (XD, Primary Report, Q): In the section on standard stars, it is not clear whether all the outliers are related to thermal instability of the green CCD coupled to thermal instability of the etalon (used as simultaneous reference, I guess), or whether only a few outliers are linked to that problem and the other outliers are of unknown origin. Response (BJ): Only a subset of the RV outliers can be directly linked to thermal instabilities of the green CCD and periods of etalon instability; this corresponds to a two-month period in 2025. Other outliers are not obviously correlated with these events and are likely due to a combination of calibration gaps, DRP version heterogeneity, and intermittent pipeline failures. Thermal instability is a major contributor, but it does not explain all observed outliers. Status: Closed.
Item 17 (XD, Primary Report, Q): Do you observe several standard stars during the same night, and if one of them shows an outlier, do all the other standards show an outlier as well (even if not the exact same value)? Response (BJ): Yes. Multiple standard stars are typically observed per night, and when outliers are caused by instrument- or calibration-level effects, they generally appear as correlated deviations across several standards, though not with identical amplitudes. Isolated outliers affecting only one standard are also observed and are likely due to target-specific or processing-related effects rather than global instrument behavior. Status: Closed. Reviewer feedback: I would advise a global reprocessing. Although this can be very time consuming, it is very difficult to isolate the root cause of outliers.
Item 18 (XD, Primary Report, C): I would encourage you to start diagnosing the most significant outliers, as those are likely the easiest to understand, and this could give some clues for understanding the majority of outliers. Response (BJ): We agree. The largest RV outliers are the most informative and are often tied to discrete instrumental or calibration events. We are prioritizing these cases to identify root causes, with the expectation that this will clarify the origin of the broader population of outliers and inform targeted DRP improvements. Status: Closed.
Item 19 (XD, Primary Report, Q): Could you give details about how you assess that an LFC spectrum is usable? Response (BJ): The DRP includes limited automated, SNR-based QC checks for LFC data (e.g., monitoring order-level flux and SNR statistics used in the wavelength solution and drift stages). However, the observed diversity in LFC spectral characteristics, including chromatic flux variations, continuum structure, and saturation, has made it difficult to define robust, global pass/fail thresholds. As a result, these automated checks are currently used as indicators rather than hard acceptance criteria, and the final usability assessment still relies on manual review of QLP diagnostics. Status: Open.
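A minimal sketch of the kind of per-order, SNR-style indicator described above; the thresholds, array shapes, and function name are illustrative assumptions, not the DRP's actual checks:

```python
import numpy as np

def lfc_qc_indicators(order_flux, min_median_flux=1e3,
                      sat_level=6.0e4, max_sat_frac=0.01):
    """Per-order indicators for an extracted LFC frame.

    order_flux : (n_orders, n_pixels) array of extracted counts.
    Returns boolean arrays flagging low-flux (chromatic dropout) and
    saturated orders; indicators only, not hard pass/fail criteria.
    """
    median_flux = np.median(order_flux, axis=1)
    sat_frac = np.mean(order_flux >= sat_level, axis=1)
    return {
        "low_flux": median_flux < min_median_flux,
        "saturated": sat_frac > max_sat_frac,
    }
```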
Item 20 (XD, Primary Report, Q): Did you try to use a black body minus filter after the etalon (in the divergent beam!) to give more weight to the blue edge? Are you sure that your system is well aligned? We were losing significant flux on the HARPS-N etalon because, with aging, the fiber was damaged. Response (SH): We have multiple commercial spectral flattening filters in the beam to try to balance the blue and red ends of the spectrum, but have not explored more exotic filtering strategies (largely due to budget constraints). For a 'fresh' supercontinuum source with clean fibers, we've found this has historically produced a nicely flat spectrum, but we welcome suggestions for more custom solutions. Status: Closed. Reviewer feedback: We do not use any custom-made filters on HARPS, HARPS-N, or ESPRESSO.
Item 21 (XD, Primary Report, C): I would encourage the use of a Uranium-Neon lamp, as it is less problematic than the new Th-oxide HCLs and has very similar spectral content. Given the problems with the LFC and etalon, it is mandatory for the team to develop a robust wavelength-solution scheme based on HCLs that can provide ~50 cm/s precision on the long term (see the HARPS-N calibration analysis, Dumusque+ 2025; we see similar precision on HARPS). This should be a Top Tier Requirement, as ESPRESSO demonstrated extreme stability without the use of an LFC. Response (BJ): We agree with the recommendation. Uranium-Neon appears to be a cleaner and more reliable HCL option, and developing a robust HCL-based wavelength calibration path is essential. While it may not fully replace the LFC in the long term, it should be elevated to a top-tier priority to ensure reliable wavelength solutions when the LFC or etalon are unavailable. Status: Open.
Item 22 (XD, Primary Report, Q): Regarding poor performance on faint targets: is it just an SNR problem, or something else? How do the ETC predictions and real observations compare? How does the SNR measured as the RMS in a stellar continuum region differ from the SNR estimated from photon plus read noise (flux / sqrt(flux + RON^2))? In saturated telluric lines at the red end, do you reach flux = 0 or something different? If not, it could be due to background contamination. Response (LW): For Kepler-172 (Teff = 5394 K, Vmag = 14.70, exp_time = 1200 seconds), KPF-etc predicts typical RV errors in each order of > 20 m/s, but a total predicted RV error of 3.8 m/s. Intriguingly, the per-measurement intrinsic error is 4.5 m/s, and the RMS of the RVs for this target is 20 m/s. I modified the KPF Operations Review Overleaf to reflect this comparison. See the plot of telluric lines at https://keckobservatory.atlassian.net/wiki/spaces/KPF/pages/2569240622/KPF+Operations+Review+-+Plots+for+RIX+Responses Status: Open.
Item 23 (XD, Primary Report, Q): In the quick-look version of the DRP that I suppose runs at Keck, what quality-control information is available on calibrations? Is it sufficient to assess the performance of KPF in terms of RV precision? On ESO PRV instruments, the ESPRESSO DRP running on the fly during calibration sequences gives very useful information to technical staff; a failing calibration is very often due to an instrumental issue that requires an intervention. Perhaps we could discuss during the review the quality control performed by the ESPRESSO DRP (similar to the original HARPS and HARPS-N instruments). Response (BJ): The KPF QLP DRP currently runs at Caltech and processes files within minutes of readout, producing near-real-time diagnostics (e.g., SNR, SED, saturation flags, and per-exposure RVs) that are available for inspection on Jump. We also maintain a mechanism to manually flag individual calibration exposures as "junk" so they are excluded from downstream processing. However, these tools currently provide limited operational leverage because they are not routinely monitored during calibration acquisition, and WMKO does not run or review the QLP products in real time on the mountain. As a result, failing calibrations are often identified only after the fact, rather than triggering immediate intervention. Status: Open.
Item 24 (GLC, Primary Report, Q): Pg. 145, Fig. 6.9: the RV difference between orderlets seems above photon noise. Have possible systematics among orderlets been investigated? Response (BJ): Orderlet RVs are derived independently and combined using uncertainty-based weighting. The observed orderlet offsets are largely constant and are consistent with known orderlet LSF asymmetries and the different sensitivity of Gaussian fits to individual lines (wavelength solution) versus Gaussian fits to the summed CCF. These offsets should not strongly impact relative RV performance under the assumption that this behavior is consistent, but percent-level changes in orderlet flux ratios or LSF behavior can introduce m/s-level shifts and likely contribute to inflated combined RV uncertainties and/or systematics. See the plot at https://keckobservatory.atlassian.net/wiki/spaces/KPF/pages/2569240622/KPF+Operations+Review+-+Plots+for+RIX+Responses (RV units are km/s).
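A minimal sketch of the uncertainty-based weighting described above, i.e. an inverse-variance weighted mean over orderlets (the function and variable names are assumptions):

```python
import numpy as np

def combine_orderlet_rvs(rv, sigma):
    """Inverse-variance weighted mean of per-orderlet RVs.

    rv, sigma : per-orderlet RVs and 1-sigma uncertainties (same units).
    A constant per-orderlet offset cancels in *relative* RVs only while
    it stays constant; if the weights shift, the offset leaks into the mean.
    """
    w = 1.0 / np.asarray(sigma) ** 2
    rv_comb = np.sum(w * rv) / np.sum(w)
    sigma_comb = 1.0 / np.sqrt(np.sum(w))
    return rv_comb, sigma_comb
```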
Item 25 (GLC, Primary Report, Q): If the "Green" ThAr lamps suffer too much from contamination, have you tried other suppliers, e.g., Juniper (or others)? Response (SH): To the best of our knowledge, no other vendors produce these lamps regularly anymore. Reviewer feedback: http://photronthailand.com/product/p858a-hollow-cathode-lamp-hcl-thorium-argon-gas-fill/ and https://www.shelyak.com/produit/el0037-ampoule-thorium-argon/?lang=en CFB: Photron Thailand doesn't sell these, despite what their website says; I've tried. Not sure they are better quality than Green, though.
Item 26 (GLC, Primary Report, C): Given the documentation shown, the wavelength calibration might be a prime candidate for the RV instability. Right now, I understand KPF is relying mostly on the LFC for wavelength calibration; however, this source has proven quite unreliable so far. I would suggest relying on more conventional wavelength calibration strategies (e.g., ThAr + FP) while addressing the problems with the LFC in parallel. This suggestion has implications on both the hardware side (test alternative ThAr suppliers and alternative HCL types such as U-Ne; stabilize the FP) and the software side (develop a robust recipe to routinely compute ThAr wavelength calibrations). Response (BJ): We agree with this assessment. Wavelength calibration is a leading candidate contributor to the observed RV instability, and the current reliance on an intermittently reliable LFC is a vulnerability. In parallel with efforts to improve LFC performance, we believe it is necessary to strengthen a conventional calibration path based on hollow cathode lamps and a stabilized Fabry-Perot, including U-Ne as a primary option. This has clear implications for both hardware (qualification of alternative HCLs, improved FP stability) and software, where the DRP must support a robust, routinely validated HCL-based wavelength solution capable of delivering long-term RV stability.
Item 27 (GLC, Primary Report, C): Bridging available LFC wavelength solutions with FP spectra works only under the assumption that the FP is stable within the measurement error. From what I understand from the report, this assumption is not valid. Response (BJ): We agree that bridging LFC wavelength solutions using the FP/etalon is only valid when the etalon is stable at the required level. For KPF, we believe the etalon is stable for the majority of operations, but there was a documented interval when its thermal control was degraded and the stability assumption did not hold. During those periods, etalon-based bridging is not reliable, and alternative calibration paths (e.g., HCL-based wavelength solutions) are required.
Item 28 (GLC, Primary Report, Q): Do you measure the LFC spectral background? Is it changing significantly with time? Do you subtract / fit this background? Response (GG): We do not yet explicitly track or subtract the LFC spectral background. Our current background subtraction routines (scattered light, sky background) are far from mature, and significant work is needed on this front. We agree that tracking the LFC spectral background will provide useful diagnostic information.
Item 29 (GLC, Primary Report, C): The LFC is very sensitive to environmental conditions (despite statements from Menlo). For example, on HARPS we substantially increased the up-time of the LFC once we started circulating a coolant through the breadboards (they were equipped to do so), maintaining a stable system temperature. Response (SH): We have yet to do detailed cross-comparisons of the environmental parameters with LFC operational issues, but this analysis would be a useful first step. We originally explored installing the LFC within a separate thermally isolated (uncontrolled) enclosure, but it was not obvious at the time that the Menlo system required this. Reviewer feedback (SL): We have found the same sensitivity to environmental conditions with the NEID LFC. While our cleanroom is thermally stable, we have found the LFC to be quite sensitive to rapid humidity changes (which we have less control over) or to thermal changes if work is being done in the cleanroom.
Item 30 (GLC, Primary Report, Q): Pg. 111: <<lack of rapid support and response by Menlo>> -- do you have a maintenance contract with Menlo? I ask because in our experience the response is fast, although sometimes the solution requires weeks. Response (JW): We are in the process of putting that in place. We initially were waiting on Menlo to finish "delivering" the comb by providing the blue light and some semblance of reliability. Now it is just waiting on bureaucratic process on our end, I think. My biggest complaint about the Menlo responses is lack of detail: I will send a request detailing what I'm seeing in the telemetry, and the response will come back as nothing more than a note saying it is fixed, with no details. We are not building up a troubleshooting knowledge base from the responses. Reviewer feedback (AS): This 100% matches our experience, and MAROON-X has had a maintenance contract with Menlo since day one. There is extreme resistance from Menlo to providing an explanation of 'what happened' and 'how it was fixed' after an issue has been resolved. Part of the problem seems to be that a lot of the issues are in the software implementation and require using low-level software that Menlo says end users should never use, since you can break components (bypassing safety mechanisms).
Item 31 (XD, Primary Report, Q): As part of the DRP development, you want to use the etalon as a bridge. Response (BJ): The etalon is used as a bridging reference between absolute wavelength solutions (e.g., from the LFC or HCLs) when those solutions are available. This approach assumes the etalon is sufficiently stable over the relevant timescales; when that assumption breaks down, the bridging strategy is no longer valid and alternative calibration paths (e.g., HCLs) are required. Status: Closed.
Item 32 (XD, Primary Report, C): Sky subtraction is only important when the sunlight reflected off the Moon has a velocity similar to the observed target's, and when the weather conditions are poor so that moonlight is reflected by clouds. This is quite rare, and it is possible to do it at the CCF level if the relative efficiency between the science and sky fibers is known. Response (SH): We have plans for both a direct (spectral) and a CCF-based sky subtraction approach, though we agree this may not be needed for many of the PRV-centric applications of KPF (Roy & Halverson et al. 2020 show the impact can be at the cm/s level, but likely not at Mauna Kea). That said, KPF performs a significant amount of faint-star science, where sky subtraction can be more critical. Status: Closed.
Item 33 (XD, Primary Report, C): Using flat-relative extraction simplifies the extraction process significantly and should be considered given the current problems of the DRP. Response (GG): We agree, and we plan to implement flat-relative extraction in a future version of the DRP. When mature, the DRP will have capabilities for box extraction, optimal extraction, and flat-relative extraction. These various extraction methods will provide useful points of comparison for troubleshooting problems. Status: Open.
Item 34 (XD, Primary Report, C): For RV precision, micro-tellurics and detector anomalies are not the main systematics that you observe (except in the case of very bad detector CTI: five nines, as for the HARPS-N detector, is problematic, while six nines, as on HARPS and ESPRESSO, is OK). I would rather investigate whether you have illumination issues; a good way to see those is to measure the widths of the orders in cross-dispersion in a raw frame. To test for CTI, you can observe consecutive etalon frames (when stable) while progressively reducing the flux by up to a factor of 1000; you should see strong RV departures correlated with the flux ratio. Response (SH): We have conducted variants of this test using both LFC and etalon data, and most recently using SoCal spectra with a dynamic range of ~100 in flux. For the solar spectra we see no measurable change in the RVs, though the impact should be worse for the etalon / LFC since they are not continuum-dominated spectra. We will revisit these data once the current data reduction pipeline is caught up with processing. Status: Open.
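A minimal sketch of the flux-ramp test suggested above: fit the RV departure against the log of the flux ratio in a dimming sequence, where a significant slope would indicate flux-dependent systematics such as CTI (the names and the linear model are assumptions):

```python
import numpy as np

def flux_ramp_slope(flux_ratio, rv_ms):
    """Slope of RV departure vs. log10(flux ratio) for a dimming sequence.

    flux_ratio : flux relative to the brightest frame (1 down to ~1e-3).
    rv_ms      : measured RVs of the consecutive etalon frames, in m/s.
    Returns the fitted slope (m/s per dex) and the RMS of the residuals.
    """
    x = np.log10(flux_ratio)
    slope, intercept = np.polyfit(x, rv_ms, 1)
    resid = rv_ms - (slope * x + intercept)
    return slope, np.std(resid)
```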
Item 35 (XD, Primary Report, C): I am not sure I understand Figure 4.1. Are the two subplots inverted compared to the description? Response (AH): Good catch; you are correct. The top panel of Figure 4.1 shows proper performance of the tip-tilt system; the bottom panel shows performance when one axis is not operating. Status: Closed.
Item 36 (XD, Primary Report, C): The ESPRESSO pipeline allows detector CTI to be corrected well. We encourage the DRP team to see how this is done in the ESPRESSO pipeline. Response (BJ): We agree; correcting for detector CTI is important for long-term RV stability, and the ESPRESSO pipeline provides a mature and well-tested implementation. Reviewing and adapting the ESPRESSO approach would be valuable for the KPF DRP. However, we do not believe CTI is currently the dominant limitation on KPF RV performance, and it is therefore not the highest-priority issue at this time. Status: Closed.
Item 37 (SL, Primary Report, C): LFC: Even with the Menlo maintenance contract (which I recommend), given the time-zone difference between Arizona and Germany, we have notably increased LFC uptime with some basic knowledge sharing with Menlo (e.g., training to allow our team to make small adjustments to the photodiode locking threshold as the PCF ages). That said, formal knowledge sharing has been challenging at times; most of it has come from learning while supporting Menlo on-site maintenance visits. Response (JW): Agreed. The time-zone difference is essentially perfectly out of phase. We will push for more training during upcoming visits. The last couple of on-site visits have coincided with servicing missions, which made it hard to devote people to the LFC. We have not yet fully executed the maintenance contract and have not had any associated training that would nominally come with it.
Item 38 (CFB, Primary Report, Q): Fig. 2.10 shows tau Ceti RVs with significant trends across eras. What is the RMS if those trends are fit and removed? Response (HI): The RMS for Era 1, after removing a linear trend, is 2.62 m/s; for Era 2 it is 1.13 m/s. It is unclear why this is the only standard star that shows these trends.
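A minimal sketch of the detrended-RMS computation quoted above (array names are assumptions):

```python
import numpy as np

def rms_after_linear_detrend(t, rv):
    """RMS of an RV series after fitting and removing a linear trend."""
    coeffs = np.polyfit(t, rv, 1)          # slope and intercept
    resid = rv - np.polyval(coeffs, t)
    return np.sqrt(np.mean(resid ** 2))
```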
Item 39 (CFB, Primary Report, Q): Section 2.2.4 states that bright targets are unaffected by the increased RN in the CCDs. But it sounds like this is not true read noise, but rather correlated noise in the amplifiers. If that is true, are you sure that bright targets (which would traditionally be limited by photon noise, not RN) are not also being affected? Response (BJ): You are correct. The excess noise observed in the CCDs is not purely uncorrelated read noise, but includes correlated amplifier noise. As a result, even bright targets that are photon-limited can be affected, particularly through subtle impacts on line shapes, extraction, and RV precision. We therefore agree that bright targets are not fully immune to this noise source.
Item 40 (CFB, Primary Report, Q): Section 2.4.3 discusses "ready for night" checks. Are these done daily, or only when KPF is scheduled for observations? Response (JW): Section 2.4.3 is about proposed operations. What we do now is that the summit day crew runs a script called "testAll" for any instrument which is in use tonight or which is on "standby" (basically, available for ToOs). This means it is run for KPF daily, except when KPF is taken off sky for servicing missions. testAll for KPF checks software, telemetry, and the alarm system. If something comes up as an error, the summit staff calls the SA for the night. This is essentially a second check that no alarms are active (beyond the text, email, or Slack notifications) and a check that software is running properly. This test can be executed at any time, so it is robust to the instrument being actively used (i.e., for automated cals).
Item 41 (CFB, Primary Report, Q): What were the lab measurements of RN prior to shipping to the observatory? Section 4.2.4 only references the STA test reports. Response (AH): We have measurements of RN by the manufacturer at 100 kHz read speed. These were 3.3 e- RMS (Green Amp 1 and Amp 2) and 3.0-3.1 e- RMS (Red Amp 1 and Amp 2). Both CCDs are now operated at 200 kHz, though: we switched from 4-amp 100 kHz to 2-amp 200 kHz readout because of one poorly performing amplifier on each chip.
Item 42 (CFB, Primary Report, Q): Power cycling the Archons is a recurring theme. Why is this necessary? For comparison, the NEID Archon has not been power cycled since 2022 (although we have disconnected and reconnected the external software link multiple times). I would be extremely hesitant to allow observers to power cycle the detector system, as was suggested happened on page 75 (Section 4.2.5). Response (JW): This is no longer common; in fact, the last time it happened may have been for that event in Section 4.2.5. We had some trouble with "start state errors" for the detectors, which may be related to the TTL signals that trigger the detector and the shutter used for precise exposure timing. We now have purely software-based recovery for these start state errors, which happen in roughly 1 in 400 exposures (on either system), meaning something of order 1 in 200 exposures is affected (one of the two detectors experiences the error). To be clear, power cycling was never an observer function; it would only be executed by an SA.
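A one-line check of the rate arithmetic above, assuming independent start-state errors on the two detector systems:

```python
p_single = 1 / 400                  # error rate per detector system
p_either = 1 - (1 - p_single) ** 2  # probability at least one detector errors
print(round(1 / p_either))          # ~200, i.e. ~1 in 200 exposures affected
```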
Item 43 (CFB, Primary Report, Q): What is the nominal vacuum pressure in the cryostats with the ion pumps in operation? With the ion pumps off? Response (AH): The cryostats have pressures around 10^-8 mbar with the ion pumps on; see a time series of cryostat pressure measurements at the link below. If they weren't on, the pressure in the cryostats would rise to the level of the main vacuum chamber, which equilibrates at about 10^-4 mbar a few weeks after closing the vacuum chamber at the end of a service mission. The main reason for the ion pumps is that they completely suppress convection and stabilize the CCD temperatures to about 1 mK when the cooling systems are cooperating. Without the ion pumps, we've seen detectable CCD temperature fluctuations (I tried to find examples, but wasn't able to put a quantitative estimate to this). We separately measured that the thermal sensitivity of the CCDs is ~4 cm/s per mK, based on larger-amplitude thermal excursions. https://keckobservatory.atlassian.net/wiki/spaces/KPF/pages/2569240622/KPF+Operations+Review+-+Plots+for+RIX+Responses
Item 44 (CFB, Primary Report, Q): What temperature monitoring is installed in the tunnel/room where KPF is installed? Response (JW): Initially there was inconsistent coverage of the hallway temperature. There were sensors on components, but those were poor samples of the ambient air. Our ambient air temperature probe was mounted on the vac cart and was likely influenced by heat leaking from the cart (see the description in the doc about how pump waste heat is poorly coupled to glycol). At various times I have added several sensors to get the ambient air temperature: 1) on top of the thermal enclosure, 2) in front of the thermal enclosure, 3) near the etalon, and 4) at the hallway entrance. This provides three locations along the length of the hallway and one offset vertically.
Item 45 (CFB, Primary Report, Q): Is there a known impact on RVs from cycling the pressure in the main vacuum chamber? Does this trigger a new 'era', or when the vacuum pressure is restored to nominal, can the RV streams be analyzed without an offset? Response (AH): Yes, this generally does trigger an RV offset and a new era. The reason is that at ambient (or even a few mbar) pressure, condensation rapidly accumulates on the cryostats, so a complete warm-up/cool-down cycle of the cryostats is needed. This is likely the reason for the offset.
Item 46 (CFB, Primary Report, Q): The gate valve response time seems slow. Can this be adjusted? Can it be made more robust by moving the air supply to a compressed bottle? It shouldn't require much air to close the valve. Response (JW): This has been one of those tasks which has been planned forever but has not risen to the top of the priority list, given everything else that is going on. We are thinking about a similar solution: a bottle which will provide pressure in case facility air goes out. Note that the GV is open when air is present, but closes when pressure drops.
Item 47 (CFB, Primary Report, Q): Does glycol loss at the AO bench affect "KPF usable" status? Response (JW): Sort of. If glycol at the AO bench is down, the CRED2 guider will not reach operating temperature. While still functional, its noise characteristics will be poor (see 4.4.7).
Item 48 (CFB, Primary Report, Q): How variable is the overall instrument drift vs. wavelength? How much can be determined from red LFC light alone and extrapolated to the blue spectral regions? Is there enough LFC signal on the red side of the blue arm to measure this? Response (LH): As measured by the etalon, the WLS shifts are consistent across each detector at the sub-m/s level over monthly timescales (note: drifts are computed in pixel space and converted to RV space in post-processing). A figure of the measured shifts within each order on both chips, binned over the wavelength dimension, will be added to the RIX plots page. Assuming an identical (or linearly correlated) change in the wavelength solution over the orders seems to work well. If we were to use only the red portion of the blue arm and assume the other orders follow suit, we'd make ~50 cm/s errors, as a first-order guess from that plot. A challenge is that this drift-based approach does not easily fit into the current DRP calibration scheme; it fits better into a global (time-series-based) modeling of the WLS. https://keckobservatory.atlassian.net/wiki/spaces/KPF/pages/2569240622/KPF+Operations+Review+-+Plots+for+RIX+Responses Status: Open. Reviewer feedback: Not sure I buy the comment that this doesn't fit into the wavcal scheme. As far as I can tell, the current scheme is a linear interpolation between pre and post LFCs. That seems easily modifiable to use only part of the spectrum to measure the shift, and then apply that shift to the master solution.
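A minimal sketch of the scheme the reviewer describes: measure a global shift against the pre- and post-night LFC frames (possibly from only part of the spectrum), linearly interpolate in time, and apply it to a master wavelength solution. The names and the additive-shift convention are assumptions, not the DRP's actual interface:

```python
import numpy as np

def bridged_wls(master_wls, shift_pre, shift_post, t, t_pre, t_post):
    """Master wavelength solution plus a time-interpolated drift.

    master_wls           : wavelength per pixel (any array shape).
    shift_pre/shift_post : global shifts measured against the pre- and
                           post-observing LFC frames, e.g. from only the
                           red orders of the blue arm.
    t, t_pre, t_post     : timestamps of the science frame and LFC frames.
    """
    frac = (t - t_pre) / (t_post - t_pre)
    shift = (1.0 - frac) * shift_pre + frac * shift_post
    return master_wls + shift
```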
Item 49 (CFB, Primary Report, Q): Section 4.5.1 states that the PCF was damaged due to facility power surges. Please elaborate. How is the LFC system isolated from facility power? There are numerous components in there that are at risk if power surges are seen by the LFC PDU. The LFC should have its own line conditioning and surge protection, and its own UPS. Response (JW): I believe that the language in the doc intends to refer to surges in optical power, not AC power. That said, the LFC rack has self-contained power distribution; we plug one cable from the rack directly into one of our UPS-backed AC outlets. As a result, we do not have power-use monitoring or control as we do with other KPF components, which connect to remotely controlled PDUs. One instance of damage to the LFC (not the PCF) related to power: during SM2 (Nov 2024), when the facility ATS (Automated Transfer Switch) replacement was done, we powered off all of KPF, including shutting down the LFC (per procedure from Menlo), and unplugged it from the wall prior to the facility work. When we plugged the LFC rack back in, it did not come up. Tilo eventually determined that a power supply had gone bad and had to be replaced.
Item 50 (CFB, Primary Report, Q): Section 4.2.14: What is the facility response to an unplanned power outage? Response (JW): For KPF it is minimal, as everything in the basement is backed by UPS and generator. I will set components in the AO area to a low-power mode (turn off non-critical components) to minimize heat build-up, as glycol there will be off. If the outage is longer than 30 minutes, the gate valve will close. We have, in the past, set up a portable air compressor to open it on one occasion, but we need a better solution there. If the power is out for ~6 hours, the turbo pump is in danger of reaching temperatures which will trigger an automatic shut-off. We are working on better ventilation paths in the cart, which should help there. Reviewer feedback: I worry that without significantly improved sensors and alarms, this response is insufficient. Every unplanned power outage at WIYN that lasts more than a few tens of seconds results in eyes on the instrument (typically within minutes). Follow-up phone calls of the "wake everyone up, on site and off site" variety are common. 90% of the time this is overkill, but on more than one occasion this response has saved NEID from a thermal cycle.
Item 51 (CFB, Primary Report, Q): Section 4.6.16: Several stage failures occurred shortly after delivery, presumably well before the expected MTBF on these components. Has that issue been resolved to the point where standard spares are sufficient to account for future issues? Response (JW): I believe we have standard spares for every stage, though we need to bring the KPF spares into a proper inventory system to double-check this. The summit engineers, techs, and I are planning a day dedicated to KPF spares as part of an upcoming monthly "safety day". The primary stage which was problematic was the tip-tilt stage, which had 2 failures. This was traced to a poorly designed mirror mount which invaded the exclusion volume and caused mechanical interference. It has now been replaced with a mirror mount which does not violate the exclusion zone, so we do not expect similar failures.
Item 52 (XD, Primary Report, Q): It seems that in Fig. 4.19 you compare, in the bottom-right panel, stellar data from different DRP versions. Do you expect consistency in RV between different versions? In the case of the ESPRESSO pipeline, a full reprocessing is always necessary. We always see differences between pipeline versions, even now that the pipeline is very advanced: although the relative RVs are very similar, at the level of a few cm/s, we can see an absolute RV difference of a few m/s. Response (AH): Our assessment is that heterogeneous DRP versions are one cause of RV variability, but not the dominant one. Minor version updates usually don't reset the zero point because they don't involve fundamental changes to extraction or RV computation. Some of the zero-point changes from major releases are masked because the plotted data are shifted to a common zero point for the different KPF eras. (KPF was opened up during each service mission, and we don't expect or see a common zero point before/after.) Having said all that, we agree that a full reprocessing is needed. This has been slowed by 1) the need to develop the code, 2) the multiple months needed to process the data, and 3) data storage limitations, such that we don't keep copies of the 2D/L1/L2 products for different DRP versions on the Caltech server. We would like to deploy the DRP on a big computer with thousands of cores, and setting this up is a development task for the next few months. (The review committee might comment on this.) It would also help if the Keck Observatory Archive had a recently processed version of the data available for users to download, so that the Caltech server doesn't serve the "data of record" to the vast majority of KPF users. Status: Open. Reviewer feedback: Eight years of ESPRESSO reprocessing takes ~a month on 120 cores, all included. The full reduced data set is ~70 TB (GTO + all public). This is not something that requires a very powerful server or a lot of storage; note that the servers we currently use are now 8 years old, so really not top of the line. An ESPRESSO raw file is 240 MB; the reduced files (16 per observation, with some redundant info, ~30%) are 472 MB.
Item 53 (XD, Primary Report, Q): In Fig. 4.23, although we see differential behavior between SCI and CAL after the thermal transient, I also see a differential before the transient, for the first 4 days in the right-hand plots. Are you sure that the observed wiggles are due to the thermal transient? Is this differential between SCI and CAL present for all the spectral orders and both CCDs? If yes, do you see differences between the green CCD, which had the thermal transient, and the red CCD, which was stable? I would look at the differential order by order to have a clearer picture. Response (SH): Yes, the SCI and CAL measurements shown in the bottom panels of Fig. 4.23 are only for the GREEN detector (all orders). Your point about the first four days is a good one; the differential RMS is lower there, but only by a factor of ~2. We will explore this further, along with the RED camera RVs. Status: Open. Reviewer feedback: The thermal enclosure was not stabilized (on top of the transient).
Item 54 (XD, Primary Report, Q): In the WLS approach defined in Section 6.6.1, you trust the LFC blindly. However, I am not sure that you have investigated enough when an LFC spectrum can be used. As the etalon is used for interpolating between LFC calibrations and the etalon can drift, any difference between the etalon and the LFC is attributed to the etalon and corrected for. Although the HARPS LFC has been "stable" over the last ~three years, we still see that some WLS with apparently "good" LFC spectra introduce spurious RV departures compared to an HCL WLS. Even if not as precise in the absolute sense, I would strongly encourage developing an independent HCL-based WLS. Trying to use all sources at the same time (LFC, etalon, ThAr when the etalon is not working, ...) is a nightmare to handle. To reach extreme precision, simplicity is essential. Response (BJ): We agree that our initial decision to treat the LFC as the primary foundation of the wavelength solution warrants re-evaluation. While the LFC offers excellent formal precision, its operational reliability and spectral variability have limited its effectiveness as a stable reference in practice. Experience to date indicates that relying primarily on the LFC has introduced avoidable complexity and sensitivity in the wavelength calibration. We therefore believe the calibration strategy should be revisited, with greater emphasis placed on developing a robust, independent HCL-based wavelength solution and using the LFC as a complementary, rather than foundational, reference. Status: Open.
Item 55 (XD, Primary Report, Q): Do you illuminate the CAL fiber during stellar observations, or are you afraid of contamination? If afraid, did you actually measure the contamination and its impact as a function of stellar magnitude? Having a simultaneous reference would ease data reduction, as you could directly measure the drift relative to the WLS without requiring all the etalon calibrations for interpolation. Response (AH): The requirement on inter-orderlet contamination was 10^-4 for nearest-neighbor orderlets. We measured the contamination to be about 2 x 10^-4 by illuminating only the CAL fiber and measuring the flux in the adjacent SCI orderlet. We do indeed use the CAL fiber for simultaneous etalon exposures when on sky most of the time. There are some challenges that we're still working through, including selecting the appropriate ND filter for the etalon light given the expected exposure time (which might vary, because SNR-based exposure termination is allowed for stellar exposures). We plan to use these etalon "simulcals" to correct for drift, but haven't had the personnel to implement the algorithm. Finally, we've found that the etalon (and LFC) drift measurements depend on SNR (see Fig. 4.39) due to some combination of the brighter-fatter effect, nonlinearity, and CTI. We have plans to fix these at the pixel level, but do not currently have the resources to implement them. Status: Open. Reviewer feedback: If the problem is ND filter selection because of SNR-based stops ending earlier or later than the planned exposure time, why not suppress this option? Having a simultaneous etalon calibration really eases the DRP a lot, as you can reduce a single SCIENCE frame without any other time-dependent information (you do need a WLS frame with a simultaneous etalon, though). This is really at the root of the ESPRESSO (and older) DRPs. We do see a CTI effect when the flux ratio relative to the nightly etalon is an order of magnitude or more, but we can correct for that when correcting for CTI. We do not see brighter-fatter or nonlinearity effects (when less than 10% from saturation). Given the resources, I would simplify the DRP.
Item 56 (XD, Primary Report, Q): How do you measure the etalon drift? What reference is used? Which algorithm do you use: line-centroid drift, or the Bouchy method (delta flux and gradient)? The second requires the two spectra to have the same flux balance (once rescaled). Response (BJ): We measure etalon drift relative to the LFC-based wavelength solution. Each time an LFC wavelength solution is available, we generate a new etalon CCF mask tied to that solution. Etalon exposures are then wavelength-calibrated using the same interpolation scheme applied to all observations, and the etalon CCF mask selected is the one generated closest in time to the first LFC exposure used in the interpolation (typically within the same calibration sequence). In this framework, the measured etalon drift represents the residual instrument drift relative to the linear interpolation between LFC wavelength solutions. Status: Open. Reviewer feedback: I would like to discuss this further; I am not sure I understand the concept well.
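A minimal sketch in the spirit of the response above: cross-correlate a wavelength-calibrated etalon frame against a box mask built from the LFC-anchored reference, where the velocity of the CCF peak is the residual drift not captured by the interpolated wavelength solution. All names, the box-mask CCF, and the velocity grid are illustrative assumptions, not the DRP algorithm:

```python
import numpy as np

C_MS = 299_792_458.0  # speed of light, m/s

def etalon_drift_rv(wave, flux, mask_centers, half_width=0.02):
    """Residual drift (m/s) of an etalon frame via a simple box-mask CCF.

    wave, flux   : calibrated etalon spectrum (one order is enough here).
    mask_centers : peak wavelengths from the LFC-anchored reference mask.
    """
    v_grid = np.linspace(-50, 50, 201)  # m/s search window
    ccf = np.zeros_like(v_grid)
    for i, v in enumerate(v_grid):
        shifted = mask_centers * (1.0 + v / C_MS)
        for center in shifted:
            in_box = np.abs(wave - center) < half_width
            ccf[i] += flux[in_box].sum()
    return v_grid[np.argmax(ccf)]  # etalon lines are emission-like peaks
```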
Item 57 (XD, Primary Report, C): A good way to monitor HCL health is to measure the flux ratio with respect to a reference. We modify the lamp voltage as soon as the value goes off by 10%; this considerably increases the lifetime of a lamp, with up to 6-8 years on HARPS. I would also advise restarting ThAr cross-calibration against a "gold" reference (once a month); even if not exploited now, it could be very useful in the future. Not having such a cross-calibration is the limiting factor for long-term stability on HARPS (~1 m/s). I would also advise restarting U-Ne calibrations, as those lamps seem better than the thorium-oxide HCLs. We are currently implementing such a lamp at HARPS-N with a tailored line selection (which would not be optimized for KPF, though, as HARPS-N stops at 680 nm). Response (SH): Agreed. We have not been optimally using our suite of HCLs to maximize longevity or reference traceability. Other teams (NEID) have shown U-Ne to be a viable option for much of the VIS spectrum, and these lamps are still being produced by more reputable vendors (Photron). Status: Open.
Item 58 (CFB, Primary Report, C): Figure 5.2 would be more useful for comparing with other facilities if it were cast in terms of person-hours, since labor costs vary widely from location to location. Response (JW): I found the appropriate data and have generated a version of that plot which shows both. The plot is available here: https://keckobservatory.atlassian.net/wiki/spaces/KPF/pages/2569240622/KPF+Operations+Review+-+Plots+for+RIX+Responses
Item 59 (CFB, Primary Report, Q): Section 5.7 states "we rely on the observers to inform us if there are indications of issues which can be revealed by the data products ... if the etalon spectrum becomes more chromatic ...". Do KPF observers have sufficient experience to make this evaluation? How would a "first time" user be expected to handle this? Or a repeat user who is not a member of the KPF/CPS team? Does every member of the KPF/CPS team who might find themselves observing have sufficient expertise to make these evaluations? Do observers have access to reference spectra and instructions for making the required comparisons? Response (JW): That section is about our standard operations, so none of this was designed around KPF. What I can say is that observers on other instruments can and have done this, and I have personally had useful and productive feedback from grad students (i.e., not highly experienced observers or instrument builders). In one instance, a grad student came to me during an observing run to ask about an effect they were seeing in their MOSFIRE data; I explained that I thought it was instrument flexure. We eventually hosted that grad student on a ~2-month visit to the observatory (through our Keck Visiting Scholars Program), she and I ended up digging into this, and she wrote an SPIE paper on the topic. For KPF in particular, the answers are probably no: inexperienced observers probably do not have the knowledge or expertise to thoroughly evaluate these things. For other instruments we consider it something that their training as a scientist should include, but for a PRV instrument this is clearly more complex than for other instruments, and that assumption may be broken.
Item 60 (CFB, Primary Report, Q): Section 5.12.1: In the KPF-CC operations modality, where do the observers come from? Response (JW): During this transition period, the observers come from the science teams, and CPS coordinates observing duties. The UCLA-based KPF-CC team has proposed that the Keck OAs (called Telescope Operators at many other facilities) do the observing, but there is resistance to this at WMKO. I am of the opinion that if the scheduler is performing well, we should be able to build a simple algorithm to pick the target based on conditions (input by a human), the current time, and a history of what has been observed so far tonight; in essence, we can build a software-based observer. It is not yet clear what the final answer here will be.
Item 61 (CFB, Primary Report, Q): Debugging the pipeline crashes and behavior is clearly a problem, but I'm struggling to understand why this is hard. If the pipeline (for example) is crashing when running pre-Aug-2023 data, is the crash message not helpful in identifying the problem? The inability to reprocess large chunks of data with a uniform code base appears to be a major hindrance to actually completing the future development aspirations that are outlined in Chapter 6. Response (BJ): Processing older data typically does produce deterministic, well-logged errors that are actionable. The harder class of failures is different: intermittent run-time failures where the pipeline silently fails on a specific file, or a master-calibration step fails without a useful traceback or logged exception, and then the identical command succeeds on rerun. Because these events are non-deterministic and often leave no diagnostic footprint, they're difficult to reproduce. Reviewer feedback: We see similar silent failures in slurm jobs in the NEID pipeline. Almost all are resource allocation errors (e.g., memory overruns). Not sure if that experience is relevant to how things are running on shrek or at HQ.
Item 62 (CFB, Primary Report, Q): Is the hardware cause of the streaked images in 6.4.2 understood? Response (BJ): Streaked frames are detected automatically a few seconds after the exposure starts. When detected, the affected frame is read out immediately and a new exposure is taken, allowing operations to continue without manual intervention, although the underlying hardware cause is not yet understood.
Item 63 (CFB, Primary Report, C): There are several known issues with applying the Horne algorithm to fiber-fed data, particularly without the wide flat (but even with it, there are problems). The aliasing described in 6.3.2 is very familiar from early HPF pipeline days; see the conference proceedings by Kaplan, Bender, et al. Responder: GG (no response recorded).
Item 64 (CFB, Primary Report, C): I am puzzled by the reported difficulties in installing the pipeline at WMKO. Isn't it built and distributed in Docker? Shouldn't that be trivial to spin up on any machine? Response (BJ): That was the original intent. While the DRP is containerized, it still relies on substantial host-side configuration, including numerous environment variables and access to two external databases running outside the container. We are not currently using an orchestration tool such as Docker Compose; adopting one could simplify deployment by containerizing the database services.
Item 65 (CFB, Primary Report, Q): In Figure B1, I see what appear to be HV cables running to the ion pumps on the detector cryostats, near some of the other cabling. I presume that in the electrical-noise testing, one thing you tested was turning off the ion pump HV supplies? Response (JW): Yes. Ion pumps were one of the usual suspects we rounded up first, but all tests of turning off the pumps, turning off the controllers, and disconnecting the cables proved to have no substantial impact. Status: Closed.
Item 66 (CFB, Primary Report, C): The near-real-time processing of incoming frames at Caltech is potentially useful, but requires resources. I encourage the team to evaluate how frequently this product is being used in real time in a way that affects ongoing observations at the telescope, and whether reducing the DRP requirement to a 'post-observing-run only' mode would provide simplification and reduce the demand on resources that appear overstretched already. Response (HI): We encourage all KPF observers to utilize the real-time processed data outputs. They are especially useful for asteroseismology projects and for planet-atmosphere measurements. The alternative is to use an IRAF display on the observing VNC windows to verify that raw data are being written to disk, but no signal-to-noise feedback is available that way. To minimize strain on the system, the second pass of processing that occurs after morning calibrations does not include 2D image processing or L1-level spectral extraction; the wavelength solution is updated, the cross-correlation functions are recomputed, and the QLP plots are reproduced. We could reduce compute load by not running real-time processing on some nights/days, but real-time processing is also used to monitor instrument health.