See Mario’s slides. Overall message: many components are being built; the next year is integration. There has been some schedule slip, but it is well within the projected envelope.
Solar System – since last year, all the data products and processing have been clarified; it is now finalized that there will be a daily posting to the Minor Planet Center (MPC).
Questions: MPCORB updated every night? Yes. Fast-moving NEOs go straight to the MPC without needing to go through MOPS and then to the NEOCP.
Goal: have Linking basically complete (near requirements) by the end of 2019, since this is the highest-risk, highest-complexity component.
Siegfried is now talking about the updated estimated discovery yields: NEOs from the Granvik model, MBAs from the H distribution and orbits from the MPC. Survey strategy: baseline2018a/b. A conservative model (alpha = 0.36 below the known population) gives ~5 million MBAs; an optimistic one (alpha = 0.56) gives ~200 million.
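The sensitivity to the slope can be illustrated with a toy power law. This is not the actual Granvik/MPC-based yield model; the anchor point (H = 16, 10^5 objects) and the function name are invented purely to show how strongly the faint-end count depends on alpha:

```python
# Toy illustration: a cumulative size distribution modeled as a power law,
# N(<H) ~ 10**(alpha * H). Both slopes are anchored to the same hypothetical
# known population at H = 16; the numbers are illustrative, not the real model.

def cumulative_count(h, alpha, h_anchor=16.0, n_anchor=1e5):
    """Number of objects brighter than absolute magnitude h,
    normalized so N(< h_anchor) = n_anchor."""
    return n_anchor * 10 ** (alpha * (h - h_anchor))

for alpha in (0.36, 0.56):
    print(f"alpha = {alpha}: N(<H=21) ~ {cumulative_count(21.0, alpha):.2e}")
```

Five magnitudes fainter than the anchor, the two slopes already differ by a factor of 10^(0.2 x 5) = 10, which is why the conservative and optimistic MBA totals diverge so sharply.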
Peak discovery could be as high as 10^5 asteroids per night (10^6 in the alpha = 0.56 case), though typically it’s a few thousand. There is concern about how the MPC could handle peak nights, but the backup/failure mode is simply that the new next-day MPCORB is not used; eventually the MPC and LSST both catch up.
“Unattributed” objects are those that do not meet the discovery criterion of 2 (~5-sigma) observations per object per night on 3 nights; this leaves ~30% of objects in the survey unattributed (not counting precovery). So ~30% of the solar system objects seen on a given night can’t be attributed via linking; in the alert stream, these will show up as unknown solar system objects.
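The criterion itself is simple to state in code. A minimal sketch, with an assumed data layout (a flat list of night indices, one per detection) and an invented function name; the real pipeline also enforces a time window for the nights, which is omitted here:

```python
# Sketch of the LSST discovery criterion: an object is attributable if it has
# at least 2 detections per night on at least 3 distinct nights.
from collections import Counter

def is_attributable(obs_nights, min_per_night=2, min_nights=3):
    """obs_nights: list of night indices, one entry per detection."""
    per_night = Counter(obs_nights)
    good_nights = sum(1 for c in per_night.values() if c >= min_per_night)
    return good_nights >= min_nights

print(is_attributable([1, 1, 3, 3, 7, 7]))  # pairs on three nights -> True
print(is_attributable([1, 1, 3, 7]))        # a pair on only one night -> False
```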
Working on Pytrax (HelioLinC, from Holman et al. 2018). Even a simple version already works for MBAs (and beyond), but not yet for NEOs. This is a big deal! With more work on the NEO problem, HelioLinC will meet LSST requirements.
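The core HelioLinC idea can be caricatured in a few lines. This is a heavily simplified sketch, not the Pytrax implementation: the real algorithm hypothesizes a heliocentric distance, converts topocentric tracklets to 3-D heliocentric position/velocity, and propagates on Keplerian orbits. Here the "orbit" is a toy 1-D linear motion and all names are invented, but the structure (propagate every tracklet to a common epoch, then cluster) is the same:

```python
# Toy HelioLinC-style linking: tracklets from the same object should land at
# (nearly) the same point when propagated to a common reference epoch.

def propagate(tracklet, t_ref):
    t, x, v = tracklet            # epoch, position, velocity (toy 1-D motion)
    return x + v * (t_ref - t)    # linear propagation to the common epoch

def cluster(tracklets, t_ref=0.0, tol=0.05):
    """Group tracklet indices whose propagated positions agree within tol."""
    pts = sorted((propagate(tr, t_ref), i) for i, tr in enumerate(tracklets))
    groups, last = [[pts[0][1]]], pts[0][0]
    for p, i in pts[1:]:
        if p - last <= tol:
            groups[-1].append(i)   # close to the previous point: same cluster
        else:
            groups.append([i])     # gap larger than tol: start a new cluster
        last = p
    return groups

# Tracklets 0 and 1 come from the same toy object; tracklet 2 does not.
tracklets = [(0.0, 1.00, 0.50), (2.0, 2.00, 0.50), (1.0, 3.00, -0.25)]
print(cluster(tracklets))  # -> [[0, 1], [2]]
```

The hard part for NEOs is that the distance hypothesis and linear-motion approximations break down for nearby, fast-moving objects, which is why the simple version works for MBAs first.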
Lynne is now talking about the Survey Strategy update and is looking for input from the community. The Survey Cadence Optimization Committee (SCOC) will make the final decisions, but it doesn’t exist yet. Lynne provides input to the SCOC as guided by the Science Advisory Committee (SAC). COSEP (Community Observing Strategy Evaluation Paper) is the ongoing discussion of this, including the Metrics Analysis Framework (MAF) metrics (like “how many MBAs discovered?”) and their science motivation.
Rolling cadence patterns are still up for discussion. 2 vs. 1 snaps requires studying both, with the decision deferred until we have actual on-sky images (and verifiable cosmic-ray rejection).
Big goal: how do we rank the survey strategies? Pass/fail thresholds? Then fractional science gain beyond threshold?
Need work on the input populations. Is the size distribution important for survey strategy, or just completeness as a function of H?
Why does the old scheduler predict 10% fewer asteroids while all the new survey strategies are basically the same? Probably has to do with how the old scheduler did chunks on the sky, so that’s good. Question: do the different survey strategies sample different orbital-element spaces? Lynne: don’t know, but probably not; the number of observations per object is very similar across the new schedulers. Similar results for TNOs.