SSSC Sprint 2 Day 3

Everyone continued work from yesterday.

Unconference sessions were about TOM (led by Tim Lister) and Commissioning Data (led by Mario Juric). Commissioning data will be coming in ~2 years!

We summarized what we worked on and learned at the end.

A successful Sprint! Thanks to all the participants and thanks again to our sponsors: the Adler Planetarium, the Planetary Society, the B612 Foundation, and the LSST Corporation.



SSSC Sprint 2 Day 2: unconference sessions on alerts and funding

We had two simultaneous unconference sessions, one on Alerts and one on Funding. We learned that the alerts are relatively well defined, with existing examples that are similar to what will be produced (e.g., the Zwicky Transient Facility alert stream).

For funding, we identified some challenges with getting funding for the preparatory work that will enable the first LSST science. Typical PI-led, science-focused grants (e.g., NASA ROSES SSO and SSW) are often so small that, especially in the LSST-scale era, they will need to be quite narrowly focused, although they could develop generally applicable, sharable tools along the way. Until LSST data are available, these proposals will need to lean heavily on other existing datasets. NASA ROSES PDART is a mechanism for funding software tool development (without an explicit science goal), and we're aware of one example that went in to the 2019 call. NSF AAG is less restrictive on grant content, but potentially a harder sell. Ideally, there would be a specific pool of LSST preparation funding, either from NSF or possibly from private donors through the LSST Corporation, but this is looking less likely and, so far, unclear, especially given the short timeframe between now and the start of science.

We'll likely need to distill all the work into proposal-sized chunks, and the timeline is genuinely short: applying for grants now, getting rejected once, then getting accepted, then getting the money, then starting the work takes ~2.5 years, and there are only ~3 years until full LSST data (and only ~4 years until an order-of-magnitude increase in the number of known solar system objects)! There was discussion that LSST SSSC Sprint 3 in summer 2020 would have a strong proposal-writing component. (And LSST SSSC Sprint 4 could be a hack week on real data from commissioning!) The SSSC should be a resource for developing collaborations that will result in great science and, ideally, shared tools.

Over the course of the day, the group worked on better understanding metrics, populations, collaboration, and more.

What’s next for the SSSC?

Meg gave an overview this morning of where the SSSC has been over the last year and what we need to work on for the next year. Key next steps include elections for co-chair and other working group leads, updating the COSEP (both text and metrics), starting the community software effort (based on the software roadmap and the upcoming software paper by Henry Hsieh), etc.

We’re now working on various projects. Unconference sessions have been chosen on seeking funding and LSST SS alerts.

LSST Citizen Science and Visualization

Dr. Laura Trouille (VP of Citizen Science; Zooniverse, Adler, Northwestern) presented “LSST and The Crowd”. The Project Builder Platform (PBP) makes building citizen science projects as easy as WordPress makes building websites. There will be an automated way to send LSST images to the PBP; there is now a communication channel between the APIs, which can be auto-updated with filters. Images should work; metadata should work but might need some effort. It's not hard to translate a project into multiple languages with the “Translations Interface”, and volunteers can be invited to help with that too. Multiple projects can also be organized into a single organization. Quite impressive!

Combining human and machine classification, Supernova Hunters (Wright et al. 2018) shows that the combination is better than either alone. Galaxy Zoo now has an updated workflow where machine learning is used to prioritize which objects need to be classified by humans in an interactive, automated way. Right now this is done on a project-by-project basis, but a more developed version of the human-machine infrastructure may be available later.

The PBP can be investigated at . There is, and will continue to be, a variety of supporting infrastructure and help. Even when machine learning works well after a short period of human classification, there is some desire to keep humans interacting with the data in order to find rare or unusual outcomes, even when the machine thinks it has classified correctly.

Next up, Mark SubbaRao and Aaron Geller talked about “Some cool Viz tools” as related to visualization in the LSST era. WorldWide Telescope has a new Python interface (pyWWT) and can do a lot of things within a Jupyter notebook.

Aaron talked about Glue. Glue is a standalone application but also interfaces with a Python shell. It allows for interactive multi-plot data selection. Performance (e.g., when zooming) is a concern when there are many (>10^5) points.

Bokeh, a Python library, can create standalone HTML pages with multi-plot interactivity. For more complicated things you might need to know some JavaScript. Performance is a concern when there are many (>10^4) points, which could be solved with “datashader”.
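As a rough sketch of the multi-plot interactivity mentioned above (this is my own minimal example, not from the talk; the column names and toy orbital-element values are made up): two Bokeh scatter plots sharing one ColumnDataSource, so a box or lasso selection in either plot highlights the same rows in the other, all saved as a standalone HTML file.

```python
# Minimal sketch of Bokeh linked selection (hypothetical data, not from the talk).
from bokeh.layouts import row
from bokeh.models import ColumnDataSource
from bokeh.plotting import figure, output_file, save

# One shared source: selections propagate to every plot that uses it.
source = ColumnDataSource(data=dict(
    a=[5.0, 10.0, 15.0, 20.0],     # semimajor axis (made-up values)
    e=[0.10, 0.20, 0.05, 0.30],    # eccentricity
    h=[18.0, 20.0, 16.0, 22.0]))   # absolute magnitude

tools = "box_select,lasso_select,pan,wheel_zoom,reset"
left = figure(width=300, height=300, tools=tools, title="a vs e")
left.scatter("a", "e", source=source)
right = figure(width=300, height=300, tools=tools, title="a vs H")
right.scatter("a", "h", source=source)

# Writes a self-contained interactive HTML page; no server needed.
output_file("linked_plots.html")
path = save(row(left, right))
```

Opening `linked_plots.html` in a browser and box-selecting points in one panel highlights the same objects in the other, with no JavaScript written by hand.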

D3 uses JavaScript on the web, allowing for rich interaction. It's the hardest to get into, but can do almost anything in 2D.

All of these can interact with the Zooniverse system. The current focus is on scatter-plot-like diagrams.

Overview of LSST Project progress

See Mario’s slides. Overall message: many components are being built, and the next year is for integration. There has been some schedule slip, but it is well within the projected envelope.

Solar System – since last year, all the data products and processing have been clarified, including finalizing that there will be a daily posting to the Minor Planet Center.

Questions: Is MPCORB updated every night? Yes. Fast-moving NEOs go straight to the MPC (and then to the NEOCP) without needing to go through MOPS.

Goal: have linking basically complete (near requirements) by the end of 2019, since this is the highest-risk/highest-complexity component.

Siegfried then talked about the updated estimated discovery yields. NEOs are from the Granvik model; MBAs are from an H distribution, with orbits from the MPC. The survey strategy is baseline2018a/b. The conservative model (alpha = 0.36 below the known population) gives ~5 million MBAs; the optimistic model (alpha = 0.56) gives ~200 million.

Peak discovery could be as high as 10^5 asteroids per night (10^6 with alpha = 0.56), though typically it's a few thousand. There is concern about how the MPC could handle peak nights, but the backup/failure mode is that the new next-day MPCORB simply isn't used until the MPC and LSST both catch up.
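To see why the slope alpha matters so much, here is a toy power-law illustration (my own sketch; the anchor normalization `n_ref` at `h_ref` is entirely hypothetical and not from the Granvik/MPC models). For a cumulative absolute-magnitude distribution N(<H) ~ 10^(alpha·H), extrapolating 8 magnitudes fainter with alpha = 0.56 instead of 0.36 multiplies the count by 10^(0.2·8) ≈ 40, the same factor that separates the ~5 million and ~200 million MBA estimates above.

```python
# Toy power-law size/brightness distribution (hypothetical normalization).
def cumulative_count(h, alpha, n_ref=1.0e5, h_ref=16.0):
    """Cumulative number of objects brighter than absolute magnitude h,
    anchored so that N(<h_ref) = n_ref (anchor values are made up)."""
    return n_ref * 10 ** (alpha * (h - h_ref))

# Extrapolate 8 magnitudes fainter under the two slopes from the talk.
conservative = cumulative_count(24.0, alpha=0.36)
optimistic = cumulative_count(24.0, alpha=0.56)
print(round(optimistic / conservative))  # 40
```

The absolute numbers here are meaningless (the anchor is invented), but the ~40x ratio between the slopes is independent of the normalization.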

“Unattributed” objects are those that do not meet the linking criterion of 2 (~5-sigma) observations per object per night over 3 nights; this leaves ~30% of objects in the survey unattributed (not including precovery). So ~30% of the moving objects seen in a given night can't be attributed via linking, and in the alert stream these will show up as unknown solar system objects.
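The criterion above can be sketched in a few lines (a deliberately simplified toy; the real pipeline has a linking window, astrometric tolerances, and much more, all ignored here):

```python
from collections import Counter

def is_attributable(detection_nights, min_per_night=2, min_nights=3):
    """Toy version of the linking criterion: an object is attributable if it
    has at least `min_per_night` detections on each of at least `min_nights`
    distinct nights. `detection_nights` is one night index per detection."""
    per_night = Counter(detection_nights)
    good_nights = sum(1 for count in per_night.values() if count >= min_per_night)
    return good_nights >= min_nights

print(is_attributable([1, 1, 2, 2, 3, 3]))  # True: 2 obs on each of 3 nights
print(is_attributable([1, 1, 2, 2, 3]))     # False: only 1 obs on night 3
```

Objects failing this test for every window of the survey are the ~30% "unattributed" population described above.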

Work is ongoing on Pytrax (an implementation of HelioLinC from Holman et al. 2018). Even a simple version already works for MBAs (and beyond), but not yet for NEOs. This is a big deal! With more work on the NEO problem, HelioLinC will meet LSST requirements.

Lynne then talked about the survey strategy update and is looking for input from the community. The Survey Cadence Optimization Committee (SCOC) will make the final decisions, but it doesn't exist yet. Lynne provides input to the SCOC as guided by the Science Advisory Committee (SAC). The COSEP (Community Observing Strategy Evaluation Paper) is the ongoing venue for this discussion, including the Metrics Analysis Framework (MAF) metrics (like “how many MBAs are discovered?”) and their science motivation.

Rolling cadence patterns are still up for discussion. 2 vs. 1 snap requires studying both but then deciding later when we have actual on-the-sky images (and verifiable cosmic ray rejection).

Big goal: how do we rank the survey strategies? Pass/fail thresholds? Then fractional science gain beyond threshold?
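One way the "threshold, then fractional gain" idea could look in practice (my own sketch; the metric names, threshold values, and strategy numbers are all invented for illustration): strategies failing any pass/fail threshold are dropped, and the survivors are ranked by their summed fractional gain beyond each threshold.

```python
# Hypothetical ranking scheme: pass/fail gates, then fractional-gain score.
def rank_strategies(strategies, thresholds):
    """strategies: {name: {metric: value}}; thresholds: {metric: minimum}.
    Returns surviving strategy names, best first."""
    scores = {}
    for name, metrics in strategies.items():
        # Gate: drop any strategy that fails a pass/fail threshold.
        if all(metrics[m] >= t for m, t in thresholds.items()):
            # Score: total fractional gain beyond each threshold.
            scores[name] = sum(metrics[m] / t - 1.0 for m, t in thresholds.items())
    return sorted(scores, key=scores.get, reverse=True)

# Made-up numbers purely to exercise the scheme:
strategies = {
    "baseline": {"mba_completeness": 0.80, "neo_completeness": 0.65},
    "rolling":  {"mba_completeness": 0.85, "neo_completeness": 0.62},
    "deep":     {"mba_completeness": 0.70, "neo_completeness": 0.70},
}
thresholds = {"mba_completeness": 0.75, "neo_completeness": 0.60}
print(rank_strategies(strategies, thresholds))  # ['rolling', 'baseline']
```

Here "deep" fails the MBA gate outright, and "rolling" edges out "baseline" on summed fractional gain; how to weight different metrics' gains against each other is exactly the open question.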

Need work on the input populations. Is size distribution important for survey strategy or just completeness as a function of H?

Why does the old scheduler predict 10% fewer asteroids while all the new survey strategies are basically the same? It probably has to do with how the old scheduler chunked the sky, so that's good. Question: do the different survey strategies sample different orbital element spaces? Lynne: we don't know, but probably not. The number of observations per object is very similar across the new schedulers. Results are similar for TNOs.


2nd LSST Solar System Readiness Sprint – Intro

Welcome to the 2nd LSST Solar System Readiness Sprint! We’re here at Adler Planetarium in the very nice Samuel C. Johnson Star Theater on Tues, June 4, 2019. Participants include Meg Schwamb, Mario Juric, Henry Hsieh, Steve Chesley, Michael Kelley, Tim Lister, Wes Fraser, Lynne Jones, Siegfried Eggl, Geza Gyuk, Matt Wiesner, Mark SubbaRao, Aaron Geller, Cliff Johnson, and myself (Darin Ragozzine).

The program for the Sprint is here:

where you can see that today we're getting introduced to various concepts and then breaking into working groups.

The meeting is being transmitted on Blue Jeans and the slides will be posted, so I’ll be focusing my liveblogging on the in-room discussion. I’ll work on sending that information soon. Let the Sprint begin!

We’d like to thank Adler Planetarium, the Planetary Society, the B612 Foundation, and the LSST Corporation for support. They enabled this valuable meeting and we appreciate them!