Prototype into Develop 9/19 (#26)
Almanac, Exposure log, Narrative Log, backend classy, observation gaps,
DDV Link, Accessing data from consdb to plot, etc
Co-authored-by: Steve Pothier <[email protected]>
Vebop authored Sep 24, 2024
1 parent 7644811 commit 809427b
Showing 21 changed files with 1,401 additions and 2,470 deletions.
809 changes: 206 additions & 603 deletions notebooks_tsqr/NightLog.ipynb

Large diffs are not rendered by default.

16 changes: 8 additions & 8 deletions notebooks_tsqr/NightLog.yaml
@@ -11,19 +11,19 @@ tags:
- prototype
- exposure
parameters:
record_limit:
type: integer
description: Max number of records to output
default: 99
minimum: 1
maximum: 9999
day_obs:
type: string
description: The night to report on. (YYYY-MM-DD, TODAY, YESTERDAY)
description: >
The night to report on. (Allowed: YYYY-MM-DD, TODAY, YESTERDAY)
The report will include days up to, but not including, day_obs.
For a value of TODAY, the last observing night that will be
included in the report will be the one that started yesterday.
This is usually what you want.
default: "TODAY"
number_of_days:
type: integer
description: Number of days to show (ending in day_obs)
description: >
Number of days to show in the report.
default: 1
minimum: 1
maximum: 9
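The `day_obs` window semantics described above can be sketched as a small helper (hypothetical, not code from this repository): the report covers `number_of_days` nights ending at, but not including, `day_obs`.

```python
from datetime import date, timedelta

def report_window(day_obs: date, number_of_days: int) -> list[date]:
    """Nights covered by the report: up to, but not including, day_obs."""
    return [day_obs - timedelta(days=n) for n in range(number_of_days, 0, -1)]

# With day_obs = TODAY and number_of_days = 1, the single night reported
# is the one that started yesterday.
```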
61 changes: 57 additions & 4 deletions notebooks_tsqr/README.md
@@ -1,5 +1,58 @@
Times Square notebooks for (potential) use in project-wide Logging & Reporting
# Logging & Reporting Times Square Notebooks

See [offical
documentation](https://rsp.lsst.io/v/usdfdev/guides/times-square/index.html)
on creating notebooks for use by Times Square.
Notebooks found here (`./notebooks/`) are meant to be run in Times Square for use in project-wide Nightly Logging & Reporting.
Times-Square: <https://usdf-rsp-dev.slac.stanford.edu/times-square>

See [official Times-Square documentation](https://rsp.lsst.io/v/usdfdev/guides/times-square/index.html) on creating notebooks for use by Times Square.

## Development Guidelines

Rapid prototyping is enabled with the branch `prototype`.
Times-Square for this repository displays the `prototype` branch.

- Create a branch for your Jira ticket in the format `tickets/dm-####` off of `prototype`
- Communicate often with teammates when you want to push changes to `prototype`
- Rebase your branch onto `prototype` before merging your branch into `prototype`

Example of flow:

1. `git checkout prototype; git pull`
2. `git checkout -b tickets/dm-23456`
3. `git commit -m "work happened"; git push`
4. `git checkout prototype; git pull`
5. `git checkout tickets/dm-23456`
6. `git rebase prototype`
7. `git checkout prototype; git merge tickets/dm-23456; git push`

&nbsp;

Once per sprint (2 weeks), the developers on this repository (Steve Pothier & Valerie Becker) gather to discuss updates made to `prototype`, outstanding pull requests, and tickets that have been completed.

Once they are in agreement, they merge `prototype` into the `develop` branch and close the related Jira tickets. A squash commit should be used here, with a descriptive title and description in the PR.


## NightLog.ipynb

NightLog.ipynb is our main Logging And Reporting notebook. This notebook is meant to display completed* views of logging information.
Each separate notebook should be used to mature a logging/reporting product, which is then expected to be integrated into this 'main' notebook.

\*_Completed to an alpha/beta level -- quick improvements will continue to happen during Fall-Winter 2024_

## Dashboard

Dashboard.ipynb is intended for local development and debugging. Run this notebook outside the RSP to evaluate your connection to an array of data sources.
_RSP is not intended to have access to all of the data sources queried here._

## Kernel

Times Square developers/maintainers have indicated that the LSST Kernel should be used in notebooks displayed there.
[RSP Stack info](https://developer.lsst.io/stack/conda.html#rubin-science-platform-notebooks)

## Backend Code

We are working toward a project that does not depend on Times Square. As part of that effort, we are incrementally abstracting common code out of the notebooks. This code is kept in `./python/lsst/ts/logging_and_reporting/`.

`almanac.py` ...
`reports.py` ...
`source_adapters.py` ....
`utils.py` is used for ...
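One way to picture this abstraction is a small adapter layer the notebooks call instead of hand-building queries. The sketch below is illustrative only; the class and method names are hypothetical, not the actual API in `source_adapters.py`.

```python
# Hypothetical sketch of a source-adapter layer factored out of the notebooks.
from dataclasses import dataclass

@dataclass
class SourceAdapter:
    """Base for data services (exposure log, narrative log, consdb, ...)."""
    server_url: str
    service: str

    @property
    def endpoint(self) -> str:
        return f"{self.server_url}/{self.service}"

class ExposureLogAdapter(SourceAdapter):
    def records_url(self, limit: int = 99) -> str:
        # Notebooks call this instead of hand-building query URLs.
        return f"{self.endpoint}/messages?limit={limit}"

adapter = ExposureLogAdapter("https://usdf-rsp-dev.slac.stanford.edu", "exposurelog")
```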
File renamed without changes.
81 changes: 40 additions & 41 deletions notebooks_tsqr/consdb/access_consdb.ipynb
@@ -30,23 +30,22 @@
"source": [
"import os\n",
"from lsst.summit.utils import ConsDbClient\n",
"from lsst.summit.utils.utils import computeCcdExposureId\n",
"import sqlalchemy\n",
"import requests\n",
"import pandas as pd\n",
"from IPython.display import display, Markdown, display_markdown\n",
"\n",
"\n",
"URL = \"https://usdf-rsp.slac.stanford.edu/consdb/\" # Need to add that part about the headers to client flow through\n",
"# URL = \"https://usdf-rsp.slac.stanford.edu/consdb/\"\n",
"# Need to add that part about the headers to client flow through\n",
"URL = \"http://consdb-pq.consdb:8080/consdb\" # Don't use this one\n",
"\n",
"os.environ[\"no_proxy\"] += \",.consdb\"\n",
"\n",
"access_token = os.getenv(\"ACCESS_TOKEN\")\n",
"headers = {\"Authorization\": f\"Bearer {access_token}\"}\n",
"\n",
"sesh = requests.Session()\n",
"sesh.headers.update(headers)\n",
"# This is how the session object should access the ACCESS Token from the headers\n",
"#sesh = requests.Session()\n",
"#sesh.headers.update(headers)\n",
"\n",
"%matplotlib inline"
]
@@ -58,17 +57,13 @@
"metadata": {},
"outputs": [],
"source": [
"display_markdown('## Attempting to access Consolidated Database', raw=True)\n",
"\n",
"try:\n",
" from lsst.summit.utils import ConsDbClient\n",
" have_consdb = True\n",
"except ImportError:\n",
" have_consdb = False\n",
"\n",
"if have_consdb:\n",
" client = ConsDbClient(URL)\n",
" display_markdown('## Consolidated Database is accessible',raw=True)\n"
"from lsst.summit.utils import ConsDbClient\n",
"client = ConsDbClient(URL)\n",
"print(client)\n",
"#import sqlalchemy\n",
"#connection = sqlalchemy.create_engine('postgresql://[email protected]/exposurelog')\n",
"#print(connection)\n",
"display_markdown('### Consolidated Database is accessible',raw=True)"
]
},
{
@@ -87,7 +82,6 @@
"metadata": {},
"outputs": [],
"source": [
"# Add consdb\n",
"day_obs_int = int(day_obs.replace('-', ''))\n",
"\n",
"visit_query1 = f'''\n",
@@ -114,31 +108,33 @@
" WHERE v.day_obs = {day_obs_int} and q.visit_id = v.visit_id\n",
"'''\n",
"\n",
"if have_consdb:\n",
" # Potentially print some schema information for debugging\n",
" \n",
"# Potentially print some schema information for debugging\n",
"try:\n",
" print(client.schema()) # list the instruments\n",
" print(client.schema('latiss')) # list tables for an instrument\n",
" print(client.schema('latiss', 'cdb_latiss.exposure_flexdata')) # specifically flexdata table\n",
" \n",
" try:\n",
" visits_latiss = consdb.query(visit_query1).to_pandas()\n",
" visits_lsstcc = consdb.query(visit_query2).to_pandas()\n",
" visits_lsstccs = consdb.query(visit_query3).to_pandas()\n",
"except (requests.HTTPError, requests.JSONDecodeError):\n",
" print(client.schema()) # list the instruments\n",
" print(client.schema('latiss')) # list tables for an instrument\n",
" print(client.schema('latiss', 'cdb_latiss.exposure_flexdata'))\n",
"\n",
"try:\n",
" visits_latiss = client.query(visit_query1).to_pandas()\n",
" visits_lsstcc = client.query(visit_query2).to_pandas()\n",
" visits_lsstccs = client.query(visit_query3).to_pandas()\n",
"\n",
"  except (requests.HTTPError, requests.JSONDecodeError):\n",
" # Try twice\n",
" visits_latiss = consdb.query(visit_query1).to_pandas()\n",
" visits_lsstcc = consdb.query(visit_query2).to_pandas()\n",
" visits_lsstccs = consdb.query(visit_query3).to_pandas()\n",
"except (requests.HTTPError, requests.JSONDecodeError):\n",
" # Try twice\n",
" visits_latiss = client.query(visit_query1).to_pandas()\n",
" visits_lsstcc = client.query(visit_query2).to_pandas()\n",
" visits_lsstccs = client.query(visit_query3).to_pandas()\n",
"\n",
" quicklook = consdb.query(quicklook_query).to_pandas()\n",
"quicklook = client.query(quicklook_query).to_pandas()\n",
"\n",
"else:\n",
" # Assumes at the USDF\n",
" connection = sqlalchemy.create_engine('postgresql://[email protected]/exposurelog')\n",
" visits_latiss = pd.read_sql(visit_query1, connection)\n",
" quicklook = pd.read_sql(quicklook_query, connection)\n",
"# Assumes at the USDF\n",
"#visits_latiss_try = pd.read_sql(visit_query1, connection)\n",
"#quicklook_try = pd.read_sql(quicklook_query, connection)\n",
"\n",
"if len(visits_latiss) > 0:\n",
" print(f\"Retrieved {len(visits_latiss)} visits from consdb\")\n",
@@ -256,14 +252,17 @@
]
},
{
"cell_type": "markdown",
"cell_type": "code",
"execution_count": null,
"id": "15",
"metadata": {},
"outputs": [],
"source": [
"# Access DDV\n",
"Summit = https://summit-lsp.lsst.codes/rubintv-dev/ddv/index.html \n",
"\n",
"USDF = https://usdf-rsp-dev.slac.stanford.edu/rubintv/ddv/index.html"
"usdf = 'https://usdf-rsp-dev.slac.stanford.edu'\n",
"service_loc = os.environ.get('EXTERNAL_INSTANCE_URL', usdf)\n",
"DDV = f\"{service_loc}/rubintv-dev/ddv/index.html\"\n",
"display_markdown('## Access DDV part of RubinTV', raw=True)\n",
"DDV"
]
},
{
158 changes: 158 additions & 0 deletions notebooks_tsqr/consdb/assorted_plots.ipynb
@@ -0,0 +1,158 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"id": "0",
"metadata": {},
"outputs": [],
"source": [
"# Parameters\n",
"day_obs = '2024-06-26'\n",
"instruments = 'latiss, lsstcomcamsim, lsstcomcam'"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import matplotlib.pyplot as plt\n",
"from IPython.display import display_markdown"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2",
"metadata": {},
"outputs": [],
"source": [
"### 'Get Consdb access'\n",
"from lsst.summit.utils import ConsDbClient\n",
"\n",
"URL = \"http://consdb-pq.consdb:8080/consdb\" # Don't use this one\n",
"os.environ[\"no_proxy\"] += \",.consdb\"\n",
"\n",
"client = ConsDbClient(URL)\n",
"display_markdown('### Consolidated Database is accessible',raw=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3",
"metadata": {},
"outputs": [],
"source": [
"# After all imports\n",
"%matplotlib inline"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4",
"metadata": {},
"outputs": [],
"source": [
"# Put Plot in backend\n",
"plt.style.use('seaborn-v0_8-bright')\n",
"def plot(y, x):\n",
" # plot\n",
" fig = plt.figure(figsize=(6, 6))\n",
" ax = fig.subplots()\n",
" ax.scatter(x, y)\n",
"\n",
" plt.show()\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5",
"metadata": {},
"outputs": [],
"source": [
"day_obs_int = int(day_obs.replace('-', ''))\n",
"instrument_list = [ins.strip() for ins in instruments.split(',')]\n",
"\n",
"for instrument in instrument_list:\n",
" print(\"------------------------------------------------------------\")\n",
" print()\n",
" display_markdown(f'# Instrument: {instrument}',raw=True)\n",
" #################### Put in Backend\n",
" ccdvisit1_quicklook = f'''\n",
" SELECT * FROM cdb_{instrument}.ccdvisit1_quicklook\n",
" '''\n",
"\n",
" visit1 = f'''\n",
" SELECT * FROM cdb_{instrument}.visit1\n",
" '''\n",
"\n",
" # Join Visit1 and ccdVisit1 to access data and day obs\n",
" visits = client.query(visit1).to_pandas()\n",
" quicklook = client.query(ccdvisit1_quicklook).to_pandas()\n",
"\n",
" visits = visits.join(quicklook, on='visit_id',lsuffix='',rsuffix='_q')\n",
" #################### Put in Backend - end\n",
"\n",
" # If we see data exist in psf, zero, then we should pare down like visits_today below\n",
" try:\n",
" visits_w_psf = visits[visits['psf_area'].notna()]\n",
" time = visits_w_psf['obs_start']\n",
" display_markdown(f'Number of visits with psf_area populated {len(visits_w_psf)}', raw=True)\n",
" display_markdown('## psf_area vs obs_start', raw=True)\n",
" plot(time, visits_w_psf['psf_area'])\n",
" except KeyError as err:\n",
" display_markdown(f\"Psf_area not a column in {instrument} dataframe\",raw=True)\n",
" display_markdown(f\"key error for {err}\", raw=True)\n",
"\n",
" try:\n",
" visits_w_zero = visits[visits['zero_point'].notna()]\n",
" time = visits_w_zero['obs_start']\n",
" display_markdown(f'Number of visits with zero_point populated {len(visits_w_zero)}', raw=True)\n",
" display_markdown('## zero_point vs obs_start', raw=True)\n",
" plot(time, visits_w_zero['zero_point'])\n",
" except KeyError as err:\n",
" display_markdown(f\"Zero_point not a column in {instrument} dataframe\", raw=True)\n",
" display_markdown(f\"key error for {err}\", raw=True)\n",
"\n",
" # Pare down to only day obs\n",
" visits_today = visits[(visits['day_obs'] == day_obs_int)]\n",
" display_markdown(f\"How many visits today? {len(visits_today)}\", raw=True)\n",
"\n",
" ra = visits_today['s_ra']\n",
" dec = visits_today['s_dec']\n",
" display_markdown(f\"How many ra? {len(ra)}\", raw=True)\n",
" display_markdown(f\"How many dec? {len(dec)}\", raw=True)\n",
"\n",
" display_markdown(f'Ra Dec should be populated for {instrument}', raw=True)\n",
" plot(ra, dec)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "LSST",
"language": "python",
"name": "lsst"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
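The visit/quicklook combination in `assorted_plots.ipynb` hinges on lining the two query results up by `visit_id`. A minimal sketch with made-up data (using `merge`, which joins on the shared column rather than on the right frame's index; the column values are invented for illustration):

```python
import pandas as pd

# Hypothetical miniature stand-ins for the consdb query results.
visits = pd.DataFrame({'visit_id': [1, 2], 'day_obs': [20240626, 20240626],
                       's_ra': [10.0, 11.0], 's_dec': [-30.0, -31.0]})
quicklook = pd.DataFrame({'visit_id': [1, 2], 'psf_area': [5.2, None],
                          'zero_point': [31.1, 31.0]})

# Combine visit metadata with quicklook metrics keyed on visit_id;
# the '_q' suffix disambiguates any overlapping column names.
joined = visits.merge(quicklook, on='visit_id', suffixes=('', '_q'))

# Pare down to rows where the metric of interest is populated.
with_psf = joined[joined['psf_area'].notna()]
```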