Tutorials table of contents. Fix some subsections in notebooks
israelmcmc committed Dec 28, 2023
1 parent 84432c9 commit afe8f0d
Showing 3 changed files with 89 additions and 20 deletions.
17 changes: 13 additions & 4 deletions docs/tutorials/DataIO/DataIO_example.ipynb
@@ -6,7 +6,16 @@
"tags": []
},
"source": [
"# Example 1: Standard binned analysis\n",
"# Data format and handling"
]
},
{
"cell_type": "markdown",
"metadata": {
"tags": []
},
"source": [
"## Example 1: Standard binned analysis\n",
"### Import the BinnedData class from cosipy"
]
},
@@ -482,7 +491,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Example 2: Some available options for the standard binned anlaysis\n",
"## Example 2: Some available options for the standard binned anlaysis\n",
"### In the previous step the unbinned data is saved to an hdf5 file with the read_tra() method.\n",
"### Here we will load the unbinnned data from file instead of running read_tra() again.\n",
"### We will also make binning plots."
@@ -715,7 +724,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Example 3: Combining multiple unbinned data files\n",
"## Example 3: Combining multiple unbinned data files\n",
"### When combining data files, first combine the unbinned data, and then bin the combined data. \n",
"### As a proof of concept, we'll combine the crab data set 3 times, and as a sanity check we can then compare to 3x the actual data. "
]
@@ -914,7 +923,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Example 4: Making data selections.\n",
"## Example 4: Making data selections.\n",
"### Only time cuts are available for now.\n",
"### The parameters tmin and tmax are passed from the yaml file. \n",
"### In this example we will select the first half of the data set. "
@@ -385,7 +385,7 @@
"tags": []
},
"source": [
"# 0. Files needed for this notebook\n",
"## 0. Files needed for this notebook\n",
"\n",
"From wasabi\n",
"- cosi-pipeline-public/COSI-SMEX/DC2/Responses/SMEXv12.511keV.HEALPixO4.binnedimaging.imagingresponse.nonsparse_nside16.area.h5\n",
@@ -405,7 +405,7 @@
"id": "6c259412",
"metadata": {},
"source": [
"# 1. Read the response matrix"
"## 1. Read the response matrix"
]
},
{
@@ -495,7 +495,7 @@
"id": "26d6eb3a",
"metadata": {},
"source": [
"# 2. Read binned 511keV binned files (source and background)"
"## 2. Read binned 511keV binned files (source and background)"
]
},
{
@@ -520,7 +520,7 @@
"data_bkg = BinnedData(\"inputs_511keV_DC2.yaml\")\n",
"data_bkg.load_binned_data_from_hdf5(\"511keV_scatt_binning_DC2_bkg.hdf5\")\n",
"\n",
"# signal + background\n",
"## signal + background\n",
"data_511keV = BinnedData(\"inputs_511keV_DC2.yaml\")\n",
"data_511keV.load_binned_data_from_hdf5(\"511keV_scatt_binning_DC2_event.hdf5\")"
]
@@ -532,7 +532,7 @@
"tags": []
},
"source": [
"# 3. Load the coordsys conversion matrix"
"## 3. Load the coordsys conversion matrix"
]
},
{
@@ -561,15 +561,15 @@
"id": "31ec05ad-90b7-4fad-9ad0-98cfd6483d41",
"metadata": {},
"source": [
"# 4. Imaging deconvolution"
"## 4. Imaging deconvolution"
]
},
{
"cell_type": "markdown",
"id": "6e88ca7f",
"metadata": {},
"source": [
"## Brief overview of the image deconvolution\n",
"### Brief overview of the image deconvolution\n",
"\n",
"Basically, we have to maximize the following likelihood function\n",
"\n",
@@ -626,7 +626,7 @@
"id": "e0a2582e",
"metadata": {},
"source": [
"## 4-1. Prepare DataLoader containing all neccesary datasets"
"### 4-1. Prepare DataLoader containing all neccesary datasets"
]
},
{
@@ -694,7 +694,7 @@
"id": "2a662f5e",
"metadata": {},
"source": [
"## 4-2. Load the response file\n",
"### 4-2. Load the response file\n",
"\n",
"The response file will be loaded on the CPU memory. It requires a few GB. In the actuall analysis, the response will be much larger, ~TB. \n",
"\n",
@@ -753,7 +753,7 @@
"id": "b1a0269e",
"metadata": {},
"source": [
"## 4-3. Initialize the instance of the image deconvolution class\n",
"### 4-3. Initialize the instance of the image deconvolution class\n",
"\n",
"First we prepare an instance of ImageDeconvolution class, and then, resister the dataset, parameters for the deconvolution. After that, you can start the calculation."
]
@@ -1021,7 +1021,7 @@
"id": "f764066e",
"metadata": {},
"source": [
"## 4-5. Start the image deconvolution"
"### 4-5. Start the image deconvolution"
]
},
{
@@ -1753,7 +1753,7 @@
"id": "9d32d0a8",
"metadata": {},
"source": [
"# 5. Analyze the results\n",
"## 5. Analyze the results\n",
"Below examples to see/analyze the results are shown."
]
},
@@ -1762,7 +1762,7 @@
"id": "f577c7ac",
"metadata": {},
"source": [
"## Log-likelihood\n",
"### Log-likelihood\n",
"\n",
"Plotting the log-likelihood vs the number of iterations"
]
@@ -1803,7 +1803,7 @@
"id": "3f085706",
"metadata": {},
"source": [
"## Alpha (the factor used for the acceleration)\n",
"### Alpha (the factor used for the acceleration)\n",
"\n",
"Plotting $\\alpha$ vs the number of iterations. $\\alpha$ is a parameter to accelerate the EM algorithm (see the beginning of Section 4). If it is too large, reconstructed images may have artifacts."
]
@@ -1844,7 +1844,7 @@
"id": "b3298aa5",
"metadata": {},
"source": [
"## Background normalization\n",
"### Background normalization\n",
"\n",
"Plotting the background nomalization factor vs the number of iterations. If the backgroud model is accurate and the image is reconstructed perfectly, this factor should be close to 1."
]
Expand Down Expand Up @@ -1885,7 +1885,7 @@
"id": "58e0d3a6",
"metadata": {},
"source": [
"## The reconstructed images"
"### The reconstructed images"
]
},
{
60 changes: 60 additions & 0 deletions docs/tutorials/index.rst
@@ -4,8 +4,68 @@ Tutorials
Tutorials for various components of the `cosipy` library. These are Python
notebooks that you can execute interactively.

List of tutorials and contents (WIP):

1. Data IO

- Explain the data format, binned and unbinned
- Show how to bin it in both local and galactic coordinates
- Show how to combine files.
- Show how to inspect and plot it
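
The binned/unbinned distinction above can be pictured with a toy histogramming step. This is plain Python, not cosipy's API (the real entry points seen in this commit are `BinnedData.read_tra()` and `load_binned_data_from_hdf5()`); the function name, events, and bin edges below are illustrative:

```python
# Toy illustration: unbinned data is a list of individual events;
# binning turns it into counts per (here, energy) bin.

def bin_events(energies, edges):
    """Count events falling into each [edges[i], edges[i+1]) bin."""
    counts = [0] * (len(edges) - 1)
    for e in energies:
        for i in range(len(edges) - 1):
            if edges[i] <= e < edges[i + 1]:
                counts[i] += 1
                break
    return counts

events = [120.0, 340.0, 515.0, 511.2, 900.0]   # keV, unbinned event list
edges = [100.0, 300.0, 500.0, 700.0, 1000.0]   # energy bin boundaries
counts = bin_events(events, edges)
print(counts)  # -> [1, 1, 2, 1]
```

Combining files then amounts to concatenating the unbinned event lists before binning, which is why the tutorial binds the combined data only at the end.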

2. Spacecraft file

- Describe the contents of the raw SC file
- Describe how to manipulate it, e.g. get a time range or rebin it.
- Explain the meaning of the dwell time map and how to obtain it
- Explain the meaning of the scatt map and how to obtain it
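
The dwell time map mentioned above can be pictured as accumulating exposure time per sky pixel from the pointing history. A toy sketch, not cosipy's SC-file interface; the pixel indices and timestamps are made up:

```python
# Pointing history: (time in s, sky pixel the instrument axis points at).
pointing = [(0.0, 3), (10.0, 3), (20.0, 7), (35.0, 7), (50.0, 3)]

# Attribute each interval [t0, t1) to the pixel pointed at during it.
dwell = {}
for (t0, pix), (t1, _) in zip(pointing, pointing[1:]):
    dwell[pix] = dwell.get(pix, 0.0) + (t1 - t0)

print(dwell)  # -> {3: 20.0, 7: 30.0}
```

The real dwell time map does this over the full attitude history and a HEALPix grid; the scatt map generalizes it by also tracking the spacecraft attitude bins.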

3. Detector response

- Explain the format and meaning of the detector response
- Show how to visualize it
- Explain how to convolve the detector response with a point source model (location + spectrum) + spacecraft file to obtain the expected signal counts. Both in SC and galactic coordinates.
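
In the binned case, the convolution in the last bullet reduces to a matrix product: expected counts = response x source spectrum x exposure. A toy sketch with made-up numbers, not the cosipy response API:

```python
# R[i][j]: effective area (cm^2) mapping true-energy bin j into measured bin i.
R = [[1.0, 0.2],
     [0.1, 0.8]]
flux = [0.05, 0.02]   # source photons / cm^2 / s in each true-energy bin
exposure = 100.0      # effective observing time in s

expected = [sum(R[i][j] * flux[j] for j in range(len(flux))) * exposure
            for i in range(len(R))]
print(expected)  # approximately [5.4, 2.1] expected counts
```

The real response adds Compton data-space axes (scattering angles) and the spacecraft orientation, but the structure of the operation is the same.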

4. GRB localization (TS map)

- Explain the TS calculation
- Explain the meaning of the TS map and how to compute confidence contours
- Compute a TS map, get the best location and estimate the error
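
For a single bin, the TS boils down to twice the log-likelihood ratio between the source+background and background-only hypotheses; the map repeats this per trial source position. A toy Poisson example with illustrative numbers:

```python
import math

def poisson_lnlike(n, mu):
    # log Poisson likelihood, dropping the n! term (it cancels in the ratio)
    return n * math.log(mu) - mu

n_obs = 25       # observed counts
mu_bkg = 15.0    # background-only expectation
mu_src = 10.0    # best-fit source counts (here, what makes the total match n_obs)

TS = 2.0 * (poisson_lnlike(n_obs, mu_bkg + mu_src) - poisson_lnlike(n_obs, mu_bkg))
print(round(TS, 2))  # -> 5.54
```

By Wilks' theorem the TS is approximately chi-squared distributed under the null, which is what turns a TS map into confidence contours.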

5. GRB spectral fitting (local coordinates)

- Introduce 3ML and astromodels
- Explain the likelihood. Reference previous example for data IO, SC file and response.
- Explain how the background is computed/fitted.
- Fit a simple power law, assuming you know the time of the GRB
- Show how to plot the result
- Show how to compare the result with the data
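
For a fixed spectral index and a Poisson likelihood, fitting only the power-law normalization has a closed form: rescale the model so total predicted counts equal total observed counts. A toy sketch (the tutorial itself uses 3ML/astromodels; the binning and numbers below are made up):

```python
E0 = 100.0      # keV, pivot energy (assumed)
index = 2.0     # fixed photon index
energies = [100.0, 200.0, 400.0]   # bin centers, keV
observed = [80, 22, 6]             # observed counts per bin

# Predicted counts per bin for unit normalization (exposure absorbed into K).
model_k1 = [(E / E0) ** (-index) for E in energies]

# Poisson maximum likelihood: d/dK [sum(n_i ln(K m_i) - K m_i)] = 0
# => K = sum(n_i) / sum(m_i)
K = sum(observed) / sum(model_k1)
predicted = [K * m for m in model_k1]
print(round(K, 2))  # -> 82.29
```

With more free parameters (index, cutoff, background) there is no closed form and 3ML's numerical optimizers take over, but the likelihood being maximized is the same.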

6. DC Point source spectral fitting (Crab, galactic)

- Explain why we can’t work directly in SC coordinates, as for the GRB.
- Perform the fit. Here we need less explanation, since most of the machinery was already introduced.
- DC Extended source model fitting
- Explain how the extended source response is a convolution of multiple point sources, and the meaning of the sky model map
- Describe how to pre-compute an all-sky response in galactic coordinates, and explain how to use it.
- Fit the normalization of a simple model, assuming you know the shape and spectrum
- Nice to have: free from spectral or shape parameters
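
The extended-source idea in the first bullet, that the expectation is a sky-model-weighted sum of point-source responses, can be sketched as follows; the pixelization, response values, and sky map are all made up for illustration:

```python
# Point-source response per sky pixel: expected counts in two data-space
# bins for unit flux from that pixel.
psr = {0: [2.0, 0.5],
       1: [1.0, 1.5],
       2: [0.2, 0.8]}

sky_model = {0: 0.1, 1: 0.3, 2: 0.6}   # relative intensity per pixel

# Extended-source expectation: weight each pixel's point-source response
# by the sky model and sum over pixels.
extended = [sum(sky_model[p] * psr[p][b] for p in psr) for b in range(2)]
print(extended)  # approximately [0.62, 0.98]
```

Pre-computing the all-sky response in galactic coordinates amounts to tabulating `psr` once for every pixel, so that fits only need this cheap weighted sum.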

7. Imaging

- Explain the RL algorithm. Reference the previous example. Explain the difference with the TS map.
- Explain the scatt binning and its advantages/disadvantages
- Fit the Crab
- Fit the 511 keV diffuse emission or similar.
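
The RL algorithm in the first bullet is a multiplicative EM update. A one-dimensional toy with a made-up two-bin response; the notebooks apply the same update to full sky images through the detector response:

```python
R = [[0.8, 0.2],
     [0.2, 0.8]]         # R[i][j]: P(measured bin i | true bin j)
data = [70.0, 30.0]       # observed counts
flux = [50.0, 50.0]       # initial guess for the deconvolved image

for _ in range(200):
    expected = [sum(R[i][j] * flux[j] for j in range(2)) for i in range(2)]
    # Multiplicative RL/EM update: flux_j *= sum_i R_ij * data_i / expected_i
    flux = [flux[j]
            * sum(R[i][j] * data[i] / expected[i] for i in range(2))
            / sum(R[i][j] for i in range(2))
            for j in range(2)]

print([round(f, 1) for f in flux])  # -> [83.3, 16.7], solving R @ flux = data
```

Note the update preserves the total counts and keeps the image nonnegative, which is what distinguishes RL imaging from the per-pixel hypothesis test behind a TS map.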

8. Source injector

- Nice to have: allow theorists to test the sensitivity of their models

.. toctree::
:maxdepth: 1

DataIO/DataIO_example.ipynb
Point_source_resonse.ipynb
DetectorResponse.ipynb
spectral_fits/continuum_fit/grb/SpectralFit.ipynb
spectral_fits/continuum_fit/crab/SpectralFit_Crab.ipynb
image_deconvolution/511keV/ScAttBinning/511keV-DC2-ScAtt-ImageDeconvolution.ipynb
