calculate_response_function.ipynb
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n# Computing wavelength response functions\n\nThis example shows how to compute the\nwavelength response function of the 335 \u00c5 channel as\nwell as explore the different properties of the\ntelescope channels.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\n\nimport astropy.time\nimport astropy.units as u\n\nfrom aiapy.response import Channel"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Since AIA uses narrow-band filters, other wavelengths (outside of the nominal\nwavelength attributed to each filter) contribute to the image data.\nComputing these response functions allows us to see which other wavelengths\ncontribute to the total intensity in each image.\n\nFirst, create an `aiapy.response.Channel` object by specifying the\nwavelength of the channel. In this case, we'll\nchoose the 335 \u00c5 channel, but this same workflow\ncan be applied to any of the EUV or UV channels\non AIA. This may take a few seconds the first time you do\nthis as the most recent instrument data file will\nneed to be downloaded from a remote server. Subsequent\ncalls will use the locally cached version of this file.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"c = Channel(335 * u.angstrom)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"From [Boerner et al. (2012)](https://doi.org/10.1007/s11207-011-9804-8),\nthe wavelength response function is given by,\n\n\\begin{align}R(\\lambda) = A_{geo}R_P(\\lambda)R_S(\\lambda)T_E(\\lambda)T_F(\\lambda)\n D(\\lambda)Q(\\lambda)G(\\lambda),\\end{align}\n\nwhere\n\n- $A_{geo}$ geometrical collecting area\n- $R_P,R_S$ reflectances of primary and secondary mirrors, respectively\n- $T_E, T_F$ transmission efficiency of the entrance and focal-plane\n filters, respectively\n- $D$ contaminant transmittance of optics\n- $Q$ quantum efficiency of the CCD\n- $G$ gain of the CCD camera system\n\nThe `aiapy.response.Channel` object provides an interface to all of these\nproperties of the telescope. Below, we show how to plot several of these\nproperties as a function of wavelength.\n\n"
]
},
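A quick way to internalize this formula: the response is simply an elementwise product of the component arrays over the wavelength grid. Below is a minimal `numpy` sketch with made-up component values (illustrative numbers only, not the real AIA calibration data); in practice, `Channel.wavelength_response()` performs this computation with the actual calibrated curves.

```python
import numpy as np

# Mock wavelength-dependent components on an 8-point grid (made-up values).
n = 8
A_geo = 83.0             # geometrical collecting area [cm^2] (illustrative)
R_P = np.full(n, 0.3)    # primary mirror reflectance
R_S = np.full(n, 0.3)    # secondary mirror reflectance
T_E = np.full(n, 0.6)    # entrance filter transmittance
T_F = np.full(n, 0.6)    # focal-plane filter transmittance
D = np.full(n, 0.95)     # contaminant transmittance
Q = np.full(n, 0.8)      # CCD quantum efficiency
G = 17.6                 # camera gain [DN / photon] (illustrative)

# R(lambda) is the elementwise product of every component.
R = A_geo * R_P * R_S * T_E * T_F * D * Q * G
print(R.shape)
```

The real component curves are wavelength-dependent arrays (as plotted below), so the product is evaluated at every point of the channel's wavelength grid.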
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Reflectance\nfig = plt.figure()\nax = fig.add_subplot(221)\nax.plot(c.wavelength, c.primary_reflectance, label=r\"$R_P$\")\nax.plot(c.wavelength, c.secondary_reflectance, label=r\"$R_S$\")\nax.set_ylabel(r\"Reflectance\")\nax.set_xlim(50, 400)\nax.set_xlabel(r\"$\\lambda$ [\u00c5]\")\nax.legend(frameon=False)\n\n# Transmittance\nax = fig.add_subplot(222)\nax.plot(c.wavelength, c.entrance_filter_efficiency, label=r\"$T_E$\")\nax.plot(c.wavelength, c.focal_plane_filter_efficiency, label=r\"$T_F$\")\nax.set_ylabel(r\"Transmittance\")\nax.set_xlim(50, 400)\nax.set_xlabel(r\"$\\lambda$ [\u00c5]\")\nax.legend(frameon=False)\n\n# Contamination\nax = fig.add_subplot(223)\nax.plot(c.wavelength, c.contamination)\nax.set_ylabel(r\"Contamination, $D(\\lambda)$\")\nax.set_xlim(50, 400)\nax.set_xlabel(r\"$\\lambda$ [\u00c5]\")\n\n# Quantum efficiency\nax = fig.add_subplot(224)\nax.plot(c.wavelength, c.quantum_efficiency)\nax.set_ylabel(r\"Quantum Efficiency, $Q(\\lambda)$\")\nax.set_xlim(50, 800)\nax.set_xlabel(r\"$\\lambda$ [\u00c5]\")\nplt.tight_layout()\nplt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Additionally, `aiapy.response.Channel` provides a method for calculating\nthe wavelength response function using the equation above,\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"r = c.wavelength_response()\nprint(r)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can then plot the response as a function of\nwavelength.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"fig = plt.figure()\nax = fig.gca()\nax.plot(c.wavelength, r)\nax.set_xlim((c.channel + [-10, 10] * u.angstrom).value)\nax.set_ylim(0, 0.03)\nax.set_xlabel(r\"$\\lambda$ [\u00c5]\")\nax.set_ylabel(f'$R(\\\\lambda)$ [{r.unit.to_string(\"latex\")}]')\nplt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"On telescopes 1, 3, and 4, both channels are always illuminated.\nThis can lead to \"crosstalk\" contamination in a channel from the channel with\nwhich it shares a telescope. This impacts the 94 \u00c5 and 304 \u00c5 channels\nas well as the 131 \u00c5 and 335 \u00c5 channels. This effect is included\nby default in the wavelength response calculation. To exclude this\neffect,\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"r_no_cross = c.wavelength_response(include_crosstalk=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If we look at the response around 131 \u00c5 (the channel with which 335 \u00c5 shares\na telescope), we can see the effect that the channel crosstalk has on the\n335 \u00c5 response function.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"fig = plt.figure()\nax = fig.gca()\nax.plot(c.wavelength, r, label=\"crosstalk\")\nax.plot(c.wavelength, r_no_cross, label=\"no crosstalk\")\nax.set_xlim(50, 350)\nax.set_xlabel(r\"$\\lambda$ [\u00c5]\")\nax.set_ylabel(f'$R(\\\\lambda)$ [{r.unit.to_string(\"latex\")}]')\nax.legend(loc=1, frameon=False)\nplt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can also incorporate various corrections to the\nresponse functions, including a time-dependent\ndegradation correction as well as a correction based\non the EVE calibration. The latter also includes the\ntime-dependent correction. As an example, to apply the\ntwo aforementioned corrections given the degradation as\nof 1 January 2019,\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"obstime = astropy.time.Time(\"2019-01-01T00:00:00\")\nr_time = c.wavelength_response(obstime=obstime)\nr_eve = c.wavelength_response(obstime=obstime, include_eve_correction=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can then compare the two corrected response\nfunctions to the uncorrected case.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"fig = plt.figure()\nax = fig.gca()\nax.plot(c.wavelength, r, label=\"uncorrected\")\nax.plot(c.wavelength, r_time, label=\"degradation correction\")\nax.plot(c.wavelength, r_eve, label=\"EVE correction\")\nax.set_xlim((c.channel + [-20, 20] * u.angstrom).value)\nax.set_ylim(0, 0.03)\nax.set_xlabel(r\"$\\lambda$ [\u00c5]\")\nax.set_ylabel(f'$R(\\\\lambda)$ [{r.unit.to_string(\"latex\")}]')\nax.legend(loc=2, frameon=False)\nplt.show()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.13"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
replace_hot_pixels.ipynb
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n# Re-spiking level 1 images\n\nThis example demonstrates how to \"re-spike\" AIA level 1 images.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\n\nimport astropy.units as u\nimport sunpy.map\nfrom astropy.coordinates import SkyCoord\n\nimport aiapy.data.sample as sample_data\nfrom aiapy.calibrate import fetch_spikes, respike"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"AIA level 1 images have been corrected for hot pixels (commonly referred to\nas \"spikes\") using an automated correction algorithm which detects them,\nremoves them, and replaces the \"holes\" left in the image via interpolation.\nHowever, for certain research topics, this automated hot-pixel removal\nprocess may result in unwanted removal of bright points which may be\nphysically meaningful. In this example, we will demonstrate how to revert\nthis removal by putting back all the removed pixel values with the\n`aiapy.calibrate.respike` function. This corresponds to the\n`aia_respike.pro` IDL procedure as described in the\n[SDO Analysis Guide](https://www.lmsal.com/sdodocs/doc/dcur/SDOD0060.zip/zip/entry/index.html).\n\nThe header keywords ``LVL_NUM`` and ``NSPIKES`` describe the level number of the\nAIA data (e.g. level 1) and how many hot pixels were removed from the image\n(i.e. the \"spikes\"). The data containing the pixel\npositions and the intensities of the removed hot pixels are available from the\n[Joint Science Operations Center (JSOC)](http://jsoc.stanford.edu/) as a\nseparate segment of the `aia.lev1_euv_12s` and `aia.lev1_uv_24s` data series.\n\n"
]
},
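The bookkeeping behind "re-spiking" can be illustrated with a toy example: record the 1-D index and original value of each hot pixel when it is removed, then write the recorded values back to restore the original image. This is only a sketch of the idea; the actual AIA despiking algorithm and `aiapy.calibrate.respike` additionally handle spike detection, interpolation, and metadata updates.

```python
import numpy as np

# A toy 5x5 "image" with one hot pixel (a "spike").
image = np.ones((5, 5))
image[2, 2] = 100.0

# Despike: record the 1-D pixel index and the original value,
# then replace the spike with a locally interpolated estimate.
idx = np.ravel_multi_index((2, 2), image.shape)
spike_value = image.flat[idx]
image.flat[idx] = 1.0  # stand-in for the interpolated replacement value

# Respike: restore the recorded value at the recorded position.
image.flat[idx] = spike_value
print(image[2, 2])  # → 100.0
```

The JSOC stores exactly this kind of (index, removed value, replacement value) record for every spike, which is what makes the re-spike operation possible.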
{
"cell_type": "markdown",
"metadata": {},
"source": [
"First, let's read a level 1 193 \u00c5 AIA image from the aiapy sample data\ninto a `~sunpy.map.Map` object.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"m = sunpy.map.Map(sample_data.AIA_193_IMAGE)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The spike data are stored as separate data segments in the JSOC\nas $3\\times N$ arrays, where $N$ is the number of spikes\nremoved and the three rows correspond to the 1-D pixel index\nof the spike, the intensity value of the removed spike, and the intensity value\nused to replace the removed spike (via interpolation).\nThe spike pixel positions are given with respect to the level 1 full-disk\nimage.\n\nWe can use the `aiapy.calibrate.fetch_spikes` function to query the JSOC\nfor the spike positions and intensity values and convert the positions of the\nspikes to the 2-D full-disk pixel coordinate system given a\n`~sunpy.map.Map` representing a level 1 AIA image.\n\n"
]
},
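The conversion from a 1-D pixel index to 2-D (row, column) pixel coordinates is an unraveling over the 4096-pixel-wide detector grid. A minimal sketch of that step (assuming row-major, 0-based indexing for illustration; `fetch_spikes` performs the correct conversion for you):

```python
import numpy as np

# Made-up 1-D spike indices into a 4096x4096 full-disk image.
shape = (4096, 4096)
spike_idx = np.array([0, 4096, 8193, 4096 * 4096 - 1])

# Equivalent to divmod(index, 4096) for each entry.
rows, cols = np.unravel_index(spike_idx, shape)
print(list(zip(rows.tolist(), cols.tolist())))
```

For example, index 8193 unravels to row 2, column 1, since 8193 = 2 × 4096 + 1.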
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"positions, values = fetch_spikes(m)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we are ready to respike the level 1 AIA image. The\n`aiapy.calibrate.respike` function performs the respike operation on the given\ninput image and returns a `~sunpy.map.Map` with the respiked image. This\noperation also alters the metadata by updating the ``LVL_NUM``, ``NSPIKES``,\nand ``COMMENTS`` keywords.\n\nNote that explicitly specifying the spike positions and values is optional.\nIf they are not given, they are automatically queried from the JSOC.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"m_respiked = respike(m, spikes=(positions, values))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's create cutouts of the original level 1 and \"re-spiked\" (i.e.\nlevel 0.5) images for a region with hot pixels.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"top_right = SkyCoord(30 * u.arcsec, 420 * u.arcsec, frame=m.coordinate_frame)\nbottom_left = SkyCoord(-120 * u.arcsec, 280 * u.arcsec, frame=m.coordinate_frame)\nm_cutout = m.submap(bottom_left, top_right=top_right)\nm_respiked_cutout = m_respiked.submap(bottom_left, top_right=top_right)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that we can also retrieve the positions of the spikes\nas `~astropy.coordinates.SkyCoord` objects in the projected coordinate\nsystem of the image using the `as_coords=True` keyword argument. This\ngives us only those spikes in the field of view of the cutout.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"spike_coords, _ = fetch_spikes(m_cutout, as_coords=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, let's plot the two cutouts for comparison and plot\nthe positions of the spikes in both images, denoted by white\ncircles.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"fig = plt.figure()\nax = fig.add_subplot(121, projection=m_cutout)\nax.plot_coord(spike_coords, \"o\", color=\"white\", fillstyle=\"none\", markersize=15)\nm_cutout.plot(axes=ax, title='Level 1 \"de-spiked\" data')\nlon, lat = ax.coords\nlon.set_axislabel(\"HPC Longitude\")\nlat.set_axislabel(\"HPC Latitude\")\nax = fig.add_subplot(122, projection=m_respiked_cutout)\nax.plot_coord(spike_coords, \"o\", color=\"white\", fillstyle=\"none\", markersize=15)\nm_respiked_cutout.plot(axes=ax, annotate=False)\nax.set_title('Level 0.5 \"re-spiked\" data')\nlon, lat = ax.coords\nlon.set_axislabel(\"HPC Longitude\")\nlat.set_axislabel(\" \")\nlat.set_ticklabel_visible(False)\nplt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Lastly, let's check the metadata in both the level 1 and the resulting\nlevel 0.5 images to confirm that the appropriate keywords have been updated.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"for k in [\"lvl_num\", \"nspikes\", \"comments\"]:\n print(f\"Level 1: {k}: {m_cutout.meta.get(k)}\")\n print(f\"Level 0.5: {k}: {m_respiked_cutout.meta.get(k)}\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.13"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
download_specific_data.ipynb
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n# Requesting specific AIA images from the JSOC\n\nThis example shows how to request a specific series of AIA images from the JSOC.\n\nWe will be filtering the data we require by keywords and requesting short-exposure images from a recent flare.\n\nUnfortunately, this cannot be done using the sunpy downloader `~sunpy.net.Fido`,\nso instead we will use the [drms](https://docs.sunpy.org/projects/drms/en/stable/) Python library directly.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import os\nfrom pathlib import Path\n\nimport drms\nimport matplotlib.pyplot as plt\n\nimport astropy.units as u\nimport sunpy.map\n\nfrom aiapy.calibrate import correct_degradation, normalize_exposure, register, update_pointing\nfrom aiapy.calibrate.util import get_correction_table, get_pointing_table"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Exporting data from the JSOC requires registering your\nemail first. Please replace this with your email\naddress once you have registered.\nSee [this page](http://jsoc.stanford.edu/ajax/register_email.html)\nfor more details.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"jsoc_email = os.environ.get(\"JSOC_EMAIL\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Our goal is to request data of a recent (at the time of writing)\nX-class flare. First, we will ask the JSOC to describe\nthe keywords we want to use.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"client = drms.Client(email=jsoc_email)\nkeys = [\"EXPTIME\", \"QUALITY\", \"T_OBS\", \"T_REC\", \"WAVELNTH\"]\n\nprint(\"Querying series info\")\n# We plan to only use the EUV 12s data for this example.\nseries_info = client.info(\"aia.lev1_euv_12s\")\nfor key in keys:\n    note_str = series_info.keywords.loc[key].note\n    print(f\"{key:>10} : {note_str}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We will now construct the query. The X-class flare occurred\non 2021-07-03 at 14:30:00 UTC. We will focus on the 5 minutes\nbefore and after this time.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"qstr = \"aia.lev1_euv_12s[2021-07-03T14:25:00Z-2021-07-03T14:35:00Z]\"\nprint(f\"Querying data -> {qstr}\")\nresults = client.query(qstr, key=keys)\nprint(f\"{len(results)} records retrieved.\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As you can see from the output, we have received a\nlist of AIA images that were taken during the flare.\nWhat we want to do now is filter this list\nto keep only the images with shorter exposure times.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Filter out entries with EXPTIME > 2 seconds\nresults = results[results.EXPTIME < 2]\nprint(results)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This style of filtering can be applied to any column\nin the results. For example, we can filter the WAVELNTH\ncolumn to only include 211 \u00c5 data with short exposures.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Only use entries with WAVELNTH == 211\nresults = results[results.WAVELNTH == 211]\nprint(results)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Note**: only complete searches can be downloaded from the JSOC.\nThis means that no slicing operations performed on the results object\nwill affect the number of files downloaded.\n\nWe can filter and do analysis on the metadata that was returned.\nThe issue is that this \"filtered results\" object cannot be used\nto download only the data we want.\nTo do this, we have to make a second query to the JSOC,\nthis time using the query string syntax of the\n[lookdata](http://jsoc.stanford.edu/ajax/lookdata.html) web page.\nYou can use the website to validate the string before you export the query.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"updated_qstr = \"aia.lev1_euv_12s[2021-07-03T14:25:00Z-2021-07-03T14:35:00Z][? EXPTIME<2.0 AND WAVELNTH=211 ?]{image}\"\nprint(f\"Querying data -> {updated_qstr}\")\n# The trick here is to use the \"image\" keyword for ``seg`` to download only\n# the image data; this also gives us direct filenames.\nrecords, filenames = client.query(updated_qstr, key=keys, seg=\"image\")\nprint(f\"{len(records)} records retrieved. \\n\")\n\n# We do a quick comparison to ensure the final results are the same.\n# For this to work, we just need to deal with the different indexes.\nprint(\"Quick Comparison\")\nprint(results.reset_index(drop=True) == records.reset_index(drop=True))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"From here you can now request (export) the data.\nThis will download this specific subset of data to your\nlocal machine once the export request has been completed.\nDepending on the status of the JSOC, this might take a while.\n\nPlease be aware that the script will block until the export is complete.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"export = client.export(updated_qstr, method=\"url\", protocol=\"fits\")\nfiles = export.download(Path(\"~/sunpy/\").expanduser().as_posix())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"With AIA files, it is possible to bypass the export stage.\nWe can manually construct the URLs of the data.\nBe aware that the filename of each downloaded file will be based on its URL.\nYou will then have to use your preferred downloader to fetch the files.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"urls = [f\"http://jsoc.stanford.edu{filename}\" for filename in filenames.image]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we will \"prep\" the data with every feature of\n`aiapy` and plot the data sequence using `sunpy`.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"level_1_maps = sunpy.map.Map(files.download.to_list())\n# We get the pointing table outside of the loop for the relevant time range.\n# Otherwise you're making a call to the JSOC every single time.\npointing_table = get_pointing_table(level_1_maps[0].date - 3 * u.h, level_1_maps[-1].date + 3 * u.h)\n# The same applies for the correction table.\ncorrection_table = get_correction_table()\n\nlevel_15_maps = []\nfor a_map in level_1_maps:\n map_updated_pointing = update_pointing(a_map, pointing_table=pointing_table)\n map_registered = register(map_updated_pointing)\n map_degradation = correct_degradation(map_registered, correction_table=correction_table)\n map_normalized = normalize_exposure(map_degradation)\n level_15_maps.append(map_normalized)\nsequence = sunpy.map.Map(level_15_maps, sequence=True)\nsequence.peek()\n\nplt.show()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.13"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
skip_correct_degradation.ipynb
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n# Correcting for instrument degradation\n\nThis example demonstrates the degradation of the filters on AIA over time.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\n\nimport astropy.time\nimport astropy.units as u\nfrom astropy.visualization import quantity_support, time_support\nfrom sunpy.net import Fido\nfrom sunpy.net import attrs as a\n\nfrom aiapy.calibrate import degradation\n\n# These are needed to allow the use of quantities and astropy\n# time objects in the plot.\ntime_support(format=\"jyear\")\nquantity_support()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The performance of the AIA telescope is unfortunately degrading over time,\nleading to the resulting images becoming increasingly dim. We\ncan correct for this by modeling the degradation over time and\nthen dividing the image intensity by this correction.\n\nFirst, let's fetch some metadata for the 335 \u00c5 channel of AIA between 2010\nand 2021 at a cadence of 30 days. We choose the 335 \u00c5 channel because it has experienced\nsignificant degradation compared to the other EUV channels.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"results = Fido.search(\n a.Time(\"2010-06-01T00:00:00\", \"2021-06-01T00:00:00\"),\n a.Sample(30 * u.day),\n a.jsoc.Series.aia_lev1_euv_12s,\n a.jsoc.Wavelength(335 * u.angstrom),\n)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We only need the date and mean intensity columns from the\nmetadata that was returned. We select those and nothing else.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"table = results[\"jsoc\"].show(\"DATE__OBS\", \"DATAMEAN\")\ntable[\"DATAMEAN\"].unit = u.ct\ntable[\"DATE_OBS\"] = astropy.time.Time(table[\"DATE__OBS\"], scale=\"utc\")\ndel table[\"DATE__OBS\"]\n\nprint(table)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, we pass the date column to the `aiapy.calibrate.degradation`\nfunction. This function calculates the time-dependent correction factor\nbased on the time and wavelength of the observation.\nWe then divide the mean intensity by the correction factor to get the corrected intensity.\nFor more details on how the correction factor is calculated, see the documentation for\n`aiapy.calibrate.degradation`.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"correction_factor = degradation(335 * u.angstrom, table[\"DATE_OBS\"])\n# This correction can be applied to a sunpy Map as well.\ntable[\"DATAMEAN_DEG\"] = table[\"DATAMEAN\"] / correction_factor"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To understand the effect of the degradation and the correction factor, we\nplot the corrected and uncorrected mean intensity as a function of time.\nNote that the uncorrected intensity decreases monotonically over time\nwhile the corrected intensity recovers to pre-2011 values in 2020.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"plt.plot(table[\"DATE_OBS\"], table[\"DATAMEAN\"], label=\"mean\", marker=\"o\")\nplt.plot(table[\"DATE_OBS\"], table[\"DATAMEAN_DEG\"], label=\"mean, corrected\", marker=\"o\")\nplt.title(f'{(335*u.angstrom).to_string(format=\"latex\")} Channel Degradation')\nplt.legend(frameon=False)\n\nplt.show()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.13"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
skip_psf_deconvolution.ipynb
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n# Deconvolving images with the instrument Point Spread Function (PSF)\n\nThis example demonstrates how to deconvolve an AIA image with\nthe instrument point spread function (PSF).\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\n\nimport astropy.units as u\nimport sunpy.map\nfrom astropy.coordinates import SkyCoord\nfrom astropy.visualization import AsinhStretch, ImageNormalize, LogStretch\n\nimport aiapy.data.sample as sample_data\nimport aiapy.psf"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"AIA images are subject to convolution with the instrument point-spread\nfunction (PSF) due to effects introduced by the filter mesh of the telescope\nand the CCD, among others. This has the effect of \"blurring\" the image.\nThe PSF diffraction pattern may also be particularly noticeable during the\nimpulsive phase of a flare where the intensity enhancement is very localized.\nTo remove these artifacts, the PSF must be deconvolved from the image.\n\nFirst, we'll use a single level 1 image from the 171 \u00c5 channel from\n15 March 2019. Note that deconvolution should be performed on level 1 images\nonly. This is because, as with the level 1 data, the PSF model is defined\non the CCD grid. Once deconvolved, the image can be passed to\n`aiapy.calibrate.register`\n(see the `sphx_glr_generated_gallery_prepping_level_1_data.py` example).\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"m = sunpy.map.Map(sample_data.AIA_171_IMAGE)\nfig = plt.figure()\nax = fig.add_subplot(111, projection=m)\nm.plot(axes=ax)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, we'll calculate the PSF using `aiapy.psf.psf` for the 171 \u00c5 channel.\nThe PSF model accounts for several different effects, including diffraction\nfrom the mesh grating of the filters, charge spreading, and jitter. See\n[Grigis et al. (2012)](https://sohoftp.nascom.nasa.gov/solarsoft/sdo/aia/idl/psf/DOC/psfreport.pdf)\nfor more details. Currently, this only works for\n$4096\\times4096$ full frame images.\n\nNote that this will be significantly faster if you have a GPU and the `cupy`\npackage installed.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"psf = aiapy.psf.psf(m.wavelength)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We'll plot just a 500-by-500 pixel section centered on the center pixel. The\ndiffraction \"arms\" extending from the center pixel can often be seen in\nflare observations due to the intense, small-scale brightening.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"fov = 500\nlc_x, lc_y = psf.shape[0] // 2 - fov // 2, psf.shape[1] // 2 - fov // 2\nplt.imshow(\n psf[lc_x : lc_x + fov, lc_y : lc_y + fov],\n norm=ImageNormalize(vmin=1e-8, vmax=1e-3, stretch=LogStretch()),\n)\nplt.colorbar()\nplt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now that we've downloaded our image and computed the PSF, we can deconvolve\nthe image with the PSF using the\n[Richardson-Lucy deconvolution algorithm](https://en.wikipedia.org/wiki/Richardson%E2%80%93Lucy_deconvolution).\nNote that passing in the PSF is optional. If you exclude it, it will be\ncalculated automatically. However, when deconvolving many images of the same\nwavelength, it is most efficient to only calculate the PSF once.\n\nAs with `aiapy.psf.psf`, this will be much faster if you have\na GPU and `cupy` installed.\n\n"
]
},
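For intuition about what the deconvolution is doing, the core Richardson-Lucy update is only a few lines. Here is a toy 1-D sketch with a made-up signal and kernel (the real `aiapy.psf.deconvolve` operates on the full 2-D image and PSF and is the function you should use for AIA data):

```python
import numpy as np

def richardson_lucy_1d(observed, kernel, iterations=50):
    # Start from a flat estimate and iteratively multiply by the
    # back-projected ratio of observed to re-blurred estimate.
    estimate = np.full_like(observed, observed.mean())
    kernel_mirror = kernel[::-1]
    for _ in range(iterations):
        blurred = np.convolve(estimate, kernel, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate = estimate * np.convolve(ratio, kernel_mirror, mode="same")
    return estimate

# A single "spike" blurred by a normalized 3-point kernel.
truth = np.zeros(21)
truth[10] = 1.0
kernel = np.array([0.25, 0.5, 0.25])
observed = np.convolve(truth, kernel, mode="same")

recovered = richardson_lucy_1d(observed, kernel)
print(int(np.argmax(recovered)))  # → 10: the spike location is recovered
```

The iteration progressively concentrates the blurred flux back toward the original spike, which is exactly the "deblurring" effect seen in the deconvolved AIA images below.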
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"m_deconvolved = aiapy.psf.deconvolve(m, psf=psf)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's compare the convolved and deconvolved images.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"norm = ImageNormalize(vmin=0, vmax=1.5e4, stretch=AsinhStretch(0.01))\nfig = plt.figure()\nax = fig.add_subplot(121, projection=m)\nm.plot(axes=ax, norm=norm)\nax = fig.add_subplot(122, projection=m_deconvolved)\nm_deconvolved.plot(axes=ax, annotate=False, norm=norm)\nax.coords[0].set_axislabel(\" \")\nax.coords[1].set_axislabel(\" \")\nax.coords[1].set_ticklabel_visible(False)\nplt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The differences become a bit more obvious when we zoom in. Note that the\ndeconvolution has the effect of \"deblurring\" the image.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"left_corner = 500 * u.arcsec, -600 * u.arcsec\nright_corner = 1000 * u.arcsec, -100 * u.arcsec\nfig = plt.figure()\nm_sub = m.submap(\n bottom_left=SkyCoord(*left_corner, frame=m.coordinate_frame),\n top_right=SkyCoord(*right_corner, frame=m.coordinate_frame),\n)\nax = fig.add_subplot(121, projection=m_sub)\nm_sub.plot(axes=ax, norm=norm)\nm_deconvolved_sub = m_deconvolved.submap(\n bottom_left=SkyCoord(*left_corner, frame=m_deconvolved.coordinate_frame),\n top_right=SkyCoord(*right_corner, frame=m_deconvolved.coordinate_frame),\n)\nax = fig.add_subplot(122, projection=m_deconvolved_sub)\nm_deconvolved_sub.plot(axes=ax, annotate=False, norm=norm)\nax.coords[0].set_axislabel(\" \")\nax.coords[1].set_axislabel(\" \")\nax.coords[1].set_ticklabel_visible(False)\nplt.show()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.13"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
prepping_level_1_data.ipynb
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n# Registering and aligning level 1 data\n\nThis example demonstrates how to convert AIA images to a common pointing,\nrescale them to a common plate scale, and remove the roll angle.\nThis process is often referred to as \"aia_prep\" and the resulting data are typically referred to as level 1.5 data.\nIn this example, we show how to do this with `aiapy`.\nThis corresponds to the `aia_prep.pro` procedure described in the [SDO Analysis Guide](https://www.lmsal.com/sdodocs/doc/dcur/SDOD0060.zip/zip/entry/index.html).\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import sunpy.map\n\nimport aiapy.data.sample as sample_data\nfrom aiapy.calibrate import normalize_exposure, register, update_pointing"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Performing multi-wavelength analysis on level 1 data can be problematic as\neach of the AIA channels has a slightly different spatial scale and roll\nangle. Furthermore, the estimates of the pointing keywords (``CDELT1``, ``CDELT2``, ``CRPIX1``,\n``CRPIX2``, ``CROTA2``) may have been improved due to limb fitting procedures. The\n[Joint Science Operations Center (JSOC)](http://jsoc.stanford.edu/) stores\nAIA image data and metadata separately; when users download AIA data, these\ntwo data types are combined to produce a FITS file. While metadata are\ncontinuously updated at the JSOC, previously downloaded FITS files will not\ncontain the most recent information.\n\nThus, before performing any multi-wavelength analysis, level 1 data\nshould be updated to the most recent and accurate pointing and interpolated\nto a common grid in which the y-axis of the image is aligned\nwith solar North.\n\nFirst, let's read a level 1 94 \u00c5 AIA image from the ``aiapy`` sample data into\na `~sunpy.map.Map` object.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"m = sunpy.map.Map(sample_data.AIA_094_IMAGE)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The first step in this process is to update the metadata of the map to the\nmost recent pointing using the `aiapy.calibrate.update_pointing` function.\nThis function queries the JSOC for the most recent pointing information,\nupdates the metadata, and returns a `sunpy.map.Map` with updated metadata.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"m_updated_pointing = update_pointing(m)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If we take a look at the plate scale and rotation matrix of the map, we\nfind that the scale is slightly off from the expected value of $0.6''$ per\npixel and that the rotation matrix has off-diagonal entries.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(m_updated_pointing.scale)\nprint(m_updated_pointing.rotation_matrix)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can use the `aiapy.calibrate.register` function to scale the image to\n$0.6''$ per pixel and derotate the image such that the y-axis is aligned\nwith solar North.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"m_registered = register(m_updated_pointing)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If we look again at the plate scale and rotation matrix, we\nshould find that the plate scale in each direction is $0.6''$\nper pixel and that the rotation matrix is diagonal.\nThe image in `m_registered` is now a level 1.5 data product.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(m_registered.scale)\nprint(m_registered.rotation_matrix)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Though it is not typically part of the level 1.5 \"prep\" data pipeline,\nit is also common to normalize the image to the exposure time such that\nthe units of the image are DN / pixel / s.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"m_normalized = normalize_exposure(m_registered)"
]
},
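{
"cell_type": "markdown",
"metadata": {},
"source": [
"These three steps are often chained together into a single expression. The following is a minimal sketch using only the functions already imported above, applied to the level 1 map `m`:\n\n```python\nm_level_15 = normalize_exposure(register(update_pointing(m)))\n```\n\n"
]
},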
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, we can plot the exposure-normalized map.\nNote that small negative pixel values are possible because\nCCD images were taken with a pedestal set at ~100 DN.\nThis pedestal is then subtracted when the JSOC pipeline\nperforms dark (+pedestal) subtraction and flatfielding\nto generate level 1 files.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"m_normalized.peek(vmin=0)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.13"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
instrument_degradation.ipynb
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n# Modeling channel degradation over time\n\nThis example demonstrates how to model the degradation\nof the AIA channels as a function of time over the entire\nlifetime of the instrument.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\nimport numpy as np\n\nimport astropy.time\nimport astropy.units as u\nfrom astropy.visualization import time_support\n\nfrom aiapy.calibrate import degradation\nfrom aiapy.calibrate.util import get_correction_table"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The sensitivity of the AIA channels degrades over time. Possible causes include\nthe deposition of organic molecules from the telescope structure onto the\noptical elements and the decrease in detector sensitivity following (E)UV\nexposure. When looking at AIA images over the lifetime of the mission, it\nis important to understand how the degradation of the instrument impacts the\nmeasured intensity. For monitoring brightness changes over months and years,\ndegradation correction is an important step in the data normalization process.\nFor instance, the SDO Machine Learning Dataset\n([Galvez et al., 2019](https://ui.adsabs.harvard.edu/abs/2019ApJS..242....7G/abstract))\nincludes this correction.\n\nThe AIA team models the change in transmission as a function of time (see\n[Boerner et al., 2012](https://doi.org/10.1007/s11207-011-9804-8)) and\nthe table of correction parameters is publicly available via the\n[Joint Science Operations Center (JSOC)](http://jsoc.stanford.edu/).\n\nFirst, fetch this correction table. Doing so explicitly is not strictly necessary,\nbut it significantly speeds up the calculation below by fetching the table\nonly once.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"correction_table = get_correction_table()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We want to compute the degradation for each EUV channel.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"channels = [94, 131, 171, 193, 211, 304, 335] * u.angstrom"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can use the `~astropy.time` subpackage to create an array of times\nbetween the start of the mission and now with a cadence of one week.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"time_0 = astropy.time.Time(\"2010-03-25T00:00:00\", scale=\"utc\")\nnow = astropy.time.Time.now()\ntime = time_0 + np.arange(0, (now - time_0).to(u.day).value, 7) * u.day"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, we can use the `aiapy.calibrate.degradation` function to\ncompute the degradation for a particular channel and observation time.\nThis is modeled as the ratio of the effective area measured at a particular\ncalibration epoch over the uncorrected effective area with a polynomial\ninterpolation to the exact time.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"deg = {c: degradation(c, time, correction_table=correction_table) for c in channels}"
]
},
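{
"cell_type": "markdown",
"metadata": {},
"source": [
"If the goal is to correct an image rather than inspect the curves, `aiapy.calibrate` also provides a `correct_degradation` function that divides a `~sunpy.map.Map` by the degradation factor for its channel and observation time. The following is a minimal sketch, assuming a level 1 AIA map `m` has already been loaded (no such map is created in this example):\n\n```python\nfrom aiapy.calibrate import correct_degradation\n\nm_corrected = correct_degradation(m, correction_table=correction_table)\n```\n\n"
]
},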
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Plotting the different degradation curves as a function of time, we can\neasily visualize how the different channels have degraded over time.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"time_support(format=\"jyear\") # This lets you pass astropy.time.Time objects directly to matplotlib\nfig = plt.figure()\nax = fig.gca()\nfor c in channels:\n ax.plot(time, deg[c], label=f\"{c.value:.0f} \u00c5\")\nax.set_xlim(time[[0, -1]])\nax.legend(frameon=False, ncol=4, bbox_to_anchor=(0.5, 1), loc=\"lower center\")\nax.set_xlabel(\"Time\")\nax.set_ylabel(\"Degradation\")\nplt.show()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.13"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
update_header_keywords.ipynb
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n# Updating pointing and observer keywords in the FITS header\n\nThis example demonstrates how to update the metadata in\nan AIA FITS file to ensure that it has the most accurate\ninformation regarding the spacecraft pointing and observer\nposition.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import sunpy.map\n\nimport aiapy.data.sample as sample_data\nfrom aiapy.calibrate import fix_observer_location, update_pointing"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"An AIA FITS header contains various pieces of\n[standard](https://fits.gsfc.nasa.gov/fits_standard.html)\nmetadata that are critical to the physical interpretation of the data.\nThese include the pointing of the spacecraft, necessary for connecting\npositions on the pixel grid to physical locations on the Sun, as well as\nthe observer (i.e. satellite) location.\n\nWhile this metadata is recorded in the FITS header, some values in\nthe headers exported by data providers (e.g. the\n[Joint Science Operations Center (JSOC)](http://jsoc.stanford.edu/) and\nthe [Virtual Solar Observatory](https://sdac.virtualsolar.org/cgi/search))\nmay not always be the most accurate. In the case of the spacecraft\npointing, a more accurate 3-hourly pointing table is available from the\nJSOC.\n\nFor this example, we will read a 171 \u00c5 image from the ``aiapy`` sample data\ninto a `~sunpy.map.Map` object.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"m = sunpy.map.Map(sample_data.AIA_171_IMAGE)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To update the pointing keywords, we can pass our `~sunpy.map.Map` to the\n`aiapy.calibrate.update_pointing` function. This function will query the\nJSOC, using `~sunpy`, for the most recent pointing information, update\nthe metadata, and then return a new `~sunpy.map.Map` with this updated\nmetadata.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"m_updated_pointing = update_pointing(m)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If we inspect the reference pixel and rotation matrix of the original map\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(m.reference_pixel)\nprint(m.rotation_matrix)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"and the map with the updated pointing information\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(m_updated_pointing.reference_pixel)\nprint(m_updated_pointing.rotation_matrix)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"we find that the relevant keywords, `CRPIX1`, `CRPIX2`, `CDELT1`, `CDELT2`,\nand `CROTA2`, have been updated.\n\nSimilarly, the Heliographic Stonyhurst (HGS) coordinates of the observer\nlocation in the header are inaccurate. If we check the HGS longitude keyword\nin the header, we find that it is 0 degrees, which is not the true HGS longitude\ncoordinate of SDO.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(m_updated_pointing.meta[\"hgln_obs\"])\nprint(m_updated_pointing.meta[\"hglt_obs\"])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To update the HGS observer coordinates, we can use the\n`aiapy.calibrate.fix_observer_location` function. This function reads the\ncorrect observer location from Heliocentric Aries Ecliptic (HAE) coordinates\nin the header, converts them to HGS, and replaces the inaccurate HGS\nkeywords.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"m_observer_fixed = fix_observer_location(m_updated_pointing)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Looking again at the HGS longitude and latitude keywords, we can see that\nthey have been updated.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(m_observer_fixed.meta[\"hgln_obs\"])\nprint(m_observer_fixed.meta[\"hglt_obs\"])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that in `~sunpy.map.sources.AIAMap`, the `~sunpy.map.Map.observer_coordinate`\nattribute is already derived from the HAE coordinates, so it is not\nstrictly necessary to apply `aiapy.calibrate.fix_observer_location`. For\nexample, the unfixed `~sunpy.map.Map` will still have an accurate derived\nobserver position\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(m_updated_pointing.observer_coordinate)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"However, we suggest that users apply this fix such that the information\nstored in `~sunpy.map.Map.meta` is accurate and consistent.\n\nFinally, plot the fixed map.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"m_observer_fixed.peek()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.13"
}
},
"nbformat": 4,
"nbformat_minor": 0
}