Requesting specific AIA images from the JSOC

This example shows how to request a specific series of AIA images from the JSOC.

We will filter the data we require by keyword and request short-exposure images from a recent flare.

Unfortunately, this cannot be done using the sunpy downloader Fido; instead we will use the drms Python library directly.

import os
from pathlib import Path

import drms
import matplotlib.pyplot as plt

import astropy.units as u

from aiapy.calibrate import correct_degradation, normalize_exposure, register, update_pointing
from aiapy.calibrate.util import get_correction_table, get_pointing_table

Exporting data from the JSOC requires registering your email first. Please replace this with your email address once you have registered. See this page for more details.

jsoc_email = os.environ.get("JSOC_EMAIL")

Our goal is to request data of a recent (at the time of writing) X-class flare. First, however, we will ask the JSOC for an explanation of the keywords we are interested in.

client = drms.Client(email=jsoc_email)

print("Querying series info")
# We plan to only use the EUV 12s data for this example.
series_info = client.info("aia.lev1_euv_12s")
keys = ["EXPTIME", "QUALITY", "T_OBS", "T_REC", "WAVELNTH"]
for key in keys:
    note_str = series_info.keywords.loc[key].note
    print(f"{key:>10} : {note_str}")
Querying series info
   EXPTIME : Exposure duration: mean shutter open time
   QUALITY : Level 1 Quality word
     T_OBS : Observation time
     T_REC : Slotted observation time
  WAVELNTH : Wavelength

We will now construct the query. The X-class flare occurred on 2021-07-03 at 14:30:00 UTC. We will focus on the 5 minutes before and after this time.

qstr = "aia.lev1_euv_12s[2021-07-03T14:25:00Z-2021-07-03T14:35:00Z]"
print(f"Querying data -> {qstr}")
results = client.query(qstr, key=keys)
print(f"{len(results)} records retrieved.")
Querying data -> aia.lev1_euv_12s[2021-07-03T14:25:00Z-2021-07-03T14:35:00Z]
357 records retrieved.

As you can see from the output, we have received a list of AIA images that were taken during the flare. What we want to do now is filter the list of images to only include shorter exposures. Before we do this, let us check what the exposure times are.

# Keep only entries with EXPTIME < 2 seconds
results = results[results.EXPTIME < 2]
print(results)
      EXPTIME  QUALITY                    T_OBS                 T_REC  WAVELNTH
3    1.999620        0  2021-07-03T14:25:05.84Z  2021-07-03T14:24:59Z       193
10   1.999624        0  2021-07-03T14:25:17.84Z  2021-07-03T14:25:11Z       193
17   1.999622        0  2021-07-03T14:25:29.83Z  2021-07-03T14:25:23Z       193
24   1.999621        0  2021-07-03T14:25:41.84Z  2021-07-03T14:25:35Z       193
31   1.999622        0  2021-07-03T14:25:53.84Z  2021-07-03T14:25:47Z       193
..        ...      ...                      ...                   ...       ...
344  0.102880        0  2021-07-03T14:34:58.50Z  2021-07-03T14:34:47Z       131
346  0.148062        0  2021-07-03T14:34:55.47Z  2021-07-03T14:34:47Z       193
350  1.483586        0  2021-07-03T14:35:01.80Z  2021-07-03T14:34:59Z        94
353  1.999624        0  2021-07-03T14:35:05.84Z  2021-07-03T14:34:59Z       193
354  0.509090        0  2021-07-03T14:35:01.15Z  2021-07-03T14:34:59Z       211

[101 rows x 5 columns]
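The filtering above is plain pandas boolean masking on the results DataFrame. A minimal, self-contained sketch with made-up values (the real query returns hundreds of rows) shows the pattern:

```python
import pandas as pd

# Hypothetical subset of query results; real EXPTIME values come from the JSOC.
results = pd.DataFrame(
    {
        "EXPTIME": [1.999620, 0.102880, 0.148062, 2.900278, 0.509090],
        "WAVELNTH": [193, 131, 193, 94, 211],
    }
)

# Inspect the range of exposure times before choosing a cutoff.
print(results["EXPTIME"].describe())

# Keep only short exposures, exactly as in the example above.
short = results[results.EXPTIME < 2]
print(len(short))  # 4 of the 5 rows survive the cut
```

``EXPTIME < 2`` produces a boolean Series, and indexing with it keeps only the rows where the condition holds; the original row indexes are preserved, which matters for the comparison later in this example.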

This style of filtering can be applied to any column in the results. For example, we can filter the WAVELNTH column to only include 211 data with short exposures.

# Only use entries with WAVELNTH == 211
results = results[results.WAVELNTH == 211]
print(results)
      EXPTIME  QUALITY                    T_OBS                 T_REC  WAVELNTH
88   0.578715        0  2021-07-03T14:27:25.09Z  2021-07-03T14:27:23Z       211
102  0.693402        0  2021-07-03T14:27:48.98Z  2021-07-03T14:27:47Z       211
116  0.832670        0  2021-07-03T14:28:12.86Z  2021-07-03T14:28:11Z       211
130  1.000602        0  2021-07-03T14:28:36.72Z  2021-07-03T14:28:35Z       211
144  1.199910        0  2021-07-03T14:29:00.54Z  2021-07-03T14:28:59Z       211
158  0.237294        0  2021-07-03T14:29:25.39Z  2021-07-03T14:29:23Z       211
172  0.286916        0  2021-07-03T14:29:49.34Z  2021-07-03T14:29:47Z       211
186  0.344258        0  2021-07-03T14:30:13.29Z  2021-07-03T14:30:11Z       211
200  0.414851        0  2021-07-03T14:30:37.23Z  2021-07-03T14:30:35Z       211
214  0.495807        0  2021-07-03T14:31:01.16Z  2021-07-03T14:30:59Z       211
228  0.594103        0  2021-07-03T14:31:25.07Z  2021-07-03T14:31:23Z       211
242  0.712433        0  2021-07-03T14:31:48.97Z  2021-07-03T14:31:47Z       211
256  0.855792        0  2021-07-03T14:32:12.86Z  2021-07-03T14:32:11Z       211
270  0.168142        0  2021-07-03T14:32:37.46Z  2021-07-03T14:32:35Z       211
284  0.204964        0  2021-07-03T14:33:01.42Z  2021-07-03T14:32:59Z       211
298  0.246917        0  2021-07-03T14:33:25.38Z  2021-07-03T14:33:23Z       211
312  0.296079        0  2021-07-03T14:33:49.34Z  2021-07-03T14:33:47Z       211
326  0.353420        0  2021-07-03T14:34:13.29Z  2021-07-03T14:34:11Z       211
340  0.427137        0  2021-07-03T14:34:37.22Z  2021-07-03T14:34:35Z       211
354  0.509090        0  2021-07-03T14:35:01.15Z  2021-07-03T14:34:59Z       211


Only complete searches can be downloaded from the JSOC; this means that slicing operations performed on the results object will not affect the number of files downloaded.

We can filter and analyze the metadata that was returned. The issue is that this "filtered results" object cannot be used to download only the data we want. To do this, we have to make a second query to the JSOC, this time using the query string syntax of the lookdata web page. You can use that website to validate the string before you export the query.

updated_qstr = "aia.lev1_euv_12s[2021-07-03T14:25:00Z-2021-07-03T14:35:00Z][? EXPTIME<2.0 AND WAVELNTH=211 ?]{image}"
print(f"Querying data -> {updated_qstr}")
# The trick here is to pass "image" as the ``seg`` keyword so that only the
# image segment is returned; this also gives us direct filenames.
records, filenames = client.query(updated_qstr, key=keys, seg="image")
print(f"{len(records)} records retrieved. \n")

# We do a quick comparison to ensure the final results are the same.
# For this to work, we just need to deal with the different indexes.
print("Quick Comparison")
print(results.reset_index(drop=True) == records.reset_index(drop=True))
Querying data -> aia.lev1_euv_12s[2021-07-03T14:25:00Z-2021-07-03T14:35:00Z][? EXPTIME<2.0 AND WAVELNTH=211 ?]{image}
20 records retrieved.

Quick Comparison
0      True     True   True   True      True
1      True     True   True   True      True
2      True     True   True   True      True
3      True     True   True   True      True
4      True     True   True   True      True
5      True     True   True   True      True
6      True     True   True   True      True
7      True     True   True   True      True
8      True     True   True   True      True
9      True     True   True   True      True
10     True     True   True   True      True
11     True     True   True   True      True
12     True     True   True   True      True
13     True     True   True   True      True
14     True     True   True   True      True
15     True     True   True   True      True
16     True     True   True   True      True
17     True     True   True   True      True
18     True     True   True   True      True
19     True     True   True   True      True
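The ``reset_index(drop=True)`` calls are necessary because pandas refuses to compare DataFrames whose indexes differ: the filtered ``results`` kept its original row labels (88, 102, ...) while the fresh ``records`` starts from 0. A toy demonstration of the problem and the fix:

```python
import pandas as pd

# Two frames with identical contents but different indexes, mimicking the
# filtered ``results`` versus the freshly queried ``records``.
filtered = pd.DataFrame({"WAVELNTH": [211, 211]}, index=[88, 102])
fresh = pd.DataFrame({"WAVELNTH": [211, 211]}, index=[0, 1])

try:
    filtered == fresh
except ValueError as exc:
    # Comparing differently-labeled DataFrames raises a ValueError.
    print(exc)

# Dropping both indexes makes the element-wise comparison valid.
print((filtered.reset_index(drop=True) == fresh.reset_index(drop=True)).all().all())
```

``drop=True`` discards the old index instead of inserting it as a column, so the compared frames have identical shapes and labels.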

From here you can now request (export) the data. This will download this specific subset of data to your local machine when the export request has been completed. Depending on the status of the JSOC, this might take a while.

Please be aware the script will hold until the export is complete.

export = client.export(updated_qstr, method="url", protocol="fits")
files = export.download(Path("~/sunpy/").expanduser().as_posix())

With AIA files, it is possible to bypass the export stage by manually constructing the URLs of the data. Be aware that each file will have the same filename based on the URL, so you will have to use your preferred downloader and rename the files yourself.

# NOTE: the base URL below assumes the public JSOC HTTP server; the original
# value was lost in this copy of the example, so verify it before use.
urls = [f"{filename}" for filename in filenames.image]

Now we will “prep” the data with every feature of aiapy and plot the data sequence using sunpy.

level_1_maps = [ for file in files.download]
# We get the pointing table outside of the loop for the relevant time range.
# Otherwise you're making a call to the JSOC every single time.
pointing_table = get_pointing_table(level_1_maps[0].date - 3 * u.h, level_1_maps[-1].date + 3 * u.h)
# The same applies for the correction table.
correction_table = get_correction_table()

level_15_maps = []
for a_map in level_1_maps:
    map_updated_pointing = update_pointing(a_map, pointing_table=pointing_table)
    map_registered = register(map_updated_pointing)
    map_degradation = correct_degradation(map_registered, correction_table=correction_table)
    map_normalized = normalize_exposure(map_degradation)
    level_15_maps.append(map_normalized)

sequence =, sequence=True)
sequence.peek()
[Figure: AIA 211.0 Angstrom 2021-07-03 14:27:24]

Total running time of the script: (1 minutes 35.112 seconds)
