Image Quality Catalog

This page describes the image quality information extracted from the DECam and Blanco telemetry databases. Several options are available to obtain this information:

  • You can download standard catalogs for the entire period or for individual months from this page. The catalogs are available in CSV format or as SQLite database files. We provide a simple module, catalog.py, to work with these catalogs in Python.
  • The standard catalogs have grown in size and now contain over 100 variables. If you need less or different information, you have two options to generate a personalized catalog. Both options are described in detail below.
    • Run the exposure_reader.py script from your Unix command line
    • Enter an SQL statement into the selection window of the Query page of the Telemetry Viewer web application

The original catalog wiki page can be found here. Please be aware that most information on the old page is obsolete.

Standard IQ Catalogs

Using the tools described below we have generated standard image quality catalogs that can be downloaded from this page.
This version of the catalogs includes several new variables:

  1. ra_offset: the exposure offset in RA [arcsec] as found by kentools
  2. dec_offset: the exposure offset in dec [arcsec] as found by kentools
  3. slewangl: the slew angle from the previous exposure in degrees
  4. slew_time: Telescope slew time in seconds (not set when the exposure queue was empty)
  5. readout_time: Panview/CCD readout time in seconds
  6. hexapod_time: Hexapod adjustment time in seconds
  7. time_between_exposures: also in seconds
  8. moonangl: Separation in degrees between requested telescope position and the moon

The information in the catalog is stitched together from different tables in the database:

  1. exposure.exposure
  2. telemetry.image_health_fp
  3. telemetry.guider_summary
  4. telemetry.tcs_telemetry
  5. telemetry.donut_ana
  6. telemetry.hexapod_lut
  7. telemetry.aos_summary
  8. telemetry.donut_summary
  9. telemetry.telescope_data

SV IQ Catalog Variables

Click here for a brief description of the variables included in the new SV IQ catalogs.

Catalog Files

The catalogs are comma-separated ASCII files. They include all exposures of type object with an exposure time of at least 30 seconds that did not lose all guider stars and that have been delivered.

iq_jan.dat IQ Catalog for January

iq_feb.dat IQ Catalog for February

iq_march.dat IQ Catalog for March

iq_apr.dat IQ Catalog for April

iq_may.dat IQ Catalog for May

iq_june.dat IQ Catalog for June

iq_july.dat IQ Catalog for July

iq_august_5.dat IQ Catalog for August through August 5th

The following catalogs contain each month's engineering regression exposures (only those taken by the regression script). They contain extra variables in addition to the standard ones.
The June and July catalogs contain corrected trim, tweak, and hexapod position values; the values in the other catalogs are incorrect.

april_engineering_regression.dat

may_engineering_regression.dat

june_engineering_regression.dat

july_engineering_regression.dat

The following contains all valid exposures in the database. It has been updated as of August 5th:

all_exposures.dat

There are also SQLite database versions of the same data available:

Notes about database versions:

  • The table name is "catalog"
  • The data types in the table may occasionally be incorrect; in those cases the values are stored as type String.
  • Column names containing a "-" use "_" instead.

iq_jan.db DB for January

iq_feb.db DB for February

iq_march.db DB for March

iq_apr.db DB for April

iq_may.db DB for May

iq_june.db DB for June

iq_july.db DB for July

iq_august_5.db DB for August through August 5th

Engineering regression databases (only exposures taken by the regression script):
The June and July databases contain corrected trim, tweak, and hexapod position values; the values in the other databases are incorrect.

april_engineering_regression.db

may_engineering_regression.db

june_engineering_regression.db

july_engineering_regression.db

The following contains all valid exposures in the database. It has been updated as of August 5th:

all_exposures.db
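The notes above can be exercised with Python's built-in sqlite3 module. The sketch below uses a tiny in-memory stand-in for a downloaded monthly file such as iq_jan.db (with a real file you would connect to it directly and skip the setup); the column names are illustrative and show the defensive cast for values stored as strings:

```python
import sqlite3

# In-memory stand-in for a downloaded file such as iq_jan.db; with a
# real file use sqlite3.connect("iq_jan.db") and skip the CREATE and
# INSERT statements below. Column names here are illustrative.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE catalog (expid INTEGER, exptime TEXT, winddir TEXT)")
con.executemany(
    "INSERT INTO catalog VALUES (?, ?, ?)",
    [(201000, "30.0", "240.5"), (201001, "45.0", "228.0")],
)

# The table name is always "catalog". Since some numeric columns may
# be stored as strings, cast defensively before doing arithmetic.
rows = con.execute("SELECT expid, exptime FROM catalog ORDER BY expid").fetchall()
exptimes = [float(t) for _, t in rows]
print(exptimes)  # [30.0, 45.0]
```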

How to Build Your Own IQ Catalog from the Command Line

To use this option you must download the exposure_reader.py Python module. You can run it from the command line (after running chmod +x exposure_reader.py to make it executable) or, if you need more options, use it from a script or the Python interpreter. It works on any computer that can open a connection to the DECam database mirror at fnal.gov. If you want to use this script, please get in touch to obtain the username and password.

Using exposure_reader.py from the command line

Using exposure_reader.py in Unix to make IQ Catalogs is relatively simple. After downloading the file, just run it in Unix with the name of the catalog output file as an argument:

$ exposure_reader.py mycatalog.dat

You may have to run
$ chmod +x exposure_reader.py 
first to change the permissions. Optional arguments can be used to select a date range or a range of exposure numbers. For additional options you have to use the write_exposures function directly in a script or the Python interpreter (see below).
The command line options for exposure_reader.py are:
-m or --minimum-id     for the minimum exposure id
-M or --maximum-id     for the maximum exposure id
-d or --minimum-date   for the minimum date (to include a time use the format YYYY-MM-DDThh:mm:ss)
-D or --maximum-date   for the maximum date (to include a time use the format YYYY-MM-DDThh:mm:ss)

For example, to select all exposures taken in January and to write the catalog to a file named jan.dat, use this command:

prompt> ./exposure_reader.py -d 2013-01-01 -D 2013-02-01 jan.dat

Using the exposure_reader.py module in a Python script

The exposure_reader module includes the write_exposures function that you can call directly from a Python script or the interpreter if you need more control over the selection process.
Here is the calling signature for this function. An output file name is required; all other arguments are optional. The defaults are listed below:

write_exposures(file_name, id_min = None, id_max = None, date_min = None, date_max = None, flavor = 'object', delivered = True, guider_low_bound = 1, 
guider_high_bound = 2, exptime_min = 30, user_conditions = '', none_type_written = None)

delivered, guider_low_bound, and guider_high_bound are standard selection criteria used for all IQ catalogs in the past. The default values reject exposures with online (SISPI) problems. The default minimum exposure time is 30 seconds. Note that no guider information is available for exposures shorter than 20 seconds.
To write exposures to an IQ file in Python, open the Python interpreter in Unix, import exposure_reader, and run the write_exposures method to write exposures from the database. For example:
>>> import exposure_reader as expreader
>>> expreader.write_exposures(file_name = 'iq_jan.dat', date_min = '2013-01-01', date_max = '2013-02-01')

This will write all valid exposure data taken in January to the file iq_jan.dat. Only exposures of type "object", with delivered = true, a guider value greater than 0 and less than 3 (had guider stars and did not lose track of all of them), and an exposure time of at least 30 seconds are considered valid.

The write_exposures method requires the argument file_name; the arguments id_min, id_max, date_min, date_max, and user_conditions are optional. Dates must be in the format YYYY-MM-DDTHH:MM:SS, where the separator may be the character 'T' or a space ' '. The user_conditions argument accepts a selection criterion in SQL WHERE syntax, i.e. in the same format you would use after WHERE in an SQL query, e.g. " winddir >= 235 ".

>>> import exposure_reader as expreader
>>> expreader.write_exposures(file_name = 'iq_test.dat', id_min = 201000, id_max = 209000, user_conditions = ' winddir >=235 ')

This example writes all exposure data to iq_test.dat for exposures with ids between 201000 and 209000 and a wind direction greater than or equal to 235.

To change the base selection standards for exposures, the optional arguments in the method write_exposures can be overwritten. They are (with their defaults), flavor = 'object', delivered = True, guider_low_bound = 1, guider_high_bound = 2, and exptime_min = 30.

For example, to change the minimum exposure time to 20 in the previous example:

>>> expreader.write_exposures(file_name = 'iq_test.dat', id_min = 201000, id_max = 209000, user_conditions = ' winddir >=235 ', exptime_min = 20)

To change how data points with value None are written, override the optional argument none_type_written with the desired value.

>>> expreader.write_exposures(file_name = 'iq_test.dat', id_min = 201000, id_max = 209000, user_conditions = ' winddir >=235 ', exptime_min = 20, none_type_written = -99999)

In the above example, any data point with value None is written as -99999.
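Downstream code then has to recognize that sentinel again. A minimal sketch, using a stand-in column and the sentinel value from the example above:

```python
# Stand-in for a column read back from a catalog that was written
# with none_type_written = -99999.
winddir = [240.5, -99999, 228.0, -99999]

# Map the sentinel back to None before doing any statistics.
cleaned = [v if v != -99999 else None for v in winddir]
print(cleaned)  # [240.5, None, 228.0, None]

# Or keep only the valid measurements.
valid = [v for v in winddir if v != -99999]
print(valid)  # [240.5, 228.0]
```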

write_exposures is a wrapper function that calls query_exposures to retrieve exposure information from the database and then write this to the output file.

The query_exposures Function

The actual database query is performed by the query_exposures function in the exposure_reader module. Here is the calling syntax:

query_exposures(exposure_id_min=None, exposure_id_max=None, exposure_date_min = None, exposure_date_max =None, exposure_flavor = 'object', 
exposure_delivered = True, exposure_guider_low_bound = 1, exposure_guider_high_bound = 2, exposure_exptime_min = 30, conditions = '', none_type = None)

The query_exposures function works like write_exposures but returns a dictionary of lists instead of writing a file. The keys are the exposure variable names (properties), which can be found here. The lists are index-aligned by exposure: if the exposure id at index 0 of the list under the 'expid' key is 201001, then the value at index 0 of the list under the 'winddir' key is the wind direction of exposure 201001. For example:

>>> import exposure_reader as expreader
>>> d = expreader.query_exposures(exposure_id_min = 201000, exposure_id_max = 201020, conditions = ' winddir >=235 ', exposure_exptime_min = 10)
>>> d['expid']
[201000, 201001, 201002, 201003, 201004, 201005, 201006, 201007, 201008, 201009, 201010]

The variable d now contains a dictionary of all exposures with ids between 201000 and 201020, a wind direction greater than or equal to 235, and an exposure time of at least 10 seconds.
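Since query_exposures needs database credentials, the parallel-list layout can be illustrated with a stand-in dictionary; per-exposure rows are reassembled with zip:

```python
# Stand-in for a query_exposures() result: lists keyed by variable
# name, index-aligned by exposure. Keys and values are illustrative.
d = {
    "expid":   [201000, 201001, 201002],
    "winddir": [240.0, 255.5, 238.0],
    "exptime": [30.0, 30.0, 45.0],
}

# Reassemble per-exposure rows from the parallel lists.
rows = list(zip(d["expid"], d["winddir"], d["exptime"]))
print(rows[0])  # (201000, 240.0, 30.0)

# Or look up one variable for a given exposure id.
i = d["expid"].index(201001)
print(d["winddir"][i])  # 255.5
```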

Using your custom SQL statement with exposure_reader

Direct SQL statements can also be used with exposure_reader via the sql_query method. The method takes three arguments. The first is sql, a string containing the SQL statement. The result is returned to the user without processing, as a list of tuples (this is how the database returns data, each tuple representing a row). The user can also optionally define the arguments file_name and header. If file_name is given, a CSV file containing the returned data is written to that file. If header is given as well, the file is written in the format of an IQ catalog file; header must be a list of strings containing the column names.

Examples:

>>> import exposure_reader as expreader
>>> data = expreader.sql_query(sql ='SELECT exposure.id, exposure.exposed, donut_summary.expid, exposure.date, donut_summary.dodz, telescope_data.tel_az 
FROM exposure.exposure FULL JOIN telemetry.donut_summary ON exposure.id = donut_summary.expid FULL JOIN telemetry.telescope_data ON 
telescope_data.expid = exposure.id WHERE exposure.id > 207300 AND exposure.id < 207320')

In the above example, data contains the list of tuples returned from the query.

>>> import exposure_reader as expreader
>>> data = expreader.sql_query( sql = 'SELECT exposure.id, exposure.exposed, donut_summary.expid, exposure.date, donut_summary.dodz, telescope_data.tel_az 
FROM exposure.exposure FULL JOIN telemetry.donut_summary ON exposure.id = donut_summary.expid FULL JOIN telemetry.telescope_data 
ON telescope_data.expid = exposure.id WHERE exposure.id > 207300 AND exposure.id < 207320', file_name =  'example.dat')

In the above example, data contains the list of tuples, and the data is also written to the csv file example.dat.

>>> import exposure_reader as expreader
>>> data = expreader.sql_query(sql = 'SELECT exposure.id, exposure.exposed, donut_summary.expid, exposure.date, donut_summary.dodz, telescope_data.tel_az 
FROM exposure.exposure FULL JOIN telemetry.donut_summary ON exposure.id = donut_summary.expid FULL JOIN telemetry.telescope_data 
ON telescope_data.expid = exposure.id WHERE exposure.id > 207300 AND exposure.id < 207320', file_name =  'example2.dat', 
header = ['id', 'exposed', 'expid', 'date', 'dodz', 'tel_az'])

Finally, the above example writes the list of tuples to data and also creates an IQ catalog named example2.dat with columns id, exposed, expid, date, dodz, and tel_az.

How to Build Your Own IQ Catalog using the Telemetry Viewer

The Query page of the Telemetry Viewer web application (http://system1.ctio.noao.edu:8080/TV/app/Q/index) can be used to create custom IQ catalogs. Here is a description of how to do this, as well as a few examples that you can copy and paste into the text box on the Query page.

To select a field (column) from a table, use

SELECT column1, column2 FROM table;
It is a good idea to limit the results with WHERE, otherwise a very large amount of information will be returned. WHERE follows the previous statement and has the form WHERE condition. LIMIT can also be used to limit the number of rows returned.

Examples of the above would be:

SELECT id, date FROM exposure.exposure WHERE id > 150000 AND id < 150100
SELECT id, date FROM exposure.exposure LIMIT 100;

Finally, to select multiple columns from multiple tables while making sure that the exposure ID's match, use

SELECT table1.column1, table1.column2, table2.column1 FROM table1 FULL JOIN table2 ON table1.expid = table2.expid

For example:

SELECT exposure.id, donut_summary.expid, exposure.date, donut_summary.dodz FROM exposure.exposure 
FULL JOIN telemetry.donut_summary ON exposure.id = donut_summary.expid 
WHERE exposure.id > 207300 AND exposure.id < 208000;

This can be done with any number of tables, for example:

SELECT exposure.id, donut_summary.expid, exposure.date, donut_summary.dodz, telescope_data.tel_az FROM exposure.exposure 
FULL JOIN telemetry.donut_summary ON exposure.id = donut_summary.expid FULL JOIN telemetry.telescope_data ON telescope_data.expid = exposure.id
WHERE exposure.id > 207300 AND exposure.id < 208000;

The examples above can all be used in the Telemetry Viewer for quick viewing of data. Just replace the table and column names with those of the wanted data, and the exposure id values with the desired range.

Utilities

Catalog Module

The IQ catalogs are simple CSV files that can be processed with a number of tools. For your convenience, catalog.py is a simple Python module that provides a few functions for working with IQ catalog files. Most useful is probably getdata, which reads in a catalog file and converts it to a Python dictionary using the column names as keys. A detailed description of the module can be found here.
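If catalog.py is not at hand, a dictionary-of-columns in the style of getdata can be built with the standard library alone. This sketch parses a few illustrative catalog lines (real files are downloaded from this page, and the exact behavior of catalog.getdata may differ):

```python
import csv
import io

# A few illustrative lines of an IQ catalog file; real files are
# downloaded from this page and the column names may differ.
text = "expid,exptime,winddir\n201000,30.0,240.5\n201001,45.0,228.0\n"

def read_catalog(fileobj):
    """Return a dict mapping column names to lists of values,
    converting numeric fields to float where possible."""
    reader = csv.DictReader(fileobj)
    data = {name: [] for name in reader.fieldnames}
    for row in reader:
        for name, value in row.items():
            try:
                value = float(value)
            except ValueError:
                pass  # keep non-numeric fields as strings
            data[name].append(value)
    return data

data = read_catalog(io.StringIO(text))
print(data["expid"])  # [201000.0, 201001.0]
```

With a downloaded catalog you would pass an open file object instead of the io.StringIO stand-in.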
