
List of cron jobs running on online machines

The following cron jobs should always be running on the online (DAQ) machines:

On evb:
On evb:
  • drop_caches -- clears out cached memory every 3 minutes. Linux tries to be smart about keeping recently read files/data in the page cache so they can be re-read quickly, but it is not so great about releasing that memory promptly when we run short of space, so this is a simple hack to force a cleanup. On evb it is used to clear out the data from the nu/triggered stream.
  • gmetric -- runs once a minute and updates the "size_data" ganglia metric (basically: free space in the /data area).
On the sebs (seb01-10):
  • drop_caches -- the same script described above for evb (it actually runs on all the online machines); on the sebs it is used to clear out the data from the sn stream.
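Neither script is reproduced on this page. As a rough illustration only, the mechanisms they rely on could look like the sketch below; the function names, the `df` parsing, and the `gmetric` flags are assumptions, not the actual contents of `/home/uboonedaq/scripts/cronJobs/drop_caches.sh` or `gmetric.sh`.

```shell
#!/bin/sh
# Hedged sketch of the two evb cron jobs described above; the real
# scripts live in /home/uboonedaq/scripts/cronJobs/ and may differ.

# drop_caches: flush dirty pages, then ask the kernel to release the
# page cache, dentries, and inodes (value 3). The knob path is
# parameterized here only so the logic can be exercised without root;
# the real target is /proc/sys/vm/drop_caches.
drop_caches() {
    knob=${1:-/proc/sys/vm/drop_caches}
    sync                # flush dirty pages first so no data is lost
    echo 3 > "$knob"    # 3 = free page cache + dentries + inodes
}

# size_data: free kilobytes on the filesystem holding /data, taken
# from the 4th field of POSIX `df -P` output.
free_kb() {
    df -P -k "${1:-/data}" | awk 'NR==2 {print $4}'
}

# Push the value into ganglia (only where the gmetric tool exists).
if command -v gmetric >/dev/null 2>&1; then
    gmetric --name size_data --value "$(free_kb /data)" --type uint32 --units KB
fi
```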

You can check which cron jobs are running by looking at /var/log/cron when logged in as root on the online machines (you need to be logged in as root to check or change cron jobs).

A list of all running cron jobs

This page summarizes all cron jobs running on MicroBooNE DAQ machines.
As part of the effort to back up every valuable component on these machines, we need to verify that all cron jobs, and the scripts they run, are backed up. The evb /home area, /var/spool/cron on every online machine, and /etc/cron.* on every online machine are backed up to uboonedaq-evb. In the list below, if a script is called, its name is given in bold and the cron job line is copied alongside it for reference. Scripts in bold red are not currently backed up.

Glen

  • Running on every machine:
    1. /home/gahs/slowmoncon/apps/PCStatus/ipmi_sdr_to_ganglia.py: */1 * * * * cd /home/gahs/slowmoncon/apps/PCStatus; python ipmi_sdr_to_ganglia.py --ipmitoolsdr=/home/gahs/ipmi-to-ganglia/ipmi-sdr-list --ignorelist=ignorelist-`hostname -s`.txt
  • Running on smc:
    1. /home/gahs/slowmoncon/apps/extravigilance/extravigilance.sh: 13 * * * * cd /home/gahs/slowmoncon/apps/extravigilance; ./extravigilance.sh
    2. /home/gahs/java-opc-client/startOPCClient.sh: # 26,56 * * * * cd /home/gahs/java-opc-client; ( ./startOPCClient.sh 2>&1 ) > /dev/null &
    3. /home/gahs/storcli_to_gmetric/storcli_to_gmetric.py: */10 * * * * /usr/krb5/bin/kcron /usr/krb5/bin/ksu -e /opt/MegaRAID/storcli/storcli64 show | python storcli_to_gmetric/storcli_to_gmetric.py
       Glenn note: plan to replace with a better script that works on all machines anyway, and store it in (probably) /home/uboonesmc. It will be important for many reasons that the RAID utilities in /opt not be lost; this is taken care of by SLAM -- the RAID utilities are put there by configuration management.
    4. /home/uboonesmc/setup_SMC_EPICS.sh, /home/uboonesmc/daily_alarm_summary/daily_alarm_summary.py, /home/uboonesmc/daily_alarm_summary/ecl_post.py: 01 7 * * * ( source ~uboonesmc/setup_SMC_EPICS.sh; setup python v2_7_9; cd daily_alarm_summary; python ./daily_alarm_summary.py | python ./ecl_post.py -U http://dbweb6.fnal.gov:8080/ECL/uboone -c "Slow monitoring and control" )
  • Running on smc and smc2
    1. /home/gahs/bin/pg_dumpall_script.sh: 05 20 * * * ( time nice ionice -c2 -n7 bin/pg_dumpall_script.sh )

Victor

  • Running on ws02
    1. /home/vgenty/update_kinit.sh: */60 * * * * /home/vgenty/update_kinit.sh >> /home/vgenty/cron_log.log 2>&1. Kirby note - is this still needed?

Nathaniel

  • Running on near1:
    1. (Commented out - can this be deleted?) /home/tagg/sn-monitor/run.sh: #0,10,20,30,40,50 * * * * /usr/krb5/bin/kcron /home/tagg/sn-monitor/run.sh
    2. /home/tagg/sn-monitor/run.sh: * * * * * /usr/krb5/bin/kcron /home/tagg/sn-monitor/run.sh
  • Running on near2:
    1. /home/tagg/RunCat/cron.sh: 0,10,20,30,40,50 * * * * /usr/krb5/bin/kcron ${RUNCAT}/cron.sh > ${RUNCAT}/cron.log 2>&1
    2. /home/tagg/Argo/server/sync_to_evd1.sh: 0,10,20,30,40,50 * * * * /usr/krb5/bin/kcron /home/tagg/Argo/server/sync_to_evd1.sh > /home/tagg/Argo/logs/sync_to_evd1.log 2>&1
    3. /usr/krb5/bin/kcron rsync -a evb:~uboonedaq/RunInfoLogs/ /datalocal/RunInfoLogs-backup
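Many of the jobs on this page are wrapped in /usr/krb5/bin/kcron. For background (this description is from general Fermilab practice, not from the scripts themselves): kcron obtains a Kerberos ticket from the host's cron-principal keytab before executing the wrapped command, so unattended jobs can use kerberized tools such as ssh, scp, and rsync. The schedule and path below are illustrative only, not an actual entry:

```
# Illustrative crontab line (hypothetical path): kcron first acquires a
# Kerberos ticket for this host's cron principal, then runs the wrapped
# command with that credential in place.
0 * * * * /usr/krb5/bin/kcron /path/to/some_kerberized_job.sh
```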

Kirby

  • Running on evb:
    1. 00 01 * * * /usr/krb5/bin/kcron find /tmp/evb-home-usage.html -mmin -120 -exec 'scp {} uboonegpvm01.fnal.gov:/publicweb/k/kirby/evb-home-usage.html \;'
    2. /etc/cron.d/evb_home_usage_kirby - necessary because the "kirby" account doesn't actually have authority to read everything in /home in order to get that usage list. Please don't delete this. If it does go away, it won't stop operations, we just won't know which accounts are using up space on /home when it fills up. It needs to run as root so it can read all the home areas, so it was suggested to install it here by the SLAM team I believe.
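The usage-report script itself is not reproduced here. A minimal sketch of what a job like this typically does (the function name and output format are assumptions, not the contents of /etc/cron.d/evb_home_usage_kirby) is:

```shell
#!/bin/sh
# Hedged sketch of a /home usage report: the real job runs as root so
# it can read every home area, and writes the report where the kcron
# scp job above can pick it up.
usage_report() {
    # print "<kilobytes> <dir>" for each entry under $1, largest first
    du -sk "$1"/* 2>/dev/null | sort -rn
}
usage_report /home > /tmp/evb-home-usage.html
```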

uboonedaq

  • Running on evb
    1. /home/uboonedaq/scripts/cronJobs/gmetric.sh: in /etc/cron.d/gmetric: * * * * * uboonedaq /home/uboonedaq/scripts/cronJobs/gmetric.sh
  • Running on ws01
    1. 0 8 * * 5 source /uboonenew/setup; setup root v5_34_23 -q e6:prof; cd /home/uboonedaq/scripts/cronJobs/; python getAutoDAQelogStatistics.py -b >/home/uboonedaq/logDAQstatistics 2>&1
  • Running on ws01 and evb
    1. /home/uboonedaq/uboone-shift-tools/uboone-operations-ubooneops/sync-er: 0,10,20,30,40,50 * * * * export KRB5CCNAME=FILE:/tmp/krb5cc_uboonedaq_evb;/home/uboonedaq/uboone-shift-tools/uboone-operations-ubooneops/sync-er >/home/uboonedaq/uboone-shift-tools/log 2>&1
  • Running on near2
    1. (Commented out - can this be deleted?) #1 * * * * /usr/krb5/bin/kcron rsync -a evb:~uboonedaq/RunInfoLogs/ /datalocal/RunInfoLogs-backup
  • Running on uboonedaq-evb
    The following cron jobs are the main backup, via rsync, of the evb /home and uboonenew directories and of all cron jobs running on all online machines.
    They are located here:
    /etc/cron.daily/backup_evb_home_prods
    /etc/cron.daily/backup_cron_jobs

ubooneshift

  • Running on ws01
    1. /home/ubooneshift/uboone-shift-tools/uboone-operations-ubooneops/sync-er (is this the same as uboonedaq cron job above?): 0,10,20,30,40,50 * * * * export KRB5CCNAME=FILE:/tmp/krb5cc_uboonedaq_evb;/home/ubooneshift/uboone-shift-tools/uboone-operations-ubooneops/sync-er >/home/ubooneshift/uboone-shift-tools/log 2>&1

uboonepro

Kirby note: Please make sure these are maintained otherwise we won't be able to copy any files out of evb or near1 to permanent storage.

  • Running on ws02, smc, and evb
    1. 00 12 * * * export KRB5CCNAME=FILE:/tmp/krb5cc_uboonepro_smc; kinit -A -k -t /var/adm/krb5/uboonepro_smc.keytab uboonepro/cron/ubdaq-prod-smc.fnal.gov
  • Running on near1 and evb
    1. /home/uboonepro/ubooneprokey.pem, /home/uboonepro/ubooneprocert.pem, /home/uboonepro/uboonepro_production_near1_proxy_file: 05 */6 * * * /usr/bin/voms-proxy-init -rfc -key /home/uboonepro/ubooneprokey.pem -cert /home/uboonepro/ubooneprocert.pem -valid 48:0 -voms fermilab:/fermilab/uboone/Role=Production -out /home/uboonepro/uboonepro_production_near1_proxy_file

postgres

Bonnie will request a TiBS backup for /pghome, to ensure the directories are backed up and recoverable if needed (they will just not be available on the quick backup from uboonedaq-evb, but we will be able to recover them with slightly more time). The database team (Olga and Svetlana) will take care of the cron jobs themselves.
Action item (Bonnie): request TiBS backup for /pghome

  • Running on smc
    1. 58 23 * * * find /pgdata/pg_log/ -type f -mtime +14 -exec rm {} \; > /dev/null 2>&1
    2. 25 11 * * * find /pglogs/pg_archlogs_local/ -type f -mtime +14 -exec rm {} \; > /dev/null 2>&1
    3. /pghome/postgres/check_primary_smc.sh: */30 * * * * /pghome/postgres/check_primary_smc.sh >> /dev/null
    4. /pghome/postgres/rdbae/bbc4.60-bbpe/check_bb.sh: */15 * * * * /pghome/postgres/rdbae/bbc4.60-bbpe/check_bb.sh >> /dev/null 2>&1
  • Running on smc and smc2:
    1. /pghome/postgres/krbsmc_renew.sh: 0 */8 * * * /pghome/postgres/krbsmc_renew.sh >> /dev/null 2>&1
  • Running on smc2:
    1. 00 10 * * * find /datalocal/archive -type f -mtime +7 -exec rm {} \; > /dev/null 2>&1
    2. 00 23 * * * find /datalocal/pgdata/pg_log/ -type f -mtime +14 -exec rm {} \; > /dev/null 2>&1
    3. *%{color:red}/pghome/postgres/check_standby_ov.sh%*: */30 * * * * /pghome/postgres/check_standby_ov.sh >> /dev/null
    4. /pghome/postgres/xfer_wals_to_ifdb06.sh: #*/10 * * * * /pghome/postgres/xfer_wals_to_ifdb06.sh >> /datalocal/pgdata/pg_log/xfer_wals_to_ifdb06_$(date +\%Y-\%m-\%d).log 2>&1
    5. /pghome/postgres/xfer_wals_to_smc.sh: #*/10 * * * * /pghome/postgres/xfer_wals_to_smc.sh >> /datalocal/pgdata/pg_log/xfer_wals_to_smc_$(date +\%Y-\%m-\%d).log 2>&1
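The cleanup entries above all share one pattern: delete files older than N days with `find -mtime`. A minimal, testable form of that pattern (the function name is illustrative):

```shell
#!/bin/sh
# The find-based cleanup pattern used by the postgres log-rotation
# jobs: remove regular files under $1 whose modification time is more
# than $2 days ago. "-mtime +N" matches files older than N*24 hours.
cleanup_older_than() {
    dir=$1
    days=$2
    find "$dir" -type f -mtime +"$days" -exec rm {} \;
}
```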

uboonesmc

  • Running on smc
    1. /home/uboonesmc/setup_SMC_EPICS.sh: SETUP=./setup_SMC_EPICS.sh
    2. SMCAPPS=./slowmoncon/apps/
    3. /home/uboonesmc/slowmoncon/apps/ubdaq-prod-smc_run/run-smc.sh: */2 * * * * source $SETUP; cd $SMCAPPS; ubdaq-prod-smc_run/run-smc.sh > ubdaq-prod-smc_run/run-smc.sh.out 2>&1
    4. /home/uboonesmc/slowmoncon/apps/WeatherReader/weather_reader.sh: */2 * * * * source $SETUP; cd $SMCAPPS/WeatherReader; ./weather_reader.sh > weather_reader.sh.out 2>&1
    5. /home/uboonesmc/slowmoncon/apps/IFBeamDataReader/beamdata.sh: */2 * * * * source $SETUP; cd $SMCAPPS/IFBeamDataReader; ./beamdata.sh > beamdata.sh.out 2>&1
    6. /home/uboonesmc/slowmoncon/apps/GangliaReader/GangliaReader.py: */1 * * * * source $SETUP; cd $SMCAPPS/GangliaReader; sleep 5; python GangliaReader.py > GangliaReader.out 2>&1
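The $SETUP and $SMCAPPS references above work because crontab lines of the form NAME=value (entries 1 and 2) set environment variables for every job, and cron hands each job line to /bin/sh, which expands the variables at execution time. A small simulation of that expansion, with the values copied from the crontab above:

```shell
#!/bin/sh
# Simulate how cron runs entry 3: cron puts SETUP and SMCAPPS into the
# job's environment, then passes the command string to /bin/sh, which
# performs the $-expansion. Single quotes keep the string unexpanded
# until sh -c sees it, just as a raw crontab line would be.
job='echo "would run: source $SETUP; cd $SMCAPPS"'
out=$(SETUP=./setup_SMC_EPICS.sh SMCAPPS=./slowmoncon/apps/ sh -c "$job")
echo "$out"
```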

Root

  • Running on evb, seb01, seb02, seb03, seb04, seb05, seb06, seb07, seb08, seb09, seb10:
    1. /home/uboonedaq/scripts/cronJobs/drop_caches.sh: in /etc/cron.d/drop_caches: */3 * * * * root /home/uboonedaq/scripts/cronJobs/drop_caches.sh > /dev/null 2>&1