Project

General

Profile

Computer stress test

  • ipmitool dcmi power reading
  • ipmi-dcmi --get-system-power-statistics
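
A minimal sketch for logging the power draw while a stress run is active, assuming ipmitool is available and the BMC supports DCMI (the log file name is just an example):

# sample the DCMI power reading once a minute and append it to a log
while true; do
  date >> power-readings.log
  ipmitool dcmi power reading >> power-readings.log
  sleep 60
done
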
# export PTS_CONCURRENT_TEST_RUNS=10

# export TOTAL_LOOP_TIME=10

# phoronix-test-suite stress-run pts/openssl pts/aobench pts/build-linux-kernel pts/cachebench pts/compress-gzip pts/ffmpeg pts/crafty pts/git pts/osbench pts/pybench pts/compilebench pts/sqlite pts/aio-stress pts/fio pts/iozone pts/stream pts/ramspeed pts/tiobench pts/fs-mark pts/hdparm-read

 - When prompted for test options, choose "Test All Options"
This is our default stress test process:

# Install a smattering of various tests
phoronix-test-suite install pts/openssl pts/aobench pts/build-linux-kernel pts/cachebench pts/compress-gzip pts/ffmpeg pts/crafty
phoronix-test-suite install pts/git pts/osbench pts/pybench pts/compilebench pts/sqlite pts/aio-stress pts/fio pts/iozone
phoronix-test-suite install pts/stream pts/ramspeed pts/tiobench pts/primesieve

# Set concurrent runs 
PTS_CONCURRENT_TEST_RUNS=10
export PTS_CONCURRENT_TEST_RUNS

# Set max run time in minutes
TOTAL_LOOP_TIME=40
export TOTAL_LOOP_TIME

# Fire off the stress run
phoronix-test-suite stress-run pts/openssl pts/aobench pts/build-linux-kernel pts/cachebench pts/compress-gzip \
  pts/ffmpeg pts/crafty pts/git pts/osbench pts/pybench pts/compilebench pts/sqlite pts/aio-stress pts/fio \
  pts/iozone pts/stream pts/ramspeed pts/tiobench

This should automatically end after ${TOTAL_LOOP_TIME} minutes.

Tests that failed to build:
  • pts/build-linux-kernel
  • pts/ffmpeg
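
To see why those two failed, re-running just their install step by itself makes the build output easier to inspect (a sketch using the same install command as above):

# retry the failing builds on their own and watch the output
phoronix-test-suite install pts/build-linux-kernel pts/ffmpeg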

Logbooks

Always start with http://dbweb0.fnal.gov

Near the top there is a link to a list of all eLogs.

Note that the links will redirect you to dbweb4, dbweb5, dbweb6, or similar, but those hosts are not guaranteed to be there forever.

Icarus DCS IOCs

Hi all,

I tracked down the problem with the gizmo archiving -- the archiver appears to be unhappy with having multiple EPICS instances running on the same server.  EPICS itself does not seem to care; you can plot real-time PV data, no problem.  Curious.

I copied the gizmo PVs over to the GPS EPICS instance and things appear to work now.

One more tidbit of info: we have been running multiple EPICS instances on icarus-gateway01 under the icarusdcs account (I just added you to .k5login).

I've tried to give the screen session sensible names:

[icarusdcs@icarus-gateway01 ~]$ screen -list

There are screens on:

        210017.gizmo    (Detached)

        31913.gps       (Detached)

        22836.gps-bcast (Detached)

        76988.gps-epics (Detached)

        308772.TCPPS    (Detached)

        323926.archiver (Detached)

6 Sockets in /var/run/screen/S-icarusdcs.

They all use the same port number, so I guess the proof is in the puddin'
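
To poke at one of these IOCs, reattach to its screen session by name and detach again with Ctrl-a d so the process keeps running (standard screen usage):

screen -r gizmo      # attach to the gizmo session
screen -r archiver   # same for the archiver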

Gizmo

Here's a wiki page for running the gizmo monitor at D0, easy to replace with icarus-gateway:

https://cdcvs.fnal.gov/redmine/projects/sbnddaq/wiki/GIZMO_and_Epics

Since the new sbndcs repo is not structured yet, the code is still in the old sbnddaq repo:

git clone ssh://p-sbnddaq@cdcvs.fnal.gov/cvs/projects/sbnddaq-readout

in the sbnddaq-readout/projects/gps directory

NB: If Linda changes the gizmo's "chirp" file (frequency setting, etc), the gizmo spy must be restarted to pick up the changes.  Usually she posts changes in the Icarus eLog.  If the gizmo spy is not restarted, the front panel numbers will oscillate between the two different configuration settings, and Linda will freak out.

oops, correction:

sbnddaq-readout/projects/gizmo directory

The code makes a simple ssh connection to the gizmo, runs a command there, and parses the output.  When Giovanna saw it, she was apparently unhappy with its simplicity and wanted to install a Rest Hub on the gizmo.  I don't know if that ever happened, did it?
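
Putting the pieces above together, checking out the code and getting to the right directory looks like:

git clone ssh://p-sbnddaq@cdcvs.fnal.gov/cvs/projects/sbnddaq-readout
cd sbnddaq-readout/projects/gizmo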

Web access

The sbn-online web server files are located at:

/web/sites/s/sbn-online.fnal.gov/

This area is mounted on both icarusgpvm and sbndgpvm servers. 
Access is controlled through a service desk form; contact badgett@fnal.gov
for permissions.

This area has both development and production server areas -- any area
ending in "-dev" is development, the others are production.  Please test
changes in development first prior to moving to production.

The area to URL mapping is:

/web/sites/s/sbn-online.fnal.gov/htdocs => http://sbn-online.fnal.gov
/web/sites/s/sbn-online.fnal.gov/htdocs-dev => http://sbn-onlinedev.fnal.gov
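
In practice that means staging a change in the -dev area first, checking it at the dev URL, and only then copying it to production (the file name here is just an example):

# stage in development and check http://sbn-onlinedev.fnal.gov
cp mypage.html /web/sites/s/sbn-online.fnal.gov/htdocs-dev/
# once it looks right, promote to production
cp mypage.html /web/sites/s/sbn-online.fnal.gov/htdocs/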

Tomcat applications may be put in the webapps area.

HV for CRT

Services

Redmine Wiki

elog

Python

-bash-4.2$ virtualenv /home/nfs/savage/py/2.7.5/
New python executable in /home/nfs/savage/py/2.7.5/bin/python
Installing setuptools, pip, wheel...done.
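
To use the environment afterwards, activate it in the usual virtualenv way (the package in the pip line is just an example):

source /home/nfs/savage/py/2.7.5/bin/activate
pip install requests
deactivate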

SBN

Group account = sbnd

Computers

The four "new" servers at the D0/313 test stand are now fully ready to be used, with private network connected:

Public     Private
sbn-daq01  sbn-daq01-priv
sbn-daq02  sbn-daq02-priv
sbn-daq03  sbn-daq03-priv
sbn-daq04  sbn-daq04-priv
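
From any of the four nodes, the private interfaces should answer to the -priv names; a quick sanity check:

# from sbn-daq01, confirm the private network path to another node
ping -c 3 sbn-daq02-priv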

SBND

ICARUS

ICARUS IFIX

The OPC servers are a mirrored pair:
fix-130473.fnal.gov
fix-130474.fnal.gov

They are accessible only on site or with a VPN.

The account we use for the OPC to EPICS copy is FERMI\numi-srv-ifix-opc,
which you can use to log into the above PCs.  It is a Windows service account that can connect using Microsoft Remote Desktop.

Why log into the OPC servers?
  • Browse the OPC variable definitions and retrieve a list like the spreadsheet I sent around
  • Problems on the EPICS side seeing OPC variables?  Launch the local viewer on the Windows PC to help localize the problem
  • Retrieve the "magic number" the OPC to EPICS service needs to connect.  Sometimes it changes

Be careful, this account appears to have write access to OPC

I would send pix, but the PCs appear to be down right now
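
One way to log in from a Linux desktop is an RDP client such as xfreerdp -- a sketch, assuming xfreerdp is installed and you have the service account credentials:

xfreerdp /v:fix-130473.fnal.gov /u:'FERMI\numi-srv-ifix-opc'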

ICARUS Slow Controls

Quick guide for people working on slow controls:
  • Dennis Nicklaus (AD) wrote one or more IOCs for ICARUS, can help SBND
  • Sowjanya (UTenn) L3 manager DCS for SBND, provides students and post docs
  • Wei Tang (UTenn PD) sets up various EPICS instances per request and makes CSS pages
  • Ivan Lepetic (IIT GS) sets up archiver and databases, general database contact
  • Krishan Mistry (Manchester) helping Ivan and Gray
  • Gray Putnam (UChicago) writing web visualization for online monitor data, both detector data and slow controls
  • Andrew Mogan, Gray Yarbrough (UTenn GS) help with misc. stuff like testing power supplies, assembling slow control rack monitoring boxes
  • me, supervising and doing things that fall through the cracks

Management

  • sbn-docdb 432 - Short Baseline Neutrino - Monthly Status Files
  • sbn-docdb 246 - Short Baseline Neutrino - Project Controls Files (WBS 2.7 and 4.5)

Projects

DUNE

protoDUNE

ICEBERG

Logbook = https://dbweb6.fnal.gov:8443/ECL/dune/E/index

Account = dunecet

  • protodune-daq01 - 2TB data disk installed here.
  • protodune-daq02 - I would run fts here. 2TB data disk nfs mounted here.

Looks like data appears here: /data1/dropbox_staging

/home/nfs/dunecet/artdaq/iceberg/daqlogs

Note that the dunecet home area is nfs mounted.

[dunecet@protodune-daq01 daqlogs]$ df -h .
Filesystem                    Size  Used Avail Use% Mounted on
if-nas-0.fnal.gov:/pdune/daq  1.0T  425G  600G  42% /home/nfs
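
A quick way to see what is arriving, using the paths above (run as dunecet on protodune-daq01 or -daq02):

# newest staged data files and newest DAQ logs
ls -lt /data1/dropbox_staging | head
ls -lt /home/nfs/dunecet/artdaq/iceberg/daqlogs | head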

Fermilab

General login servers = flxi03.fnal.gov, flxi04.fnal.gov, flxi05.fnal.gov

DZero Control Room Displays

Group account = display
Computers = d0display01

OPC