
Version 25 (Jan Zirnstein, 04/03/2014 04:34 PM) → Version 26/38 (Jan Zirnstein, 04/03/2014 04:41 PM)


h1. Using NOvASoft on the NOvA Interactive nodes

h2. Available nodes

NOvA has a small pool of computing nodes that have been configured with our experiment's software, disk resources and user accounts. The nodes are part of the "General Purpose Computing Facilities" (GPCF) at Fermilab and can be accessed both from Fermilab as well as offsite.

Other Intensity Frontier (IF) experiments have similar pools of computing resources, which can be accessed in a manner similar to that described here for NOvA.

To log in to the NOvA interactive nodes, you log in to "". This name performs some limited load balancing between all the computers to ensure that not everyone ends up on the same machine. In general you will be able to log in using a command like:


You will be logged into a machine with a name like "" (e.g. novagpvm01, novagpvm02, etc.) and from there you will be able to access all of the standard NOvA offline resources.

* Load balancing is designed to help everyone be a good neighbor, so think twice before bypassing it.
* Load balancing does not work for all clients; see the Kerberos notes below.
* If you need to bypass the load balancing system, you can log directly into any one of the machines in the NOvA offline cluster via its full name (i.e.
This is most useful if, for some reason, you managed to leave something running on a machine and need to go back and check on it.
(Note: don't leave long, CPU-intensive jobs running on the interactive nodes!)

Currently there are 10 interactive nodes (as of 20JUL2012):


Any one of these nodes will give you access to both the FermiGrid and local batch clusters.

If you have just received notification that you have an account on one of these machines, your login shell is likely bash. If you prefer a different login shell you need to submit a ServiceDesk ticket to get it changed.

h2. Kerberos

Users must have a valid Kerberos ticket to access Fermilab computing at the time they attempt to log into a Fermilab machine. The ticket is obtained by executing the following command at a terminal prompt:
$ kinit principal@FNAL.GOV
where principal is the user's Kerberos principal. If a user is attempting to access the repository from a non-Fermilab machine, the following lines must be in the user's ~/.ssh/config:
Host *
ForwardAgent yes
ForwardX11 yes
ForwardX11Trusted yes
# GSSAPIAuthentication yes
# GSSAPIDelegateCredentials yes
GSSAPITrustDns yes
# GSSAPIKeyExchange yes
# ServerAliveInterval 60
8/2/13 - RJT - I found that I needed to comment out the lines as shown above.

You may also need to add the following in the case of connection issues:
StrictHostKeyChecking no
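The sample above applies these options to every host via "Host *". If that interferes with connections to non-Fermilab machines, one alternative is to scope the options to Fermilab hosts only. A possible stanza as a sketch (the host pattern is an assumption; adjust it to the node names you actually use, and comment out any GSSAPI* options your client rejects, as noted above):

```
# Hypothetical scoped stanza: applies only to *.fnal.gov hosts, so
# connections to other machines keep your client defaults.
Host *.fnal.gov
    ForwardAgent yes
    ForwardX11 yes
    ForwardX11Trusted yes
    GSSAPIAuthentication yes
    GSSAPIDelegateCredentials yes
    GSSAPITrustDns yes
```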
Some new users may find that the krb5.conf posted online does not work on their Mac OS X 10.8 system; in that case, you may need to ask a current Mac user for the krb5.conf from their machine.
In case of trouble when connecting via ssh (a "permission denied" error), the cause can be the OpenSSH client; the following client version is known to be compatible with Fermilab Kerberos authentication:
OpenSSH_4.3p2, OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008

Some users have experienced problems using the option "GSSAPIKeyExchange yes":
~/.ssh/config: line 8: Bad configuration option: GSSAPIKeyExchange
~/.ssh/config: terminating, 1 bad configuration options
This problem goes away if the option is removed from ~/.ssh/config.

Some Windows/PUTTY users are unable to connect to the nova-offline load balancing address, and must connect directly to particular nodes.

h2. Setting up NOvASoft

When using the nova-offline computing cluster at Fermilab, the full suite of NOvA software has already been installed and properly configured. To use the software, there is a set of setup scripts that you should use to configure your session.

There are a couple of methods to use these setup scripts. The recommended method is to put the following code snippet into your login files (your .bashrc or .bash_profile if you are using bash as your shell):

function setup_novaoffline() {
  echo ""
  echo "NOvASoft"
  echo ""
  echo "Setting SRT_DIST, EXTERNALS"
  echo "Sourcing generic setup_novasoft script for SVN control"
  source /grid/fermiapp/nova/novaart/novasvn/srt/
  export EXTERNALS=/nusoft/app/externals
  source $SRT_DIST/setup/ "$@"
  cd /nova/app/users/YOUR_USER_NAME_HERE
  echo "Working directory: $PWD"
}


Then, to set up the software, you can log in and type:

setup_novaoffline

If you want to set up a specific release, append the tag name (see [[History of Tagged Releases]]) to the setup command:
setup_novaoffline -r <tag-release>

When you are done developing a piece of software and are ready to run it, consider setting up novasoft in maxopt mode. This sets certain compiler flags and uses the maxopt versions of the precompiled externals to optimize the execution of your code. Read: it will run at least twice as fast. To enable this build option:
setup_novaoffline -b maxopt
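The wrapper function shown earlier works because "$@" forwards whatever options you give it (-r, -b, and so on) straight through to the underlying setup script. A minimal, self-contained sketch of that pattern, using a stand-in function and a hypothetical tag name:

```shell
# Stand-in demonstrating the argument-forwarding pattern the wrapper
# function uses: "$@" passes every caller option through unchanged to
# the inner command (here, just an echo instead of the real setup).
setup_sketch() {
    echo "forwarding options: $@"
}

# Hypothetical tag name, for illustration only:
setup_sketch -r S14-03-05 -b maxopt
# prints: forwarding options: -r S14-03-05 -b maxopt
```

Because the options pass through untouched, the wrapper never needs updating when the underlying setup script grows new flags.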

The above commands will set your $PATH and $LD_LIBRARY_PATH variables as well as the variables that define the locations of the necessary external packages.
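A minimal sketch of what that environment setup amounts to (the directory below is a made-up stand-in, not a real NOvA path): release-specific directories are prepended so they win the search order.

```shell
# Sketch of how a setup script adjusts the environment: prepend a
# release-specific directory so its binaries and libraries are found
# before anything else. The path here is a made-up stand-in.
FAKE_RELEASE_DIR="/tmp/novasoft-sketch"

export PATH="$FAKE_RELEASE_DIR/bin:$PATH"
export LD_LIBRARY_PATH="$FAKE_RELEASE_DIR/lib:${LD_LIBRARY_PATH:-}"

# First entry in PATH is now the release's bin directory:
echo "${PATH%%:*}"
# prints: /tmp/novasoft-sketch/bin
```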

While the public release of the code is located in /grid/fermiapp/nova/novaart/novasvn/releases/development/, the average user should _never_ create files in that directory. It has limited space and is intended only for code releases. Instead, use the disk space described in the next section.

h2. Disk Space

The user space for those logging into these nodes is
This is where users should store their test releases as well as any analysis files.

There is a special location reserved for files that are generated by the different analysis groups. This area is:
It has subdirectories for each of the analysis or working groups.

In addition, the production group maintains the large shared disk that stores our raw data, processed data, and Monte Carlo samples used for general consumption by the experiment. You should use this area only if directed by your analysis/production coordinator (since filling this area can cause our production and reprocessing projects to fail).

This area is:

When operating on the grid, executables cannot be run from the /nova/data and /nova/ana directories; executables can only be run from the /nova/app directories.