Using NOvASoft on the GPVM nodes » History » Version 11

Andrew Norman, 07/19/2012 05:34 PM

Using NOvASoft on the NOvA Interactive nodes

Available nodes

NOvA has a small pool of computing nodes that have been configured with our experiment's software, disk resources and user accounts. The nodes are part of the "General Purpose Computing Facilities" (GPCF) at Fermilab and can be accessed both from Fermilab as well as offsite.

Other Intensity Frontier (IF) experiments have similar pools of computing resources, which can be accessed in the same manner described here for NOvA.

To log in to the NOvA interactive nodes, you log in to "". This name performs some limited load balancing between all the computers to ensure that not everyone ends up on the same machine. In general you will be able to log in using a command like:


You will be logged into a machine with a name like "" (i.e. novagpvm01, novagpvm02, etc...) and from there you will be able to access all of the standard NOvA offline resources.
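As a concrete sketch (the pool alias is omitted above, so the hostname below is a placeholder, not the real alias), a Kerberized login with X11 forwarding might look like:

```shell
# Placeholder hostname -- substitute the real NOvA login pool alias.
LOGIN_ALIAS="nova-login.example.gov"
# -K forwards (delegates) Kerberos credentials; -X enables X11 forwarding.
# A valid ticket from kinit is required first (see the Kerberos notes below).
echo "ssh -K -X ${USER}@${LOGIN_ALIAS}"   # echo used here as a dry run
```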

If you need to bypass the load balancing system (the load balancing is designed to help everyone be a good neighbor, so think twice before bypassing it), you can log directly into any one of the machines in the NOvA offline cluster via its fully qualified name or its short-form name. This is most useful if for some reason you left something running on a particular machine and need to go back and check on it. (Note: don't leave long, CPU-intensive jobs running on the interactive nodes!)

Currently there are at least 5 interactive nodes (we have requested 5 more -- 20JUL2012):


Any one of these nodes will give you access to both the FermiGrid and local batch clusters.

If you have just received notification that you have an account on one of these machines, your login shell is likely bash. If you prefer a different login shell, submit a ServiceDesk ticket to have it changed.


Users must have a valid Kerberos ticket at the time they attempt to log into a Fermilab machine. A ticket is obtained by executing the following command at a terminal prompt:

$ kinit principal@FNAL.GOV

where principal is the user's Kerberos principal. If a user is attempting to access the repository from a non-Fermilab machine, the following lines must be in the user's ~/.ssh/config:
Host *
ForwardAgent yes
ForwardX11 yes
ForwardX11Trusted yes
GSSAPIAuthentication yes
GSSAPIDelegateCredentials yes
GSSAPITrustDns yes
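Before attempting to connect, it can save time to check that a usable ticket actually exists. `klist -s` (a standard MIT Kerberos utility) exits with status 0 when a valid ticket is present; a small sketch:

```shell
#!/bin/sh
# Print a confirmation if a usable Kerberos ticket exists;
# otherwise remind the user to run kinit first.
if klist -s 2>/dev/null; then
    echo "valid Kerberos ticket found"
else
    echo "no valid ticket - run: kinit principal@FNAL.GOV"
fi
```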

If you get a "permission denied" error when connecting via ssh, the problem may be an incompatible OpenSSH client. The following client version is known to be compatible with Fermilab Kerberos authentication:
OpenSSH_4.3p2, OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008

Setting up NOvASoft

There is a setup script provided to get the environment ready for using NOvASoft on these machines. To use it, one does (for tags S11.04.30 and later including development):

$ source /grid/fermiapp/nova/novaart/novasoft/setup/setup_novasoft_nusoft.(c)sh

If you are using tagged release S11.04.09 and earlier do:

$ source /grid/fermiapp/nova/novaart/novasoft/releases/development/setup/setup_novasoft_ifcluster.(c)sh

The above commands will set your $PATH and $LD_LIBRARY_PATH variables as well as the variables that define the locations of the necessary external packages.
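A quick spot-check after sourcing the setup script (a hypothetical sketch; the exact entries depend on the release, this just looks for anything NOvA-related on the search path):

```shell
# List PATH entries that mention "nova"; if the setup script ran,
# the NOvASoft binary directories should appear here.
echo "$PATH" | tr ':' '\n' | grep -i nova || echo "no nova entries in PATH"
```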

While the public release of the code is located in /grid/fermiapp/nova/novaart/novasoft/releases/development/, the average user should never create files in that directory. It has limited space and is only for code releases. Instead, use the disk space described in the next section.

Disk Space

The user space for those logging into these nodes is


This is where users should store their test releases as well as any analysis files.

Any data or Monte Carlo files for general consumption by the experiment should be stored in


When operating on the grid, executables cannot be run from the /data directories; they can only be run from the /app directories.
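As a sketch (the user directory paths below are hypothetical placeholders, not the actual NOvA mount points), a grid job should invoke its binary from the app area and keep its files in the data area:

```shell
# Hypothetical locations -- substitute the real /app and /data user areas.
APP_DIR="/nova/app/users/${USER}"
DATA_DIR="/nova/data/users/${USER}"
# The executable must live (and be invoked) under the app area;
# input and output files belong under the data area.
echo "${APP_DIR}/myanalysis -i ${DATA_DIR}/input.root -o ${DATA_DIR}/output.root"   # dry run
```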