Using NOvASoft on the NOvA Interactive nodes

Available nodes

NOvA has a small pool of computing nodes that have been configured with our experiment's software, disk resources and user accounts. The nodes are part of the "General Purpose Computing Facilities" (GPCF) at Fermilab and can be accessed both from Fermilab as well as offsite.

Other Intensity Frontier (IF) experiments have similar pools of computing resources, which can be accessed in a manner similar to the one described here for NOvA.

To log in to the NOvA interactive nodes, connect to "". This name performs limited load balancing across the pool to ensure that not everyone ends up on the same machine. In general you will be able to log in using a command like:


You will be logged into a machine with a name like "" (e.g. novagpvm01, novagpvm02, etc.) and from there you will be able to access all of the standard NOvA offline resources.
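As a concrete sketch (the .fnal.gov domain suffix is an assumption based on the node names above; replace "principal" with your own Kerberos principal), a login session might look like:

```
$ kinit principal@FNAL.GOV
$ ssh principal@novagpvm01.fnal.gov
```

The first command obtains a Kerberos ticket; the second logs you directly into one specific node. See the Kerberos notes below for the ~/.ssh/config options this requires.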

Load balancing is designed to help everyone be a good neighbor, so think twice before bypassing it.
Load balancing does not work for all clients; see the Kerberos notes below.
If you need to bypass the load balancing system, you can log directly into any one of the machines in the NOvA offline cluster via its full name.
This is most useful if for some reason you left something running on a machine and need to go back and check on it.
(Note: don't leave long, CPU-intensive jobs running on the interactive nodes!)

Want to know which node is the least busy? Check the Ganglia monitoring pages (may only work onsite/through the vpn).

Currently there are 15 interactive nodes (as of 24 SEP 2017):


Any one of these nodes will give you access to both the FermiGrid and local batch clusters.

If you have just received notification that you have an account on one of these machines, your login shell is likely bash. If you prefer a different login shell you need to submit a ServiceDesk ticket to get it changed.


Users must have a valid Kerberos ticket at the time they attempt to log in to a Fermilab machine. The ticket is obtained by executing the following command at a terminal prompt:

$ kinit principal@FNAL.GOV

where principal is the user's Kerberos principal. If a user is attempting to access the repository from a non-Fermilab machine, the following lines must be in the user's ~/.ssh/config:
Host *
ForwardAgent yes
ForwardX11 yes
ForwardX11Trusted yes
GSSAPIAuthentication yes #For some users these lines need to be commented
GSSAPIDelegateCredentials yes #For some users these lines need to be commented
GSSAPITrustDns yes
GSSAPIKeyExchange yes #For some users these lines need to be commented
ServerAliveInterval 60 #For some users these lines need to be commented

You may also need to add the following in the case of connection issues:

StrictHostKeyChecking no
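If you would rather not apply these options to every host you ssh to, one alternative sketch (assuming all the Fermilab machines you use live under fnal.gov; adjust the pattern if not) is to scope the block to Fermilab hosts only:

```
Host *.fnal.gov
    ForwardAgent yes
    ForwardX11 yes
    ForwardX11Trusted yes
    GSSAPIAuthentication yes
    GSSAPIDelegateCredentials yes
```

This keeps credential forwarding from being offered to unrelated hosts while preserving the behavior described above for Fermilab machines.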

Some new users may find that the krb5.conf posted online does not work on their Mac OS X 10.8 system; in that case, ask a current Mac user for a copy of the krb5.conf that works on their machine.
In case of trouble when connecting via ssh (a "permission denied" error), the cause can be the OpenSSH client version. The following client is known to be compatible with Fermilab Kerberos authentication:
OpenSSH_4.3p2, OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008

Some users have experienced problems using the option "GSSAPIKeyExchange yes".
/.ssh/config: line 8: Bad configuration option: GSSAPIKeyExchange
/.ssh/config: terminating, 1 bad configuration options
This problem goes away if the option is removed from their .ssh/config.

Mac OS X 10.12 Sierra adopts OpenSSH_7.2p2, which drops support for the configuration "GSSAPITrustDns". A minimal set of configurations is:

Host *
     ForwardX11 yes
     ForwardX11Trusted yes
     GSSAPIAuthentication yes
     GSSAPIDelegateCredentials yes
Also note that without "GSSAPITrustDns", the "" alias does not work; one has to specify a particular machine to log in to, such as "".

Some Windows/PUTTY users are unable to connect to the nova-offline load balancing address, and must connect directly to particular nodes.

Setting up NOvASoft

When using the nova-offline computing cluster at Fermilab, the full suite of NOvA software has already been installed and properly configured. To use the software, there is a set of setup scripts that you should use to configure your session.

There are a couple of methods to use these setup scripts. The recommended method is to put the following code snippet into your login files (your .bashrc or .bash_profile if you are using bash as your shell):

function setup_nova {
  source /cvmfs/ "$@"
  cd /nova/app/users/YOUR_USER_NAME_HERE
}

Then to setup the software you can login and type:


If you want to set up a specific release, append the tag name (see History of Tagged Releases) to the setup command:

setup_nova -r <tag-release>

When you are done developing a piece of software and are ready to run it, consider setting up novasoft in maxopt mode. This sets certain compiler flags and uses the maxopt versions of the precompiled externals to optimize the execution of your code: it will run at least twice as fast. To enable this build option:

setup_nova -b maxopt

The above commands will set your $PATH and $LD_LIBRARY_PATH variables as well as the variables that define the locations of the necessary external packages.
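To sanity-check that the setup actually modified your environment, you can inspect these variables directly (this is a generic shell check, not a NOvA-specific tool; after setup_nova you should see novasoft and external-package directories near the front):

```shell
# Print the first few PATH entries, one per line
echo "$PATH" | tr ':' '\n' | head -5

# Same for the library search path
echo "$LD_LIBRARY_PATH" | tr ':' '\n' | head -5
```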

While the public release of the code is located in /grid/fermiapp/nova/novaart/novasvn/releases/development/, the average user should never make any files in that directory. It has limited space and is only for code releases. Instead, use the disk space described in the next section.

Disk Space

User areas

When your account is created, two disk areas will be created for you.

  • /nova/app/users/<your kerberos principal>

This area should be used primarily for code (including novasoft test releases), scripts, etc. Do not store data files here! (You should put data files in the /nova/ana area mentioned next because you have a relatively small quota on /nova/app---50GB by default---and data files will fill it up quickly. You can check how much of your quota you have used with quota -s and look for `/nova/app`.)

/nova/app is backed up daily. You can find the previous few snapshots in /nova/app/.snapshot.
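As a sketch of keeping an eye on your usage (run on a gpvm node; the path follows the pattern above with your own username substituted, and the exact `quota -s` output format may vary):

```
$ quota -s
$ du -sh /nova/app/users/YOUR_USER_NAME_HERE/* | sort -rh | head
```

The first command reports your quota usage per filesystem; the second lists your largest directories, which helps find stray data files before the 50GB /nova/app quota fills up.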

  • /nova/ana/users/<your kerberos principal>

This area is much larger than your /nova/app reservation (2TB by default). It's better suited to data files because of its size. You should put code in the /nova/app area mentioned above because the /nova/ana area is mounted such that nothing can be executed from it.

Working group and official files

Official files generated by the different analysis groups also have areas underneath the `/nova/ana` area; see the subdirectories marked with the analysis or working group names.

In addition, the production group maintains the large shared disk that stores our raw data, processed data and Monte Carlo samples that are used for general consumption by the experiment. You should use this area only if directed by your analysis/production coordinator (since filling this area can cause our production and reprocessing projects to fail).

This area is:


Note that none of the /nova areas are accessible from grid jobs. Files produced by Production may be accessed using SAM datasets, and the and scripts are designed to run art or CAFAna jobs (respectively) over such datasets. Other files should be placed in /pnfs areas and accessed using xrootd. See the submission documentation Submitting_NOvA_ART_Jobs (art) or CAFAna_on_the_grid (CAFAna). If you are confused about how to run your jobs after reading the documentation, feel free to ask in #nova-offline (for general grid usage or art jobs) or #cafana (for CAFAna related questions) on Slack.