
h1. Interactive Computing Resources


h2. %{color:Darkred}Fermilab%

h3. {{include(Getting Accounts and Logging In at Fermilab)}}

h3. {{include(Kerberos Tips and Info)}}

h3. {{include(UPS Tips and Info)}}

h3. %{color:Darkorange}Hardware Resources%

Fermilab hosts ten general-purpose login nodes for interactive DUNE use, plus one computer reserved for compiling and linking DUNE software and an SLF7 test computer. The table below lists their characteristics.

Information dated May 9, 2016.

|_. Node Name |_.OS Version |_. CPU Cores |_. RAM |_. Swap |_. Notes |
| | SLF 6.7 | 4 | 12 GB | 2 GB | |
| | SLF 6.7 | 4 | 12 GB | 2 GB | |
| | SLF 6.7 | 4 | 12 GB | 2 GB | |
| | SLF 6.7 | 4 | 12 GB | 2 GB | |
| | SLF 6.7 | 4 | 12 GB | 2 GB | |
| | SLF 6.7 | 4 | 12 GB | 2 GB | |
| | SLF 6.7 | 4 | 12 GB | 2 GB | |
| | SLF 6.7 | 4 | 12 GB | 2 GB | |
| | SLF 6.7 | 4 | 12 GB | 2 GB | |
| | SLF 6.7 | 4 | 12 GB | 2 GB | |
| | SLF 7.x | 1 | 2.9 GB | 3.1 GB | Testing only -- do not keep critical data on this node |
| | SLF 6.7 | 16 | 32 GB | 5 GB | Only for building code |

You can find general "Do's and Don'ts for Interactive Computing" guidance (written for NOvA but applicable to DUNE too).

h3. %{color:Darkorange}Home Directories%

On the interactive Linux machines, your home area is served over NFS from a network-attached storage (NAS) device, so all interactive Linux machines see the same home area. Your home area is also the same on other experiments' interactive Linux computers. Your home area will be mounted as


and the environment variable $HOME will translate to this directory pathname.

Snapshots of the contents of your home area will be taken at 8 AM, 10 AM, 2 PM, and 4 PM Central Time. You can find these
snapshots in


Snapshots have a lifetime of 7 days. You can recover accidentally deleted files yourself by looking first in the snapshot area. Nightly tape backups are also performed. If you need to access files from the tape backup, fill out a Service Desk ticket.

The default quota for NAS home directories is 2 GB. You can request a quota increase by filing a NAS/BlueArc storage increase request ticket with the Service Desk.

The default permissions for a NAS home directory are (using trj as an example):

drwx--s--x 73 trj 3000 22528 Jul 13 15:20 /nashome/t/trj

The execute bits are set for group members and others, but the read bits are not. This means that only the owner (and the system managers) can list the files in your home directory, but group members and others can access files there provided they know the file names. You may share files with your collaborators and others by setting the file permission bits with chmod (for example, chmod g+r <file> allows members of your group to read a file). You may also set the permissions on subdirectories of your home directory so that group members and others can list the files in those directories.
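As a concrete illustration of the permission bits described above, the sketch below creates a file in a scratch directory and opens it to group read access; the paths and file names are placeholders, not your actual home area.

```shell
# Sketch: granting group read access to a file, as described above.
# Uses a throwaway scratch directory rather than your real home area.
workdir=$(mktemp -d)
touch "$workdir/results.txt"
chmod 600 "$workdir/results.txt"    # start owner-only: -rw-------
chmod g+r "$workdir/results.txt"    # let group members read it
ls -l "$workdir/results.txt"        # mode now shows -rw-r-----
# To let group members list a subdirectory, add read and execute:
mkdir "$workdir/shared"
chmod g+rx "$workdir/shared"
```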

Home directories are not mounted on gpGrid worker nodes.

If you have a Fermi Domain Windows computer, you can mount your home directory as a network drive using the name \\homesrv01\<firstletter>\<kerberosprincipal>

Before April 2016, users had their home directories on AFS. Here's a link to [[legacy AFS home area documentation]].

h3. %{color:Darkorange}Storage: BlueArc%

The interactive computers listed above have mount points for Fermilab's BlueArc storage -- /dune/data, /dune/data2, /dune/app, and shared software mount points such as /grid/fermiapp. /dune/app and /grid/fermiapp have a small number (5 to 7) of daily snapshots -- look in /dune/app/.snapshot and /grid/fermiapp/.snapshot, which are useful in recovering accidentally deleted files.
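A minimal sketch of recovering a deleted file from one of these .snapshot areas is below. The helper name and the layout assumption (one subdirectory per snapshot, sortable by name) are ours, so check the actual layout with ls before copying.

```shell
# Sketch: restore a file from the newest snapshot under a .snapshot area.
# Usage: restore_latest SNAPROOT RELPATH DESTDIR
# Assumes one subdirectory per snapshot; verify with: ls /dune/app/.snapshot
restore_latest() {
  snaproot=$1; relpath=$2; destdir=$3
  latest=$(ls -1 "$snaproot" | sort | tail -n 1)   # newest snapshot name
  cp "$snaproot/$latest/$relpath" "$destdir/"
}
# Example on a dunegpvm node (file name is hypothetical):
#   restore_latest /dune/app/.snapshot users/$USER/analysis.C /dune/app/users/$USER
```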

You should be able to make your own directory under /dune/data/users, /dune/data2/users, and /dune/app/users.
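The one-time setup above can be done in a single pass, sketched below; the helper name is ours, and the /dune paths exist only on the Fermilab interactive nodes.

```shell
# Sketch: create your personal directory under each BlueArc users area.
# Usage: make_user_areas BASE...  -- creates BASE/users/$USER for each BASE.
make_user_areas() {
  for base in "$@"; do
    mkdir -p "$base/users/$USER" && echo "created $base/users/$USER"
  done
}
# On a dunegpvm node you would run:
#   make_user_areas /dune/data /dune/data2 /dune/app
```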

h3. %{color:Darkorange}Storage: dCache%

Moving forward, we would like users to make more use of the dCache disk system, which is larger and costs less than BlueArc to maintain and upgrade; note, however, that dCache is not appropriate for storing code and executable programs. dCache areas can be accessed on the dunegpvm* machines via the NFS mounts /pnfs/dune and the older /pnfs/lbne. Instructions and best-practices advice are available here:

The old lbnegpvm* machines are now decommissioned. Files in the BlueArc disk areas /lbne/app, /lbne/data, and /lbne/data2 may now be found in /dune/app, /dune/data, and /dune/data2. Users on the dunegpvm machines are also members of the lbne group, so files with older ownership settings can still be read and written on the dunegpvm machines.

h3. %{color:Darkorange}Other hardware resources:%

A small cluster called FNALU hosts accounts that have home directories in the new NAS storage area and is available to members of all experiments. It currently consists of a handful of single-core machines with limited memory. A convenience alias points to the recommended login node; use it if you want to test logins, look at your home area, or do lightweight work such as editing web pages with a text editor. These machines are not recommended for any heavier use.

h3. %{color:Darkorange}VNC (better X window connections)%

Normally, X-protocol graphical traffic is sent back and forth between one of the dunegpvm machines and your desktop or laptop computer via an SSH tunnel. You can enable this by passing the -X or -Y option to ssh when logging in. The -Y option requests a "trusted" X11 connection, which historically was needed for ROOT to send windows back to your own computer.
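For example, to log in with trusted X11 forwarding (the hostname and username below are illustrative, not prescribed by this page):

```shell
# Trusted X11 forwarding; hostname and username are examples only.
ssh -Y username@dunegpvm01.fnal.gov
# Once logged in, a non-empty DISPLAY confirms the X tunnel is active:
#   echo $DISPLAY
```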

The X protocol is slow for some uses, especially when running the LArSoft event display. A more efficient solution, especially when running from home or over a long network connection, is to use a VNC connection. Instructions for setting this up and using it are available at this link: [[Using VNC Connections on the dunegpvms]]

h3. %{color:Darkorange}Professional web pages%

Please see the Knowledge Base article for information on how to apply for and use web-accessible space, how to maintain its content, and the applicable use policies.

h2. %{color:Darkred}CERN%

Interactive computing resources are available at CERN:
* LXPlus interactive login machines available through ssh and described at

* OpenStack Virtual Machine infrastructure on which sustained services can be run. Each person with a CERN computing account can subscribe to the OpenStack resources and have up to 5 VMs active at any time. DUNE itself has Project resources. Access to create and manage a VM can be requested by joining the CERN e-group dune-comp-vm. This VM infrastructure has access to the DUNE/ProtoDUNE data areas under EOS and to the software distribution service CVMFS.

* The CERN Neutrino Platform (CENF) has a computing resource available for use with the ProtoDUNE efforts.

h3. %{color:Darkorange}Getting Accounts and Logging In at CERN, including the Neutrino Platform Cluster%

The instructions for getting CERN accounts are here: Once you have an account, you manage which resources you have access to (e.g. storage, VM infrastructure) through the same portal.

You can request access to the neutrino platform cluster following the instructions here: Instructions for logging in are at this link:

h3. %{color:Darkorange}Useful Information for using CERN resources%

The e-groups for DUNE collaborators together with their scope and links to more information are described at [[CERNegroups]].
To find out which e-groups you are in, go to:

h3. %{color:Darkorange}DUNE/pDUNE Hardware and Computing Resources%

|_. Resource |_. Access Information |_. Documentation |
| Neutrino Platform Cluster | | More information is available from a presentation by Nektarios Benekos in June 2016: And from the Collaboration Meeting in January 2017: |
| OpenStack Virtual Machines | | |
| LXPlus | | |
| Grid Nodes | | |
| Tier-0 | | |
| CERNBox | | |
| EOS | | Hosts shared DUNE and ProtoDUNE data files. The directory structure is defined at: Requests for changes may be made to: |

There is an EOS Quick Tutorial for Beginners at

h3. %{color:Darkorange}Storage at CERN: EOS%

h2. %{color:Darkred}Brookhaven%

h2. %{color:Darkred}SLAC%

h2. %{color:Darkred}Argonne%

h2. %{color:Darkred}LBNL%

h2. %{color:Darkred}Los Alamos%