h1. Interactive Computing Resources

{{toc}}

h2. %{color:Darkred}Fermilab%

h3. {{include(Getting Accounts and Logging In at Fermilab)}}

h3. {{include(Kerberos Tips and Info)}}

h3. {{include(UPS Tips and Info)}}

h3. %{color:Darkorange}Hardware Resources%

Fermilab hosts ten general-purpose login nodes for interactive DUNE use, plus one computer reserved for compiling and linking DUNE software and an SLF7 test computer. The table below lists their characteristics.

Information dated May 9, 2016.

|_. Node Name |_. OS Version |_. CPU Cores |_. RAM |_. Swap |_. Notes |
| dunegpvm01.fnal.gov | SLF 6.7 | 4 | 12 GB | 2 GB | |
| dunegpvm02.fnal.gov | SLF 6.7 | 4 | 12 GB | 2 GB | |
| dunegpvm03.fnal.gov | SLF 6.7 | 4 | 12 GB | 2 GB | |
| dunegpvm04.fnal.gov | SLF 6.7 | 4 | 12 GB | 2 GB | |
| dunegpvm05.fnal.gov | SLF 6.7 | 4 | 12 GB | 2 GB | |
| dunegpvm06.fnal.gov | SLF 6.7 | 4 | 12 GB | 2 GB | |
| dunegpvm07.fnal.gov | SLF 6.7 | 4 | 12 GB | 2 GB | |
| dunegpvm08.fnal.gov | SLF 6.7 | 4 | 12 GB | 2 GB | |
| dunegpvm09.fnal.gov | SLF 6.7 | 4 | 12 GB | 2 GB | |
| dunegpvm10.fnal.gov | SLF 6.7 | 4 | 12 GB | 2 GB | |
| dunesl7gpvm01.fnal.gov | SLF 7.x | 1 | 2.9 GB | 3.1 GB | Testing only -- do not keep critical data on this node |
| dunebuild01.fnal.gov | SLF 6.7 | 16 | 32 GB | 5 GB | Only for building code |

You can find general "Do's and Don'ts for Interactive Computing":https://cdcvs.fnal.gov/redmine/projects/novaart/wiki/NOvA_Computing_Do's_and_Don'ts (written for NOvA, but applicable to DUNE as well).

h3. %{color:Darkorange}Home Directories%

On the interactive Linux machines, your home area is hosted on a network-attached storage (NAS) device and served over NFS, so all interactive Linux machines see the same home area. In fact, your home area is also the same on other experiments' interactive Linux computers. Your home area is mounted as

<pre>/nashome/<firstletter>/<kerberosprincipal></pre>

and the environment variable $HOME points to this directory.
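
As a quick check (using trj as the example principal, as elsewhere on this page), you would expect something like:

<pre>
$ echo $HOME
/nashome/t/trj
</pre>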

Snapshots of the contents of your home area will be taken at 8 AM, 10 AM, 2 PM, and 4 PM Central Time. You can find these
snapshots in

<pre>/nashome/.snapshot</pre>

Snapshots have a lifetime of 7 days. You can recover accidentally deleted files yourself by looking first in the snapshot area. Nightly tape backups are also performed. If you need to access files on the tape backup, fill out a "Service Desk Ticket":http://servicedesk.fnal.gov
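
A minimal recovery sketch (the snapshot directory name and the file path are only illustrative; check what actually exists under /nashome/.snapshot):

<pre>
# See which snapshots exist (directory names correspond to snapshot times)
ls /nashome/.snapshot

# Copy an accidentally deleted file back from a snapshot (example path for user trj)
cp /nashome/.snapshot/<snapshot-name>/t/trj/lostfile.txt $HOME/
</pre>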

The default quota for NAS home directories is 2 GB. You can request a quota increase via a NAS/BlueArc storage increase request ticket using the "Service Desk":http://servicedesk.fnal.gov
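
To see how much of that quota you are currently using, a simple check is:

<pre>
# Summarize the size of your home area (compare against the 2 GB default quota)
du -sh $HOME
</pre>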

The default permissions for the NAS home directories are (using user trj as an example):

<pre>drwx--s--x 73 trj 3000 22528 Jul 13 15:20 /nashome/t/trj</pre>

The execute bits are set for group members and others, but the read bits are not. This means that only the owner (and the system managers) can list the files in your home directory; group members and others can still access files there, but they need to know the file names. You may share files with your collaborators and others by setting the file permission bits with chmod (example: chmod g+r <file> will allow members of your group to read a file). You may also set the permissions on subdirectories of your home directory so that group members and others can list the files in that directory.
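
A short sketch of how this might look in practice (the directory and file names are just examples):

<pre>
# Let group members list and enter a shared subdirectory of your home area
chmod g+rx $HOME/shared

# Let group members read a particular file inside it
chmod g+r $HOME/shared/results.txt
</pre>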

Home directories are not mounted on gpGrid worker nodes.

If you have a Fermi Domain Windows computer, you can mount your home directory as a network drive using the name \\homesrv01\<firstletter>\<kerberosprincipal>
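
For example, from a Windows command prompt (the drive letter is arbitrary, and trj is again used as the example principal):

<pre>
net use H: \\homesrv01\t\trj
</pre>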

Before April 2016, users had their home directories on AFS. Here's a link to [[legacy AFS home area documentation]].

h3. %{color:Darkorange}Storage: BlueArc%

The interactive computers listed above have mount points for Fermilab's BlueArc storage: /dune/data, /dune/data2, /dune/app, and shared software mount points such as /grid/fermiapp. /dune/app and /grid/fermiapp keep a small number (5 to 7) of daily snapshots -- look in /dune/app/.snapshot and /grid/fermiapp/.snapshot, which are useful for recovering accidentally deleted files.

You should be able to make your own directory under /dune/data/users, /dune/data2/users, and /dune/app/users.
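
A minimal sketch (this assumes the per-user directories are simply named after your login name, which $USER expands to):

<pre>
mkdir /dune/app/users/$USER
mkdir /dune/data/users/$USER
mkdir /dune/data2/users/$USER
</pre>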

h3. %{color:Darkorange}Storage: dCache%

Moving forward, we would like users to make more use of the dCache disk system, which is larger and costs less to maintain and upgrade than BlueArc; note, however, that dCache is not an appropriate place to store code and executable programs. dCache can be accessed on the dunegpvm*.fnal.gov machines via the NFS mounts /pnfs/dune and the older /pnfs/lbne. Instructions and best-practices advice are available here: https://cdcvs.fnal.gov/redmine/projects/dune/wiki/Using_DUNE%27s_dCache_Scratch_and_Persistent_Space_at_Fermilab.
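
As a rough sketch of what browsing and writing through the NFS mount looks like (the scratch path shown here is only illustrative; follow the directory layout and copy tools recommended in the instructions linked above):

<pre>
# Browse the dCache namespace through the NFS mount
ls /pnfs/dune

# Copy a file into a (hypothetical) per-user scratch area
cp myhistograms.root /pnfs/dune/scratch/users/$USER/
</pre>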

The old lbnegpvm*.fnal.gov machines are now decommissioned. Files in the former BlueArc disk areas /lbne/app, /lbne/data, and /lbne/data2 may now be found in /dune/app, /dune/data, and /dune/data2, respectively. Users on the dunegpvm machines are also members of the lbne group, so files with older ownership settings can still be read and written on the dunegpvm machines.

h3. %{color:Darkorange}Other hardware resources:%

A small cluster called FNALU hosts accounts with home directories in the new NAS storage area and is available to members of all experiments. Currently it consists of (at least) these machines: flxi03.fnal.gov, flxi04.fnal.gov, and flxi05.fnal.gov. All are single-core machines with limited memory. fnalu.fnal.gov is a convenience name that points to the recommended login node. These machines are suitable for testing logins, looking at your home area, and lightweight work such as editing web pages with a text editor, but they are not recommended for any heavier use.

h3. %{color:Darkorange}VNC (better X window connections)%

Normally, X-protocol graphical traffic is sent back and forth between one of the dunegpvms and your desktop or laptop computer via an SSH tunnel. You can enable this by using the -X or -Y option to ssh when logging in. The -Y option requests a "trusted" X11 connection, which at least historically was needed for ROOT to send windows back to your own computer.
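
For example (dunegpvm01 is an arbitrary choice of login node, and this assumes you already hold valid Kerberos credentials as described above):

<pre>
# Forward trusted X11 traffic over the ssh tunnel
ssh -Y <kerberosprincipal>@dunegpvm01.fnal.gov
</pre>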

The X protocol is slow for some uses, especially when running the LArSoft event display. A more efficient solution, especially when running from home or over a long network connection, is to use a VNC connection. Instructions for setting this up and using it are available at this link: [[Using VNC Connections on the dunegpvms]]

h3. %{color:Darkorange}Professional web pages%

Please see the Knowledge Base article https://fermi.service-now.com/kb_view.do?sysparm_article=KB0011889 for information on how to apply for and use web-accessible space, how to maintain its content, and the applicable use policies.

h2. %{color:Darkred}CERN%

Interactive computing resources are available at CERN:
* LXPlus interactive login machines available through ssh and described at http://information-technology.web.cern.ch/services/lxplus-service.

* OpenStack Virtual Machine infrastructure, on which sustained services can be run. Each person with a CERN computing account can subscribe to the OpenStack resources and have up to 5 VMs active at any time; DUNE itself also has Project resources. See https://cernvm.cern.ch/portal/openstack. Access to create and manage a VM can be requested by joining the CERN e-group dune-comp-vm. This VM infrastructure has access to the DUNE/ProtoDUNE data areas under EOS and to the CVMFS software distribution service.

* The CERN Neutrino Platform (CENF) http://cenf.web.cern.ch/ has a computing resource available for use with the ProtoDUNE efforts.

h3. %{color:Darkorange}Getting Accounts and Logging In at CERN, including the Neutrino Platform Cluster%

The instructions for getting CERN accounts are here: https://web.fnal.gov/collaboration/DUNE/SitePages/Access%20to%20CERN.aspx. Once you have an account, you can manage which resources you have access to (e.g. storage, the VM infrastructure, etc.) through the portal http://www.cern.ch/account.

You can request access to the neutrino platform cluster following the instructions here: https://twiki.cern.ch/twiki/bin/view/CENF/HowToGetAccess. Instructions for logging in are at this link: https://twiki.cern.ch/twiki/bin/view/CENF/NeutrinoClusterCERN#Connect_to_neutplatform_cern_ch

h3. %{color:Darkorange}Useful Information for using CERN resources%

The e-groups for DUNE collaborators together with their scope and links to more information are described at [[CERNegroups]].
To find out what e-groups you are in go to: https://e-groups.cern.ch/e-groups/EgroupsSearchForm.do

h3. %{color:Darkorange}DUNE/pDUNE Hardware and Computing Resources%

|_. Resource |_. Access Information |_. Documentation |
| Neutrino Platform Cluster | | https://twiki.cern.ch/twiki/bin/view/CENF/NeutrinoClusterCERN https://twiki.cern.ch/twiki/bin/view/CENF/Computing https://twiki.cern.ch/twiki/bin/view/CENF/WebHome https://twiki.cern.ch/twiki/bin/view/CENF/CENFStorageAtCERN More information is available from a presentation by Nektarios Benekos in June 2016: http://indico.fnal.gov/getFile.py/access?contribId=1&resId=0&materialId=slides&confId=12218 and from the Collaboration Meeting in January 2017: https://indico.fnal.gov/getFile.py/access?contribId=98&sessionId=11&resId=0&materialId=slides&confId=10641 |
| OpenStack Virtual Machines | | |
| LXPlus | | |
| Grid Nodes | | |
| Tier-0 | | |
| CERNBox | | User Guide: https://cernbox.web.cern.ch/cernbox/en/ Quick overview: https://indico.cern.ch/event/538540/contributions/2187138/attachments/1282513/1906054/IT-cernbox-2016-05-31.pdf |
| EOS | | Hosts shared DUNE and ProtoDUNE data files. The directory structure is defined at https://twiki.cern.ch/twiki/bin/view/CENF/CENFStorageAtCERN#Proposal_for_a_new_structure_in. Requests for changes may be made to neutplatform.support@cern.ch. An EOS Quick Tutorial for Beginners is at https://cern.service-now.com/service-portal/article.do?n=KB0001998&s=eos%20tutorial |

h2. %{color:Darkred}Brookhaven%

h2. %{color:Darkred}SLAC%

h2. %{color:Darkred}Argonne%

h2. %{color:Darkred}LBNL%

h2. %{color:Darkred}Los Alamos%