Getting a grid proxy using a Kerberos ticket

Many grid tools (including ifdh and xrootd) authenticate using a grid proxy. Batch jobs running on FermiGrid and other grid and cloud batch jobs usually have a grid proxy automatically. This section describes how to get a grid proxy interactively, starting from a Kerberos ticket. This method works on the uboonegpvmXX login nodes.

Here are the commands that will get a grid proxy.

kinit <username>   # Get a Kerberos ticket.
kx509              # Obtain an x509 certificate based on the Kerberos ticket.
voms-proxy-init -noregen -rfc -voms fermilab:/fermilab/uboone/Role=Analysis   # Add VOMS extensions.
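
You can inspect the resulting proxy, including its VOMS attributes and remaining lifetime, with voms-proxy-info:

voms-proxy-info -all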

It may be useful to add the following alias to your login file.

alias grid_proxy='kx509;voms-proxy-init -noregen -rfc -voms fermilab:/fermilab/uboone/Role=Analysis'

For the above command to work, your identity must be registered in the Fermilab VO with the specified experiment and role.

Most people will use Role=Analysis for their personal computing. For production computing in the uboonepro account, use Role=Production.

Getting a grid proxy in a cron job

The method described in the previous section works for cron jobs with a few caveats.

Here are some commands that are known to work for cron jobs.

export PATH=/usr/krb5/bin/:$PATH                   # Make sure the Fermilab Kerberos tools are found.
export KRB5CCNAME=FILE:/tmp/krb5cc_${UID}_cron$$   # Use a private ticket cache for this process.
kcron                                              # Get a Kerberos ticket using your kcron keytab.
kx509                                              # Obtain an x509 certificate based on the Kerberos ticket.
voms-proxy-init -noregen -rfc -voms fermilab:/fermilab/uboone/Role=Analysis
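
For example, these commands can be wrapped in a script invoked from your crontab. A minimal sketch, assuming the commands above are saved in a (hypothetical) script $HOME/bin/get_cron_proxy.sh:

# Renew the grid proxy every six hours (the schedule is illustrative).
0 */6 * * * $HOME/bin/get_cron_proxy.sh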

The caveats of this method are as follows.

First, for the kcron command to work, you must have created a keytab, using kcroninit, on the machine where you want to run cron jobs. If you run cron jobs on the login nodes uboonegpvmXX, you need to run kcroninit separately on each machine (for security reasons, Fermilab prohibits storing keytabs on shared filesystems).
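
For example (run this once, interactively, on each node; kcroninit prompts for your Kerberos password and creates a local keytab for use by kcron):

kcroninit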

Second, you must get your "kcron identity" registered in the Fermilab VO, the same as you would register your regular (kinit) identity. This does not happen by default when you get your MicroBooNE computer accounts; you have to specifically request it from the service desk. Once you have kcron working, the kx509 command will work out of the box:

$ kdestroy
$ kcron
$ kx509
Service kx509/certificate
 issuer= /DC=gov/DC=fnal/O=Fermilab/OU=Certificate Authorities/CN=Kerberized CA HSM
 subject= /DC=gov/DC=fnal/O=Fermilab/OU=Robots/CN=uboonegpvm01.fnal.gov/CN=cron/CN=Herbert Greenlee/CN=UID:greenlee
 serial=02C268CB
 hash=1cadb7f5

Your kcron identity is the "subject=" line of the output from kx509 (cut and paste this line into a service desk request asking that it be added to the Fermilab VO). For comparison, your regular identity looks like this:

$ kdestroy
$ kinit greenlee
Password for greenlee@FNAL.GOV: 
$ kx509
Service kx509/certificate
 issuer= /DC=gov/DC=fnal/O=Fermilab/OU=Certificate Authorities/CN=Kerberized CA HSM
 subject= /DC=gov/DC=fnal/O=Fermilab/OU=People/CN=Herbert Greenlee/CN=UID:greenlee
 serial=02D3A587
 hash=b6fd713d
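
If you want to capture the subject line for your service desk request without cutting and pasting by hand, something like the following should work (this assumes kx509 prints the summary shown above to stdout or stderr):

kcron
kx509 2>&1 | grep 'subject='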

Getting a grid proxy using an OSG certificate

Here is the command for getting a grid proxy from an OSG certificate (stored in a public certificate file and a private key file).

voms-proxy-init -rfc -key <key-file> -cert <cert-file> -voms fermilab:/fermilab/uboone/Role=Analysis

or

voms-proxy-init -rfc -key <key-file> -cert <cert-file> -voms fermilab:/fermilab/uboone/Role=Production
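
Note that voms-proxy-init requires the private key file to be readable only by its owner. If you get a permissions complaint, tightening the key file permissions usually fixes it:

chmod 600 <key-file>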

Installing grid tools on your personal SLF node

When you install Scientific Linux Fermi (SLF), the standard Fermilab Kerberos authentication tools (e.g. kinit, klist) and the standard Fermilab configuration (/etc/krb5.conf) are installed by default. You should be able to type "kinit <username>" on a freshly installed SLF machine and it should just work.

The grid authentication tools described in the previous sections (kx509, voms-proxy-init) are not installed by default on SLF systems. These tools, and everything needed by ifdh, can be installed using the following commands (as root):

yum install krb5-fermi-getcert             # kx509 and get-cert
yum install yum-conf-epel                  # epel repository
yum install --enablerepo=epel osg-client   # grid tools
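
As a quick sanity check after installation, confirm that the tools are on your PATH:

which kx509 voms-proxy-init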

Other OSes

We are interested in other OSes, including plain Scientific Linux (SL), Scientific Linux CERN (SLC), macOS, etc. If someone figures out how to get the grid tools working on these OSes, please edit this article.

Transferring files using ifdh

Ifdh can be used to transfer files between local filesystems and grid-accessible filesystems (basically bluearc /uboone/data and dCache /pnfs/uboone) using a variety of underlying methods. Ifdh has built-in intelligence for converting a grid-accessible bluearc or dCache path into an underlying transfer method (protocol, server, url, transfer program, etc.). The underlying methods used by ifdh include cpn, gridftp, and srm. The remote path does not have to be nfs-mounted on the local machine for ifdh to work. If a remote bluearc path is nfs-mounted on the local machine, ifdh will generally prefer the cpn transfer method; otherwise, a grid method (gridftp or srm) is used.

The basic ifdh transfer command is

ifdh cp [--force=cpn|gridftp|srmcp|expgridftp] <source> <destination>

where <source> and <destination> can be any of a) local filesystem path, b) grid-accessible filesystem (bluearc or dCache) path, or c) url. Additional information about using ifdh can be found on the ifdhc wiki.

You can discover what transfer method ifdh is using under the covers by setting the environment variable IFDH_DEBUG.

export IFDH_DEBUG=1
ifdh cp ...
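
For example (the paths are illustrative; substitute your own username and files):

export IFDH_DEBUG=1
ifdh cp /uboone/data/users/<username>/myfile.root /pnfs/uboone/scratch/users/<username>/myfile.root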

Servers and user mapping

The mapping from grid credentials to Linux user and group is generally configured on a per-server basis. Users of ifdh can influence the user mapping to some extent using the ifdh --force option. Here are the basic use cases.

  • Gridftp transfers to bluearc using production gridftp server (--force=gridftp). This method uses server fg-bestman1.fnal.gov. User mapping is based on role (maps to user ubooneana or uboonepro). This is the preferred method for transfers to bluearc for production computing in the uboonepro account.
  • Gridftp transfers to bluearc using experiment gridftp server (--force=expgridftp). This method uses server if-gridftp-uboone.fnal.gov. User mapping is based on user (maps to your own account). This is the preferred method for users doing their own analysis in their own accounts (see the example after this list).
  • Srm transfers to bluearc (--force=srmcp). This method uses the same server and same role-based user mapping as --force=gridftp.
  • Transfers to dCache. All transfers to dCache use the same server fndca1.fnal.gov regardless of the --force option. The dCache server uses a role-based user mapping (maps to ubooneana or uboonepro), same as the production gridftp server fg-bestman1.fnal.gov. Options --force=gridftp and --force=expgridftp have no effect, but are harmless. Option --force=srmcp does not work.
  • Transfers from bluearc or dCache to local machine. The transfer method and user mapping usually don't matter as long as files being read are world or group readable.
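
As an illustration of the second use case above, here is a bluearc copy forced through the experiment gridftp server, which maps to your own account (the paths are illustrative):

ifdh cp --force=expgridftp myhists.root /uboone/data/users/<username>/myhists.root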

Reading root files using xrootd

Xrootd is a protocol that provides read-only random ("streaming") access to root files over a network using grid credentials. Programs can open xrootd urls as read-only TFiles, the same as local files. Root programs (root and hadd), the larsoft art program (lar), and art helper programs (config_dumper and sam_metadata_dumper) are all able to open and read xrootd urls.

Here is the proper way to open an xrootd url in a root program (this method also works for plain files, of course).

TFile* f = TFile::Open(url);

The following method, which works for plain files, does not work for xrootd urls.

TFile* f = new TFile(url);

Accessing dCache files using xrootd

Files in dCache can be accessed using xrootd. The remaining piece of the puzzle is constructing an xrootd url that corresponds to a /pnfs path. Here is the rule for constructing xrootd urls from dCache paths.

  • /pnfs/uboone/... -> root://fndca1.fnal.gov:1094/pnfs/fnal.gov/usr/uboone/...
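
For example, a small shell helper can apply this rule mechanically (the function name is our own invention; the sample path is hypothetical):

# Convert a /pnfs/... dCache path into the corresponding xrootd url.
pnfs2xrootd() {
  echo "$1" | sed 's|^/pnfs/|root://fndca1.fnal.gov:1094/pnfs/fnal.gov/usr/|'
}

# Example: stream a file directly into root over xrootd.
root -l "$(pnfs2xrootd /pnfs/uboone/scratch/users/<username>/myfile.root)"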

Accessing bluearc files using xrootd

Doesn't exist (yet). Too bad.