BlueArc Disk system
Useful links for understanding BlueArc performance and configuration
- BlueArc file/directory/user monitoring
- BlueArc read performance NOW
- BlueArc volume export list
- BlueArc volume use summary
- BlueArc admin page
- Rate for /grid/data plots (from MINOS).
- Experiment disk capacity usage plots.
BlueArc is a high-performance NFS file server.
The IF/Cosmic projects have access to over 1 PB of Raid-6 Data and Application storage.
We have deployed separate controllers for the Data and Application areas,
so that data overloads do not affect access to applications.
Each controller can deliver data at 0.5 to 1 GB per second.
- The single controller is a bottleneck, limiting data rates to under 1 GB/sec.
- When data is read from more than a few dozen files, head contention can slow data transfers.
- The net transfer rate is fine, but the rate per channel is lower than you would want.
- We use the 'ifdh cp' utility to regulate access.
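The per-channel effect of a shared controller can be illustrated with quick arithmetic. This is only a back-of-the-envelope sketch: 1000 MB/s stands in for the ~1 GB/s aggregate quoted above, and 500 concurrent streams is a hypothetical load, not a measured number.

```shell
# Per-stream rate when many jobs share one controller's bandwidth.
aggregate_mb_per_s=1000   # assumed aggregate controller rate (~1 GB/s)
n_streams=500             # hypothetical number of concurrent transfers
per_stream=$((aggregate_mb_per_s / n_streams))
echo "$per_stream MB/s per stream"   # prints "2 MB/s per stream"
```

In practice head contention makes the per-stream rate even lower than this even split suggests, which is why transfers must be regulated.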
Rules for accessing files on BlueArc volumes from worker nodes:
- Do not open data files directly from batch worker nodes (Grid or local).
- Copy input files from BlueArc to local disk, and output files from local disk to BlueArc.
- Use 'ifdh cp', not cp or mv, so that access is properly regulated.
- Do not trust a script passed down to you from someone else; always check that it accesses BlueArc properly.
- BlueArc volumes include but are not limited to
- /<project>/data*, /<project>/app
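The rules above can be sketched as a minimal dry-run job script. The paths, the `myexp` project name, and the commented-out `run_reco` step are hypothetical placeholders; the `stage` wrapper only echoes the commands, so drop the `echo` to run the real `ifdh cp` on a worker node where ifdh is set up.

```shell
# Dry-run sketch of the copy-in / process / copy-out pattern.
stage() { echo ifdh cp "$1" "$2"; }      # dry run; remove echo for real use

SCRATCH=${_CONDOR_SCRATCH_DIR:-/tmp}     # local disk on the worker node
stage /myexp/data/input/run001.root "$SCRATCH/run001.root"   # BlueArc -> local
# run_reco "$SCRATCH/run001.root" "$SCRATCH/out.root"        # process on local disk only
stage "$SCRATCH/out.root" /myexp/data/output/out.root        # local -> BlueArc
```

The key point is that the job process itself only ever opens files on local scratch; BlueArc is touched only through the regulated `ifdh cp` transfers at the start and end.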
General Description of BlueArc
A full description of the architecture of the FermiGrid and FermiCloud BlueArc NAS system is available:
This page includes a list of all of the BlueArc volumes (common and experiment) on FermiGrid/Cloud along with read, write, and exec permissions.
As an introduction, BlueArc volumes are large numbers of hard drives that have been combined using RAID technology to create volumes that appear to be single large storage elements. Access to BlueArc volumes is achieved through NFS mounts from interactive or worker nodes, going through a BlueArc head node that controls all of the individual hard drives. There are currently two BlueArc head nodes that share the load of accessing all of the BlueArc volumes.
The BlueArc volumes have the great advantage that they are NFS mounted on most of the interactive nodes, providing ready access to the storage and a source of software libraries. This allows people to use the BlueArc volumes to store their local development areas and also to access the larger volumes where an experiment's data samples are stored. (You can determine whether a volume is a BlueArc volume by running the 'mount' command on your interactive node and looking for a leading "blue#" or "nas" as the first element on the line for that volume.) The BlueArc volumes are also NFS mounted on the FermiGrid and GPGrid worker nodes, giving the worker nodes a working environment similar to the interactive nodes.

Unfortunately, while access from an interactive node creates only a relatively small number of open files on a BlueArc volume, directing the more than 25,000 worker nodes that are part of FermiGrid to open files directly on BlueArc volumes can overload the BlueArc head node. When the head node becomes overwhelmed, the BlueArc volumes become inaccessible for all Fermilab experiments, leaving many angry people unable to reach their data. To make sure this does not become a problem, please only access the BlueArc volumes in an appropriate manner.
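The 'mount' check described above can be scripted. This sketch filters mount output for sources beginning with "blue" or "nas"; it is shown against a sample mount line so it is self-contained, and on a real node you would pipe `mount` itself into the same awk filter.

```shell
# Print the mount points of BlueArc/NAS-served volumes.
# Sample line in the usual "source on mountpoint type fstype (options)" form;
# on a real node use:  mount | awk '$1 ~ /^(blue|nas)/ {print $3}'
sample='blue2:/fermigrid-data on /grid/data type nfs (rw,hard,intr)'
echo "$sample" | awk '$1 ~ /^(blue|nas)/ {print $3}'   # prints "/grid/data"
```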
The general model for how to access files on BlueArc volumes can be summarized in two figures. The optimal access pattern is a limited number of reads, spread across several different volumes from several different nodes, as shown in this figure:
If a large number of processes instead access a small number of files, or a single volume, on BlueArc, this creates an extreme load on the BlueArc head node and can cause the system to crash. This can be seen in this figure: