The working premise is that all data access from Fermigrid/OSG will be via the ifdh cp command,
with Grid jobs reading and writing locally on worker nodes.
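The intended job pattern can be sketched as follows. This is a hedged illustration only: the /pnfs paths and file names are placeholders, not real experiment areas, and the guard lets the sketch run harmlessly where ifdh is not installed.

```shell
# Sketch: stage input to the worker node with ifdh cp, process the
# local copy, then stage results back out. Paths are made up.
if command -v ifdh >/dev/null 2>&1; then
  ifdh cp /pnfs/example/input.dat ./input.dat        # stage in
  # ... process ./input.dat on local worker-node disk ...
  ifdh cp ./output.dat /pnfs/example/out/output.dat  # stage out
  staged=yes
else
  echo "ifdh not installed; commands shown for illustration only"
  staged=no
fi
```

The point is that the job touches only local disk between the stage-in and stage-out steps; no experiment code reads the data areas directly.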
What we are not doing: an immediate dismount of data areas
- An immediate dismount would cause all Fermigrid file access not using ifdhc to fail.
- Data movement via ifdhc, which often runs at around 1 GB/s,
would shift to the BeStMan and experiment-specific GridFTP servers.
- These servers do not have the throughput to handle the present load.
- They do not presently have SLAs appropriate to this production use.
- The increased load on authentication servers could be an issue.
- There is declining but still substantial auxiliary-file activity that reads BlueArc directly.
- We need an alternative with reasonable performance before cutting this off completely.
- That alternative, cache-based access via ifdhc, is in final design and testing.
Strawman impact and timeline
- Hiding data mount points.
- With ifdhc v1_7_2, there is no change to server or client loads
- The primary sources of overloads are eliminated.
- Direct access to BlueArc paths would fail, as desired.
- This can start on 2015 Feb 19, with the release of ifdhc v1_7_2.
- Moving locks from /grid/data to /grid/app or similar
- No impact; this can be done while the area is in use, via symlinks.
- This can start anytime and should be done by April 2015, before /grid/data moves to the blue3 server.
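The symlink-based lock relocation above can be sketched as follows. Temp directories stand in for /grid/data and /grid/app, and the lock file name is made up; this is an illustration of the technique, not the actual migration procedure.

```shell
# Sketch: copy lock files to the new area, then leave a symlink at
# the old path so anything still opening locks under the old
# location keeps working.
set -e
base=$(mktemp -d)
old="$base/grid_data/locks"   # stands in for locks under /grid/data
new="$base/grid_app/locks"    # stands in for locks under /grid/app
mkdir -p "$old" "$new"
touch "$old/experimentA.lock"       # a pre-existing lock file

cp -a "$old/." "$new/"              # copy current locks across
rm -rf "$old"
ln -s "$new" "$old"                 # old path now resolves to new area
```

A real cutover would also need to handle the brief window between removing the old directory and creating the symlink, e.g. by quiescing lock activity for that moment.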
- /grid/data move to blue3
- This requires copying up to 20 TB to new disk, with probably a few hours of downtime.
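One common way to keep that downtime to a few hours is a two-pass copy: a bulk pass while the area is still live, then a short outage covering only a catch-up pass for files changed since. The sketch below uses throwaway temp directories standing in for /grid/data and blue3, and falls back to cp where rsync is unavailable; it is an assumption about the approach, not the team's actual plan.

```shell
# Sketch: two-pass copy to minimize downtime for the blue3 move.
set -e
src=$(mktemp -d)            # stands in for the live /grid/data
dst=$(mktemp -d)            # stands in for the new blue3 area
echo "bulk data" > "$src/run1.dat"

# Pass 1: bulk copy while users are still writing.
if command -v rsync >/dev/null 2>&1; then
  rsync -a "$src/" "$dst/"
else
  cp -a "$src/." "$dst/"    # fallback for this demo only
fi

echo "late write" > "$src/run2.dat"   # a change arriving during pass 1

# Pass 2: during the downtime, pick up what changed since pass 1.
if command -v rsync >/dev/null 2>&1; then
  rsync -a --delete "$src/" "$dst/"
else
  cp -a "$src/." "$dst/"
fi
```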