
Notes from 8/8/2016 meeting

Walked through component list.

Picture of whiteboard: (image attachment not reproduced in these notes)

Action list:
- 3 VMs already created, plus 1 more to be created; together they will act as (see the sketch after this list):
  1) Aggregator / "cluster" file system / xrootd server
  2) EOS / xrootd server
  3) FTS server
  4) SAM server
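
For planning purposes, a minimal sketch of the four planned VMs and the assumed data path between them; the hostnames and the Python structure are illustrative assumptions only, not the actual FermiCloud machine names.

    # Illustrative only: roles of the four planned VMs (hostnames are placeholders).
    PLANNED_VMS = {
        "vm-aggregator": "aggregator / 'cluster' file system / xrootd server",
        "vm-eos": "EOS / xrootd server",
        "vm-fts": "FTS server",
        "vm-sam": "SAM server",
    }

    # Assumed flow (per the 7/25 notes): buffer nodes feed the aggregator,
    # FTS drives transfers into EOS (and on to CASTOR), SAM catalogues the files.
    ASSUMED_DATA_PATH = ["buffer nodes", "vm-aggregator", "vm-eos", "CASTOR"]

    if __name__ == "__main__":
        for host, role in PLANNED_VMS.items():
            print(f"{host}: {role}")
        print(" -> ".join(ASSUMED_DATA_PATH))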

Notes from 8/1/2016 meeting

Igor, Stu

Action list:
Stu
- send xrootd server/client instructions
- create block diagram
- show to Maxim, Tom, Brett
- show to Robert
Igor
- work with Steve Timm to install/configure
  - FTS server
  - SAM instance

Notes from 7/25/2016 meeting

Robert, Igor, Stu

Assumptions for initial thinking:
- ~250 MB/s aggregate data rate
- with 1 GB files, that is 1 file every ~4 seconds (see the back-of-the-envelope sketch after this list)
- assume we may be seeing ~30-50 individual buffer "nodes"
- FTS then has to "watch" all of these
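
A quick back-of-the-envelope check of the assumptions above; the only new number is the per-node split, which assumes the load is spread evenly across the buffer nodes.

    # Back-of-the-envelope numbers from the assumptions above.
    aggregate_rate_mb_s = 250.0   # ~250 MB/s total data rate
    file_size_mb = 1000.0         # 1 GB files (taking 1 GB = 1000 MB)
    n_buffer_nodes = 40           # midpoint of the ~30-50 buffer "nodes"

    files_per_second = aggregate_rate_mb_s / file_size_mb      # 0.25, i.e. 1 file every 4 s
    seconds_per_file = 1.0 / files_per_second                  # ~4 s
    rate_per_node_mb_s = aggregate_rate_mb_s / n_buffer_nodes  # ~6 MB/s per node if spread evenly

    print(f"1 file every {seconds_per_file:.0f} s overall")
    print(f"~{rate_per_node_mb_s:.1f} MB/s per buffer node (assuming an even split)")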

FTS experts: Robert (code), Dennis, Mike (deployment)
EOS experts: Lisa (?), CERN
Castor expert: ?

Questions:
- on FTS
  - if watching multiple locations, is it multi-threaded? (cf. the polling sketch after this list)
- on EOS
  - what rates are acceptable to query (xrdls)?
  - what checksums exist?
- on CASTOR
  - what checksums exist?
  - what is the EOS to CASTOR transfer mechanism?
- when do we clean up FTS memory? (currently after deletion)
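
As a rough illustration of what "watching" 30-50 buffer locations could involve (not how FTS itself is implemented), here is a minimal sketch that polls several xrootd-visible directories, one thread per directory, using the xrdfs command-line client; the host and paths are hypothetical placeholders.

    import subprocess
    import threading
    import time

    # Hypothetical placeholders -- not real hosts or paths.
    XROOTD_HOST = "root://aggregator.example.org:1094"
    WATCH_DIRS = ["/buffer/node01", "/buffer/node02", "/buffer/node03"]
    POLL_INTERVAL_S = 10  # one "xrdfs ls" per directory every 10 s

    def list_directory(host, path):
        """Return the directory listing via the xrdfs CLI (one entry per line)."""
        out = subprocess.run(["xrdfs", host, "ls", path],
                             capture_output=True, text=True, check=True)
        return out.stdout.splitlines()

    def watch(host, path, interval):
        """Poll one directory forever, reporting entries that appear between polls."""
        seen = set()
        while True:
            try:
                for entry in list_directory(host, path):
                    if entry not in seen:
                        seen.add(entry)
                        print(f"new entry in {path}: {entry}")
            except subprocess.CalledProcessError as err:
                print(f"xrdfs ls failed for {path}: {err.stderr.strip()}")
            time.sleep(interval)

    if __name__ == "__main__":
        # One watcher thread per buffer directory; 30-50 directories would mean
        # 30-50 of these threads (or a shared poller) -- hence the questions above.
        for d in WATCH_DIRS:
            threading.Thread(target=watch, args=(XROOTD_HOST, d, POLL_INTERVAL_S),
                             daemon=True).start()
        while True:
            time.sleep(60)

At the assumed rate of one new 1 GB file every ~4 s spread over 30-50 nodes, even a 10 s polling interval would see only a handful of new entries per poll; the open question is what listing rate EOS and the aggregator can comfortably sustain.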

Next steps:
- Answer the questions above
  - Igor will talk to Lisa
  - Robert will look at FTS behavior on multiple directories
- Do we need a protoDUNE development SAM instance?
  - there is a DUNE SAM... used by 35ton
  - Igor will look into it
- Set up a development FTS instance in FermiCloud
- Look at the FTS server histories (in ganglia, in FTS itself, etc.) for anomalies