Do we have a handle on what is needed and what things will look like on Sept 30?
- Schema update through migrations (Randolph and Abhishek)
  * Clusters: kaon, jpsi, ds, tev, and the new cluster (one demo cluster first; kaon on Sept 30).
  * What pieces will the end user use? Admin command-line tools. Randolph will list the current ones; meet in September to complete the list of queries and build small tools that yield results.
  * Documentation: external product version and setup; developed product build and install.
  * Documentation: release the database schema. Assigning regions (multicast groups and aggregation across local managers; the cluster can be the region, provided performance is adequate).
  * Documentation: how to run the command-line tools and how to administer the pieces on the head node, worker nodes, and database node(s).
  * Packaging: dependency hierarchy in ups-lite.
  * One Rails web application deployed: one server, similar to Lustre, talking to the main database (probably lqcd).
  * One database managed and deployed: on ds2 now, but it may need to move; add the database to lqcd; cleanup scripts for data older than one month (retention policy to be decided by Nirmal).
  * Head nodes running: PBS scanner; possibly a local manager; DDS if a local manager is running.
  * Workers running: DDS infrastructure and a local manager; raise the shared-memory limit to 32 MB or higher; careful cleanup of DDS daemons in startup scripts (inittab and rc files modified).
  * DDS maintenance issues addressed (discovery protocols, drop-out, etc.). Gennadiy should give a short talk on this at a Monday meeting later in September (mid-month is better).
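The worker-node shared-memory change above could look like the sketch below on a Linux node. The 32 MB figure comes from the notes; applying it via `kernel.shmmax` is an assumption about how the limit is set, and the commands are printed rather than executed since the real change needs root and a site decision on the final value.

```shell
# Sketch: raise the SysV shared-memory ceiling on a worker node.
# 32 MB is the floor mentioned in the notes; the exact target is a
# site decision. Commands are echoed, not run (the real edit needs root).
SHMMAX=$((32 * 1024 * 1024))   # 33554432 bytes
echo "sysctl -w kernel.shmmax=${SHMMAX}"
echo "echo 'kernel.shmmax = ${SHMMAX}' >> /etc/sysctl.conf"
```

Persisting the setting in /etc/sysctl.conf keeps it across reboots, which matters given the inittab/rc-file changes already planned for these nodes.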
One database / web server per cluster might be necessary; the lqcd.fnal.gov machine might be a good choice. Deploying one by Sept 30 might answer some questions about deployment.
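The one-month cleanup scripts mentioned above could be a nightly cron job along these lines. This is a sketch only: it assumes PostgreSQL, a hypothetical database name `lqcd_monitoring`, and a hypothetical table `samples` with a `recorded_at` timestamp column; the actual schema and retention window are Nirmal's call.

```shell
#!/bin/sh
# Sketch: nightly retention cleanup for the monitoring database.
# Database name, table name, and column are hypothetical placeholders;
# the retention policy itself is still to be decided.
DB=lqcd_monitoring      # hypothetical database name
RETENTION_DAYS=30       # "data older than one month" per the notes

SQL=$(printf "DELETE FROM samples WHERE recorded_at < now() - interval '%s days';" "$RETENTION_DAYS")
echo "$SQL"
# To actually run it (requires database access):
#   psql -d "$DB" -c "$SQL"
```

Run from cron on the database node (e.g. a single line in /etc/crontab), the delete keeps the table bounded without any application changes.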