Mu2e Pilot System: mu2edaq01, 04-12

Mu2e Development Nodes: mu2edaq02 and 03

DTC Acceptance Nodes: mu2edaq13-17

Accounts

Current Node Assignments

User development systems are listed in the table below.

| Node Name | Current Assignment | DTC Present | DTC Firmware | SERDES Card | JTAG | Notes |
|---|---|---|---|---|---|---|
| mu2edaq01 | Gateway | NO | N/A | N/A | N/A | Gateway machine. Shared Resource; not available for assignment |
| mu2edaq02 | DTC/CFO FW Development | YES | Bleeding-Edge | YES | YES | In Rick's office |
| mu2edaq03/dcsdev01 | Permanently Offline | NO | N/A | N/A | N/A | Failed motherboard? |
| mu2edaq04 | Sync Demo - mid-rack (Iris) | YES | v0.1 (2018-10-31-16) (CFO) | YES (8-card) | NO | |
| mu2edaq05 | Sync Demo - mid-rack (Iris) | YES, need 1 DTC | v4.1 (2018-11-14-10) | YES | YES | Has vivado_lab installed in /opt; connecting STM JTAG |
| mu2edaq06 | Sync Demo - mid-rack (Iris) | YES, need 1 DTC | v4.1 (2018-11-14-10) | YES | YES | |
| mu2edaq07 | Vivian (Tracker work) | YES | v4.0 (2018-02-06-21) | YES | YES | |
| mu2edaq08 | CRV | YES | v4.0 (2018-02-06-21) | YES (8-card) | YES | Currently in "Off-site" configuration with local logins. No -data network connectivity |
| mu2edaq09 | Tracker | NO | N/A | N/A | ??? | |
| mu2edaq10 | Permanently Offline | NO | N/A | N/A | N/A | Now mu2edaq01 due to failed motherboard |
| mu2edaq11 | | YES? | v4.0 (2018-02-06-21) | YES (8-card) | NO | STM (Sophie), Calorimeter (Dariush), Yale-Trigger group (Giani) |
| mu2edaq12 | DCS & Sync Demo (Iris) | YES | ? | ? | ??? | DCS Testing |
| mu2edaq13 | Builds/File Operations | NO | N/A | N/A | N/A | NFS Server |
| mu2edaq14 | Trigger (Giani) AND Sync Demo (Iris) | 2 DTCs | N/A | N/A | N/A | |
| mu2edaq15 | DCS RAID - on table (Glenn) | NO | N/A | N/A | N/A | |
| mu2edaq16 | Sync Demo - top of rack (Iris) | 2 DTCs | N/A | N/A | N/A | centos8. Iris will do Timing Demo for summer 2020 |
| mu2edaq17 | Sync Demo - on table (Iris) | 2 DTCs | N/A | N/A | N/A | Iris will do Timing Demo for summer 2020, Ryan installing Vivado 2019.1 |
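If the "DTC Present" entry for a node is in doubt, a quick way to check is to look for the card on the node's PCIe bus. This is only a generic sketch that assumes the DTC enumerates as a Xilinx device; it is not an official procedure:

    # List PCIe devices and look for the FPGA-based DTC card
    # (assumes the DTC shows up with a Xilinx vendor string)
    lspci | grep -i xilinx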

Port Forwarding and Web Proxy

mu2edaq01 is the only node in the cluster that is visible to the Lab network. There are two methods for accessing ots instances running on the other nodes, described below.

HTTPS Web Proxy

The following ots instances are mapped to HTTPS and are available lab-wide:

| Name | Address | Host | ots port | ARTDAQ Partition Number |
|---|---|---|---|---|
| Main | https://mu2edaq01.fnal.gov:443/ | mu2edaq05 | 2015 | 2 |
| Calo | https://mu2edaq01.fnal.gov:3025/ | mu2edaq11 | 3025 | 4 |
| STM | https://mu2edaq01.fnal.gov:3035/ | mu2edaq11 | 3035 | 5 |
| Trig | https://mu2edaq01.fnal.gov:3045/ | mu2edaq14 | 3045 | 6 |
| Hwdev | https://mu2edaq01.fnal.gov:3055/ | mu2edaq05 | 3055 | 7 |
| Tracker | https://mu2edaq01.fnal.gov:3065/ | mu2edaq11 | 3065 | 8 |
| Shift | https://mu2edaq01.fnal.gov:3075/ | mu2edaq12 | 3075 | 12 |
| CRV | https://mu2edaq01.fnal.gov:3085/ | mu2edaq12 | 3085 | 9 |
| DQMCalo | https://mu2edaq01.fnal.gov:3095/ | mu2edaq11 | 3095 | 3 |
| DCS | https://mu2edaq01.fnal.gov:5019/ | mu2edaq11 | 5019 | N/A |
| STMDBTest | https://mu2edaq01.fnal.gov:3040/ | mu2edaq11 | 3040 | 11 |

Note that artdaq partition numbers are only assigned for registered ots instances. All others should pick a number not in this list for short-term testing, or reserve one on this page for long-term tests. Only 20 total partitions are allowed (i.e. you should pick a number <= 20).
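To confirm that one of the proxied instances is reachable from the Lab network, a plain HTTPS request against its address is enough. The sketch below uses the Tracker mapping from the table above; the -k flag (which skips certificate verification) is only an assumption in case the proxy presents a self-signed certificate:

    # Quick reachability check for the Tracker ots instance through the mu2edaq01 proxy
    # -k tolerates a self-signed certificate (assumption about the proxy's TLS setup)
    curl -k -I https://mu2edaq01.fnal.gov:3065/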

SSH Tunneling

Alternatively, if your destination is not in the above list or you are off-site without VPN, you can run an SSH tunnel to your desired ots instance:

ssh -N -L <port>:<target host>:<port> mu2edaq01.fnal.gov &

For example, to go to port 3065 on mu2edaq09:
ssh -N -L 3065:mu2edaq09:3065 mu2edaq01.fnal.gov &

Then, when connecting to ots, replace the hostname in the URL with localhost, e.g. http://localhost:3065/urn:xdaq-application:lid=200/
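If you use the same tunnel regularly, the forwarding can also be kept in ~/.ssh/config so that a single short command sets it up. This is only a sketch using standard OpenSSH options; the Host alias name is made up:

    # ~/.ssh/config entry (the alias "mu2edaq-tunnel" is hypothetical)
    Host mu2edaq-tunnel
        HostName mu2edaq01.fnal.gov
        # Same forwarding as the example above: local port 3065 -> mu2edaq09:3065
        LocalForward 3065 mu2edaq09:3065

The tunnel is then opened with ssh -N mu2edaq-tunnel &, and the localhost URL works as described above.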

Vivado install on mu2edaq17

October 26, 2020: Ryan installing Vivado 2020.1 on mu2edaq@mu2edaq17:/scratch/Xilinx
The 2020.1 install failed because the OS is too old.
Trying 2019.2: it also complained, but the installer still proceeded with the remaining installation steps:
  • Choose "Vivado HL Design Edition"
  • Uncheck "Acquire or Manage a License Key" and "Enable Webtalk..."
  • Uncheck "Create program group entries" and "Create desktop shortcuts"
  • Install to /scratch/Xilinx (50GB required)
  • To run:
    cd ~mu2edaq
    source setup_vivado_daq17_full.sh
    vivado &
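Before starting an installer run, it is worth confirming that /scratch actually has the roughly 50GB the Design Edition needs and noting the OS release (which is what tripped up the 2020.1 installer). A minimal check, not part of the original notes:

    # Check free space on the install target (the installer needs ~50GB)
    df -h /scratch
    # Check the OS release, since the 2020.1 installer rejected it as too old
    cat /etc/redhat-release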
    

Vivado Lab install on NFS available for all nodes

October 27, 2020: Ryan installing Vivado Lab on NFS, so it is effectively available on all nodes under ~mu2edaq/Xilinx_sl7 and ~mu2edaq/Xilinx_sl8
  • The version and OS are selected by which setup script you source in ~mu2edaq/:
    source setup_vivado_daq17_full.sh         #only on mu2edaq17 for full Vivado
    source setup_vivado_lab_2018.1_sl7.sh     #for vivado_lab 2018.1 on any SL7 node
    source setup_vivado_lab_2019.2_sl7.sh     #for vivado_lab 2019.2 on any SL7 node
    source setup_vivado_lab_2018.1_centos8.sh #for vivado_lab 2018.1 on any CentOS8 node
    source setup_vivado_lab_2019.2_centos8.sh #for vivado_lab 2019.2 on any CentOS8 node
    
    vivado_lab & #to open vivado lab tools and the Hardware Manager
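For reference, a setup script of this kind typically just points the shell at the matching Vivado Lab tree and sources Xilinx's own environment file. The sketch below is an assumption about what setup_vivado_lab_2018.1_sl7.sh might contain (the install path is hypothetical), not the actual file:

    # Hypothetical contents of a vivado_lab setup script; the real ones live in ~mu2edaq/
    # The path below is an assumed layout, not the verified install location
    VIVADO_LAB_DIR=~mu2edaq/Xilinx_sl7/Vivado_Lab/2018.1
    # settings64.sh is the standard Xilinx environment script; it puts vivado_lab on PATH
    source "$VIVADO_LAB_DIR/settings64.sh"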