DAQ Expert Troubleshooting

Welcome to the DAQ expert troubleshooting page! If you get an error or a problem that you're unfamiliar with, look here to see if you can find it. You should be able to find your problem in the table of contents or by searching (Ctrl+F) for keywords on this page. If you don't find the error you're looking for and think it should be here, please add it (and the solution!) or email Kirsty and Adi so we can add it!

VNC problems

DAQ experts are often called because of problems with the VNC. Two specific VNC issues we have seen recently are:

The VNC isn't accepting inputs in the way it should

It could be that somehow a ctrl or shift key has got "stuck down" in the VNC -- that is, the VNC registered the key being pressed but missed the event where it was released. If that happens, it can register every subsequent keystroke as having ctrl/shift held down. Try opening a new terminal or text editor in the VNC and slowly and carefully pressing and releasing ctrl and shift a few times; hopefully the VNC will register the key release. Once you've done this, try typing in the terminal or text editor you just opened. It might take a few keystrokes to "clear" the backlogged ctrl/shift commands.

If you still can't get the VNC to properly accept inputs, you may need to restart it -- see below.

The VNC isn't responding/won't accept inputs (and following the above didn't fix it)

If the VNC isn't responding at all, you may need to restart it. Note that doing this will kill the current run, so you should only do it if you need to (e.g. if you can't communicate with runConsole in any other way, and you really need to stop the run)!

SSH into EVB and kill the VNC by doing

vncserver -kill :3

Then restart the VNC with:

~/startVNC.sh

Finally, start a new run as normal. Open a terminal in the VNC and type

source .bash_profile
runConsoleDAQ.py
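
Putting those steps together, here is a minimal sketch of the whole restart sequence (the account and hostname are assumptions based on usage elsewhere on this page):

ssh uboonedaq@ubdaq-prod-evb   # log in to EVB
vncserver -kill :3             # kill the stuck VNC server
~/startVNC.sh                  # restart the VNC
# then, in a terminal inside the new VNC session:
source .bash_profile
runConsoleDAQ.py               # start a new run as normal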

RAID rebuilds

When a RAID disk on EVB or one of the SEBs fails and is replaced, it can trigger a rebuild of the array. This puts very high I/O strain on the disks and can make it difficult to keep running (the runs may be very short and the DAQ may be very unstable). Since this is now a known problem it shouldn't take any DAQ experts by surprise, but here's a case study showing how the issue first showed up and was diagnosed.

In January 2018 the outbound fragment queue size on the SEBs was increased, as was the inbound queue size to the assembler on EVB. This allows data to queue for longer when an overloaded disk on EVB slows down event processing, and should let us continue running during a RAID rebuild on EVB. The only caveat is that we might not be able to run at the full EXT rate. If the runs are very unstable, we may need to reduce the EXT rate (Strobe1.frequency in the config files) from 16 Hz to 7 or 12 Hz (or even lower if necessary). However, we should not have to stop running completely.
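
For illustration only (the exact file and syntax depend on the run configuration in use), the change amounts to something like:

Strobe1.frequency: 7   # reduced from 16 Hz while the rebuild is in progress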

Note that in the past, RAID rebuilds on the SEBs happened without problems because (before the SN stream) we weren't writing data to disk on the SEBs. Now that we are, it's not clear what will happen when a RAID rebuild occurs on a SEB. Options may involve either ignoring the SN stream errors telling you that it can't write to disk (which will probably look like sn_read_lag errors) and letting the run continue, or (if that is too unstable) turning off the SN stream entirely until the rebuild is complete.

If you want to check the status of the rebuild, log on to the machine and run (requires root privileges):

sudo /usr/bin/tw_cli /c6 show

The output should look something like this (example from executing the above command on ubdaq-prod-evb):

Unit  UnitType  Status         %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy
------------------------------------------------------------------------------
u0    RAID-1    OK             -       -       -       465.651   RiW    ON     
u2    RAID-6    REBUILD-VERIFY 75%(A)  75%(P)  256K    35390.1   RiW    ON     

VPort Status         Unit Size      Type  Phy Encl-Slot    Model
------------------------------------------------------------------------------
p8    OK             u0   465.76 GB SATA  -   /c6/e0/slt0  ST500NM0011         
p9    OK             u0   465.76 GB SATA  -   /c6/e0/slt1  ST500NM0011         
p10   DEGRADED       u2   1.82 TB   SATA  -   /c6/e0/slt2  ST2000NM0033-9ZM175 
p11   OK             u2   1.82 TB   SATA  -   /c6/e0/slt3  ST2000NM0033-9ZM175 
p12   OK             u2   1.82 TB   SATA  -   /c6/e0/slt4  ST2000NM0033-9ZM175 
p13   OK             u2   1.82 TB   SATA  -   /c6/e0/slt5  ST2000NM0033-9ZM175 
p14   OK             u2   1.82 TB   SATA  -   /c6/e0/slt6  ST2000NM0033-9ZM175 
p15   OK             u2   1.82 TB   SATA  -   /c6/e0/slt7  ST2000NM0033-9ZM175 
p16   OK             u2   1.82 TB   SATA  -   /c6/e0/slt8  ST2000NM0033-9ZM175 
p17   OK             u2   1.82 TB   SATA  -   /c6/e0/slt9  ST2000NM0033-9ZM175 
p18   OK             u2   1.82 TB   SATA  -   /c6/e0/slt10 ST2000NM0033-9ZM175 
p19   OK             u2   1.82 TB   SATA  -   /c6/e0/slt11 ST2000NM0033-9ZM175 
p20   OK             u2   1.82 TB   SATA  -   /c6/e0/slt12 ST2000NM0033-9ZM175 
p21   OK             u2   1.82 TB   SATA  -   /c6/e0/slt13 ST2000NM0033-9ZM175 
p22   OK             u2   1.82 TB   SATA  -   /c6/e0/slt14 ST2000NM0033-9ZM175 
p23   OK             u2   1.82 TB   SATA  -   /c6/e0/slt15 ST2000NM0033-9ZM175 
p24   OK             u2   1.82 TB   SATA  -   /c6/e0/slt16 ST2000NM0033-9ZM175 
p25   OK             u2   1.82 TB   SATA  -   /c6/e0/slt17 ST2000NM0033-9ZM175 
p26   ECC-ERROR      u?   1.82 TB   SATA  -   /c6/e0/slt18 ST2000NM0033-9ZM175 
p27   OK             u2   1.82 TB   SATA  -   /c6/e0/slt19 ST2000NM0033-9ZM175 
p28   OK             u2   1.82 TB   SATA  -   /c6/e0/slt20 ST2000NM0033-9ZM175 
p29   OK             u2   1.82 TB   SATA  -   /c6/e0/slt21 ST2000NM0033-9ZM175 
p30   OK             u2   1.82 TB   SATA  -   /c6/e0/slt22 ST2000NM0033-9ZM175 
p31   OK             u2   1.82 TB   SATA  -   /c6/e0/slt23 ST2000NM0033-9ZM175 

Name  OnlineState  BBUReady  Status    Volt     Temp     Hours  LastCapTest
---------------------------------------------------------------------------
bbu   On           Yes       OK        OK       OK       124    xx-xxx-xxxx  

This shows that:

  • There are several units in ubdaq-prod-evb:
      • a RAID-1 unit named u0, with two disks: p8 and p9
      • a RAID-6 unit named u2, with 22 disks: p10-p31
      • anything named u1 is a spare unit for the RAID-6
  • The RAID-6 is being rebuilt, currently at 75% complete.
  • Disk p10 has status DEGRADED, meaning the spare is getting ready to act as a regular disk. When the rebuild reaches 100%, its status will change to OK in unit u2.
  • Disk p26 has status ECC-ERROR. This means the disk reported that it wrote a bad sector, but understood the error and fixed it. The ECC-ERROR flag stays until it is cleared, so it might be old; the disk itself is fine and has no hardware issue. Prior to the RAID rebuild problems in January 2018, this error would trigger an automatic disk replacement and hence a RAID rebuild. The Run Co-ordinators have asked for that not to be done unless the disk actually fails: at this point we see an error, but the disk itself is fine, so it does not trigger a RAID rebuild.
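
If you want to keep an eye on the rebuild, you can simply repeat the status command periodically, for example (as root, assuming watch is available on the machine):

watch -n 60 '/usr/bin/tw_cli /c6 show'   # refresh the controller status every minute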

Circular buffer occupancy high

  • One possibility is that readfragment cannot interpret the data and doesn't know what to do with it. The data is then not transferred to the next step (outbound to the assembler, for the NU stream data), and the circular buffer occupancy starts increasing.

It is worth checking the CPU usage on each machine.
If it is unusually high, check the processes running there. Do this by typing

top -c 

on the relevant machine.

  • If gmond (the ganglia monitoring daemon) is taking a lot of CPU, you can restart it on each machine by logging in, doing ksu, and then running
    service gmond restart
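
Putting that together (a sketch; replace sebXX with the affected machine):

ssh sebXX              # log in to the relevant machine
ksu                    # become root
service gmond restart  # restart the ganglia daemon
exit                   # drop root as soon as you are done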
    

High memory use (mem_free errors)

Since we increased the outbound fragment queue length on the SEBs in March 2018, we have been seeing fairly frequent mem_free errors on the SEBs. This is believed to be related to the longer queue lengths - we are now allowing the SEBs to store more in local memory if they need to. The queue only takes as much memory as it thinks it needs, so it doesn't block out a section of memory equal to the maximum queue length at the start of a run. Instead, when it needs more space, it expands the queue and takes more memory. The problem is that the memory isn't released when it is no longer needed.

See elog #68141 and elog #68780 for discussion of the issue and example plots showing how to diagnose it. It is usually a "problem" (in that the alarm may show up) for very long runs. The good news is that all the memory gets released when the run ends, so either way (in the best case when the run ends normally, or in the worst case if the run crashes because the memory is full) the alarms should clear when the run finishes. Because of that, there is no need to take any action if you are sure this is what's going on.
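
If you want to look at the memory yourself rather than relying on the alarm, the standard tools are enough (a sketch; run on the affected SEB):

free -m    # overall memory picture; watch the free and cached columns over the run
top -c     # then press Shift+M to sort processes by memory use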

If that doesn't look like the problem you're seeing, keep reading for some other hints...

Here is an example (elog #54980) of how to troubleshoot an issue when the memory use is high.

It's also worth knowing that there is a cron job called drop_caches running on the DAQ machines (EVB and the SEBs) that is intended to drop cached memory at regular intervals and avoid mem_free errors. See elog #62305 for an example and information about how to check which cron jobs are running.

A summary of how to check cron jobs:
  • Log into the machine as yourself
  • type ksu to switch to root (note: you can only do this if your Kerberos principal is in the root k5login on that machine. All DAQ experts should have root access - if you don't, file a service desk ticket to the SLAM team and ask them to add you)
  • Check /var/log/cron to see which cron jobs are running (see this page to find out which cron jobs should be running)
  • Type exit to exit root
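
As a concrete sketch of that check (the grep pattern is just an example):

ksu                                     # become root
grep drop_caches /var/log/cron | tail   # confirm the drop_caches job has been running
exit                                    # drop root as soon as you are done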

Note: please be extra careful with root access! Only use ksu when you actually need to, and exit from root as soon as you are done. This is especially important for online machines such as these!

disk_free errors

disk_free errors are usually a problem for the Data Management experts, so if you get called for this you can redirect the shifter!

The alarm will look like

 Total Free disk space per Ganglia seb06_uB_PCStatus_PCXX_seb**/disk_free 
and will not clear.

What to do: check the relevant ganglia metric. If the used disk space just keeps increasing at the regular pace, the problem belongs to data management rather than the DAQ: a PUBs daemon is supposed to run on ws02 and clear space on all the SEBs when usage reaches a threshold, and the Data Management expert should make sure it's running.
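
If you want a quick look yourself before redirecting the shifter, a standard disk-usage check works (a sketch; the hostname is an assumption, take the actual SEB name from the alarm):

ssh ubdaq-prod-seb06   # the SEB named in the alarm
df -h                  # find the nearly-full partition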

xmit_trigger_ctr_diff_calc or xmit_fram_ctr_diff_calc alarms

These alarms indicate a difference between the time reported by a given SEB crate and the trigger time. Here are the troubleshooting instructions for shifters when they see this alarm (you may need to remind them to check there); they are copied here so you don't have to click through to that page:

Please try to restart a run, and make an elog entry cc'ing both the DAQ and readout experts. If the error comes back after a new run is restarted (typically 10 minutes after a run starts), contact the warm readout on-call expert.

If the error continues appearing and an expert cannot be reached, then only with the permission of the runco can you allow the runs to keep going. Always contact the runco before allowing runs to proceed with this error. Continue attempting to reach the warm readout experts by calling once every 30 minutes, as the error requires their attention. Hand over this responsibility to the next shifter at the end of your shift.

Usually to solve this error the warm readout expert will need to restart the crate indicated in the alarm (the full alarm looks like ub_DAQStatus_DAQX_sebXX/xmit_trigger_ctr_diff_calc). If the alarm is appearing simultaneously in all SEBs 0-9, then it might instead be a problem with SEB10. It's unlikely that nine crates would have an identical error at the same time, but bear in mind that SEB10 is the one that handles the trigger stream, so a "difference between SEB time and trigger time" error on all SEBs could in fact be caused by a problem with the crate sending the trigger time. You may want to suggest this to the readout expert.

Problem in checklist plots: crate event number/crate frame number relative to global header/trigger crate header

These plots are part of the shifter checklist, and shifters are instructed to call the DAQ expert if any entries are not 0 +/- 1.

If the crate event numbers relative to global/trigger crate are misaligned by more than +/-1, that's OK. Due to a small bug in the trigger and readout, we can sometimes get a trigger that is not recognised by the TPC crates. This leads to an event number offset between the trigger/PMT crate (seb 10) and the TPC crates (sebs 1-9).

The thing to really watch out for is the frame number. That records the time of the event as seen by each of the readout crates and should never differ between the crates/sebs by more than +/-1. The DAQ software is written assuming that the frame numbers will be within +/-1 of each other, so if they are not then we are not taking good data. Ask the shifter to restart the run and it should hopefully go away.

DAQ is unstable due to high rate of "spare1" triggers

We had an incident in which the DAQ seemed extremely unstable after an unexpected power outage (actually it showed up around 12 hours after the power outage, coinciding with the beam going down for maintenance). Elog #65529 describes the first symptoms of the issue: runs could not last more than 5-10 minutes and were crashing mostly due to circular_buffer_occupancy errors on most of the SEBs. Even running with an EXT trigger rate of 1 Hz (which is usually incredibly stable) did not work. Experts noticed in the ganglia metrics that spikes in the variables Builder-TrigRate_Paddle and Builder-WriterQueue coincided with the run crashes, and correlated perfectly with a "spare1" line in the trigger nearline monitoring plot (which we don't usually see).

The solution is documented in Elog #65572. The "spare1" triggers are MuCS triggers, and were firing at a much higher rate than expected (we expect ~1 Hz). After the power outage, the MuCS had come back up in a strange state, and was issuing triggers continuously. Experts disconnected the MuCS trigger until the hardware could be reconfigured.

runConsoleDAQ won't start

While attempting:

> runConsoleDAQ.py 

the following error appeared:

/uboonenew/python/v2_7_9/Linux64bit+2.6-2.12/bin/python: error while loading shared libraries: libpython2.7.so.1.0: cannot open shared object file: No such file or directory 

Solution: open a new terminal and explicitly type:

source .bash_profile

For the full description, see elog #55664 for the problem and elog #55665 for the solution.
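
If you want to confirm the diagnosis (the python binary cannot find its shared library until .bash_profile sets up the environment), a quick check is (a sketch):

ldd /uboonenew/python/v2_7_9/Linux64bit+2.6-2.12/bin/python | grep libpython   # shows "not found" before sourcing
echo $LD_LIBRARY_PATH   # should include the python library directory after sourcing .bash_profile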

GPS_Satellite_Status alarm

GPS_Satellite_Status goes into alarm with a status of 7 after SEB10 has been powered down and back up.

Solution: If restarting the run does nothing, reboot SEB10.

TimeDifference_GPS and TimeDifference_GPS-PPS

This could be a regular GPS problem which can be fixed by restarting the run.

If the TimeDifference_GPS and TimeDifference_GPS-PPS errors keep appearing, it is advised to check the following:
  1. Do the DAQ logs show the same value of PPS for several subruns?
  2. Does the ganglia TimeDifference_GPS-PPS metric show a large, increasing difference (as in elog 19040)?
  3. Does the bchealth command (described on the GPS card page) give a Local-GPS difference of O(0.3) msec, an order of magnitude more than usual?

If the answer is yes to more than one question, this could be a readout problem. The trigger time is given by a Nevis time stamp supplied by the readout card; this time stamp is matched to a PPS (pulse per second) signal when it arrives at the DAQ machine. If the Nevis time is not updated for some reason, its PPS value will stay the same while the GPS time keeps progressing, causing the increasing difference between the two.
To fix this problem, ask the readout expert to power-cycle crate 10.

Other GPS problems

Kathryn's slides from the DAQ school contain a number of other helpful tips for solving GPS problems, including problems that have occurred in the past and how they were solved.

Failed to start the new run

%MSG-i WorkerThread: Seb 08-Jun-2015 04:57:54 CDT MF-online
StateMachineEventProcessor::processEvent started running.
%MSG
DDS domain participant is not connected.
Unable to connect DomainParticipant in DDSConnection constructor.

Solution (for this problem on SEB02, provided by Wes):

uboonedaq was unable to start DDS. Not sure why we have this problem on this machine now, but the following is the normal routine to reset it (what I did):

(1) Log in to machine with problem as uboonedaq (here: ssh uboonedaq@seb02)
(2) source $UBOONEDAQ_DIR/slf6.x86_64.e7.debug/bin/setup_daq.sh
(3) Do: configure-online-daq-prod (important!)
(4) Try: ospl stop (Probably returns a "Ready" status super fast, when it should take a second or two).
(5) Try: ospl start (If it says "Splice System with domain name "uboonedaq uboone DAQ prod DDS Domain" is found running, ignoring command" then it's bad.)
(6) Do: ipcs. Should see output like this:


------ Shared Memory Segments --------
key shmid owner perms bytes nattch status
0x00000000 0 root 777 262144 1 dest
0x5303000e 3047425 uboonedaq 600 104857600 3

(7) Look for segments owned by uboonedaq of size 104857600. For each one, do: ipcrm -m <shmid>, where <shmid> is the shmid number listed for the entry (see the one-liner sketch after this list).
(8) Do: rm /tmp/sppdskey_*. It may complain that it is not permitted for some of those files (due to ownership), but we want any uboonedaq owns to be gone (you can do an ls to check).
(9) Try: ospl stop. Should do nothing (no output).
(10) Try: ospl start. Should take a few seconds to set up, and then say "Ready" and print log locations.
(11) Try: ospl stop. Should take a few seconds to stop it, and then say "Ready".

Then, you're all good!
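
For step (7), if there are several such segments, something like this one-liner can remove them all (a sketch only -- check the ipcs column layout on the machine before trusting it):

ipcs -m | awk '$3 == "uboonedaq" && $5 == 104857600 {print $2}' | while read id; do ipcrm -m "$id"; done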

Switching the machine that hosts the database (smc)

If the database is moved from smc to another machine, we need to make sure the DAQ is configured to look for the database on the correct machine. To tell the DAQ whether to look for the database on smc or smc2 (or elsewhere), you need to edit the file /home/uboonedaq/.sqlaccess/prod_conf.sh: change the IP addresses given for DBTOOL_READER_HOST and DBTOOL_WRITER_HOST. Once the file has been changed, you need to log out and log back in again (because the file is sourced when logging in as uboonedaq).
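
For illustration, the relevant lines look something like this (a hypothetical excerpt; the addresses are placeholders):

export DBTOOL_READER_HOST=xxx.xxx.xxx.xxx   # IP of the machine now hosting the database
export DBTOOL_WRITER_HOST=xxx.xxx.xxx.xxx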

Run crashed owing to PMT HV off

NOTE: Since uboonedaq_datatypes v6_21_00, this shouldn't be an issue. The addition of the block listed below in projects/datatypes/ub_PMT_WindowDataCreatorHelperClass.cpp prevents it from occurring.

// Return early when the raw data is too small to contain a valid PMT
// window header (e.g. an empty FEM input because the PMT HV is off).
if(curr_rawData.size() < 2) {
  return;
}

If a run starts but crashes after about a minute, with no events assembled and recorded, you can check the err log file of seb10. One possibility is that the PMT HV is off, so there is no input to the PMT FEM in slot 6, which has the cosmic low-gain configuration. An example of the err log files in this case is shown below - there are a bunch of messages like "Channel 0 with 1 windows Channel 1 with 1 windows Channel 2 with 1 windows Channel 3 with 1 windows ...".
Our DAQ code cannot process a channel with no input at all, and that's why it crashes.

Caught exception in ub_PMT_WindowDataCreatorHelperClass::populateChannelDataVector() Message:  datatypes_exception Message: Junk data: Left with a PMT window header that is too small..

Raw Card Data
Buffer size is 2 bytes, or 1 elements 2 bytes each.

c000
Object gov::fnal::uboone::datatypes::ub_PMT_CardHeader_v6 const*.
Module[6], ID[0], Marker[ffff], RAW[0xf006ffff]
WordCount[0], RAW[0xf001f000]
Event[866], RAW[0xf866f000]
Frame[45fb1], RAW[0xffb1f045]
Checksum[4000], RAW[0xf000f004]
TrigSample[3c7], RAW[0xf0c7f023]
TrigFrameMod16[2], RAW[0xf0c7f023]
DataStartMarker[4000], RAW[0x4000]

Object gov::fnal::uboone::datatypes::ub_PMT_CardTrailer_v6 const*.
DataEndMarker[c000], RAW[0xc000]

Exception: datatypes_exception Message: Junk data: Left with a PMT window header that is too small..
Object gov::fnal::uboone::datatypes::ub_MarkedRawCrateData<gov::fnal::uboone::datatypes::ub_PMT_CardData_v6, gov::fnal::uboone::datatypes::ub_XMITEventHeader, gov::fnal::uboone::datatypes::ub_XMITEventTrailer> const*.
Object gov::fnal::uboone::datatypes::ub_XMITEventHeader const*.
00  RAW[ffffffff]
Object gov::fnal::uboone::datatypes::ub_XMITEventTrailer const*.
00  RAW[e0000000]
 *Found 3 cards.
Card 1
Object gov::fnal::uboone::datatypes::ub_MarkedRawCardData<gov::fnal::uboone::datatypes::ub_PMT_ChannelData_v6, gov::fnal::uboone::datatypes::ub_PMT_CardHeader_v6, gov::fnal::uboone::datatypes::ub_PMT_CardTrailer_v6> const*.
Object gov::fnal::uboone::datatypes::ub_PMT_CardHeader_v6 const*.
Module[4], ID[0], Marker[ffff], RAW[0xf004ffff]
WordCount[eb1c], RAW[0xfb1df00e]
Event[866], RAW[0xf866f000]
Frame[45fb1], RAW[0xffb1f045]
Checksum[a326c9], RAW[0xf6c9fa32]
TrigSample[3c7], RAW[0xf0c7f023]
TrigFrameMod16[2], RAW[0xf0c7f023]
DataStartMarker[4000], RAW[0x4000]
Object gov::fnal::uboone::datatypes::ub_PMT_CardTrailer_v6 const*.
DataEndMarker[c000], RAW[0xc000]
 *Found 48 channels.
  Channel 0 with 1 windows   Channel 1 with 1 windows   Channel 2 with 1 windows   Channel 3 with 1 windows   Channel 4 with 1 windows   Channel 5 with 1 windows   Channel 6 with 1 windows   Channel 7 with 1 windows   Channel 8 with 1 windows   Channel 9 with 1 windows   Channel 10 with 1 windows   Channel 11 with 1 windows   Channel 12 with 1 windows   Channel 13 with 1 windows   Channel 14 with 1 windows   Channel 15 with 1 windows   Channel 16 with 1 windows   Channel 17 with 1 windows   Channel 18 with 1 windows   Channel 19 with 1 windows   Channel 20 with 1 windows   Channel 21 with 1 windows   Channel 22 with 1 windows   Channel 23 with 1 windows   Channel 24 with 1 windows   Channel 25 with 1 windows   Channel 26 with 1 windows   Channel 27 with 1 windows   Channel 28 with 1 windows   Channel 29 with 1 windows   Channel 30 with 1 windows   Channel 31 with 1 windows   Channel 32 with 1 windows   Channel 33 with 1 windows   Channel 34 with 1 windows   Channel 35 with 1 windows   Channel 36 with 1 windows   Channel 37 with 1 windows   Channel 38 with 1 windows   Channel 39 with 1 windows   Channel 40 with 0 windows   Channel 41 with 0 windows   Channel 42 with 0 windows   Channel 43 with 0 windows   Channel 44 with 0 windows   Channel 45 with 0 windows   Channel 46 with 0 windows   Channel 47 with 0 windows Card 2
Object gov::fnal::uboone::datatypes::ub_MarkedRawCardData<gov::fnal::uboone::datatypes::ub_PMT_ChannelData_v6, gov::fnal::uboone::datatypes::ub_PMT_CardHeader_v6, gov::fnal::uboone::datatypes::ub_PMT_CardTrailer_v6> const*.
Object gov::fnal::uboone::datatypes::ub_PMT_CardHeader_v6 const*.
Module[5], ID[0], Marker[ffff], RAW[0xf005ffff]
WordCount[eb14], RAW[0xfb15f00e]
Event[866], RAW[0xf866f000]
Frame[45fb1], RAW[0xffb1f045]
Checksum[9ec787], RAW[0xf787f9ec]
TrigSample[3c7], RAW[0xf0c7f023]
TrigFrameMod16[2], RAW[0xf0c7f023]
DataStartMarker[4000], RAW[0x4000]
Object gov::fnal::uboone::datatypes::ub_PMT_CardTrailer_v6 const*.
DataEndMarker[c000], RAW[0xc000]
 *Found 48 channels.
  Channel 0 with 1 windows   Channel 1 with 1 windows   Channel 2 with 1 windows   Channel 3 with 1 windows   Channel 4 with 1 windows   Channel 5 with 1 windows   Channel 6 with 1 windows   Channel 7 with 1 windows   Channel 8 with 1 windows   Channel 9 with 1 windows   Channel 10 with 1 windows   Channel 11 with 1 windows   Channel 12 with 1 windows   Channel 13 with 1 windows   Channel 14 with 1 windows   Channel 15 with 1 windows   Channel 16 with 1 windows   Channel 17 with 1 windows   Channel 18 with 1 windows   Channel 19 with 1 windows   Channel 20 with 1 windows   Channel 21 with 1 windows   Channel 22 with 1 windows   Channel 23 with 1 windows   Channel 24 with 1 windows   Channel 25 with 1 windows   Channel 26 with 1 windows   Channel 27 with 1 windows   Channel 28 with 1 windows   Channel 29 with 1 windows   Channel 30 with 1 windows   Channel 31 with 1 windows   Channel 32 with 1 windows   Channel 33 with 1 windows   Channel 34 with 1 windows   Channel 35 with 1 windows   Channel 36 with 1 windows   Channel 37 with 1 windows   Channel 38 with 1 windows   Channel 39 with 1 windows   Channel 40 with 0 windows   Channel 41 with 0 windows   Channel 42 with 0 windows   Channel 43 with 0 windows   Channel 44 with 0 windows   Channel 45 with 0 windows   Channel 46 with 0 windows   Channel 47 with 0 windows Card 3
Object gov::fnal::uboone::datatypes::ub_MarkedRawCardData<gov::fnal::uboone::datatypes::ub_PMT_ChannelData_v6, gov::fnal::uboone::datatypes::ub_PMT_CardHeader_v6, gov::fnal::uboone::datatypes::ub_PMT_CardTrailer_v6> const*.
Object gov::fnal::uboone::datatypes::ub_PMT_CardHeader_v6 const*.
Module[6], ID[0], Marker[ffff], RAW[0xf006ffff]
WordCount[0], RAW[0xf001f000]
Event[866], RAW[0xf866f000]
Frame[45fb1], RAW[0xffb1f045]
Checksum[4000], RAW[0xf000f004]
TrigSample[3c7], RAW[0xf0c7f023]
TrigFrameMod16[2], RAW[0xf0c7f023]
DataStartMarker[4000], RAW[0x4000]
Object gov::fnal::uboone::datatypes::ub_PMT_CardTrailer_v6 const*.
DataEndMarker[c000], RAW[0xc000]
 *Found 0 channels.
Object gov::fnal::uboone::datatypes::ub_MarkedRawDataBlock<gov::fnal::uboone::datatypes::ub_XMITEventHeader, gov::fnal::uboone::datatypes::ub_XMITEventTrailer> const*.
  RAW Data: Buffer size is 240832 bytes, or 120416 elements 2 bytes each.

Solution:

You have to remove the pmt6 block from the DAQ configuration, so that it won't run that FEM.