Delay in SuperNova data processing
The SuperNovaDDT process was deployed on the Far Detector buffer nodes.
This makes a challenging test for my DataBuffer, since the data arrive with very large spacing.
Problem with data delay
So now each DDT process collects data on the number of multiplets, accumulates 10 points, and sends them in a DDS message to NovaGlobalTrigger.
That makes ~1300 DDT processes running in parallel, each processing 5 ms slices.
This means that each packed message of 10 points spans on average 5 ms * 9 * 1300 = 58.5 s ~ 60 s between the first and last point in the message.
In reality, the points are somewhat scattered (see next figure).
Thus, our trigger can detect a SuperNova explosion with a minimum delay of ~60 s (and it will increase with the growth of the number of buffer nodes).
Is there any way we can improve this time?
- Send smaller bunched messages (5 points, 2 points)? Though this will increase the DDS load...
- Make a shared message buffer for all DDT processes on a node. How?
#1 Updated by Alec Habig over 5 years ago
I agree with your analysis. We should certainly see exactly what DDS can handle: maybe we're lucky and shorter but more frequent packets aren't actually a problem. This is a destructive test and needs to be done before beam comes back.
Having different DDT processes on a given buffer node aggregate their data is a good idea. An idea for implementation: make a small shared memory segment which all processes can access: each process puts its rate data into that segment, and if it is full, sends it off and empties it. This could be a programming challenge, making sure the different processes don't read and write at the same time.
#2 Updated by Andrey Sheshukov over 4 years ago
- Status changed from New to Resolved
- % Done changed from 0 to 100
Implemented as SNMessageService: novaddt:source:trunk/SuperNovaDDT/SNMessageService_service.cc
All DDT processes now accumulate their points in shared memory, and the combined message is then sent.