Bug #3886

Performance issue in NetMonTransportService::receiveMessage()

Added by Christopher Green over 7 years ago. Updated almost 3 years ago.

Status: Closed
Priority: Normal
Category: Needed Enhancements
Target version:
Start date: 05/15/2013
Due date:
% Done: 0%
Estimated time: 24.00 h
Experiment:
Co-Assignees:
Duration:

Description

NetMonTransportService::receiveMessage() uses the last-received vector of fragments from the concurrent queue as both its state indicator and a de facto FIFO: on each call it erases the front item until the vector is empty, then asks for the next one. Erasing the front of a vector forces a ripple-copy of all remaining items, so each call is linear in the number of fragments still pending. A better solution would be to keep an iterator to the current item as state; the code then actually simplifies, to something like:

  if ((!recvd_fragments_) || frag_it_ == recvd_fragments_->end()) {
    std::shared_ptr<artdaq::RawEvent> popped_event;
    do {
      incoming_events_.deqWait(popped_event);

      if (popped_event) {
        recvd_fragments_ = popped_event->releaseProduct();
        frag_it_ = recvd_fragments_->begin();
      }
      else { // Done.
        msg = nullptr;
        recvd_fragments_.reset();
        return;
      }
    } while (popped_event->numFragments() == 0); // popped_event is always set here.

    /* Events coming out of the EventStore are not sorted but need to be
       sorted by sequence ID before they can be passed to art. */
    std::sort(recvd_fragments_->begin(), recvd_fragments_->end(),
              artdaq::fragmentSequenceIDCompare);
  }

  artdaq::Fragment const& topFrag = *frag_it_++;
  ...

History

#1 Updated by Kurt Biery almost 7 years ago

  • Target version set to 576
  • Estimated time set to 24.00 h

#2 Updated by Eric Flumerfelt about 4 years ago

  • Status changed from New to Resolved
  • Assignee set to Eric Flumerfelt

Without change:
2017-01-13 14:02:20 -0600: %MSG-i Aggregator: Aggregator-ironwork-5265 MF-online
2017-01-13 14:02:20 -0600: Input statistics: 92 events received at 1.52583 events/sec, data rate = 23.2842 MB/sec, monitor window = 60.2949 sec, min::max event size = 15.26::15.26 MB
2017-01-13 14:02:20 -0600: Average times per event: elapsed time = 0.655379 sec, input wait time = 0.202715 sec, avg::max event store wait time = 0.444763::1.33608 sec, shared memory copy time = 0.00235238 sec, file size test time = 6.42694e-07 sec
2017-01-13 14:02:20 -0600: %MSG

With change:
2017-01-13 13:58:16 -0600: %MSG-i Aggregator: Aggregator-ironwork-5265 MF-online
2017-01-13 13:58:16 -0600: Input statistics: 287 events received at 4.78302 events/sec, data rate = 72.9888 MB/sec, monitor window = 60.004 sec, min::max event size = 15.26::15.26 MB
2017-01-13 13:58:16 -0600: Average times per event: elapsed time = 0.209073 sec, input wait time = 0.207169 sec, avg::max event store wait time = 9.62148e-06::2.38419e-05 sec, shared memory copy time = 0.00185678 sec, file size test time = 5.3748e-07 sec
2017-01-13 13:58:16 -0600: %MSG

Merging in feature branch.

#3 Updated by Eric Flumerfelt about 4 years ago

  • Category set to Needed Enhancements
  • Target version changed from 576 to artdaq Next Release

#4 Updated by Eric Flumerfelt almost 4 years ago

  • Status changed from Resolved to Assigned
  • Target version deleted (artdaq Next Release)

Had to back out this change, as it caused the system to stop working. Will revisit later...
(The noted time savings were due to disk writing vs. not disk writing.)

#5 Updated by Eric Flumerfelt almost 3 years ago

  • Status changed from Assigned to Closed
  • Target version set to artdaq v3_00_01

This issue is not applicable any more.
