Physics Algorithms and Computations

Dual correlated sampling

The dual correlated sampling (DCS) algorithm currently in use subtracts the (N-3)th waveform sample from the Nth sample. For the far detector this means that the sample from 1500 ns in the past is used as the measurement of the baseline level and subtracted from the current sample to detect the leading-edge rise of the pulse shape.

If the DCS value is over a predetermined threshold value, then the DCS value is reported (as the pulse height) along with the time stamp at which the current sample occurred.

In addition, there is a hold-off mechanism that creates a pipeline of DCS values and reports the largest (highest) value that occurs (i.e. if the first DCS value over threshold fell on the leading edge of a 380 ns rise-time pulse, then the hold-off would report a value closer to the maximum reached by the waveform). When this occurs, the time it reports is the time of the sample whose DCS value is being reported (i.e. this time stamp can sit 500+ ns after the leading edge of the pulse's rise).
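As a rough sketch of that logic (the threshold, hold-off depth, and report() hook below are illustrative placeholders, not the actual front end parameters):

#include <cstdint>

// Placeholder for however the front end reports a hit (hypothetical).
void report(int pulseHeight, int sampleIndex);

const int THRESHOLD     = 50;  // hypothetical DCS threshold
const int HOLDOFF_DEPTH = 3;   // hypothetical hold-off window, in samples

// Scan a stream of ADC samples: DCS = s[n] - s[n-3]. On a threshold
// crossing, report the largest DCS value within the hold-off window and
// the time stamp of the sample that produced it.
void scanDCS(const int16_t* s, int nSamples){
  for(int n = 3; n < nSamples; ++n){
    int dcs = s[n] - s[n-3];
    if(dcs <= THRESHOLD) continue;

    // Threshold crossed: search the hold-off pipeline for the peak DCS
    int best = dcs, bestSample = n;
    for(int k = 1; k <= HOLDOFF_DEPTH && n + k < nSamples; ++k){
      int d = s[n+k] - s[n+k-3];
      if(d > best){ best = d; bestSample = n + k; }
    }
    report(best, bestSample);        // time stamp may sit past the rise
    n = bestSample + HOLDOFF_DEPTH;  // resume scanning after the pulse
  }
}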

Matched Filtering

With the multiple sample readout scheme (i.e. waveform digitization), the output we get from the front end boards is an array of values representing the waveform. The readout is triggered when the DCS value of one of the samples goes over threshold, and the array of values can be represented as:

waveform[N] = ( s[n-2], s[n-1], s[n], s[n+1], s[n+2], ..., s[n+N-3] )

where s[n] is the sample whose DCS value crossed threshold and N is the number of samples read out.

With this representation we then need to fit the resulting values to the ideal waveform (the rise-time and fall-time shapes) to extract the timing edge and the pulse height.

To perform this fit quickly, it is useful to take a generalized inner product between the measurement and a set of predetermined templates. The maximum over the set of inner products then identifies the template that best matches the measurement:

BestFit = Max_n( measurement * template_n )

The templates used are simply variations on the parameters you would expect from the different responses of the electronics (i.e. they are the ideal waveform for a specific pulse height, varied by the time at which the response started). The reason this approach is useful is that instead of formally fitting the data with an iterative least squares fit (the generalized inner product is essentially a least squares fit), you simply compute the inner products and take the approximate best-fit value. This means the fit process can be made deterministic in the time it takes to compute.
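As a sketch of that procedure (the array sizes here are placeholders, and the template contents stand in for the ideal shapes shifted in start time):

// Placeholder sizes: NSAMPLES readout samples per waveform, NTEMPLATES
// pre-computed ideal shapes (shifted in start time / scaled in height).
const int NSAMPLES   = 32;
const int NTEMPLATES = 64;

// Generalized inner product between a measurement and one template
float innerProduct(const float* a, const float* b, int n){
  float sum = 0.0f;
  for(int i = 0; i < n; ++i) sum += a[i] * b[i];
  return sum;
}

// Score every template and return the index of the best match; that
// index encodes the fitted timing edge (and pulse height) directly.
int bestTemplate(const float* measurement,
                 const float templates[NTEMPLATES][NSAMPLES]){
  int   best      = 0;
  float bestScore = innerProduct(measurement, templates[0], NSAMPLES);
  for(int t = 1; t < NTEMPLATES; ++t){
    float score = innerProduct(measurement, templates[t], NSAMPLES);
    if(score > bestScore){ bestScore = score; best = t; }
  }
  return best;
}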

The other nice feature of this approach is that each inner product is independent of the others. This means we can do all of the computations in parallel (which we can't do with a traditional iterative fit). This lets us map our templating procedure onto hardware like GPUs! Yes, GPUs. With a 2500+ core GPU you can run the computations in parallel (note also that the computational kernel is just an add/mult sequence) and complete the entire fit in almost no time! Oh how marvelous this is. If only we had GPUs.
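Until the GPUs show up, an OpenMP version (compiled with -fopenmp, reusing innerProduct() and the sizes from the sketch above) captures the same structure:

// One score per template; each iteration is independent, so this loop
// maps one-to-one onto GPU threads. OpenMP is the CPU stand-in for now.
extern float measurement[NSAMPLES];
extern float templates[NTEMPLATES][NSAMPLES];

void scoreAllTemplates(float* scores){
  #pragma omp parallel for
  for(int t = 0; t < NTEMPLATES; ++t){
    scores[t] = innerProduct(measurement, templates[t], NSAMPLES);
  }
  // A single pass over scores[] then picks out the maximum.
}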

Data sparsification

One of the representations of the data in the waveform digitization scheme uses a floating point compression of the samples to allow more samples to be packed into each readout. In this format a 5-bit mantissa is paired with a 3-bit exponent. To translate this representation back to an integer value, the following procedure is used:


#include <cmath>

// Build a mask with the lowest n bits set (valid for 1 <= n <= 32)
inline unsigned int nbitMask(unsigned int n){
  return (0xffffffff << (32 - n)) >> (32 - n);
}

// Convert a floating point number expressed as M bits of mantissa and
// N bits of exponent into a normal integer/floating point number
float decodeVal(unsigned int input, int inputBitsMantissa, int inputBitsExp){
  // First pull off the exponent
  unsigned int expo = input >> inputBitsMantissa;

  // Next get the mantissa
  unsigned int mantissa = input & nbitMask(inputBitsMantissa);

  // The returned value is averaged over the code's input range so that
  // we minimize the error (but this makes it a floating point value);
  // the +32 is the implicit leading bit for the 5-bit mantissa. Note
  // that expo must be converted to a signed int before subtracting,
  // otherwise expo == 0 underflows the unsigned arithmetic.
  return (std::pow(2.0, (int)expo - 1) * (mantissa + 32.5) - 0.5);
}


We can use this procedure to create a lookup table covering all 256 compressed values (to speed up unpacking). The table is pre-computed using the following code:

float decodeMatrix[256];

// Populate the decoding matrix (run once, e.g. at initialization)
void initDecodeMatrix(){
  for(int i=0; i<256; ++i){
    decodeMatrix[i] = decodeVal(i, 5, 3);
  }
}

The lookup procedure for converting from the raw samples to actual values is then:

// If you want the floating point value
float f_val = decodeMatrix[ thevalue ];

// Or if you want an integer, truncate the table entry back down
int i_val = (int)decodeMatrix[ thevalue ];

// So to convert a raw_waveform of NSAMPLES points you would do
// the following:
unsigned char raw_waveform[NSAMPLES];  // unsigned, so codes >= 128 index correctly
float full_waveform[NSAMPLES];

for(int i=0; i<NSAMPLES; ++i){
  full_waveform[i]= decodeMatrix[ raw_waveform[i] ];
}

However, it should be noted that we haven't fully explored the effect of the compression; the very coarse quantization (mapping the original 12-bit ADC value down to an 8-bit code) may have an effect on the fit of the waveform to the ideal shape.
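One quick way to gauge that coarseness is the spacing between adjacent entries of decodeMatrix: with the decoding above, the step within a given exponent bucket is 2^(expo-1) ADC counts, so adjacent codes sit roughly 64 counts apart at the top of the 12-bit range. A small check (assuming initDecodeMatrix() has already been called):

#include <cstdio>

// Print the first decoded value and the code-to-code quantization step
// for each of the 8 exponent buckets (5-bit mantissa => 32 codes each).
void printQuantizationSteps(){
  for(int expo = 0; expo < 8; ++expo){
    int i = expo << 5;  // first code in this exponent bucket
    printf("expo %d: decoded %8.1f, step %5.1f ADC counts\n",
           expo, decodeMatrix[i], decodeMatrix[i+1] - decodeMatrix[i]);
  }
}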