Agenda for the meeting on 02/01/2016:¶
We intend to release the dbar(x) / ubar(x) results at the APS April Meeting in Salt Lake City, Utah. Bryan has submitted an abstract for the conference. The dates for the release meeting still need to be set. Kenichi, do you have an update on the release meeting and the release criteria?
- The recent studies on dbar(x) / ubar(x) are based on the R005 data productions.
- Kenichi is working on new R-T curves that might solve the alignment issue in the Run III data.
- New data productions might also help with the rate-dependent effects in the Run II / III data.
- We aim for an analysis of the available Run II / III data. Bryan Dannowitz has summarized the data quality of the merged productions. The data-quality criteria he developed allow us to exclude problematic runs from the analysis (bit 26). This lets us use almost all of the available data. Kenichi et al. have studied the data quality of the Run II / III data. There are two known problems:
- Bryan Dannowitz will provide the missing information on the target position in road-set 62 data.
- Kun and Po-Ju report on a problem with the LD2 target in road-set 62. We can exclude the affected spills from the analysis.
- Due to a DAQ problem, the common stop signal is not stable within road-set 70. It is not yet understood if we can recover the data. Arun will study this.
- Kenichi, have I missed any known data-quality issues?
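The bit-26 run exclusion mentioned above amounts to a simple bit test on a per-run data-quality word. The sketch below is only an illustration; the run numbers, the quality-word values, and the helper name are assumptions, not actual SeaQuest analysis code:

```python
# Illustrative sketch: exclude runs flagged as problematic via bit 26
# of a per-run data-quality word. All names and values are hypothetical.

BAD_RUN_BIT = 26  # data-quality bit marking problematic runs (from the minutes)

def is_good_run(quality_word: int) -> bool:
    """A run is kept unless bit 26 of its quality word is set."""
    return (quality_word >> BAD_RUN_BIT) & 1 == 0

# Hypothetical (run number, quality word) pairs:
runs = [(11075, 0), (11076, 1 << 26), (11077, 1 << 3)]
good_runs = [run for run, q in runs if is_good_run(q)]
```

With the hypothetical inputs above, only the run carrying bit 26 is dropped; a quality word with other bits set (here bit 3) is kept.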
- Shou has shown that the kinematic distributions of each road-set are in good agreement. He will also check if the cross-section ratios are in good agreement for each of the road sets. It is essential to report on these studies in the release presentation. Shou, do you have an update on this? Have you uploaded your kinematic distributions to the SeaQuest DocDB?
- In the release presentation, we need to describe the data sets we are combining in the analysis and list the data-quality requirements.
Data analysis and cross check¶
- The first step of the cross check is to agree on the number of good dimuons: According to studies by Kei and Shou, the number of good dimuons / reconstruction efficiency has decreased from R004 to R005. Bryan cannot reproduce this. The cross check by Bryan and Shou will identify the change in the data production and / or cut that is causing this difference. Kei is updating his studies of the reconstruction efficiency accordingly. Bryan, Kei, and Shou: What is the latest update on this discrepancy? To my knowledge, the issue has been resolved and the number of good dimuons has increased from R004 to R005.
- Bryan and Shou will cross-check the data of each of the road-sets separately and will then cross-check the results for the combined data set.
- Bryan and Shou will extract the dbar(x) / ubar(x) ratio first in LO and then in NLO. Bryan has cross-checked the NLO analysis with Marco Stratmann (DSSV). It is essential to report on this cross check in the release presentation.
Systematic studies and uncertainties¶
- Background correction:
- We will correct for the background contribution from upstream of the target via the empty-flask correction. If possible, we would like to use only the empty-flask data and not combine the no-target and empty-flask data as for the previous releases.
- Possible study: Is there a difference in the kinematic distribution of the empty-flask and no-target data?
- We will correct for the random background by adopting Bryan's method for the rate-dependence correction. Shou will provide the random background fractions as a function of the beam intensity for each road-set. In the release presentation, Shou needs to summarize his excellent slides on the background studies.
- Shou, Bryan, Kenichi: Is this the correct status of the background correction?
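In outline, the empty-flask correction subtracts the live-proton-normalized empty-flask yield from the normalized full-flask yield. A minimal sketch, where the function name and all numbers are hypothetical and no efficiency or random-background terms are included:

```python
# Illustrative empty-flask background subtraction (all values hypothetical).
# Yields are normalized to the integrated live-proton count before subtracting,
# so the empty-flask data measure the non-target (flask + upstream) background.

def corrected_yield(n_full, protons_full, n_empty, protons_empty):
    """Per-proton dimuon yield with the empty-flask background removed."""
    return n_full / protons_full - n_empty / protons_empty

# Hypothetical counts and live-proton totals:
y = corrected_yield(n_full=50_000, protons_full=1.0e16,
                    n_empty=2_000, protons_empty=4.0e15)
```

The same subtraction would be applied separately to the LH2 and LD2 flasks before forming the cross-section ratio.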
- Rate dependence correction:
- Bryan has developed the method for the rate-dependence correction. In the release presentation, Bryan needs to provide an overview of his excellent studies.
- We will use this method with one update: correct first for the random background, then extract the rate-dependence effect.
- But before we can apply any rate correction, we need to understand the cause of the rate-dependent effect in our data:
- Puzzle Piece A: Evan has found that the reconstruction-caused intensity-dependent inefficiency is much larger than the hardware-caused intensity-dependent inefficiency, and that the reconstruction introduces a significant kinematic dependence of the rate effects.
- Puzzle Piece B: Kei has found that the delta-ray removal is a dominant contribution to the rate dependence.
- Puzzle Piece C: Bryan has found that the rate-dependence effects for LD2 are significantly larger than for LH2.
- Someone needs to check the kinematic dependence of the cross-section ratio again (as we did in the summer).
- We need to decide on the systematic uncertainty for the target contamination. This is Markus's understanding of the target contamination issue:
- Kenichi, Paul, do we have a document where the soil ranges of each target gas bottle are listed?
- Bryan, have you had any chance to study the change of the LH2 and LD2 yields for each change in the target gas?
- Someone needs to check if the normalized J/Psi and Drell-Yan yields changed when the LD2 flask has been filled with the pure deuterium gas sample.
- If a change can be seen, then this study has to be repeated for each change of the target gas. That will tell us how stable the target composition has been over time.
- Andrew reports the following:
I thought Bryan had responded to you, but I just noticed that Bryan Ransom is not in your list of addressees, so I have cc'ed him. Bryan is on top of this, but here is my opinion on this issue. We have had several measurements of the D2 fraction. Over time, the LD2 target fraction varied several times; initially we used D2 from the Fermilab reservoir. In April we switched to research-grade D2, which is supposed to be 99.9% pure. We took samples several times after each run ended and sent them for measurement. The results were very confusing. Then we realized that we had been taking the samples in the wrong way, and that we had also contaminated our LD2 target when filling it. The issue here is that H2 can stick to the inner side of the flask, and pumping down to vacuum does not clean it up. So what we should have done when taking a sample is to bake the sample bottle first, but we didn't, and the amount of H2 inside the sample bottle is unknown. Also, before filling the LD2 flask, we used H2 to fill the flask while testing the cooling procedure and checking for possible leaks; we did not want to waste D2 when we had only a limited amount, enough to fill the flask one more time. There was also a period when the LD2 flask was filled with H2 while we were taking data, before the beam was stabilized. These data were not meant for the main purpose of the experiment but should still be good for the nuclear-dependence measurement. They could also help identify which of the two liquid-target flasks is shorter; this was a problem in E866, and it continues in E906. Filling H2 into the LD2 flask contaminates the LD2 target, so even though we used research-grade D2, we did not have 99.9% purity while taking data. When a sample is taken, it is further contaminated by the H2 inside the sample bottle. So what we have at the moment is a measurement that gives us a lower limit on the D2 fraction. Kenichi proposes to use 90% +- 10% in case the measured fraction is 80%; it would be 95% +- 5% if the measurement says 90%.
Paul wants to do more measurements at the end of Run 4: we will purposely contaminate the target flask and then measure the D2 fraction with a proper sampling procedure. But this won't happen today, as we are about to take data.