Added T3/T5 branches for GNN T5 classification #250
Conversation
Let me know if I should address #249 in this PR as well. I think the fix is easy.
Since two new variables are added (are they used anywhere, by the way?), could you please run the checks here to be sure that everything checks out?
}
std::set<unsigned int> T3s_used_in_T5;
std::map<unsigned int, unsigned int> T3_index_map;
// std::map<unsigned int, unsigned int> T5_index_map; // not used
If not used, maybe it should be deleted?
/* We use getMDsFromLS instead, as we only want MDs associated w/ LS candidates
// Loop over minidoublets
nTotalMD += miniDoubletsInGPU.nMDs[idx];
for (unsigned int jdx = 0; jdx < miniDoubletsInGPU.nMDs[idx]; jdx++)
{
    // Get the actual index to the mini-doublet using rangesInGPU
    unsigned int mdIdx = rangesInGPU.miniDoubletModuleIndices[idx] + jdx;
    setGnnNtupleMiniDoublet(event, mdIdx);
}
*/
This also seems to be unused, so better delete it?
ana.tx->pushbackToBranch<float>("t3_4_z", hit4_z);

/* Sigmas for chi2 calculation
 * (stolen from SDL::computeSigmasForRegression and SDL::computeRadiusUsingRegressionk) */
I know that this is copied code but I wouldn't just blindly copy it, so I have a few questions below...
float inv1 = 0.01f/0.009f;
float inv2 = 0.15f/0.009f;
What do these represent?
std::vector<float> sigmas;
float inv1 = 0.01f/0.009f;
float inv2 = 0.15f/0.009f;
// float inv3 = 2.4f/0.009f; // not used
Delete, if unused?
// Category 1: barrel PS flat
if (module_subdet == SDL::Barrel and module_type == SDL::PS and module_side == SDL::Center)
{
    delta1 = inv1;//1.1111f;//0.01;
The commented-out numbers should be deleted? The first one is just the value of inv1
(so why write it out again?), while the second seems arbitrary - at least I can't tell where it comes from...
This comment applies to similar instances below.
/* Despite the type of the module layer of the lower module index,
 * all anchor hits are on the pixel side and all non-anchor hits are
 * on the strip side! */
Is this comment relevant here? If it is, probably its connections should be made clearer?
const float kRinv1GeVf = (2.99792458e-3 * 3.8);
const float k2Rinv1GeVf = kRinv1GeVf / 2.;
Should we just include the constants from here?
float betaIn = __H2F(tripletsInGPU.betaIn[T3]);
float betaOut = __H2F(tripletsInGPU.betaOut[T3]);

// Legacy T4 pt estimate
T4? I guess this is confusing, because we don't have T4s now?
@@ -1042,7 +1292,7 @@ std::tuple<float, float, float, vector<unsigned int>, vector<unsigned int>> pars
  const float ptAv_out = abs(dr_out * k2Rinv1GeVf / sin((betaIn_out + betaOut_out) / 2.));

  // T5 pt is average of the two pt estimates
- const float pt = (ptAv_in + ptAv_out) / 2.;
+ const float pt = (ptAv_in + ptAv_out) / 2.; // this is deprecated, c.f. setQuintupletOutputBranches
Not sure I understand the comment. Is this used? If not, maybe we should delete it and use the up-to-date version?
General comment: I would rather do something like
@@ -181,6 +181,30 @@ void createGnnNtupleBranches()
  ana.tx->createBranch<vector<float>>("LS_sim_vz");
  ana.tx->createBranch<vector<int>>("LS_isInTrueTC");

  // T3 branches
  ana.tx->createBranch<vector<int>>("t5_t3_idx0");
  ana.tx->createBranch<vector<int>>("t5_t3_idx1");
Parallelism is broken here: technically these are t5 branches, so the comment // T3 branches right above them is misleading.
What do you mean by "parallelism is broken"? The comment is a typo and will be fixed with the responses to Manos's comments.
We use the three hits stored in the t3 branches for computing Balaji's Chi^2. Storing them directly makes it very easy to use them as inputs to the GNN, rather than requiring nested lookups (first to the LS, then to the MD). I also figure that there is not much "danger" here for now, as this is just for training the GNN.
This is doable if necessary, but would be more of a headache as explained above.
Added GNN NTuple branches for T5 classification plus some minor fixes to Event.cu