Hi,
Thank you for making the code public. It's really nice to see!
I think the following line here
GNN_for_EHR/model.py
Line 59 in 65cd210
should be

if concat:
    self.norm = LayerNorm(hidden_features * num_heads)
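(For anyone reading along, a minimal standalone shape check of this point; the names `head_outputs`, `hidden_features`, and `num_heads` are illustrative and this is not the code in model.py. When the outputs of the heads are concatenated, the tensor being normalized has last dimension hidden_features * num_heads, so a LayerNorm built with hidden_features alone raises a shape error.)

```python
import torch
import torch.nn as nn

hidden_features, num_heads, num_nodes = 16, 4, 10

# Simulated per-head attention outputs: one (num_nodes, hidden_features) tensor per head.
head_outputs = [torch.randn(num_nodes, hidden_features) for _ in range(num_heads)]

# concat=True stacks the heads along the feature axis.
concat_out = torch.cat(head_outputs, dim=-1)       # (num_nodes, hidden_features * num_heads)

# So the LayerNorm must be built for the widened dimension.
norm = nn.LayerNorm(hidden_features * num_heads)
print(norm(concat_out).shape)                      # torch.Size([10, 64])

# nn.LayerNorm(hidden_features)(concat_out) would raise a RuntimeError here,
# because normalized_shape=[16] does not match the last dimension 64.
```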
Also, apologies if I have missed this, but if the output of the multi-headed attention is concatenated, shouldn't this be reflected in the size of the shared weights W at successive layers? Currently it is constant at d_in x d, but it should be dK x d at intermediate layers.

GNN_for_EHR/model.py
Line 46 in 65cd210
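(Similarly, a hedged sketch of the second point with made-up dimensions `d_in`, `d`, and `K`; it only tracks shapes and is not the implementation in model.py. Once a layer concatenates K heads of width d, the next layer's shared weight has to map from d * K rather than from d_in.)

```python
import torch
import torch.nn as nn

d_in, d, K, num_nodes = 32, 16, 4, 10   # input dim, per-head dim, number of heads, nodes

x = torch.randn(num_nodes, d_in)

# Layer 1: one weight W1 shared across heads projects d_in -> d.
# (Per-head attention coefficients are omitted; only the shapes matter here.)
W1 = nn.Linear(d_in, d, bias=False)
layer1_out = torch.cat([W1(x) for _ in range(K)], dim=-1)   # (num_nodes, d * K)

# Layer 2: its shared weight now has to accept d * K inputs, i.e. be dK x d;
# a constant d_in x d weight would fail on layer1_out.
W2 = nn.Linear(d * K, d, bias=False)
print(W2(layer1_out).shape)                                 # torch.Size([10, 16])
```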