Currently, the implementation of computing losses for link prediction training (in GSgnnLinkPredictionModel.forward) is not friendly to loss functions such as contrastive loss and triplet loss, which require that each positive edge be grouped with its corresponding negative edges.
The current implementation is fragile. LinkPredictContrastiveDotDecoder and LinkPredictContrastiveDistMultDecoder are implemented on the assumption that the same decoder.forward will be called twice, once with the positive graph and once with the negative graph, and that the two graphs are compatible, so we can simply sort the edges in the positive and negative graphs to create <pos, neg> pairs. This design makes strong assumptions about the coupling between the Dataloader, the Decoder, and the loss function. We should find a better implementation.
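One direction, sketched below, is to make the grouping explicit in the loss interface instead of relying on edge ordering: the decoder (or trainer) hands the loss a `(N,)` tensor of positive scores and an `(N, K)` tensor where row i holds the K negatives sampled for positive edge i. The function name and shapes here are assumptions for illustration, not GraphStorm's actual API:

```python
import torch
import torch.nn.functional as F

def grouped_contrastive_loss(pos_scores, neg_scores, temperature=1.0):
    """InfoNCE-style contrastive loss over explicitly grouped scores.

    pos_scores: (N,) scores of N positive edges.
    neg_scores: (N, K) scores of the K negatives sampled for each
        positive edge; row i belongs to positive edge i, so no sorting
        or ordering assumptions between two forward passes are needed.
    """
    # Column 0 of each row is the positive score, columns 1..K are
    # that edge's own negatives: shape (N, K + 1).
    logits = torch.cat([pos_scores.unsqueeze(1), neg_scores], dim=1)
    logits = logits / temperature
    # Cross-entropy with target class 0 pushes each positive score
    # above its grouped negatives.
    targets = torch.zeros(logits.shape[0], dtype=torch.long)
    return F.cross_entropy(logits, targets)
```

With this shape contract, the correlation between the Dataloader and the Decoder is reduced to "negatives are returned per positive edge", which the Dataloader can guarantee directly.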