Refactor the Link prediction decoder and loss function design. #629

Open
classicsong opened this issue Nov 8, 2023 · 0 comments
Labels
enhancement New feature or request


@classicsong
Contributor

Currently, the implementation that computes losses for link prediction training (in GSgnnLinkPredictionModel.forward) is not friendly to loss functions such as contrastive loss and triplet loss, which require that each positive edge is grouped with its corresponding negative edges.
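
For illustration only, here is a minimal sketch (not GraphStorm's actual loss code) of an InfoNCE-style contrastive loss. It is only well defined if row i of the negative scores contains exactly the negatives sampled for positive edge i, which is the grouping requirement described above. The tensor names and shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(pos_score, neg_score, temperature=1.0):
    """InfoNCE-style loss sketch.

    pos_score: (N,)   score of each positive edge.
    neg_score: (N, K) scores of the K negatives sampled for the
               corresponding positive edge; row i must belong to
               positive edge i for the loss to make sense.
    """
    # Column 0 of each row holds the positive score.
    logits = torch.cat([pos_score.unsqueeze(1), neg_score], dim=1) / temperature
    # The target class for every row is index 0, i.e. the positive column.
    labels = torch.zeros(logits.shape[0], dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)
```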

The current implementation is tricky: LinkPredictContrastiveDotDecoder and LinkPredictContrastiveDistMultDecoder are implemented under the assumption that the same decoder.forward will be called twice, once on the positive graph and once on the negative graph, and that the two graphs are compatible, so we can simply sort the edges in the positive and negative graphs to create <pos, neg> pairs. This makes strong assumptions about the coupling between the dataloader, the decoder, and the loss function. We should find a better design.
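
One possible direction, sketched under the assumption that the decoder can see the positive and corresponding negative destination embeddings in a single call (the class and field names below are hypothetical, not an existing GraphStorm API): have the decoder return scores already paired per positive edge, so the loss function no longer depends on the dataloader producing compatible positive/negative graphs or on decoder.forward being called twice in a particular order.

```python
from dataclasses import dataclass
import torch

@dataclass
class LPScoreOutput:
    """Hypothetical container a refactored decoder could return:
    scores are paired per positive edge instead of coming from
    two independent forward calls."""
    pos_score: torch.Tensor  # shape (N,)
    neg_score: torch.Tensor  # shape (N, K), row i = negatives of positive edge i

class PairedLPDecoder(torch.nn.Module):
    """Sketch: score positive and negative destinations in one call."""
    def forward(self, src_emb, pos_dst_emb, neg_dst_emb):
        # src_emb: (N, D), pos_dst_emb: (N, D), neg_dst_emb: (N, K, D)
        pos_score = (src_emb * pos_dst_emb).sum(dim=-1)
        neg_score = torch.einsum("nd,nkd->nk", src_emb, neg_dst_emb)
        return LPScoreOutput(pos_score=pos_score, neg_score=neg_score)
```

With an output like this, a contrastive or triplet loss can consume the <pos, neg> pairing directly, without relying on edge ordering in the sampled graphs.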

@classicsong classicsong added the enhancement New feature or request label Nov 8, 2023