
AI-GCS: Execute at-scale evaluation and tuning campaigns of end-to-end solution #13

Open · 4 tasks

pombredanne opened this issue Dec 23, 2024 · 0 comments
To evaluate, refine, and tune how the code search works, we should run testing, evaluation, bug-fixing, and tuning campaigns at a large enough scale across the whole end-to-end solution, using the test and reference datasets collected previously.

The expected results are:

  • Publish automation/test scripts to run these campaigns (a minimal sketch of such a script follows this list).
  • Create evaluation reports.
  • Apply tuning and updates based on the results of the evaluation campaigns.
  • Then, make a new release of MatchCode.io/PurlDB/ScanCode.io as needed.
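
As a starting point, here is a minimal sketch of what such an automation script could look like. It assumes a JSONL reference dataset where each record carries a resource `sha1` and the `expected_purls` it should match to, and it calls a placeholder HTTP matching endpoint; the endpoint URL, request parameters, and response shape are illustrative assumptions, not the actual MatchCode.io API.

```python
#!/usr/bin/env python3
"""
Sketch of an evaluation-campaign script: run every case in a reference
dataset against a matching service and report aggregate precision/recall.
"""
import json
import sys

import requests

# Placeholder endpoint; substitute the real MatchCode.io/PurlDB API URL.
MATCH_API_URL = "https://matchcode.example.com/api/matching/"


def get_matched_purls(sha1: str) -> set[str]:
    """Query the (assumed) matching endpoint and return matched PURLs."""
    response = requests.get(MATCH_API_URL, params={"sha1": sha1}, timeout=30)
    response.raise_for_status()
    # Assumed response shape: {"results": [{"purl": "pkg:..."}, ...]}
    return {result["purl"] for result in response.json().get("results", [])}


def evaluate(dataset_path: str) -> dict:
    """Run all cases in a JSONL reference dataset and aggregate metrics."""
    true_pos = false_pos = false_neg = 0
    with open(dataset_path) as dataset:
        for line in dataset:
            case = json.loads(line)
            expected = set(case["expected_purls"])
            matched = get_matched_purls(case["sha1"])
            true_pos += len(matched & expected)
            false_pos += len(matched - expected)
            false_neg += len(expected - matched)

    precision = true_pos / (true_pos + false_pos) if true_pos + false_pos else 0.0
    recall = true_pos / (true_pos + false_neg) if true_pos + false_neg else 0.0
    return {"precision": precision, "recall": recall}


if __name__ == "__main__":
    # Usage: python evaluate_matching.py reference_dataset.jsonl
    print(json.dumps(evaluate(sys.argv[1]), indent=2))
```

Running this after each tuning change would produce a small machine-readable report, so successive campaigns can be compared and regressions caught before a new release is cut.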
@pombredanne pombredanne converted this from a draft issue Dec 23, 2024
@pombredanne pombredanne added this to the 4-Eval and packaging milestone Dec 26, 2024