Code for compared methods #12
Comments
Hi! The compared methods are all from their official repos, and their usage is explained in T2I-CompBench.
Thank you!
Yes, the BLIP-VQA score is the average of the results in "vqa_result.json". We have updated BLIPvqa_eval/BLIP_vqa.py (L117-120) to perform the averaging. Thank you!
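A minimal sketch of that averaging step, assuming "vqa_result.json" is a JSON list whose entries carry a score in an "answer" field (the field name and its string-encoded value are assumptions, not confirmed in this thread):

```python
import json

# Hypothetical sketch: average the per-entry scores in vqa_result.json.
# The "answer" key and its string-encoded numeric value are assumptions.
with open("vqa_result.json") as f:
    results = json.load(f)

scores = [float(r["answer"]) for r in results]
blip_vqa_score = sum(scores) / len(scores)
print(f"BLIP-VQA score: {blip_vqa_score:.4f}")
```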
Hello! And early Happy New Year! As you said, I can see the B-VQA, CLIP, UniDet, and MiniGPT-CoT evals in your project, but B-CLIP and B-VQA-n are not in the project, right? I just want to confirm this. Thank you for this work; it helps a lot!
Thank you for your response. For the UniDet evaluation, there is a single file after evaluation called vqa_result.json; however, for BLIP there are additional files such as color_test.json that provide a one-to-one mapping between each image and its result. Is it possible to add something similar for UniDet as well?
Hello! The mapping file has been added in UniDet_eval/determine_position_for_eval.py, and it is saved as mapping.json in the same directory as vqa_result.json. Thank you!
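A minimal sketch of how the two files could be joined to recover the image-to-score mapping, assuming mapping.json maps each question ID to an image filename and vqa_result.json is a list of {"question_id", "answer"} entries (both layouts are assumptions):

```python
import json

# Hypothetical sketch: pair each image with its UniDet score by joining
# mapping.json (question_id -> image filename, assumed layout) with
# vqa_result.json (list of {"question_id": ..., "answer": ...}, assumed layout).
with open("mapping.json") as f:
    mapping = json.load(f)
with open("vqa_result.json") as f:
    results = json.load(f)

image_scores = {
    mapping[str(r["question_id"])]: float(r["answer"]) for r in results
}
for image, score in image_scores.items():
    print(image, score)
```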
Hello! For B-VQA-n: as explained in the paper, it is BLIP-VQA-naive (denoted as B-VQA-n), the naive variant of BLIP-VQA. Hope this helps!
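To make the distinction concrete, here is a hedged sketch contrasting decomposed BLIP-VQA scoring with a naive single-question variant. `vqa_yes_probability` is a hypothetical helper standing in for a real BLIP-VQA call, and the decomposition shown is an assumption about the method, not code from this repo:

```python
# Hedged sketch contrasting BLIP-VQA with its naive variant B-VQA-n.
# vqa_yes_probability is a HYPOTHETICAL placeholder for a BLIP-VQA call
# that returns P("yes") for a question about an image.

def vqa_yes_probability(image_path: str, question: str) -> float:
    """Placeholder for a real BLIP-VQA inference call (not implemented here)."""
    raise NotImplementedError

def blip_vqa_score(image_path: str, noun_phrases: list[str]) -> float:
    # Decomposed variant: one question per noun phrase, e.g. "a green bench?"
    score = 1.0
    for phrase in noun_phrases:
        score *= vqa_yes_probability(image_path, f"{phrase}?")
    return score

def blip_vqa_naive_score(image_path: str, prompt: str) -> float:
    # Naive variant (B-VQA-n): the full prompt asked as a single question.
    return vqa_yes_probability(image_path, f"{prompt}?")
```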
Thank you very much!
The checkpoints have been added in GORS_finetune/checkpoint. Thank you!
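A hedged sketch of how such a checkpoint might be loaded with diffusers, assuming it holds LoRA attention-processor weights for a Stable Diffusion pipeline (the base model ID and the LoRA format are assumptions, not confirmed by this thread):

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumptions: the GORS checkpoint is a LoRA attention-processor checkpoint
# and the base model is Stable Diffusion v2; adjust both to match the repo.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2", torch_dtype=torch.float16
).to("cuda")
pipe.unet.load_attn_procs("GORS_finetune/checkpoint")  # path from this thread

image = pipe("a green bench and a red car").images[0]
image.save("sample.png")
```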
Hello, thank you for releasing the code.
Thanks for your questions!
Thank you very much for your reply; it is very clear. :)
Hi,
Thank you for your interesting work and in-depth analysis.
I would like to ask about the possibility of releasing the code of the compared methods in Table 2.