
Multi-GPU inference #12

Open
resi1ience opened this issue Jul 15, 2024 · 5 comments

@resi1ience

Thank you for your great work!
However, I am wondering how to do inference on multiple GPUs.
Looking forward to your reply!

@kcz358
Contributor

kcz358 commented Jul 16, 2024

You can set device_map=auto if you want to shard the weights across multiple GPUs.
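
For reference, a minimal sketch of what that can look like with a Hugging Face-style from_pretrained call (LongVA's actual loading path may differ; the dtype below is an assumption):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal sketch: device_map="auto" lets accelerate shard the weights
# across all visible GPUs instead of loading everything onto one device.
# The checkpoint id is taken from this thread; the dtype is an assumption.
model = AutoModelForCausalLM.from_pretrained(
    "lmms-lab/LongVA-7B",
    device_map="auto",
    torch_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained("lmms-lab/LongVA-7B")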

@resi1ience
Author

> You can set device_map=auto if you want to shard the weights across multiple GPUs.

Thank you for your reply!

@yukio0321

I also get an error when I run on multiple GPUs with the following config:

accelerate launch --num_processes 1 --main_process_port 12345 -m lmms_eval \
    --model longva \
    --model_args pretrained=lmms-lab/LongVA-7B,conv_template=qwen_1_5,model_name=llava_qwen,device_map=auto \
    --tasks mme \
    --batch_size 1 \
    --log_samples \
    --log_samples_suffix mme_longva \
    --output_path ./logs/

The error is shown as follows:

Traceback (most recent call last):
  File "/root/data/lmms-eval/lmms_eval/__main__.py", line 202, in cli_evaluate
    results, samples = cli_evaluate_single(args)
  File "/root/data/lmms-eval/lmms_eval/__main__.py", line 298, in cli_evaluate_single
    results = evaluator.simple_evaluate(
  File "/root/data/lmms-eval/lmms_eval/utils.py", line 434, in _wrapper
    return fn(*args, **kwargs)
  File "/root/data/lmms-eval/lmms_eval/evaluator.py", line 135, in simple_evaluate
    results = evaluate(
  File "/root/data/lmms-eval/lmms_eval/utils.py", line 434, in _wrapper
    return fn(*args, **kwargs)
  File "/root/data/lmms-eval/lmms_eval/evaluator.py", line 639, in evaluate
    while len([file for file in os.listdir(cli_args.output_path) if file.endswith('metric_eval_done.txt')]) < lm.accelerator.num_processes:
AttributeError: 'LongVA' object has no attribute 'accelerator'

I wonder how to solve this problem.
Looking forward to your reply!

@kcz358
Contributor

kcz358 commented Jul 19, 2024

Hi @yukio0321, I think I fixed this and added the self.accelerator to LongVA in lmms-eval. Can you try it again?
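
For context, the evaluator loop in the traceback reads lm.accelerator.num_processes, so the model wrapper needs an accelerator attribute even when the weights are sharded with device_map=auto on a single process. A hedged sketch of the shape of such a fix (simplified; not the actual lmms-eval patch):

from accelerate import Accelerator

class LongVA:
    # Simplified stand-in for lmms_eval's LongVA wrapper; the real class
    # takes many more arguments and handles model loading itself.
    def __init__(self, pretrained, device_map="auto", **kwargs):
        # Giving the wrapper an accelerator attribute lets the evaluator's
        # synchronization check (lm.accelerator.num_processes) succeed.
        self.accelerator = Accelerator()
        self.device_map = device_map
        # ... model loading with device_map would happen here ...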

@yukio0321

> Hi @yukio0321, I think I fixed this and added the self.accelerator to LongVA in lmms-eval. Can you try it again?

Thank you. My problem has been solved.
