
fix reshaping unet if timestep is 0d tensor #1083

Merged: 1 commit into huggingface:main on Dec 19, 2024

Conversation

@eaidova (Collaborator) commented on Dec 19, 2024

What does this PR do?

While testing models, I found an issue with reshaping the unet in the diffusion pipeline. The reshaping code expects the timestep to be a 1d tensor (torch.tensor([5])), but in some cases it is represented as a 0d tensor (torch.tensor(5)), and reshaping then fails with:

  File "/home/ea/work/py311/lib/python3.11/site-packages/optimum/intel/openvino/modeling_diffusion.py", line 661, in _reshape_unet
    shapes[inputs][0] = 1
    ~~~~~~~~~~~~~~^^^
RuntimeError: Exception from src/core/src/shape_util.cpp:65:
Accessing out-of-range dimension
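
For illustration, a minimal sketch of the kind of guard that avoids this failure (the helper name and its placement are hypothetical, not the actual patch): promote a 0d timestep to 1d before the reshape logic indexes its shape.

  import torch

  def normalize_timestep(timestep: torch.Tensor) -> torch.Tensor:
      # Hypothetical helper: a 0d tensor such as torch.tensor(5) has an
      # empty shape, so indexing dimension 0 during reshaping raises an
      # out-of-range error; promoting it to 1d (torch.tensor([5])) first
      # keeps the reshape code working for both representations.
      if timestep.ndim == 0:
          timestep = timestep.unsqueeze(0)
      return timestep

  assert normalize_timestep(torch.tensor(5)).shape == (1,)
  assert normalize_timestep(torch.tensor([5])).shape == (1,)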

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you make sure to update the documentation with your changes?
  • Did you write any new necessary tests?

@HuggingFaceDocBuilderDev commented:

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@echarlaix echarlaix merged commit cda4908 into huggingface:main Dec 19, 2024
19 of 22 checks passed
AlexKoff88 pushed a commit that referenced this pull request on Dec 23, 2024
* Support AWQ models

* Add tests

* Add dependencies

* Fix tests

* enable awq export only if ov support it

* fix style (#2)

* disable awq and gptq install for old torch (#3)

* fix style

* disable autogptq and autoawq install for old transformers testing

* separate common quant models patching and gptq (#4)

* disable windows install (#5)

* separate common quant models patching and gptq

* disable awq windows

* skip logits check for quantized models (#6)

* fix test after rebase

* fix testing condition for 2024.6 and unpatch in case if failed

* Fix qwen2-vl tests (#1084)

* Skip private model loading test for external contributors (#1082)

* Fix reshaping unet if timestep is 0d tensor (#1083)

* Disable kv cache compression for fp vlm (#1080)

* add necessary packages in test_openvino_full

* fix code style after rebase (#7)

---------

Co-authored-by: eaidova <[email protected]>
Co-authored-by: Nikita Savelyev <[email protected]>
Co-authored-by: Ella Charlaix <[email protected]>