Selecting GPUs and compute backends #428

Open · farooqkz opened this issue on Dec 28, 2024 · 5 comments

@farooqkz

Is your feature request related to a problem? Please describe.

These days many computers, including laptops and workstations, come with multiple GPUs, and each GPU usually has multiple APIs available for offloading computation. For instance, most GPUs support both OpenCL and Vulkan for compute, and Intel Arc GPUs additionally have XMX cores built specifically for matrix operations.

Describe the solution you'd like

It would be nice to have a mechanism for selecting between the CPU and a GPU, and, when there are multiple GPUs, which GPU to use. There should be a global setting, and users should be able to override it per model. Sensible defaults should also be in place, such as always trying to use a GPU if one is available.
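
A rough sketch (all names hypothetical, not an existing Alpaca API) of what such a settings structure could look like:

```python
# Hypothetical sketch of a global-plus-per-model accelerator setting.
from __future__ import annotations

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AcceleratorChoice:
    device: str = "auto"               # "auto" | "cpu" | "gpu"
    gpu_index: Optional[int] = None    # which GPU, when several are present
    backend: Optional[str] = None      # e.g. "cuda", "rocm", "vulkan", "opencl"

@dataclass
class ComputeSettings:
    global_default: AcceleratorChoice = field(default_factory=AcceleratorChoice)
    per_model: dict[str, AcceleratorChoice] = field(default_factory=dict)

    def resolve(self, model: str) -> AcceleratorChoice:
        # A per-model override wins; otherwise fall back to the global default.
        return self.per_model.get(model, self.global_default)
```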

Describe alternatives you've considered

Not much of an alternative comes to mind.

Additional context
In addition to GPUs, some PCs, and even cheap SBCs, come with an NPU or TPU. This should be kept in mind.

farooqkz added the enhancement label on Dec 28, 2024
@Jeffser (Owner) commented Dec 28, 2024

Hi, the backend for Alpaca is Ollama, and as far as I know Ollama is only compatible with ROCm and CUDA (no Vulkan / OpenCL)

Alpaca does provide some options in the preferences dialog to modify which GPU is used, for example CUDA_VISIBLE_DEVICES and HIP_VISIBLE_DEVICES

@Jeffser (Owner) commented Dec 28, 2024

I just noticed Ollama doesn't use HIP_VISIBLE_DEVICES anymore; I'm going to fix that.

@Jeffser (Owner) commented Dec 28, 2024

4d50b7d

Alright, that's fixed.

I could make a GPU / CPU selector in the preferences dialog, but it could take a while to get it working correctly.

For now, using those overrides is the best way of configuring Ollama.
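
For illustration, a minimal sketch of how those overrides work outside Alpaca (assuming `ollama` is on the PATH; the device indices are examples, and the ROCR variable name is taken from Ollama's GPU docs): the `*_VISIBLE_DEVICES` variables have to be set in the environment of the `ollama serve` process before it starts.

```python
# Launch Ollama restricted to a specific GPU by filtering visible devices.
import os
import subprocess

env = os.environ.copy()
env["CUDA_VISIBLE_DEVICES"] = "0"     # NVIDIA: expose only the first GPU
# env["ROCR_VISIBLE_DEVICES"] = "0"   # AMD: newer Ollama reads this instead of HIP_VISIBLE_DEVICES
# env["CUDA_VISIBLE_DEVICES"] = ""    # hiding every GPU should force CPU-only inference

subprocess.Popen(["ollama", "serve"], env=env)
```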

@ossenthi commented Jan 2, 2025

I hadn't used the program for a while, but I started using it again about a week ago and noticed it no longer uses my Nvidia RTX mobile Max-Q GPU. I have tried everything the guide told me to. I have CUDA installed, and it works for another ollama-cuda setup. We need a better way to diagnose problems when the app fails to utilize the GPU.
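
A small diagnostic sketch (assuming a recent Ollama serving its default API on localhost:11434, which exposes `GET /api/ps`): for each loaded model, `size_vram` tells you how much of it actually lives on the GPU; 0 means Ollama fell back to the CPU.

```python
# Check whether loaded Ollama models are running on the GPU or the CPU.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:11434/api/ps") as resp:
    data = json.load(resp)

for model in data.get("models") or []:
    size = model.get("size", 0)
    size_vram = model.get("size_vram", 0)
    pct = 100 * size_vram / size if size else 0
    print(f"{model['name']}: {pct:.0f}% in VRAM ({'GPU' if size_vram else 'CPU'})")
```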

@farooqkz (Author) commented Jan 5, 2025

> Hi, the backend for Alpaca is Ollama, and as far as I know Ollama is only compatible with ROCm and CUDA (no Vulkan / OpenCL)
>
> Alpaca does provide some options in the preferences dialog to modify which GPU is used, for example CUDA_VISIBLE_DEVICES and HIP_VISIBLE_DEVICES

There is a PR to add Vulkan support to Ollama. With that, most multi-GPU setups would also have multiple backends available, since most GPUs, even ARM ones, support Vulkan.

Edit:

I just looked at the PR more thoroughly, and it seems the Ollama team members are simply ignoring it...
