docs: Remove maintainer CUDA builds knowledge base section
* In keeping with directing all CUDA build information to the cuda-feedstock
  user guides, remove almost all information from the CUDA builds section of
  the maintainer Knowledge Base and instead direct people to the user guides.
   - c.f. https://github.com/conda-forge/cuda-feedstock/blob/main/recipe/README.md
matthewfeickert committed Jan 8, 2025
1 parent 35646a1 commit f53144c
Showing 1 changed file with 3 additions and 49 deletions.
52 changes: 3 additions & 49 deletions docs/maintainer/knowledge_base.md
@@ -2006,55 +2006,9 @@ if you're using a `c_stdlib_version` of `2.28`, set it to `alma8`.
## CUDA builds

Although the provisioned CI machines do not feature a GPU, conda-forge does provide mechanisms
to build CUDA-enabled packages. These mechanisms involve several packages:

- `cudatoolkit`: The runtime libraries for the CUDA toolkit. This is what end users will end
  up installing alongside your package.
- `nvcc`: Nvidia's EULA does not allow redistribution of the compilers and drivers, so instead we
  provide a wrapper package that locates the CUDA installation on the system. The main role of this
  package is to set some environment variables (`CUDA_HOME`, `CUDA_PATH`, `CFLAGS` and others)
  and to wrap the real `nvcc` executable so that it is invoked with some extra command line arguments
  (see the sketch after this list).

In practice, to enable CUDA on your package, add `{{ compiler('cuda') }}` to the `build`
section of your requirements and rerender. The matching `cudatoolkit` will be added to the `run`
requirements automatically.
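
As a sketch, a recipe's `meta.yaml` might contain an excerpt like the following; the `c` compiler and the `python` entries are placeholders for whatever your package already requires:

```yaml
requirements:
  build:
    # Existing compilers stay as they are; the CUDA compiler is added alongside them.
    - {{ compiler('c') }}
    - {{ compiler('cuda') }}
  host:
    - python   # placeholder for your existing host requirements
  run:
    - python   # the matching cudatoolkit is added here automatically on rerender
```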

On Linux, CMake users are required to use `${CMAKE_ARGS}` so CMake can find CUDA correctly. For example:

```shell-session
mkdir build && cd build
cmake ${CMAKE_ARGS} ${SRC_DIR}
make
```

:::note

**How is CUDA provided at the system level?**

- On Linux, Nvidia provides official Docker images, which we then
[adapt](https://github.com/conda-forge/docker-images) to conda-forge's needs.
- On Windows, the compilers need to be installed for every CI run. This is done through the
  [conda-forge-ci-setup](https://github.com/conda-forge/conda-forge-ci-setup-feedstock/) scripts.
  Note that the Nvidia installer won't install the drivers because no GPU is present on the machine.

**How is cudatoolkit selected at install time?**

Conda exposes the maximum CUDA version supported by the installed Nvidia drivers through a virtual package
named `__cuda`. By default, `conda` will install the highest version available
for the packages involved. To override this behaviour, you can define a `CONDA_OVERRIDE_CUDA` environment
variable. More details are available in the
[Conda docs](https://docs.conda.io/projects/conda/en/stable/user-guide/tasks/manage-virtual.html#overriding-detected-packages).
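
For example, to pretend the installed driver supports at most CUDA 11.2 (the version number and `your-gpu-package` name are illustrative):

```shell-session
# Inspect the detected virtual packages (including __cuda):
conda info

# Override the detected driver capability for this install:
CONDA_OVERRIDE_CUDA="11.2" conda install your-gpu-package
```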

Note that prior to conda v4.8.4, `__cuda` versions were not part of the constraints, so you would always
get the latest one, regardless of the supported CUDA version.

If for some reason you want to install a specific version, you can use:

```shell-session
conda install your-gpu-package cudatoolkit=10.1
```

:::
to build CUDA-enabled packages.
See the [relevant CUDA user guides](https://github.com/conda-forge/cuda-feedstock/blob/main/recipe/README.md)
for more information.

<a id="testing-the-packages"></a>
