make cugraph-ops optional for cugraph-gnn packages #99

Open · wants to merge 13 commits into base: branch-25.02
4 changes: 1 addition & 3 deletions conda/environments/all_cuda-118_arch-x86_64.yaml
@@ -33,15 +33,13 @@ dependencies:
- pandas
- pre-commit
- pydantic
- pylibcugraphops==25.2.*,>=0.0.0a0
- pylibraft==25.2.*,>=0.0.0a0
- pytest
- pytest-benchmark
- pytest-cov
- pytest-forked
- pytest-xdist
- pytorch-cuda=11.8
- pytorch>=2.3
- pytorch-gpu>=2.3=*cuda118*
- pytorch_geometric>=2.5,<2.6
- raft-dask==25.2.*,>=0.0.0a0
- rapids-build-backend>=0.3.0,<0.4.0.dev0
4 changes: 1 addition & 3 deletions conda/environments/all_cuda-121_arch-x86_64.yaml
@@ -39,15 +39,13 @@ dependencies:
- pandas
- pre-commit
- pydantic
- pylibcugraphops==25.2.*,>=0.0.0a0
- pylibraft==25.2.*,>=0.0.0a0
- pytest
- pytest-benchmark
- pytest-cov
- pytest-forked
- pytest-xdist
- pytorch-cuda=12.1
- pytorch>=2.3
- pytorch-gpu>=2.3=*cuda120*
- pytorch_geometric>=2.5,<2.6
- raft-dask==25.2.*,>=0.0.0a0
- rapids-build-backend>=0.3.0,<0.4.0.dev0
4 changes: 1 addition & 3 deletions conda/environments/all_cuda-124_arch-x86_64.yaml
@@ -39,15 +39,13 @@ dependencies:
- pandas
- pre-commit
- pydantic
- pylibcugraphops==25.2.*,>=0.0.0a0
- pylibraft==25.2.*,>=0.0.0a0
- pytest
- pytest-benchmark
- pytest-cov
- pytest-forked
- pytest-xdist
- pytorch-cuda=12.4
- pytorch>=2.3
- pytorch-gpu>=2.3=*cuda120*
- pytorch_geometric>=2.5,<2.6
- raft-dask==25.2.*,>=0.0.0a0
- rapids-build-backend>=0.3.0,<0.4.0.dev0
48 changes: 7 additions & 41 deletions dependencies.yaml
@@ -20,7 +20,6 @@ files:
- depends_on_dask_cudf
- depends_on_pylibraft
- depends_on_raft_dask
- depends_on_pylibcugraphops
Member: Can the dependencies in the conda recipes be removed as well?

    - pylibcugraphops ={{ minor_version }}
    - pylibcugraphops ={{ minor_version }}

Member Author: Yes.

Member: @tingyu66 could you please make this change (removing the dependencies on pylibcugraphops in conda recipes) in this PR?

Member Author: @jameslamb Done, thank you!

- depends_on_cupy
- depends_on_pytorch
- depends_on_dgl
@@ -45,7 +44,6 @@ files:
- cuda_version
- docs
- py_version
- depends_on_pylibcugraphops
test_cpp:
output: none
includes:
@@ -116,7 +114,6 @@ files:
table: project
includes:
- depends_on_cugraph
- depends_on_pylibcugraphops
- python_run_cugraph_dgl
py_test_cugraph_dgl:
output: pyproject
@@ -142,7 +139,6 @@ files:
table: project
includes:
- depends_on_cugraph
- depends_on_pylibcugraphops
- depends_on_pyg
- python_run_cugraph_pyg
py_test_cugraph_pyg:
@@ -166,7 +162,6 @@ files:
includes:
- checks
- depends_on_cugraph
- depends_on_pylibcugraphops
- depends_on_dgl
- depends_on_pytorch
- cugraph_dgl_dev
@@ -180,7 +175,6 @@ files:
- checks
- depends_on_cugraph
- depends_on_pyg
- depends_on_pylibcugraphops
- depends_on_pytorch
- cugraph_pyg_dev
- test_python_common
@@ -406,7 +400,6 @@ dependencies:
common:
- output_types: [conda]
packages:
- pytorch>=2.3
- torchdata
- pydantic
specific:
@@ -431,18 +424,16 @@ dependencies:
- *tensordict
- {matrix: null, packages: [*pytorch_pip, *tensordict]}
- output_types: [conda]
# PyTorch will stop publishing conda packages after 2.5.
# Consider switching to conda-forge::pytorch-gpu.
# Note that the CUDA version may differ from the official PyTorch wheels.
matrices:
- matrix: {cuda: "12.1"}
packages:
- pytorch-cuda=12.1
- matrix: {cuda: "12.4"}
- matrix: {cuda: "12.*"}
packages:
- pytorch-cuda=12.4
- matrix: {cuda: "11.8"}
- pytorch-gpu>=2.3=*cuda120*
Contributor (@bdice, Dec 19, 2024): This is already using conda-forge, I think? pytorch-gpu is a conda-forge package, not a pytorch channel package. Also, the latest conda-forge builds are built with CUDA 12.6; CUDA 12.0 is no longer used to build.

Contributor: For compatibility reasons we may want to stick to older builds of pytorch-gpu (built with cuda120) for now. We will hopefully be able to relax this in the future.

Member Author: Yes, this PR switches to conda-forge::pytorch-gpu since the pytorch channel will be discontinued.

> Also, the latest conda-forge builds are built with CUDA 12.6. CUDA 12.0 is no longer used to build.

> For compatibility reasons we may want to stick to older builds of pytorch-gpu (built with cuda120) for now. We will hopefully be able to relax this in the future.

Oh, I had not noticed that the most recent build (_306) is only built against 12.6. I agree with keeping 12.0 for better backward compatibility. However, the CUDA 11 build seems to be missing. Do we have details on their build matrix?

Contributor: CUDA 11 builds were dropped recently. You may need an older version for CUDA 11 compatibility. I also saw this while working on rapidsai/cudf#17475. mamba search -c conda-forge "pytorch=*=cuda118*" indicates the latest version with CUDA 11 support is 2.5.1 build 303. The latest overall is 2.5.1 build 306.

Contributor: For completeness, the latest CUDA 12.0 build was also 2.5.1 build 303.

Member Author: Got it, thanks. It shouldn't be a dealbreaker unless another test component ends up requiring a newer version of torch on CUDA 11 down the line.

Contributor (@bdice, Jan 6, 2025):
pytorch-gpu requires __cuda, and is not installable on systems without a CUDA driver. This makes it impossible to resolve the conda environment needed for devcontainers jobs in CI, which are CPU-only.

Note: many CUDA packages, including RAPIDS, are explicitly designed not to have __cuda as a run requirement, because it makes it impossible to install on a CPU node before using that environment on another system with a GPU.

It looks like if we just use pytorch instead of pytorch-gpu, we still get GPU builds:

CUDA 11 driver present:

CONDA_OVERRIDE_CUDA="11.8" conda create -n test --dry-run pytorch

shows

pytorch  2.5.1  cuda118_py313h40cdc2d_303  conda-forge

CUDA 12 driver present:

CONDA_OVERRIDE_CUDA="12.5" conda create -n test --dry-run pytorch

shows

pytorch  2.5.1  cuda126_py313hae2543e_306  conda-forge

No CUDA driver present:

CONDA_OVERRIDE_CUDA="" mamba create -n test --dry-run pytorch

shows

pytorch  2.5.1  cpu_mkl_py313_h90df46e_108  conda-forge

This should be sufficient. Let's try using just pytorch instead of pytorch-gpu with specific CUDA build selectors.

Contributor: There are two benefits here, if my proposal above works.

  1. devcontainers CI job would get CPU-only builds, which should still be fine for builds
  2. We don't need to specify CUDA versions, so this dependency doesn't have to be "specific" to CUDA 11/12
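If the plain pytorch package works out, the conda matrix entry in dependencies.yaml could collapse to something like the following. This is a sketch of the proposal, not the final change in this PR:

```yaml
# Hypothetical simplification: with no build-string selector, the solver
# picks a CPU or CUDA build via the __cuda virtual package, so no
# CUDA-specific matrix entries are needed.
common:
  - output_types: [conda]
    packages:
      - pytorch>=2.3
```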

Member: I agree, let's try with "pytorch" instead of "pytorch-gpu".

That opens up a risk that there may be situations where the solver chooses a CPU-only version because of some conflict, but hopefully cugraph-pyg can detect that with torch.cuda.is_available() or similar and raise an informative error saying something like "if using conda, try 'conda install cugraph-pyg pytorch-gpu'".

> We don't need to specify CUDA versions, so this dependency doesn't have to be "specific" to CUDA 11/12

I looked into this today... we shouldn't have needed to specify CUDA versions in build strings for pytorch-gpu anyway, as long as we're pinning the cuda-version package somewhere (for example, in the run: dependencies of cugraph).

Looks like pytorch-gpu is == pinned to a specific pytorch.

(screenshot: pytorch-gpu package metadata showing an exact pin on pytorch)

And the pytorch CUDA builds all have run: dependencies on cuda-version.

(screenshot: pytorch CUDA build metadata showing a cuda-version run dependency)

So here in cugraph-pyg, just having cuda-version as a run: dependency would be enough to ensure a compatible pytorch-gpu / pytorch is pulled.
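The detect-and-raise idea above could look something like this sketch. The function name and error message are illustrative, not cugraph-pyg's actual API:

```python
def require_cuda_torch(cuda_available: bool) -> None:
    """Fail fast with an actionable message when a CPU-only PyTorch
    build was resolved (e.g. because of a conda solver conflict)."""
    if not cuda_available:
        raise RuntimeError(
            "A CUDA-enabled PyTorch build is required. "
            "If using conda, try: conda install cugraph-pyg pytorch-gpu"
        )

# In real usage this would be driven by torch itself:
#   import torch
#   require_cuda_torch(torch.cuda.is_available())
```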

Contributor: @jameslamb These simplifications to drop build string info are only possible now with conda-forge, iirc. I believe more complexity was required when we used the pytorch channel, and we probably just carried that over when switching to conda-forge.
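Following the reasoning above, a recipe would only need a cuda-version run pin to get a compatible CUDA build of pytorch. A hedged sketch of a meta.yaml fragment — the exact pins and layout are illustrative, not taken from this PR:

```yaml
# Hypothetical recipe fragment: cuda-version in run requirements
# transitively constrains which pytorch builds the solver may select,
# so no build-string selector on pytorch is needed.
requirements:
  run:
    - cuda-version >=12.0,<13
    - pytorch >=2.3
```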

- matrix: {cuda: "11.*"}
packages:
- pytorch-cuda=11.8
# pytorch only supports certain CUDA versions... skip
# adding pytorch-cuda pinning if any other CUDA version is requested
- pytorch-gpu>=2.3=*cuda118*
- matrix:
packages:

@@ -667,31 +658,6 @@ dependencies:
- pylibcugraph-cu11==25.2.*,>=0.0.0a0
- {matrix: null, packages: [*pylibcugraph_unsuffixed]}

depends_on_pylibcugraphops:
common:
- output_types: conda
packages:
- &pylibcugraphops_unsuffixed pylibcugraphops==25.2.*,>=0.0.0a0
- output_types: requirements
packages:
# pip recognizes the index as a global option for the requirements.txt file
- --extra-index-url=https://pypi.nvidia.com
- --extra-index-url=https://pypi.anaconda.org/rapidsai-wheels-nightly/simple
specific:
- output_types: [requirements, pyproject]
matrices:
- matrix:
cuda: "12.*"
cuda_suffixed: "true"
packages:
- pylibcugraphops-cu12==25.2.*,>=0.0.0a0
- matrix:
cuda: "11.*"
cuda_suffixed: "true"
packages:
- pylibcugraphops-cu11==25.2.*,>=0.0.0a0
- {matrix: null, packages: [*pylibcugraphops_unsuffixed]}

depends_on_cupy:
common:
- output_types: conda
4 changes: 1 addition & 3 deletions python/cugraph-dgl/conda/cugraph_dgl_dev_cuda-118.yaml
@@ -12,13 +12,11 @@ dependencies:
- dglteam/label/th23_cu118::dgl>=2.4.0.th23.cu*
- pre-commit
- pydantic
- pylibcugraphops==25.2.*,>=0.0.0a0
- pytest
- pytest-benchmark
- pytest-cov
- pytest-xdist
- pytorch-cuda=11.8
- pytorch>=2.3
- pytorch-gpu>=2.3=*cuda118*
- tensordict>=0.1.2
- torchdata
name: cugraph_dgl_dev_cuda-118
1 change: 0 additions & 1 deletion python/cugraph-dgl/pyproject.toml
@@ -27,7 +27,6 @@ dependencies = [
"cugraph==25.2.*,>=0.0.0a0",
"numba>=0.57",
"numpy>=1.23,<3.0a0",
"pylibcugraphops==25.2.*,>=0.0.0a0",
] # This list was generated by `rapids-dependency-file-generator`. To make changes, edit ../../dependencies.yaml and run `rapids-dependency-file-generator`.

[project.optional-dependencies]
4 changes: 1 addition & 3 deletions python/cugraph-pyg/conda/cugraph_pyg_dev_cuda-118.yaml
@@ -10,13 +10,11 @@ dependencies:
- cugraph==25.2.*,>=0.0.0a0
- pre-commit
- pydantic
- pylibcugraphops==25.2.*,>=0.0.0a0
- pytest
- pytest-benchmark
- pytest-cov
- pytest-xdist
- pytorch-cuda=11.8
- pytorch>=2.3
- pytorch-gpu>=2.3=*cuda118*
- pytorch_geometric>=2.5,<2.6
- tensordict>=0.1.2
- torchdata
40 changes: 26 additions & 14 deletions python/cugraph-pyg/cugraph_pyg/nn/conv/__init__.py
@@ -11,18 +11,30 @@
# See the License for the specific language governing permissions and
# limitations under the License.

from .gat_conv import GATConv
from .gatv2_conv import GATv2Conv
from .hetero_gat_conv import HeteroGATConv
from .rgcn_conv import RGCNConv
from .sage_conv import SAGEConv
from .transformer_conv import TransformerConv
import warnings

__all__ = [
"GATConv",
"GATv2Conv",
"HeteroGATConv",
"RGCNConv",
"SAGEConv",
"TransformerConv",
]
HAVE_CUGRAPH_OPS = False
try:
import pylibcugraphops
HAVE_CUGRAPH_OPS = True
except ImportError:
pass
except Exception as e:
warnings.warn(f"Unexpected error while importing pylibcugraphops: {e}")

if HAVE_CUGRAPH_OPS:
Member: Why is this PR putting cugraph-ops stuff behind a conditional import instead of completely removing it?

Or, said another way... how are you expecting that someone would have cugraph-pyg=25.2.* and pylibcugraphops=25.2.* installed together? Is this being left here so it's still possible to build and install pylibcugraphops from source?

I'm hoping we can completely stop publishing pylibcugraphops packages in the 25.02 release.

Member Author: The conditional import is there because we planned to migrate cugraph-ops to a new location, and the new package will possibly retain the same name. We understand that RAPIDS won't release cugraph-ops in 25.02. It's more of a placeholder for the migrated package.

Member: Oh interesting, did not know that! Ok, thank you.

Contributor: Since this new home for the code doesn't exist yet, can we remove the warning on line 24? It seems wrong to warn about something you know does not exist.

    from .gat_conv import GATConv
    from .gatv2_conv import GATv2Conv
    from .hetero_gat_conv import HeteroGATConv
    from .rgcn_conv import RGCNConv
    from .sage_conv import SAGEConv
    from .transformer_conv import TransformerConv

    __all__ = [
        "GATConv",
        "GATv2Conv",
        "HeteroGATConv",
        "RGCNConv",
        "SAGEConv",
        "TransformerConv",
    ]
5 changes: 5 additions & 0 deletions python/cugraph-pyg/cugraph_pyg/tests/conftest.py
@@ -43,6 +43,11 @@
gpubenchmark = pytest_benchmark.plugin.benchmark


def pytest_ignore_collect(collection_path, config):
    """Return True to prevent considering this path for collection."""
    # Skip the nn tests, which require cugraph-ops.
    if "nn" in collection_path.name:
        return True

@pytest.fixture(scope="module")
def dask_client():
dask_scheduler_file = os.environ.get("SCHEDULER_FILE")
@@ -23,7 +23,6 @@
from cugraph_pyg.loader import DaskNeighborLoader
from cugraph_pyg.loader import BulkSampleLoader
from cugraph_pyg.data import DaskGraphStore
from cugraph_pyg.nn import SAGEConv as CuGraphSAGEConv

from cugraph.gnn import FeatureStore
from cugraph.utilities.utils import import_optional, MissingModule
@@ -403,15 +402,15 @@ def test_cugraph_loader_e2e_csc(framework: str):
)

if framework == "pyg":
convs = [
torch_geometric.nn.SAGEConv(256, 64, aggr="mean").cuda(),
torch_geometric.nn.SAGEConv(64, 1, aggr="mean").cuda(),
]
SAGEConv = torch_geometric.nn.SAGEConv
else:
convs = [
CuGraphSAGEConv(256, 64, aggr="mean").cuda(),
CuGraphSAGEConv(64, 1, aggr="mean").cuda(),
]
pytest.skip("Skipping tests that requires cugraph-ops")
# SAGEConv = cugraph_pyg.nn.SAGEConv

convs = [
SAGEConv(256, 64, aggr="mean").cuda(),
SAGEConv(64, 1, aggr="mean").cuda(),
]

trim = trim_to_layer.TrimToLayer()
relu = torch.nn.functional.relu
1 change: 0 additions & 1 deletion python/cugraph-pyg/pyproject.toml
@@ -34,7 +34,6 @@ dependencies = [
"numba>=0.57",
"numpy>=1.23,<3.0a0",
"pandas",
"pylibcugraphops==25.2.*,>=0.0.0a0",
"torch-geometric>=2.5,<2.6",
] # This list was generated by `rapids-dependency-file-generator`. To make changes, edit ../../dependencies.yaml and run `rapids-dependency-file-generator`.
