Add linkcheck target #23

Merged: 1 commit, Mar 12, 2024
21 changes: 18 additions & 3 deletions README.md
@@ -1,9 +1,15 @@
# NVIDIA Network Operator Documentation

Note:
For official documentation, go to: https://docs.nvidia.com/networking/software/cloud-orchestration/index.html
For official documentation, go to:

Latest documentation generated from this repo: https://mellanox.github.io/network-operator-docs/
https://docs.nvidia.com/networking/software/cloud-orchestration/index.html


Latest documentation generated from this repo:

https://mellanox.github.io/network-operator-docs/

## Build docs

To generate the docs run:

@@ -13,3 +19,12 @@ To generate the docs run:

Generated files will be under `_build/docs/nvidia_network_operator/latest/`

## Check external links

To check external links, run:

```bash
./repo.sh docs -b linkcheck
```

Note that some links with anchors are reported as broken even though they work correctly.
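If those anchor false positives get noisy, one option is to filter the report after the run. This is only a sketch: the `output.txt` path below is an assumption based on this repo's build layout, not something the PR specifies.

```shell
#!/bin/sh
# Filter the linkcheck report, dropping anchor-only failures.
# REPORT path is an assumption based on this repo's build output layout.
REPORT="_build/docs/nvidia_network_operator/latest/linkcheck/output.txt"
grep "broken" "$REPORT" 2>/dev/null | grep -v "Anchor" \
  || echo "no non-anchor broken links found"
```

When the report file is absent or contains only anchor failures, the fallback message is printed and the script still exits cleanly.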
2 changes: 1 addition & 1 deletion docs/advanced-configurations.rst
@@ -96,7 +96,7 @@ HTTP Proxy Configuration for Openshift
--------------------------------------

For Openshift, it is recommended to use the cluster-wide Proxy object to provide proxy information for the cluster.
Please follow the procedure described in `Configuring the Cluster-wide Proxy <https://docs.openshift.com/container-platform/4.14/networking/enable-cluster-wide-proxy.html>`_ via the Red Hat Openshift public documentation. The NVIDIA Network Operator will automatically inject proxy related ENV into the driver container, based on the information present in the cluster-wide Proxy object.
Please follow the procedure described in `Configuring the Cluster-wide Proxy <https://docs.openshift.com/container-platform/latest/networking/enable-cluster-wide-proxy.html>`_ via the Red Hat Openshift public documentation. The NVIDIA Network Operator will automatically inject proxy related ENV into the driver container, based on the information present in the cluster-wide Proxy object.
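As an illustration of where that proxy information lives (not part of the official procedure; assumes the `oc` CLI and cluster access are available), the cluster-wide Proxy object can be inspected directly:

```shell
# Inspect the cluster-wide Proxy object the operator reads proxy ENV from.
# Guarded so the snippet degrades gracefully on machines without the oc CLI.
if command -v oc >/dev/null 2>&1; then
  oc get proxy/cluster -o yaml
else
  echo "oc CLI not found; run this from a host with cluster access"
fi
```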

------------------------
HTTP Proxy Configuration
2 changes: 1 addition & 1 deletion docs/customizations/helm.rst
@@ -815,7 +815,7 @@ NVIDIA IPAM Plugin


.. warning::
A supported X.509 certificate management system must be available in the cluster to enable the validation webhook. Currently, the supported systems are `certmanager <https://cert-manager.io/>`_ and `Openshift certificate management <https://docs.openshift.com/container-platform/4.13/security/certificates/service-serving-certificate.html>`_.
A supported X.509 certificate management system must be available in the cluster to enable the validation webhook. Currently, the supported systems are `certmanager <https://cert-manager.io/>`_ and `Openshift certificate management <https://docs.openshift.com/container-platform/latest/security/certificates/service-serving-certificate.html>`_.

============================
NVIDIA NIC Feature Discovery
10 changes: 5 additions & 5 deletions docs/k8s-baremetal-ethernet.rst
@@ -43,7 +43,7 @@ Kubernetes Prerequisites
Install Kubernetes version 1.18 or newer. You may use the following references to install Kubernetes with deployment tools:

- `Bootstrapping clusters with kubeadm <https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/>`_
- `Installing Kubernetes with Kubespray <https://kubernetes.io/docs/setup/production-environment/tools/kubespray/>`_
- `Installing Kubernetes with Kubespray <https://kubespray.io/>`_

It is recommended to use Kubernetes version 1.18 with the following features enabled. This ensures the best NUMA alignment between the NIC PCIe device and the CPU, and makes better use of SR-IOV performance:

@@ -96,7 +96,7 @@ RoCE Namespace Aware
Prior to kernel version 5.3.0, all RDMA devices were visible in all network namespaces.
Kernel version 5.3.0 and NVIDIA OFED version 4.7 introduced network namespace isolation of RDMA devices.
When the RDMA system is set to exclusive, this feature ensures that the RDMA device is bound to a particular net namespace and visible only to it.
To learn how to enable RoCE Namespace Aware by using RDMA CNI, see `here <https://github.com/Mellanox/rdma-cni/blob/v1.0.0/README.md>`_.
To learn how to enable RoCE Namespace Aware by using RDMA CNI, see `here <https://github.com/k8snetworkplumbingwg/rdma-cni/blob/v1.0.0/README.md>`_.

1. Set the RDMA system to "exclusive". This should be done during the host preparation stage:
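A sketch of that host-side step, assuming the `rdma` utility from iproute2 is installed; it is guarded so it degrades to a message where the tool or root privileges are missing:

```shell
# Show the current RDMA subsystem namespace mode, then set it to exclusive.
# Requires the rdma tool from iproute2 and root privileges to change the mode.
if command -v rdma >/dev/null 2>&1; then
  rdma system show
  rdma system set netns exclusive || echo "setting exclusive mode requires root"
else
  echo "rdma tool not found; install the iproute2 package first"
fi
```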

@@ -286,7 +286,7 @@ To enable OVN Kubernetes CNI with ConnectX, see `OVN Kubernetes CNI with OVS off
Antrea
------

For Antrea CNI configuration instructions, see `Antrea CNI with OVS Offload <https://github.com/vmware-tanzu/antrea/blob/v0.10.0/docs/ovs-offload.md>`_.
For Antrea CNI configuration instructions, see `Antrea CNI with OVS Offload <https://github.com/antrea-io/antrea/blob/v0.10.0/docs/ovs-offload.md>`_.

================
RoCE Shared Mode
@@ -301,7 +301,7 @@ Kubernetes Prerequisite
Install Kubernetes Version 1.16 or above. You may use the following references when installing Kubernetes with deployment tools:

- `Bootstrapping Clusters with Kubeadm <https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/>`_
- `Installing Kubernetes with Kubespray <https://kubernetes.io/docs/setup/production-environment/tools/kubespray/>`_
- `Installing Kubernetes with Kubespray <https://kubespray.io/>`_

----------------------------------
Deploying the Shared Device Plugin
@@ -329,7 +329,7 @@ Create the `rdma-shared.yaml` configMap for the shared device plugin:
kubectl create -f rdma-shared.yaml
kubectl create -f https://raw.githubusercontent.com/Mellanox/k8s-rdma-shared-dev-plugin/master/images/k8s-rdma-shared-dev-plugin-ds.yaml

For advanced macvlan CNI configuration, see the following `instructions <https://github.com/containernetworking/plugins/tree/master/plugins/main/macvlan>`_.
For advanced macvlan CNI configuration, see the following `instructions <https://github.com/containernetworking/plugins/tree/main/plugins/main/macvlan>`_.
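For context, a minimal macvlan network attachment might look like the following. This is a hypothetical example: the `ens2f0` master interface and the subnet are placeholders, not values from this guide, so verify them against the linked instructions before use.

```shell
# Write a hypothetical macvlan NetworkAttachmentDefinition.
# The master interface and subnet below are placeholders.
cat > macvlan-net.yaml <<'EOF'
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-net
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "macvlan",
    "master": "ens2f0",
    "mode": "bridge",
    "ipam": { "type": "host-local", "subnet": "192.168.2.0/24" }
  }'
EOF
# Then apply it to the cluster:
# kubectl apply -f macvlan-net.yaml
echo "wrote macvlan-net.yaml"
```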

Supported IPAM (IP Address Management) operations:

2 changes: 1 addition & 1 deletion docs/multi-network-pod.rst
@@ -35,7 +35,7 @@ Below is a list of well-known cluster network CNI providers:
* - Calico
- https://github.com/projectcalico/calico
* - Flannel
- https://github.com/coreos/flannel
- https://github.com/flannel-io/flannel
* - Canal
- https://github.com/projectcalico/canal
* - ovn-kubernetes
6 changes: 5 additions & 1 deletion repo.toml
@@ -12,4 +12,8 @@ copyright_start = 2024
social_media_set = []
social_media = []
favicon = "${root}/assets/favicon.ico"
logo = "${root}/assets/nvidia-logo-white.png"
logo = "${root}/assets/nvidia-logo-white.png"

[repo_docs.builds.linkcheck]
build_by_default = false
output_format = "linkcheck"