Some deployments use Kyverno to enforce more restrictive policies, like the rule below, and therefore require a dedicated service account with the rights needed to mount the cluster's kubeconfig:
policies.kyverno.io/description: >-
Apply CIS benchmark rule 5.1.5: The default service account should not be used
to ensure that rights granted to applications can be more easily audited and reviewed.
Where access to the Kubernetes API from a pod is required, a specific service account should be created for that pod,
and rights granted to that service account. The default service account should be configured such that it does not provide
a service account token and does not have any explicit rights assignments.
with the following setting applied to the default service account:
automountServiceAccountToken: false
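For reference, the resulting default service account looks roughly like this (a minimal sketch; the namespace is illustrative and the same applies to the other provider namespace):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: cabpck-system   # likewise for cacpck-system
automountServiceAccountToken: false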
Currently, in release 0.2.0, there is no dedicated service account for the bootstrap and control plane controllers, so the deployment fails with:
$> kubectl logs -n cacpck-system deployment/cacpck-controller-manager
2025-01-13T12:54:30Z ERROR controller-runtime.client.config unable to load in-cluster config {"error": "open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory"}
sigs.k8s.io/controller-runtime/pkg/client/config.loadConfig.func1
sigs.k8s.io/[email protected]/pkg/client/config/config.go:133
sigs.k8s.io/controller-runtime/pkg/client/config.loadConfig
sigs.k8s.io/[email protected]/pkg/client/config/config.go:155
sigs.k8s.io/controller-runtime/pkg/client/config.GetConfigWithContext
sigs.k8s.io/[email protected]/pkg/client/config/config.go:97
sigs.k8s.io/controller-runtime/pkg/client/config.GetConfig
sigs.k8s.io/[email protected]/pkg/client/config/config.go:77
sigs.k8s.io/controller-runtime/pkg/client/config.GetConfigOrDie
sigs.k8s.io/[email protected]/pkg/client/config/config.go:175
main.main
./main.go:78
runtime.main
runtime/proc.go:271
2025-01-13T12:54:30Z ERROR controller-runtime.client.config unable to get kubeconfig {"error": "invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable", "errorCauses": [{"error": "no configuration has been provided, try setting KUBERNETES_MASTER environment variable"}]}
sigs.k8s.io/controller-runtime/pkg/client/config.GetConfigOrDie
sigs.k8s.io/[email protected]/pkg/client/config/config.go:177
main.main
./main.go:78
runtime.main
runtime/proc.go:271
Note: this issue appears when -n cacpck-system serviceaccount/default and -n cabpck-system serviceaccount/default have automountServiceAccountToken: false set.
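This can be checked with something like the following (assuming standard kubectl access to the management cluster):

kubectl get serviceaccount default -n cabpck-system -o yaml
kubectl get serviceaccount default -n cacpck-system -o yaml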
For example, in the RKE2 case, https://github.com/rancher/cluster-api-provider-rke2/releases/download/v0.10.0/bootstrap-components.yaml includes rke2-bootstrap-manager, which has the correct cluster role bindings. Would it be possible to also add a service account for the CK8S deployment?
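For illustration, a minimal sketch of what such an addition could look like for the bootstrap provider (all names below are assumptions based on the namespaces in the logs above, not the actual CK8S manifests):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: cabpck-controller-manager   # illustrative name
  namespace: cabpck-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cabpck-manager-rolebinding  # illustrative name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cabpck-manager-role         # assumed to already exist in the release manifests
subjects:
- kind: ServiceAccount
  name: cabpck-controller-manager
  namespace: cabpck-system

The controller Deployment would then reference it via serviceAccountName: cabpck-controller-manager in its pod spec, and the same pattern would apply to the control plane provider in cacpck-system.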
Thanks.