
CIS benchmark rule 5.1.5 not implemented #88

Open

ader1990 opened this issue Jan 13, 2025 · 1 comment

@ader1990

Some deployments use Kyverno to set more restrictive policies, like the rule below, which require a dedicated service account to be granted the rights to mount the cluster's kubeconfig:

    policies.kyverno.io/description: >-
      Apply CIS benchmark rule 5.1.5: The default service account should not be used
      to ensure that rights granted to applications can be more easily audited and reviewed.
      Where access to the Kubernetes API from a pod is required, a specific service account should be created for that pod,
      and rights granted to that service account. The default service account should be configured such that it does not provide
      a service account token and does not have any explicit rights assignments.

with the following setting enforced on the default service account:

automountServiceAccountToken: false
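
For context, a Kyverno policy enforcing this could look like the sketch below. This is illustrative only (the policy and rule names are hypothetical), not the exact policy in use; the match/mutate structure follows Kyverno's ClusterPolicy API:

    # Illustrative sketch: mutate every "default" ServiceAccount so that it
    # does not automount its token. Names here are hypothetical.
    apiVersion: kyverno.io/v1
    kind: ClusterPolicy
    metadata:
      name: disable-automount-default-sa
    spec:
      rules:
        - name: set-automount-false
          match:
            any:
              - resources:
                  kinds:
                    - ServiceAccount
                  names:
                    - default
          mutate:
            patchStrategicMerge:
              automountServiceAccountToken: false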

Currently, in release 0.2.0, no dedicated service account exists for the bootstrap and control plane controllers, so the deployment fails with:

$> kubectl logs -n cacpck-system  deployment/cacpck-controller-manager

2025-01-13T12:54:30Z    ERROR   controller-runtime.client.config        unable to load in-cluster config        {"error": "open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory"}
sigs.k8s.io/controller-runtime/pkg/client/config.loadConfig.func1
        sigs.k8s.io/[email protected]/pkg/client/config/config.go:133
sigs.k8s.io/controller-runtime/pkg/client/config.loadConfig
        sigs.k8s.io/[email protected]/pkg/client/config/config.go:155
sigs.k8s.io/controller-runtime/pkg/client/config.GetConfigWithContext
        sigs.k8s.io/[email protected]/pkg/client/config/config.go:97
sigs.k8s.io/controller-runtime/pkg/client/config.GetConfig
        sigs.k8s.io/[email protected]/pkg/client/config/config.go:77
sigs.k8s.io/controller-runtime/pkg/client/config.GetConfigOrDie
        sigs.k8s.io/[email protected]/pkg/client/config/config.go:175
main.main
        ./main.go:78
runtime.main
        runtime/proc.go:271
2025-01-13T12:54:30Z    ERROR   controller-runtime.client.config        unable to get kubeconfig        {"error": "invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable", "errorCauses": [{"error": "no configuration has been provided, try setting KUBERNETES_MASTER environment variable"}]}
sigs.k8s.io/controller-runtime/pkg/client/config.GetConfigOrDie
        sigs.k8s.io/[email protected]/pkg/client/config/config.go:177
main.main
        ./main.go:78
runtime.main
        runtime/proc.go:271

For example, in the RKE2 case, https://github.com/rancher/cluster-api-provider-rke2/releases/download/v0.10.0/bootstrap-components.yaml defines:

apiVersion: apps/v1
kind: Deployment
...
serviceAccountName: rke2-bootstrap-manager
...

where the rke2-bootstrap-manager service account has the correct cluster role bindings.
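
An equivalent arrangement for the CK8s providers might look like the following sketch. All names here (the service account, cluster role, and binding) are assumptions derived from the cacpck-system namespace seen in the logs, not the project's actual manifests; the ClusterRole itself would need to carry whatever rights the controller already relies on:

    # Hypothetical sketch; names are assumptions, not the project's manifests.
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: cacpck-controller-manager
      namespace: cacpck-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: cacpck-manager-rolebinding
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cacpck-manager-role
    subjects:
      - kind: ServiceAccount
        name: cacpck-controller-manager
        namespace: cacpck-system

The controller Deployment would then reference it via spec.template.spec.serviceAccountName, as RKE2 does above.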

Would it be possible to also add a service account for the CK8s deployment?

Thanks.

@ader1990 (Author)

Note: this issue appears when serviceaccount/default in both the cacpck-system and cabpck-system namespaces has
automountServiceAccountToken: false set.
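
For reference, a minimal way to reproduce that state is to apply something like the following over the existing default service accounts (namespaces taken from the logs above):

    # Disable token automount on the default ServiceAccounts
    # of both controller namespaces.
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: default
      namespace: cacpck-system
    automountServiceAccountToken: false
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: default
      namespace: cabpck-system
    automountServiceAccountToken: false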
