[FG:InPlacePodVerticalScaling] Incomplete prerequisites for “Resize CPU and Memory Resources assigned to Containers” #41365
/language en
/kind support
/retitle Incomplete prerequisites for “Resize CPU and Memory Resources assigned to Containers”
/remove-kind support
The prerequisites section of https://kubernetes.io/docs/tasks/configure-pod-container/resize-container-resources/ should state that your cluster must have the InPlacePodVerticalScaling feature gate enabled. Thank you for reporting this @THMAIL
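As a sketch of what the missing prerequisite amounts to (the component names are the standard Kubernetes ones; the minikube example is illustrative and not from this thread):

```shell
# The gate must be enabled on every component involved in resizing:
# kube-apiserver, kube-scheduler, kube-controller-manager, and the kubelet,
# each via its standard flag:
#   --feature-gates=InPlacePodVerticalScaling=true

# On a local test cluster, minikube accepts the same gate at startup:
minikube start --feature-gates=InPlacePodVerticalScaling=true
```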
Thank you for your reply. I have modified the file, but there's another problem:
my Docker version is the latest:
Linux 172.30.94.201 3.10.0-862.el7.x86_64 #1 SMP Fri Apr 20 16:44:24 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
(screenshots of the path and the command being run were attached here)
So is the path wrong? Is this a problem with my boot parameters, or a bug?
If you do want help with Kubernetes @THMAIL, please ask elsewhere. This issue tracker is the right place to tell us about shortcomings in the docs, and the wrong place to get advice on using features (alpha or otherwise). If / when you can point out a new problem, you are welcome to file an issue so that we can cover that. SIG Node can then look at improving the docs for the beta.
/assign @sftim
Quick question about feature gates: is there a way to specify the specific feature gate that must be enabled for alpha/beta features within the feature-state tag, so that we don't have to manually edit docs every time a feature graduates or the Kubernetes version updates?
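For context, the marker in question is the Hugo feature-state shortcode used across the Kubernetes docs, which (at the time of this thread) takes only a version and a state, not a feature-gate name; roughly:

```
{{< feature-state for_k8s_version="v1.27" state="alpha" >}}
```

Carrying the gate name in the shortcode would be one way to avoid the manual edits described above.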
Can someone please write precisely how to enable this feature? I tried to pass the flag like this:
Hi @criscola, this issue is still waiting for a volunteer / contributor to pick it up and work on a fix.
This is my config; can anyone help me check it?
cat > config.yaml << EOF
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.122.41
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: kubernetes-01
  taints: null
  kubeletExtraArgs:
    feature-gates: InPlacePodVerticalScaling=true
---
apiServer:
  timeoutForControlPlane: 4m0s
  extraArgs:
    feature-gates: InPlacePodVerticalScaling=true
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager:
  extraArgs:
    feature-gates: InPlacePodVerticalScaling=true
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.27.2
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: "192.168.0.0/16"
scheduler:
  extraArgs:
    feature-gates: InPlacePodVerticalScaling=true
EOF
I confirm @wenzhaojie's config is correct. To summarize, the feature needs the corresponding feature gate enabled; that should do the trick. It would be great to spend a paragraph somewhere mentioning this; maybe we can edit this blog post with a short note? https://kubernetes.io/blog/2023/05/12/in-place-pod-resize-alpha/
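One hedged way to verify that the gate is actually on (assuming a v1.26+ cluster, where components export the `kubernetes_feature_enabled` metric, and that kubectl can reach the API server):

```shell
# A value of 1 in the metric output means the gate is enabled on the API server.
kubectl get --raw /metrics | grep 'kubernetes_feature_enabled.*InPlacePodVerticalScaling'
```

This only reports the API server's view; the kubelet and scheduler must be checked via their own configuration or metrics.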
Regarding this issue, it might be obvious that in-place vertical scaling has to involve the kubelet. There are some technical implementation details as well: the scheduler has to reconsider the resource requests and limits, the ResourceQuota controller has to adjust its behavior, and so on and so forth. This leads me to rethink a related topic. Maybe we were right when we avoided documenting the feature gates on a per-component basis. Today the feature gate list is "shared" by all components. The implementation of some features, like this one (in-place scaling), may involve several components. It could be that feature FOO only concerns the API server and the scheduler today, but soon the developers realize that the controller-manager has to do something as well to cover a corner case.
This issue has not been updated in over 1 year, and should be re-triaged. You can:
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
cc @tallclair @AnishShah @esotsal At a quick glance this looks like a documentation limitation, though the code changes for beta may also affect this situation.
/triage accepted
/retitle [FG:InPlacePodVerticalScaling] Incomplete prerequisites for “Resize CPU and Memory Resources assigned to Containers”
Hi, I see that Docker Engine was used; is cri-dockerd used? If yes, this looks like the same situation described in the first item of the InPlacePodVerticalScaling known issues, and discussed here as well. If cri-dockerd was used, then I recommend repeating the tests using CRI-O or a containerd container runtime version satisfying the InPlacePodVerticalScaling CRI API requirements. The CRI API requirements for InPlacePodVerticalScaling can be found at
My k8s version: 1.27.2
kubectl get nodes
NAME STATUS ROLES AGE VERSION
172.30.94.14 Ready 7d v1.27.2
172.30.94.201 Ready 7d v1.27.2
ecs6w3fxmxy5c.novalocal Ready control-plane 7d v1.27.2
Problem
I want to try an in-place update, and I did as the document describes.
But when I execute the command
kubectl -n qos-example patch pod qos-demo-5 --patch '{"spec":{"containers":[{"name":"qos-demo-ctr-5", "resources":{"requests":{"cpu":"800m"}, "limits":{"cpu":"800m"}}}]}}'
it threw an error.
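For context, a pod that is resized in place can also declare how each resource change should be handled via the resizePolicy field introduced with the v1.27 alpha. A sketch of such a spec (the pod and container names mirror the thread; the image and resource values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo-5
  namespace: qos-example
spec:
  containers:
  - name: qos-demo-ctr-5
    image: nginx
    resizePolicy:                    # optional; NotRequired is the default
    - resourceName: cpu
      restartPolicy: NotRequired     # resize CPU without restarting the container
    - resourceName: memory
      restartPolicy: RestartContainer  # restart the container on memory resize
    resources:
      requests:
        cpu: "500m"
        memory: "128Mi"
      limits:
        cpu: "500m"
        memory: "128Mi"
```

Even with a spec like this, the patch above still requires the feature gate to be enabled on the API server and the kubelet, and a container runtime that supports the resize CRI APIs.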