Upgrade from 1.7.7 to 2.1.0 uses ~6X More Memory #1523
Comments
Hi, this increased memory footprint is the result of replacing stunnel with efs-proxy in efs-utils v2, which multiplexes connections to achieve higher throughput.
Excuse my ignorance of the internals here, but is there an option to toggle this multiplexing and caching on/off so that we can still run with minimal memory overhead? Not every app needs maximum throughput; in our case, my team wouldn't have made this tradeoff because none of our apps are pushing throughput limits. Going from a 60Mi daemonset to 500Mi is a significant jump, especially for big clusters with thousands of nodes, and especially for general-purpose clusters where many (often most) pods don't mount an EFS volume, yet you're paying the memory cost on every node.
Of course you can do sophisticated scheduling with more than one node group and only deploy pods with EFS mounts to a node group that runs the daemonset, or run two separate daemonsets with one getting more resources on a specific node group, but all of these solutions introduce too much complexity. A rough sketch of that first workaround is below.
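For illustration only, assuming the aws-efs-csi-driver Helm chart exposes `node.nodeSelector` (check the chart's values.yaml for your version) and that the dedicated node group carries a made-up label `workload/efs: "true"`:

```yaml
# Hypothetical Helm values override: pin the efs-csi-node daemonset
# to a dedicated node group so only those nodes pay the memory cost.
# The label key/value below is invented for this example.
node:
  nodeSelector:
    workload/efs: "true"
```

Every pod that mounts an EFS volume would then also need a matching nodeSelector or affinity of its own, which is exactly the kind of coupling I'd rather not maintain.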
Understood! So you can configure this in your PV (for static provisioning) or StorageClass (for dynamic provisioning) k8s definition file. Hope that helps! See below for explicit examples.
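A minimal sketch of where such a setting would go, assuming it is surfaced as a mount option (the `stunnel` option name below is only a placeholder for whatever the driver actually accepts; substitute the real option from the docs, and the file system ID is hypothetical):

```yaml
# Static provisioning: mount options go on the PersistentVolume.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - tls
    - stunnel              # placeholder option name for this example
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-0123456789abcdef0   # hypothetical file system ID
---
# Dynamic provisioning: mount options go on the StorageClass.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
mountOptions:
  - tls
  - stunnel                # placeholder option name for this example
parameters:
  provisioningMode: efs-ap
  fileSystemId: fs-0123456789abcdef0     # hypothetical file system ID
  directoryPerms: "700"
```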
I have the same issue; I saw the EFS CSI container growing consistently to over 800 MB of RAM.
/kind bug
What happened?
After upgrading from v1.7.7 to 2.1.0 we noticed OOMs in the daemonset's efs-csi-node pods. Before the upgrade, we set 150Mi memory requests/limits and never hit them. After the upgrade, we consistently hit the memory limit until we increased the requests/limits to 500Mi (values sketch below).
Our load and distribution of pods with EFS mounts across nodes didn't change. We use EFS mounts with encryption enabled. There are no configuration overrides; everything uses the default configuration as installed by the chart.
Given that this is a daemonset pod, any increase in memory is multiplied by the node count, and in our case this is a significant increase in memory requests for a daemonset pod.
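For reference, the values sketch mentioned above: a hedged example of how we raised the daemonset memory through the Helm chart, assuming the chart exposes `node.resources` (verify against the values.yaml of the chart version you run):

```yaml
# Hypothetical Helm values override for the aws-efs-csi-driver chart.
# 500Mi is simply the value that stopped the OOMs for us.
node:
  resources:
    requests:
      memory: 500Mi
    limits:
      memory: 500Mi
```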
What you expected to happen?
Average memory consumption not to triple after the upgrade.
How to reproduce it (as minimally and precisely as possible)?
In our case, simply upgrading reproduces it. The increase is consistent across the 9 clusters my team operates.
Anything else we need to know?:
Environment
- Kubernetes version (use `kubectl version`): v1.29.10-eks-7f9249a
- Driver version: v2.1.0
Below is the average memory usage per daemonset pod across the 9 clusters we have (~600 pods in total). The load and density of pods that write to EFS didn't change. The graph shows that upgrading to v2 with the new EFS Utils uses at least 600% more memory.