Describe the problem/challenge you have
When the client detects multiple nodes in a cluster (via multiple builder pods), it switches modes at the end of the build: it downloads the image from the builder that built it, then uploads the image to all of the other builder pods and injects it into the runtime on each node. If you have a slow link between the client and the cluster, or have lots of nodes, this can easily saturate the network link.
Description of the solution you'd like
We should find a way to transfer the image directly between the builders so it does not have to be ping-ponged through the client over a potentially slow link.
Design/Architecture Details
Straw-man proposal (prototyping may yield better/refined options...)
Consider a "thin" Dockerfile wrapper in this project that takes the upstream builder (docker.io/moby/buildkit) and injects an additional binary that facilitates image transfer
This transfer assistant would implement a gRPC API on stdin/stdout, and would have 2 modes of operating
Sender
Receiver
At startup, both modes require the local runtime path and runtime type (containerd or dockerd)
The Receiver would generate a random key and bind to a port. A gRPC API would be used by the CLI to gather these details from each receiver
Progress reporting from the receivers would be a nice added touch over the gRPC API
The sender would implement a gRPC API to transfer an image. It would take as input:
The local image to transfer
The list of remote IPs, port numbers, and secrets to transfer to
The transfer should be "smart" and skip layers that are already present on the receivers
Upon completion of the transfer, the CLI would terminate all of the transfer assistants
If this works well, we can explore upstreaming this to the BuildKit project so it could be included in the base image so we no longer have to maintain our own specialized image
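As a rough illustration of the receiver-mode startup described above, here is a minimal Go sketch (standard library only, no real gRPC wiring): it generates a random per-receiver key, binds an ephemeral port, and reports both back for the CLI to gather. The `ReceiverInfo` type and the plain-JSON reporting are hypothetical stand-ins for whatever the stdio gRPC API would actually carry.

```go
// Receiver-mode startup sketch (hypothetical): generate a one-time key,
// bind an ephemeral port, and report both so the CLI can hand them to the
// sender. A real assistant would expose this over the stdio gRPC API and
// then keep serving the transfer endpoint instead of exiting.
package main

import (
	"crypto/rand"
	"encoding/hex"
	"encoding/json"
	"fmt"
	"net"
	"os"
)

// ReceiverInfo holds the details the CLI gathers from each receiver.
// Field names are illustrative, not an actual API.
type ReceiverInfo struct {
	Addr   string `json:"addr"`   // listen address with the ephemeral port we bound
	Secret string `json:"secret"` // random key the sender must present
}

func main() {
	// Random per-session secret so stray connections to the open port
	// can be rejected by the receiver.
	buf := make([]byte, 32)
	if _, err := rand.Read(buf); err != nil {
		fmt.Fprintln(os.Stderr, "generating secret:", err)
		os.Exit(1)
	}

	// Bind an ephemeral port; the port actually chosen is part of what
	// gets reported back to the CLI.
	ln, err := net.Listen("tcp", ":0")
	if err != nil {
		fmt.Fprintln(os.Stderr, "binding port:", err)
		os.Exit(1)
	}
	defer ln.Close()

	info := ReceiverInfo{
		Addr:   ln.Addr().String(),
		Secret: hex.EncodeToString(buf),
	}

	// Stand-in for the gRPC response: print the details on stdout.
	if err := json.NewEncoder(os.Stdout).Encode(info); err != nil {
		fmt.Fprintln(os.Stderr, "reporting details:", err)
		os.Exit(1)
	}

	// A real receiver would now accept connections on ln, verify the
	// secret, and stream incoming layers into the local runtime
	// (containerd or dockerd) until the CLI tells it to shut down.
}
```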
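And a sketch of the sender-side "smart" layer skipping: before transferring, the sender would ask each receiver which layer digests are already present in its runtime and push only the missing ones. The `Endpoint` type, the `missingLayers` helper, and all digests/addresses below are made up for illustration; the real exchange would happen over the transfer assistant's gRPC API.

```go
// Sender-side layer-skipping sketch (hypothetical): given the layers of the
// locally built image and what each receiver reports as already present,
// compute the minimal set of layers to push to each one.
package main

import "fmt"

// Endpoint is one receiver as gathered by the CLI: address/port plus the
// per-receiver secret the sender must present. Illustrative only.
type Endpoint struct {
	Addr   string
	Secret string
}

// missingLayers returns the image layers the receiver does not already
// have, preserving base-to-top order.
func missingLayers(imageLayers []string, present map[string]bool) []string {
	var out []string
	for _, digest := range imageLayers {
		if !present[digest] {
			out = append(out, digest)
		}
	}
	return out
}

func main() {
	// Layers of the locally built image, base first (digests are fake).
	imageLayers := []string{"sha256:aaa", "sha256:bbb", "sha256:ccc"}

	// Receivers paired with the layer digests each reported as already
	// present in its local runtime (all fake data).
	receivers := map[Endpoint]map[string]bool{
		{Addr: "10.0.1.7:40123", Secret: "k1"}: {"sha256:aaa": true},
		{Addr: "10.0.2.9:40124", Secret: "k2"}: {"sha256:aaa": true, "sha256:bbb": true},
	}

	for ep, present := range receivers {
		fmt.Printf("%s needs %v\n", ep.Addr, missingLayers(imageLayers, present))
	}
}
```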
Environment Details:
- kubectl buildkit version (use `kubectl buildkit version`): v0.1.0
- Kubernetes version (use `kubectl version`): N/A
- Where are you running kubernetes (e.g., bare metal, vSphere Tanzu, Cloud Provider xKS, etc.): largely applicable in remote clusters (e.g., *KS, Tanzu, etc.); local performance is more than adequate with the current implementation
- Container Runtime and version (e.g. containerd `sudo ctr version` or dockerd `docker version` on one of your kubernetes worker nodes): Both
Vote on this request
This is an invitation to the community to vote on issues. Use the "smiley face" up to the right of this comment to vote.
👍 "This project will be more useful if this feature were added"
👎 "This feature will not enhance the project in a meaningful way"