
Explain Kubernetes network model in networking concept index #41419

Draft
wants to merge 6 commits into base: main

Conversation

@sftim sftim commented Jun 1, 2023

Split off and adapted from PR #39890

Once ready to merge, should help with #49278

@chrismetz09 did nearly all the work here; I'm proposing we adopt the bits we can merge right away.

/sig docs
/sig network

Most of the images being added are not yet being used. We can, IMO, merge them and then iterate.

@k8s-ci-robot k8s-ci-robot added sig/docs Categorizes an issue or PR as relevant to SIG Docs. sig/network Categorizes an issue or PR as relevant to SIG Network. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. labels Jun 1, 2023
@k8s-ci-robot k8s-ci-robot added the language/en Issues or PRs related to English language label Jun 1, 2023
@netlify
netlify bot commented Jun 1, 2023

Pull request preview available for checking (built without sensitive environment variables):

  • 🔨 Latest commit: 58815e6
  • 🔍 Latest deploy log: https://app.netlify.com/sites/kubernetes-io-main-staging/deploys/675ad688274ab70008ade2c8
  • 😎 Deploy Preview: https://deploy-preview-41419--kubernetes-io-main-staging.netlify.app

@jpegleg-k8s
Contributor

Here are some notes I collected from reviewers.

Little fixes:

"pods containing one more containers" should be "pods containing one or more containers"

The diagram could use a little bit of clean up (icon alignment, icon text enlargement/scale up to make the icon text easier to read).

Conceptual/clarity fixes:

The title of the page includes Load Balancing but load balancing is not clearly explained in the writing on the page.

The host network section, and perhaps the page in general, might use some further elaboration on how this writing is focused on the Kubernetes networking model concepts, and then a little para to put layer 7 configurations into context. I received some feedback that the language was too strong (could be softened with use of "may" and "sometimes", as there are edge cases), and could use some context around layer 7 configuration and how that relates to the network model in practice. While I understood it as is, I can see how a new learner might not understand the relationships between container configuration (exposing ports, defining container ports, binding listeners in software), and the possibilities of the network.

There are several assumptions made in the description of the network model and we could clarify those assumptions.

@sftim
Contributor Author

sftim commented Jun 9, 2023

Thanks @jpegleg-k8s.

Reviewers - is this good enough to merge in and iterate?

Comment on lines 31 to 32
L2 bridge
: a (virtual) [layer 2](https://en.wikipedia.org/wiki/Data_link_layer) bridge enabling inter-pod connectivity on the same host.
Contributor

Note that the phrase you use below is "layer 2 bridge", not "L2 bridge"... (But also, we shouldn't be talking about layers at all here.)

If you're going to talk about bridges and encapsulation, it would be good to also mention "plugins that give pods IPs directly on the node network" (which we don't have a snappy one-word way to refer to that I can think of).

Contributor Author

Have a look at #39890 and existing reviews (maybe you did already). I'd like to talk as little as possible about encapsulation; maybe it's OK to yank any mention?

Contributor

In #39890, the diagrams were introduced with phrases like "in this example", whereas that wording was lost in this PR. I think that's the problem. These diagrams do not show "the Kubernetes network model". They show one particular concrete implementation of the abstract Kubernetes network model.

If you want to fully explain the diagram to the reader, you may need to talk about L2 bridges and such. But your explanation is just about the diagram (or about what this particular unnamed plugin is doing), not about "Kubernetes networking" in general. Many network plugins use an L3 bridge instead of an L2 bridge, and some don't use a bridge at all and have an architecture which looks nothing at all like these diagrams. (eg, if you use the amazon-vpc-cni-k8s or azure-container-networking plugins then each pod would be directly connected to the same network as the node itself is).

Kubernetes imposes the following fundamental requirements on any networking
implementation (barring any intentional network segmentation policies):
kube-proxy
: Part of Kubernetes, `kube-proxy` is optional component that you run on each Node.
Contributor

Suggested change
: Part of Kubernetes, `kube-proxy` is optional component that you run on each Node.
: Part of Kubernetes, `kube-proxy` is an optional component that you run on each Node.

(Stylistically, I feel that that "Node" should be "node", since we're talking about the host itself, not the v1.Node object.)

implementation (barring any intentional network segmentation policies):
kube-proxy
: Part of Kubernetes, `kube-proxy` is optional component that you run on each Node.
The kube-proxy ensures that clients can connect to [Services](/docs/concepts/services-networking/service/),
Contributor

I don't think we ever say "the kube-proxy".


Contributor

We do often say "the kubelet", but we don't often say "the kube-proxy", just like we don't often say "the kube-apiserver". (We say "the API server" instead. And the corresponding phrase for kube-proxy would be "the Service proxy", but it looks like we basically never say that in the current docs...)

And it seems that we do sometimes currently say "the kube-proxy" in the docs anyway. 🤷‍♂️

: Part of Kubernetes, `kube-proxy` is optional component that you run on each Node.
The kube-proxy ensures that clients can connect to [Services](/docs/concepts/services-networking/service/),
including to any backend Pods that make up the Service. Clients might be other Pods, or they could be connecting from outside the cluster.
Some network plugins provide their own alternative to kube-proxy, which means you don't need to install it when you use that particular plugin.
Contributor

You don't have to install kube-proxy when using some plugins that do use kube-proxy either (since they install it themselves). I'm not sure it makes sense to talk about installation in this doc?

Contributor Author

OK, but: I'm trying to find the smallest set of changes to this PR so that we can go from there-is-no-diagram to there-is-at-least something.
(Does this feedback block a merge?)

Contributor

Suggested change
Some network plugins provide their own alternative to kube-proxy, which means you don't need to install it when you use that particular plugin.
Some network plugins provide their own alternative to kube-proxy.

"IP-per-pod" model.
{{< figure src="/docs/images/k8s-net-model-arch.svg" alt="Diagram of Kubernetes networking" class="diagram-large" caption="Figure 1. High-level example of a Kubernetes cluster, illustrating container networking." >}}

The other K8s network components shown in figure consist of the following:
Contributor

I don't think we abbreviate Kubernetes to K8s in the docs?

Contributor Author

Yes we do, and one reason for that is to defend K8s as a trademark. We mostly write Kubernetes out longhand though.

Comment on lines +66 to +45
* _Local pod networking_ - optional component that enables pod-to-pod communications in the same node. You might recognize
this as a virtual layer 2 bridge (which is just one possible implementation).
Contributor

Bridges are not part of the Kubernetes network model. It's true that a majority of Kubernetes plugins use some sort of bridge interface on each node (though not always an L2 bridge), but this is completely invisible at the level of "the Kubernetes network model". Unless you are debugging your cluster or developing a network plugin, then the Kubernetes network model is just that all pods can communicate (at L4) with all other pods, and that's it. You neither need to know, nor to care, exactly how the network plugin implements that.

(And if you are debugging your cluster or developing a network plugin then the diagram here is still not useful because in that case you need to know specifically how your own network plugin works, not how some theoretical network plugin works.)

Contributor Author

Actually, Pods can communicate with other Pods at layer 3. Pods can observe packet-layer communications if they try hard enough. On Linux, you might need to add a capability for that to work.

Anyway, if I need to omit the mention of a bridge for this to merge, I can.

Contributor

Actually, Pods can communicate with other Pods at layer 3

Some plugins might allow arbitrary layer 3 communication, but Kubernetes only guarantees that you can communicate with pods at L4 via TCP and UDP (and SCTP if the plugin supports it). There are no conformance requirements that pods be able to send or receive SCTP, IP multicast, IP broadcast, IPsec, ICMP pings, or any other arbitrary L3 traffic. (And there are good reasons for plugins to not allow arbitrary traffic between pods.)

Pods can observe packet-layer communications if they try hard enough.

Pods can observe the packets coming in and out of their own eth0, given CAP_NET_ADMIN. They have no ability to observe what happens on the other side of their eth0, regardless of capabilities. (Well, if they're privileged then they can install some eBPF and do whateverTF they want, but, you know.)

Anyway, if I need to omit the mention of a bridge for this to merge, I can.

as above, if you're explaining the diagram then go ahead and mention the bridge, but it should be clear that this is just how pod networking works in this example, not how it works always

Comment on lines 77 to 61
You can also have connectivity between containers running on two or more different pods on the same node; for example
Pod 7 communicating with Pod 1, with both Pods (and their containers) running on Node 1. The network plugin(s)
that you deploy are responsible for the routes or other means to make sure that
these packets arrive at the right destination.

In the cross-node case, you have container communications between pods on nodes connected
via the cluster network. In the example above, Pod 7 on Node 1 can talk to Pod 21 on Node 2.
Contributor

This all just boils down to "all pods can talk to all pods" which you already said

It is possible to request and configure ports on the node itself (named _host ports_),
that forward to a port on your Pod; however, this is a very niche operation.
How that forwarding is implemented is also a detail of the container runtime.
The Pod itself is not aware of the existence or non-existence of host ports.
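As an aside, the host-port mechanism described in the quoted text can be requested in a Pod spec. A minimal sketch (the names, image, and port numbers here are hypothetical, chosen for illustration only):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostport-example   # hypothetical name
spec:
  containers:
  - name: web
    image: nginx           # example image
    ports:
    - containerPort: 80    # port the container listens on
      hostPort: 8080       # requests forwarding from port 8080 on the node itself
```

Traffic arriving at port 8080 on the node is forwarded to port 80 in the Pod; how that forwarding is implemented is, as the quoted text notes, a detail of the container runtime.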
Contributor

So, a lot of the existing text is specifically trying to explain Kubernetes networking to people who are assuming a Docker-like networking model, but it never explicitly says this. Contrariwise, we don't have any good explanation of how Kubernetes networking is different from typical VM networking, which is probably more relevant to more newcomers these days. It would be great to have small sections explicitly comparing Kubernetes networking to (a) traditional host networking, (b) Docker networking, (c) VM (eg OpenStack) networking. (I'm not sure if we need to talk about non-Kubernetes cloud networking since that generally tries to look like traditional host networking?)

(Also, the original text "called host ports" feels much more idiomatic than "named host ports" to me here.)

Contributor Author

It would be great to have small sections explicitly comparing Kubernetes networking to (a) traditional host networking, (b) Docker networking, (c) VM (eg OpenStack) networking. (I'm not sure if we need to talk about non-Kubernetes cloud networking since that generally tries to look like traditional host networking?)

I agree, but again I'm looking to find the minimal diff from what we have to what we're willing to merge. The perfect is the enemy of the published.

Comment on lines 12 to 28
Kubernetes networking addresses four concerns:
- Containers within a Pod [use networking to communicate](/docs/concepts/services-networking/dns-pod-service/) via loopback.
- Cluster networking provides communication between different Pods.
- The [Service](/docs/concepts/services-networking/service/) API lets you
[expose an application running in Pods](/docs/tutorials/services/connect-applications-service/)
to be reachable from outside your cluster.
- [Ingress](/docs/concepts/services-networking/ingress/) and [Gateway](https://gateway-api.sigs.k8s.io/) provide
extra functionality specifically for exposing your applications, websites and APIs, usually to clients outside
the cluster. Ingress and Gateway often use a load balancer to make that work reliably and at scale.
- You can also use Services to
[publish services only for consumption inside your cluster](/docs/concepts/services-networking/service-traffic-policy/).
Contributor

Ugh. So all of this text apparently already existed later on in the doc, but it's completely terrible. Especially, the [use networking to communicate] and [publish services only for consumption inside your cluster] links point to documents that have absolutely nothing to do with those phrases. (I'm guessing this must be a result of slow bit rot over the years as things got moved around between different documents.)

Even ignoring the bad links, this is not at all how I would summarize the goals of Kubernetes networking. I would say:

  1. The pod network allows pods to communicate with each other and with nodes, and allows pods to send traffic outside the cluster.
  2. Services provide persistent names and IPs for pods or groups of pods, with load balancing between them.
  3. Ingress and Gateway provide access to Services from outside the cluster.
  4. NetworkPolicy provides access control within the pod network.

(It's true that containers in a pod can communicate with each other via loopback, but I feel like that's more of an implementation detail of how pods work than it is a fact of "Kubernetes networking"... as you say below "Kubernetes and the container runtime provide no special support as these processes all see a common local network within the container sandbox.")
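To make point 2 of the summary above concrete, here is a minimal Service sketch (the name, selector label, and ports are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app          # hypothetical Service name; gives the backends a stable DNS name
spec:
  selector:
    app: my-app         # matches the label on the backend Pods
  ports:
  - port: 80            # stable port on the Service's cluster IP
    targetPort: 8080    # port the backend Pods actually listen on
```

Kubernetes assigns this Service a persistent cluster IP and DNS name, and connections to it are load-balanced across the Pods matching the selector.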

Contributor Author

@danwinship if you've the capacity, feel free to make your own PR here. You can use #39890 as a starting point or begin afresh.

I really do want to progress the work though. This effort started over a year ago and we have nothing merged.

Contributor

Well, this is just moving text around, so I guess it's not making things worse...
(I'll try to find some time to dig into improving these docs...)

Contributor

My understanding is that we are never merging a PR because it has stayed there for
too long. We always appreciate smaller PRs that fix a particular problem or
improve a specific topic.

If we want to add some diagrams to this page, good, let's focus on getting the diagram
correct, readable and meaningful. If we want to revise the Service concept overview,
fine, let's try that in a self-contained PR. If we want to adjust the flow in a page,
okay, let's do it in a single PR.
With smaller PRs, we move forward step by step. There is always progress along the way.

Adding 1600+ lines in a single PR with modifications to 10 files?
No. I am strongly against it.

Contributor Author

My understanding is that we are never merging a PR because it has stayed there for too long. We always appreciate smaller PRs that fix a particular problem or improve a specific topic.

Yes

If we want to add some diagrams to this page, good, let's focus on getting the diagram correct, readable and meaningful. If we want to revise the Service concept overview, fine, let's try that in a self-contained PR. If we want to adjust the flow in a page, okay, let's do it in a single PR. With smaller PRs, we move forward step by step. There is always progress along the way.

If you can write the PR description for the thing you'd like to be reviewing, that can help: we can use that to guide contributors to write it.

Adding 1600+ lines in a single PR with modifications to 10 files?

We often do merge PRs that include more than one image, and I don't see grounds to change that.


{{< note >}}
Network plugins are also known as _CNI_ or _CNI plugins_.
{{< /note >}}
Contributor

Boo!

"Some people refer to network plugins as CNI plugins or just CNIs, but this is inaccurate, since CNI is just one of several APIs involved in Kubernetes networking."

Contributor Author

What other kind of network plugins can I use with Kubernetes?

Contributor

All network plugins use CNI, but they don't just use CNI.

People understand you're talking about network plugins when you say "CNI plugins" because CNI isn't used for anything except network plugins. But these days, most of what a network plugin does doesn't involve CNI.

But anyway, doesn't need to be fixed in this PR.

@danwinship
Contributor

blah, the above comments are a rough draft and all out of order. I meant to click "cancel review" and start over but apparently I hit "submit review" instead?

@sftim
Contributor Author

sftim commented Jun 13, 2023

@danwinship did you have any more feedback / comments?

@sftim
Contributor Author

sftim commented Jul 9, 2023

What about https://deploy-preview-41419--kubernetes-io-main-staging.netlify.app/docs/concepts/services-networking/#kubernetes-network-model is not generic?

If the change I need to make is to remove the text around “L2 bridge”, that feedback is OK. However, I do need feedback that helps me understand what change to make.

I think this added diagram is generic already; if it isn't, I need to know how to change it so it is.

@danwinship
Contributor

danwinship commented Jul 9, 2023

You can't have a single diagram that is generic, unless you make it so abstract that it doesn't explain much.

The existing diagram looks something like this:

  +----------------------------------+   +------------------+
  | Node 1                           |   | Node 2           |
  |                                  |   |                  |
  |  +------------+  +------------+  |   |  +------------+  |
  |  | pod 1      |  | pod 2      |  |   |  | pod 3      |  |
  |  |            |  |            |  |   |  |            |  |
  |  +-[Pod 1 IP]-+  +-[Pod 2 IP]-+  |   |  +-[Pod 3 IP]-+  |
  |         |               |        |   |         |        |
  |     [--------bridge--------]     |   |     [bridge]     |
  |                |                 |   |         |        |
  +-----------[Node 1 IP]------------+   +----[Node 2 IP]---+
                   |                               |    
    +-----------------------------------------------------+    
    |                       Network                       |
    +-----------------------------------------------------+

but some network plugins do this instead:

  +----------------------------------------------+   +------------------------------+
  | Node 1                                       |   | Node 2                       |
  |                                              |   |                              |
  |              +------------+  +------------+  |   |              +------------+  |
  |              | pod 1      |  | pod 2      |  |   |              | pod 3      |  |
  |              |            |  |            |  |   |              |            |  |
  +-[Node 1 IP]--+-[Pod 1 IP]-+--+-[Pod 2 IP]-+--+   +-[Node 2 IP]--+-[Pod 3 IP]-+--+
         |              |               |                   |              |
    +-----------------------------------------------------------------------------+    
    |                                   Network                                   |
    +-----------------------------------------------------------------------------+    

(That is, each pod is connected directly to the "node network", and traffic from pod 1 to pod 2 never passes through Node 1's host network namespace.)

And it's still the case that "Every Pod ... gets its own unique cluster-wide IP address ... [and] Pods can communicate with all other Pods on the same or separate nodes without network address translation (NAT) ... [and] Agents on a node (e.g. system daemons, kubelet) can communicate with all pods on that node", so it's still a valid implementation of the Kubernetes network model.

I guess the generic version would be to have a physical network connecting the nodes, and a "pod network" connecting the pods (and the nodes), with no indication of how the pod network and the physical network related to one another.

Or, you can just say "the implementation shown in this diagram is similar to the one used by many plugins, but other implementations are possible".

@sftim
Contributor Author

sftim commented Jul 10, 2023

Thanks. I think it makes sense to omit the bridge. I can show that the node network and pod network are linked but that the way this happens is up to the network plugin.

@sftim sftim marked this pull request as draft July 10, 2023 07:53
@k8s-ci-robot k8s-ci-robot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Jul 10, 2023
@sftim sftim force-pushed the 20230601_add_diagram_to_networking_section branch from b72d96e to 7f4b0ac Compare August 15, 2023 23:34
@sftim sftim force-pushed the 20230601_add_diagram_to_networking_section branch from 7f4b0ac to 784cab0 Compare August 24, 2023 16:33
@sftim sftim force-pushed the 20230601_add_diagram_to_networking_section branch from 784cab0 to fd79d30 Compare September 7, 2023 19:59
@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Nov 4, 2023
@kbhawkey
Contributor

@sftim , should this pull request close or are you still working on the changes?
Thanks

@sftim
Contributor Author

sftim commented Dec 20, 2023

This is still in progress, despite appearances - for example, I had a Zoom call on Monday that touched on how to move this forward.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 19, 2024
@divya-mohan0209
Contributor

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 1, 2024
@divya-mohan0209
Contributor

@sftim: I know you're awfully short on time, but just a quarterly check-in if this is still WIP?

@sftim sftim force-pushed the 20230601_add_diagram_to_networking_section branch from fd79d30 to 88e1218 Compare April 9, 2024 15:47
@k8s-ci-robot k8s-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Apr 9, 2024
@sftim
Contributor Author

sftim commented May 1, 2024

I might get time this month to revisit the work here.

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jul 4, 2024
@divya-mohan0209
Contributor

@sftim Gentle reminder on this one. It seems to require a rebase.

@sftim
Contributor Author

sftim commented Aug 12, 2024

Mmm. I need to work out what intent we have around explaining Kubernetes networking; it's hard to get the PR right because I don't think we know the message that we - Kubernetes - actually want to convey.

@divya-mohan0209
Contributor

Thanks for the update! Would it help to have a conversation thread started around this somewhere in one of the Slack channels?

@sftim
Contributor Author

sftim commented Aug 12, 2024

I think there was a conversation. I've left this PR like this to track it as work-that-was-in-progress and because it's an area where user feedback suggests the docs are not very helpful. But I doubt I'll do work on this before October 2024.

@sftim
Contributor Author

sftim commented Aug 12, 2024

If you - @divya-mohan0209 - have capacity to foster that conversation, please do; it'd be very welcome.

@divya-mohan0209
Contributor

No problem, I'm happy to help out wherever I can! Which Slack channel would be a good place to start the conversation @sftim ?

@sftim
Contributor Author

sftim commented Aug 12, 2024

Which Slack channel would be a good place to start the conversation @sftim ?

I think SIG Docs, and pop a link to that message into each of:

  • SIG Architecture
  • SIG Network

@sftim
Contributor Author

sftim commented Sep 10, 2024

I might yet get (ie make) time to move this forward.

@danwinship
Contributor

filed #47903 with my attempt at rewriting the existing text without adding any new sections

@sftim
Contributor Author

sftim commented Sep 24, 2024

This needs revising post #47903

I'd like to keep this open so we don't lose track of the work done so far.

@divya-mohan0209
Contributor

@sftim Checking in if there has been progress on this PR after the closure of #47903

@sftim
Contributor Author

sftim commented Nov 2, 2024

I'll work on this.

@sftim sftim force-pushed the 20230601_add_diagram_to_networking_section branch from 88e1218 to 7116fb7 Compare December 12, 2024 12:17
@k8s-ci-robot k8s-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Dec 12, 2024
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please ask for approval from sftim. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@sftim sftim force-pushed the 20230601_add_diagram_to_networking_section branch from 7116fb7 to 58815e6 Compare December 12, 2024 12:26