[Docs] Clarify node-labels are additive or destructive #949

Open · wants to merge 3 commits into base: main

8 changes: 6 additions & 2 deletions docs/src/charm/howto/install-custom.md
@@ -6,6 +6,7 @@ configuration options.
## What you'll need

This guide assumes the following:

- You have Juju installed on your system with your cloud credentials
configured and a controller bootstrapped
- A Juju model is created and selected
@@ -35,13 +36,16 @@ k8s:
dns-cluster-domain: "cluster.local"
dns-upstream-nameservers: "8.8.8.8 8.8.4.4"

# Add custom node labels
node-labels: "environment=production zone=us-east-1"
  # Add or remove custom node labels
  # <key>=<value> ensures the label is added to every node of this application
  # <key>=- ensures the label is removed from every node of this application
node-labels: "environment=production zone=us-east-1 node-role.kubernetes.io/worker=-"
Contributor commented:

I don't understand the use-case of the removal syntax. Could you elaborate on why this is needed?

A node-label that is not defined here is not set as part of the bootstrapping process. Why/from where should a node get the labels in that phase?

Contributor commented:

Ah, do we apply those node-labels on top of the default labels?


# Configure local storage
local-storage-enabled: true
local-storage-reclaim-policy: "Retain"
```
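
The same additive and destructive syntax applies when the option is changed after deployment; a minimal sketch, assuming the application is deployed under the name `k8s`:

```sh
# add or update labels on every node of the application
juju config k8s node-labels="environment=production zone=us-east-1"

# the <key>=- form removes a label, here the automatically applied worker role
juju config k8s node-labels="environment=production zone=us-east-1 node-role.kubernetes.io/worker=-"
```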

You can find a full list of configuration options in the
[charm configurations] page.

1 change: 1 addition & 0 deletions docs/src/charm/reference/index.md
@@ -16,6 +16,7 @@ charms
proxy
architecture
charm-configurations
troubleshooting
Community <community>

```
77 changes: 77 additions & 0 deletions docs/src/charm/reference/troubleshooting.md
@@ -0,0 +1,77 @@
# Troubleshooting

This page provides techniques for troubleshooting common {{product}}
issues that relate specifically to the charm.
Contributor commented:

nit: Missing . at the end



## Adjusting Kubernetes node labels
Contributor commented:

nit: Kubernetes?


### Problem

Control plane or worker nodes are automatically marked with a label that is unwanted.

For example, a control-plane node may be marked with both the control-plane and worker roles:

```
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/worker=
```

### Explanation

Each Kubernetes node comes with a set of node labels applied by default. The k8s charm labels
control-plane nodes with both the control-plane and worker roles, while worker nodes receive only the worker role label.

For example, consider the following simple deployment with one worker and one control-plane node:

```sh
$ sudo k8s kubectl get nodes
NAME STATUS ROLES AGE VERSION
juju-c212aa-1 Ready worker 3h37m v1.32.0
juju-c212aa-2 Ready control-plane,worker 3h44m v1.32.0
```
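
To inspect the full label set on a particular node, the labels can be listed directly (the node name below is just the one from the example output):

```sh
sudo k8s kubectl get node juju-c212aa-2 --show-labels
```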


### Solution

The roles (or any other label) can be adjusted through the application's `node-labels` configuration option.

To add another node label:

```sh
current=$(juju config k8s node-labels)
if [[ " ${current} " == *" label-to-add="* ]]; then
    # replace the value of an already configured label
    updated=$(echo "${current}" | sed 's/label-to-add=[^ ]*//')
    juju config k8s node-labels="${updated} label-to-add=and-its-value"
else
    # append a newly configured label
    juju config k8s node-labels="${current} label-to-add=and-its-value"
fi
```

To remove a node label, whether configured explicitly or applied by default:

```sh
current=$(juju config k8s node-labels)
if [[ " ${current} " == *" label-to-remove="* ]]; then
    # drop an explicitly configured label
    updated=$(echo "${current}" | sed 's/label-to-remove=[^ ]*//')
    juju config k8s node-labels="${updated}"
else
    # remove a label that was applied automatically
    juju config k8s node-labels="${current} label-to-remove=-"
fi
```
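
As a quick check that a removal propagated, list the nodes that still carry the label with a label-existence selector (an empty result means the label is gone everywhere); `label-to-remove` is the placeholder key from the snippet above:

```sh
sudo k8s kubectl get nodes -l 'label-to-remove' -o name
```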

#### Node role example

To remove the worker node role from a control-plane node:

```sh
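# note: juju config replaces the entire node-labels value, so append this entry
# to the current value if other custom labels should be preserved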
juju config k8s node-labels="node-role.kubernetes.io/worker=-"
```
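
To confirm the change took effect, check the roles again:

```sh
# the control-plane node should now report only the control-plane role
sudo k8s kubectl get nodes
```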



<!-- LINKS -->
Contributor commented:

nit: I guess we don't need that.

