Add resource limits #48
Merged
Conversation
When destroying, this stops the external service from deleting
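The behaviour described here looks like the destroy-time `ssh_resource` blocks that appear later in the plan (`module.k3s.ssh_resource.drain_workers`). A minimal sketch of that pattern, assuming the loafoe/ssh provider's `when = "destroy"` behaviour is what the comment refers to; the variable names below are hypothetical, not taken from the module:

```hcl
# Sketch only: a destroy-time ssh_resource that drains a node before the
# server it runs on is removed. Variable names are hypothetical; the real
# implementation lives in mrsimonemms/terraform-module-k3s.
resource "ssh_resource" "drain_worker" {
  when = "destroy" # run these commands on destroy, not on create

  host        = var.manager_ip
  user        = "k3smanager"
  port        = "2244"
  private_key = var.ssh_private_key

  # Cordon, drain and delete the node so workloads move off it cleanly
  # before the underlying Hetzner server is destroyed.
  commands = [
    "sudo kubectl cordon ${var.node_name}",
    "sudo kubectl drain ${var.node_name} --delete-emptydir-data --force --ignore-daemonsets --timeout=30s",
    "sudo kubectl delete node ${var.node_name} --force --timeout=30s",
  ]

  triggers = {
    node_name = var.node_name
  }
}
```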
mrsimonemms force-pushed the sje/resource-limits branch from ce08059 to 36efa26 on December 8, 2024 at 22:35
Execution result of "run-all plan" in "stacks/prod":
time=2024-12-08T22:37:45Z level=info msg=The stack at /github/workspace/stacks/prod will be processed in the following order for command plan:
Group 1
- Module /github/workspace/stacks/prod/hetzner
Group 2
- Module /github/workspace/stacks/prod/kubernetes
time=2024-12-08T22:37:45Z level=info msg=Downloading Terraform configurations from file:///github/workspace/modules/hetzner into /github/workspace/stacks/prod/hetzner/.terragrunt-cache/7n5v_ZVOv4gLIvn-SLBHuU7F7OI/B-HSI5LUu0nLTnyopQYP4SLEkoU prefix=[/github/workspace/stacks/prod/hetzner]
Initializing the backend...
Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing modules...
Downloading git::https://github.com/mrsimonemms/terraform-module-k3s.git for k3s...
- k3s in .terraform/modules/k3s
Initializing provider plugins...
- Reusing previous version of hashicorp/local from the dependency lock file
- Reusing previous version of loafoe/ssh from the dependency lock file
- Reusing previous version of hetznercloud/hcloud from the dependency lock file
- Installing hashicorp/local v2.5.1...
- Installed hashicorp/local v2.5.1 (signed by HashiCorp)
- Installing loafoe/ssh v2.7.0...
- Installed loafoe/ssh v2.7.0 (self-signed, key ID C0E4EB79E9E6A23D)
- Installing hetznercloud/hcloud v1.48.0...
- Installed hetznercloud/hcloud v1.48.0 (signed by a HashiCorp partner, key ID 5219EACB3A77198B)
Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html
Terraform has been successfully initialized!
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# hcloud_firewall.firewall will be created
+ resource "hcloud_firewall" "firewall" {
+ id = (known after apply)
+ labels = {
+ "simonemms.com/project" = "k3s"
+ "simonemms.com/provisioner" = "terraform"
+ "simonemms.com/workspace" = "prod"
}
+ name = "prod-k3s-firewall"
+ apply_to {
+ label_selector = "simonemms.com/project=k3s,simonemms.com/provisioner=terraform,simonemms.com/workspace=prod"
+ server = (known after apply)
}
+ rule {
+ description = "Allow ICMP (ping)"
+ destination_ips = []
+ direction = "in"
+ protocol = "icmp"
+ source_ips = [
+ "0.0.0.0/0",
+ "::/0",
]
# (1 unchanged attribute hidden)
}
+ rule {
+ description = "Allow TCP access to port 443"
+ destination_ips = []
+ direction = "in"
+ port = "443"
+ protocol = "tcp"
+ source_ips = [
+ "0.0.0.0/0",
+ "::/0",
]
}
+ rule {
+ description = "Allow TCP access to port 80"
+ destination_ips = []
+ direction = "in"
+ port = "80"
+ protocol = "tcp"
+ source_ips = [
+ "0.0.0.0/0",
+ "::/0",
]
}
+ rule {
+ description = "Allow access to Kubernetes API"
+ destination_ips = []
+ direction = "in"
+ port = "6443"
+ protocol = "tcp"
+ source_ips = [
+ "0.0.0.0/0",
+ "::/0",
]
}
+ rule {
+ description = "Allow all TCP traffic on private network"
+ destination_ips = []
+ direction = "in"
+ port = "any"
+ protocol = "tcp"
+ source_ips = [
+ "10.0.0.0/16",
]
}
+ rule {
+ description = "Allow all UDP traffic on private network"
+ destination_ips = []
+ direction = "in"
+ port = "any"
+ protocol = "udp"
+ source_ips = [
+ "10.0.0.0/16",
]
}
+ rule {
+ description = "SSH port"
+ destination_ips = []
+ direction = "in"
+ port = "2244"
+ protocol = "tcp"
+ source_ips = [
+ "0.0.0.0/0",
+ "::/0",
]
}
+ rule {
+ description = "Unifi controller"
+ destination_ips = []
+ direction = "in"
+ port = "8080"
+ protocol = "tcp"
+ source_ips = [
+ "0.0.0.0/0",
+ "::/0",
]
}
+ rule {
+ description = "Unifi discovery"
+ destination_ips = []
+ direction = "in"
+ port = "10001"
+ protocol = "udp"
+ source_ips = [
+ "0.0.0.0/0",
+ "::/0",
]
}
+ rule {
+ description = "Unifi speedtest"
+ destination_ips = []
+ direction = "in"
+ port = "6789"
+ protocol = "tcp"
+ source_ips = [
+ "0.0.0.0/0",
+ "::/0",
]
}
+ rule {
+ description = "Unifi stun"
+ destination_ips = []
+ direction = "in"
+ port = "3478"
+ protocol = "udp"
+ source_ips = [
+ "0.0.0.0/0",
+ "::/0",
]
}
+ rule {
+ description = "Unifi syslog"
+ destination_ips = []
+ direction = "in"
+ port = "5514"
+ protocol = "udp"
+ source_ips = [
+ "0.0.0.0/0",
+ "::/0",
]
}
}
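The twelve `rule` blocks above would typically be generated from a single list variable rather than written out by hand. Purely as a sketch, one way modules/hetzner could express that with a `dynamic` block; the variable name and shape are assumptions, not copied from the repository:

```hcl
# Sketch only: driving hcloud_firewall rules from a list variable.
# The variable name and object shape are hypothetical.
variable "firewall_rules" {
  type = list(object({
    description = string
    direction   = string
    protocol    = string
    port        = optional(string)
    source_ips  = list(string)
  }))
}

resource "hcloud_firewall" "firewall" {
  name = "prod-k3s-firewall"

  dynamic "rule" {
    for_each = var.firewall_rules
    content {
      description = rule.value.description
      direction   = rule.value.direction
      protocol    = rule.value.protocol
      port        = rule.value.port
      source_ips  = rule.value.source_ips
    }
  }
}
```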
# hcloud_network.network will be created
+ resource "hcloud_network" "network" {
+ delete_protection = false
+ expose_routes_to_vswitch = false
+ id = (known after apply)
+ ip_range = "10.0.0.0/16"
+ labels = {
+ "simonemms.com/project" = "k3s"
+ "simonemms.com/provisioner" = "terraform"
+ "simonemms.com/workspace" = "prod"
}
+ name = "prod-k3s-network"
}
# hcloud_network_subnet.subnet will be created
+ resource "hcloud_network_subnet" "subnet" {
+ gateway = (known after apply)
+ id = (known after apply)
+ ip_range = "10.0.0.0/16"
+ network_id = (known after apply)
+ network_zone = "eu-central"
+ type = "cloud"
}
# hcloud_placement_group.workers["pool1"] will be created
+ resource "hcloud_placement_group" "workers" {
+ id = (known after apply)
+ labels = {
+ "simonemms.com/project" = "k3s"
+ "simonemms.com/provisioner" = "terraform"
+ "simonemms.com/type" = "worker"
+ "simonemms.com/workspace" = "prod"
}
+ name = "prod-k3s-pool1"
+ servers = (known after apply)
+ type = "spread"
}
# hcloud_server.manager[0] will be created
+ resource "hcloud_server" "manager" {
+ allow_deprecated_images = false
+ backup_window = (known after apply)
+ backups = false
+ datacenter = (known after apply)
+ delete_protection = false
+ firewall_ids = (known after apply)
+ id = (known after apply)
+ ignore_remote_firewall_ids = false
+ image = "ubuntu-24.04"
+ ipv4_address = (known after apply)
+ ipv6_address = (known after apply)
+ ipv6_network = (known after apply)
+ keep_disk = false
+ labels = {
+ "simonemms.com/project" = "k3s"
+ "simonemms.com/provisioner" = "terraform"
+ "simonemms.com/type" = "manager"
+ "simonemms.com/workspace" = "prod"
}
+ location = "nbg1"
+ name = "prod-k3s-manager-0"
+ primary_disk_size = (known after apply)
+ rebuild_protection = false
+ server_type = "cx32"
+ shutdown_before_deletion = false
+ ssh_keys = (known after apply)
+ status = (known after apply)
+ user_data = "klLG1jO14ZIPyCjeBE5aD7/BenA="
+ network {
+ alias_ips = []
+ ip = (known after apply)
+ mac_address = (known after apply)
+ network_id = (known after apply)
}
+ public_net {
+ ipv4 = (known after apply)
+ ipv4_enabled = true
+ ipv6 = (known after apply)
+ ipv6_enabled = true
}
}
# hcloud_server.workers[0] will be created
+ resource "hcloud_server" "workers" {
+ allow_deprecated_images = false
+ backup_window = (known after apply)
+ backups = false
+ datacenter = (known after apply)
+ delete_protection = false
+ firewall_ids = (known after apply)
+ id = (known after apply)
+ ignore_remote_firewall_ids = false
+ image = "ubuntu-24.04"
+ ipv4_address = (known after apply)
+ ipv6_address = (known after apply)
+ ipv6_network = (known after apply)
+ keep_disk = false
+ labels = {
+ "simonemms.com/pool" = "pool1"
+ "simonemms.com/project" = "k3s"
+ "simonemms.com/provisioner" = "terraform"
+ "simonemms.com/type" = "worker"
+ "simonemms.com/workspace" = "prod"
}
+ location = "nbg1"
+ name = "prod-k3s-pool1-0"
+ placement_group_id = (known after apply)
+ primary_disk_size = (known after apply)
+ rebuild_protection = false
+ server_type = "cx32"
+ shutdown_before_deletion = false
+ ssh_keys = (known after apply)
+ status = (known after apply)
+ user_data = "klLG1jO14ZIPyCjeBE5aD7/BenA="
+ network {
+ alias_ips = []
+ ip = (known after apply)
+ mac_address = (known after apply)
+ network_id = (known after apply)
}
+ public_net {
+ ipv4 = (known after apply)
+ ipv4_enabled = true
+ ipv6 = (known after apply)
+ ipv6_enabled = true
}
}
# hcloud_server.workers[1] will be created
+ resource "hcloud_server" "workers" {
+ allow_deprecated_images = false
+ backup_window = (known after apply)
+ backups = false
+ datacenter = (known after apply)
+ delete_protection = false
+ firewall_ids = (known after apply)
+ id = (known after apply)
+ ignore_remote_firewall_ids = false
+ image = "ubuntu-24.04"
+ ipv4_address = (known after apply)
+ ipv6_address = (known after apply)
+ ipv6_network = (known after apply)
+ keep_disk = false
+ labels = {
+ "simonemms.com/pool" = "pool1"
+ "simonemms.com/project" = "k3s"
+ "simonemms.com/provisioner" = "terraform"
+ "simonemms.com/type" = "worker"
+ "simonemms.com/workspace" = "prod"
}
+ location = "nbg1"
+ name = "prod-k3s-pool1-1"
+ placement_group_id = (known after apply)
+ primary_disk_size = (known after apply)
+ rebuild_protection = false
+ server_type = "cx32"
+ shutdown_before_deletion = false
+ ssh_keys = (known after apply)
+ status = (known after apply)
+ user_data = "klLG1jO14ZIPyCjeBE5aD7/BenA="
+ network {
+ alias_ips = []
+ ip = (known after apply)
+ mac_address = (known after apply)
+ network_id = (known after apply)
}
+ public_net {
+ ipv4 = (known after apply)
+ ipv4_enabled = true
+ ipv6 = (known after apply)
+ ipv6_enabled = true
}
}
# hcloud_ssh_key.server will be created
+ resource "hcloud_ssh_key" "server" {
+ fingerprint = (known after apply)
+ id = (known after apply)
+ labels = {
+ "simonemms.com/project" = "k3s"
+ "simonemms.com/provisioner" = "terraform"
+ "simonemms.com/type" = "manager"
+ "simonemms.com/workspace" = "prod"
}
+ name = "prod-k3s-ssh_key"
+ public_key = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGPjfqG/QomY6qu9pWp+/ioQ98QGGDh+rYlHEgrgHOQr homelab"
}
# local_sensitive_file.kubeconfig will be created
+ resource "local_sensitive_file" "kubeconfig" {
+ content = (sensitive value)
+ content_base64sha256 = (known after apply)
+ content_base64sha512 = (known after apply)
+ content_md5 = (known after apply)
+ content_sha1 = (known after apply)
+ content_sha256 = (known after apply)
+ content_sha512 = (known after apply)
+ directory_permission = "0755"
+ file_permission = "0600"
+ filename = "/github/workspace/.kubeconfig"
+ id = (known after apply)
}
# ssh_resource.manager_ready[0] will be created
+ resource "ssh_resource" "manager_ready" {
+ agent = false
+ bastion_port = "22"
+ commands = [
+ "cloud-init status | grep \"status: done\"",
]
+ commands_after_file_changes = true
+ host = (known after apply)
+ id = (known after apply)
+ ignore_no_supported_methods_remain = false
+ port = "2244"
+ private_key = (sensitive value)
+ result = (known after apply)
+ retry_delay = "5s"
+ timeout = "5m"
+ user = "k3smanager"
+ when = "create"
}
# ssh_resource.workers_ready[0] will be created
+ resource "ssh_resource" "workers_ready" {
+ agent = false
+ bastion_port = "22"
+ commands = [
+ "cloud-init status | grep \"status: done\"",
]
+ commands_after_file_changes = true
+ host = (known after apply)
+ id = (known after apply)
+ ignore_no_supported_methods_remain = false
+ port = "2244"
+ private_key = (sensitive value)
+ result = (known after apply)
+ retry_delay = "5s"
+ timeout = "5m"
+ user = "k3smanager"
+ when = "create"
}
# ssh_resource.workers_ready[1] will be created
+ resource "ssh_resource" "workers_ready" {
+ agent = false
+ bastion_port = "22"
+ commands = [
+ "cloud-init status | grep \"status: done\"",
]
+ commands_after_file_changes = true
+ host = (known after apply)
+ id = (known after apply)
+ ignore_no_supported_methods_remain = false
+ port = "2244"
+ private_key = (sensitive value)
+ result = (known after apply)
+ retry_delay = "5s"
+ timeout = "5m"
+ user = "k3smanager"
+ when = "create"
}
# module.k3s.ssh_resource.drain_workers["prod-k3s-pool1-0"] will be created
+ resource "ssh_resource" "drain_workers" {
+ agent = false
+ bastion_port = "22"
+ commands = [
+ "sudo kubectl cordon prod-k3s-pool1-0",
+ "sudo kubectl drain prod-k3s-pool1-0 --delete-emptydir-data --force --ignore-daemonsets --timeout=30s",
+ "sudo kubectl delete node prod-k3s-pool1-0 --force --timeout=30s",
]
+ commands_after_file_changes = true
+ host = (known after apply)
+ id = (known after apply)
+ ignore_no_supported_methods_remain = false
+ port = "2244"
+ private_key = (sensitive value)
+ result = (known after apply)
+ retry_delay = "10s"
+ timeout = "5m"
+ triggers = {
+ "node_name" = "prod-k3s-pool1-0"
}
+ user = "k3smanager"
+ when = "destroy"
}
# module.k3s.ssh_resource.drain_workers["prod-k3s-pool1-1"] will be created
+ resource "ssh_resource" "drain_workers" {
+ agent = false
+ bastion_port = "22"
+ commands = [
+ "sudo kubectl cordon prod-k3s-pool1-1",
+ "sudo kubectl drain prod-k3s-pool1-1 --delete-emptydir-data --force --ignore-daemonsets --timeout=30s",
+ "sudo kubectl delete node prod-k3s-pool1-1 --force --timeout=30s",
]
+ commands_after_file_changes = true
+ host = (known after apply)
+ id = (known after apply)
+ ignore_no_supported_methods_remain = false
+ port = "2244"
+ private_key = (sensitive value)
+ result = (known after apply)
+ retry_delay = "10s"
+ timeout = "5m"
+ triggers = {
+ "node_name" = "prod-k3s-pool1-1"
}
+ user = "k3smanager"
+ when = "destroy"
}
# module.k3s.ssh_resource.initial_manager will be created
+ resource "ssh_resource" "initial_manager" {
+ agent = false
+ bastion_port = "22"
+ commands = [
+ "sudo mkdir -p /etc/rancher/k3s/config.yaml.d",
+ "sudo mv /tmp/k3sconfig.yaml /etc/rancher/k3s/config.yaml",
+ "echo \"flannel-iface: $(ip route get 10.0.0.0 | awk -F \"dev \" 'NR==1{split($2, a, \" \"); print a[1]}')\" | sudo tee -a /etc/rancher/k3s/config.yaml.d/flannel.yaml",
+ "curl -sfL https://get.k3s.io | INSTALL_K3S_CHANNEL=stable sh -",
+ "sudo systemctl start k3s",
+ "until sudo kubectl get node prod-k3s-manager-0; do sleep 1; done",
]
+ commands_after_file_changes = true
+ host = (known after apply)
+ id = (known after apply)
+ ignore_no_supported_methods_remain = false
+ port = "2244"
+ private_key = (sensitive value)
+ result = (known after apply)
+ retry_delay = "10s"
+ timeout = "5m"
+ triggers = {
+ "channel" = "stable"
}
+ user = "k3smanager"
+ when = "create"
+ file {
+ content = (known after apply)
+ destination = "/tmp/k3sconfig.yaml"
# (4 unchanged attributes hidden)
}
}
# module.k3s.ssh_resource.install_workers["prod-k3s-pool1-0"] will be created
+ resource "ssh_resource" "install_workers" {
+ agent = false
+ bastion_port = "22"
+ commands = [
+ "sudo mkdir -p /etc/rancher/k3s/config.yaml.d",
+ "sudo mv /tmp/k3sconfig.yaml /etc/rancher/k3s/config.yaml",
+ "curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC=\"agent\" sh -",
]
+ commands_after_file_changes = true
+ host = (known after apply)
+ id = (known after apply)
+ ignore_no_supported_methods_remain = false
+ port = "2244"
+ private_key = (sensitive value)
+ result = (known after apply)
+ retry_delay = "10s"
+ timeout = "5m"
+ triggers = {
+ "channel" = "stable"
}
+ user = "k3smanager"
+ when = "create"
+ file {
+ content = (known after apply)
+ destination = "/tmp/k3sconfig.yaml"
# (4 unchanged attributes hidden)
}
}
# module.k3s.ssh_resource.install_workers["prod-k3s-pool1-1"] will be created
+ resource "ssh_resource" "install_workers" {
+ agent = false
+ bastion_port = "22"
+ commands = [
+ "sudo mkdir -p /etc/rancher/k3s/config.yaml.d",
+ "sudo mv /tmp/k3sconfig.yaml /etc/rancher/k3s/config.yaml",
+ "curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC=\"agent\" sh -",
]
+ commands_after_file_changes = true
+ host = (known after apply)
+ id = (known after apply)
+ ignore_no_supported_methods_remain = false
+ port = "2244"
+ private_key = (sensitive value)
+ result = (known after apply)
+ retry_delay = "10s"
+ timeout = "5m"
+ triggers = {
+ "channel" = "stable"
}
+ user = "k3smanager"
+ when = "create"
+ file {
+ content = (known after apply)
+ destination = "/tmp/k3sconfig.yaml"
# (4 unchanged attributes hidden)
}
}
# module.k3s.ssh_sensitive_resource.join_token will be created
+ resource "ssh_sensitive_resource" "join_token" {
+ agent = false
+ bastion_port = "22"
+ commands = [
+ "sudo cat /var/lib/rancher/k3s/server/token",
]
+ commands_after_file_changes = true
+ host = (known after apply)
+ id = (known after apply)
+ ignore_no_supported_methods_remain = false
+ port = "2244"
+ private_key = (sensitive value)
+ result = (sensitive value)
+ retry_delay = "10s"
+ timeout = "5m"
+ user = "k3smanager"
+ when = "create"
}
# module.k3s.ssh_sensitive_resource.kubeconfig will be created
+ resource "ssh_sensitive_resource" "kubeconfig" {
+ agent = false
+ bastion_port = "22"
+ commands = (known after apply)
+ commands_after_file_changes = true
+ host = (known after apply)
+ id = (known after apply)
+ ignore_no_supported_methods_remain = false
+ port = "2244"
+ private_key = (sensitive value)
+ result = (sensitive value)
+ retry_delay = "10s"
+ timeout = "5m"
+ user = "k3smanager"
+ when = "create"
}
Plan: 19 to add, 0 to change, 0 to destroy.
Changes to Outputs:
+ hcloud_network_name = "prod-k3s-network"
+ k3s_cluster_cidr = "10.42.0.0/16"
+ kube_api_server = (sensitive value)
+ kubeconfig = (sensitive value)
+ location = "nbg1"
+ network_name = "prod-k3s-network"
+ pools = (sensitive value)
+ region = "eu-central"
+ ssh_port = 2244
+ ssh_user = "k3smanager"
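The kubernetes stack only plans after the hetzner stack (Group 2 in the ordering above) because it consumes the hetzner outputs, most importantly the kubeconfig. A hedged sketch of how stacks/prod/kubernetes/terragrunt.hcl could wire that up; the dependency and input names below are assumptions, not copied from the repository:

```hcl
# Sketch only: a Terragrunt dependency that forces the hetzner stack to be
# planned/applied first and feeds its outputs into the kubernetes stack.
# Paths match the log above; input names are hypothetical.
include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "${get_repo_root()}/modules/kubernetes"
}

dependency "hetzner" {
  config_path = "../hetzner"

  # Placeholders so "run-all plan" can proceed before hetzner is applied.
  mock_outputs = {
    kubeconfig   = "mock-kubeconfig"
    network_name = "mock-network"
  }
}

inputs = {
  kubeconfig   = dependency.hetzner.outputs.kubeconfig
  network_name = dependency.hetzner.outputs.network_name
}
```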
time=2024-12-08T22:37:56Z level=info msg=Downloading Terraform configurations from file:///github/workspace/modules/kubernetes into /github/workspace/stacks/prod/kubernetes/.terragrunt-cache/-HGVTuUtXSFDQCN7IIesL6CulRY/z4vfL_CY3720zQ-fo9fFtb8YbxA prefix=[/github/workspace/stacks/prod/kubernetes]
Initializing the backend...
Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Reusing previous version of hashicorp/kubernetes from the dependency lock file
- Reusing previous version of hashicorp/helm from the dependency lock file
- Reusing previous version of infisical/infisical from the dependency lock file
- Installing hashicorp/kubernetes v2.31.0...
- Installed hashicorp/kubernetes v2.31.0 (signed by HashiCorp)
- Installing hashicorp/helm v2.14.1...
- Installed hashicorp/helm v2.14.1 (signed by HashiCorp)
- Installing infisical/infisical v0.12.4...
- Installed infisical/infisical v0.12.4 (self-signed, key ID 2513406FB39E8BB6)
Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html
Terraform has been successfully initialized!
data.infisical_secrets.common_secrets: Reading...
data.infisical_secrets.common_secrets: Read complete after 1s
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
+ create
<= read (data resources)
Terraform will perform the following actions:
# data.kubernetes_nodes.cluster will be read during apply
# (depends on a resource or a module with changes pending)
<= data "kubernetes_nodes" "cluster" {
+ id = (known after apply)
+ nodes = (known after apply)
}
# helm_release.argocd will be created
+ resource "helm_release" "argocd" {
+ atomic = true
+ chart = "argo-cd"
+ cleanup_on_fail = true
+ create_namespace = true
+ dependency_update = false
+ disable_crd_hooks = false
+ disable_openapi_validation = false
+ disable_webhooks = false
+ force_update = false
+ id = (known after apply)
+ lint = false
+ manifest = (known after apply)
+ max_history = 0
+ metadata = (known after apply)
+ name = "argocd"
+ namespace = "argocd"
+ pass_credentials = false
+ recreate_pods = false
+ render_subchart_notes = true
+ replace = false
+ repository = "https://argoproj.github.io/argo-helm"
+ reset_values = true
+ reuse_values = false
+ skip_crds = false
+ status = "deployed"
+ timeout = 600
+ values = [
+ <<-EOT
global:
addPrometheusAnnotations: true
deploymentAnnotations:
secret.reloader.stakater.com/reload: oidc
domain: argocd.simonemms.com
applicationSet:
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 100m
memory: 128Mi
controller:
emptyDir:
sizeLimit: 500Mi
replicas: 2
resources:
requests:
cpu: 250m
memory: 256Mi
limits:
cpu: 1000m
memory: 1024Mi
notifications:
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 100m
memory: 128Mi
redis:
enabled: false
redis-ha:
enabled: true
haproxy:
resources:
limits:
memory: 500Mi
cpu: 500m
requests:
cpu: 250m
memory: 256Mi
redis:
resources:
limits:
cpu: 500m
memory: 700Mi
requests:
memory: 200Mi
cpu: 100m
repoServer:
autoscaling:
enabled: true
minReplicas: 2
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 500m
memory: 256Mi
dex:
enabled: false
server:
autoscaling:
enabled: true
minReplicas: 2
resources:
requests:
cpu: 10m
memory: 64Mi
limits:
cpu: 50m
memory: 128Mi
ingress:
enabled: true
ingressClassName: nginx
annotations:
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/backend-protocol: HTTP
kubernetes.io/tls-acme: "true"
cert-manager.io/cluster-issuer: letsencrypt
gethomepage.dev/description: Get stuff done with Kubernetes!
gethomepage.dev/enabled: "true"
gethomepage.dev/group: Cluster
gethomepage.dev/icon: argocd
gethomepage.dev/name: ArgoCD
gethomepage.dev/widget.type: argocd
gethomepage.dev/widget.url: http://argocd-server.argocd.svc.cluster.local
gethomepage.dev/widget.key: "{{HOMEPAGE_VAR_ARGOCD_KEY}}"
tls: true
extraTLS:
- hosts:
- argocd.simonemms.com
secretName: argocd-tls
configs:
cm:
admin.enabled: false
oidc.config: |-
"clientID": "$oidc:clientId"
"clientSecret": "$oidc:clientSecret"
"issuer": "https://oidc.simonemms.com"
"name": "OIDC"
oidc.tls.insecure.skip.verify: false
statusbadge.enabled: true
url: https://argocd.simonemms.com
"accounts.homepage": "apiKey"
params:
server.insecure: true
rbac:
create: true
policy.csv: |
p, role:org-admin, applications, *, *, allow
p, role:org-admin, applicationsets, *, *, allow
p, role:org-admin, clusters, *, *, allow
p, role:org-admin, projects, *, *, allow
p, role:org-admin, repositories, *, *, allow
p, role:org-admin, accounts, *, *, allow
p, role:org-admin, certificates, *, *, allow
p, role:org-admin, gpgkeys, *, *, allow
p, role:org-admin, logs, *, *, allow
p, role:org-admin, exec, *, *, allow
p, role:org-admin, extensions, *, *, allow
g, mrsimonemmsorg:home-admin, role:org-admin
g, homepage, role:readonly
EOT,
]
+ verify = false
+ version = "7.7.7"
+ wait = true
+ wait_for_jobs = false
}
# helm_release.hcloud_ccm will be created
+ resource "helm_release" "hcloud_ccm" {
+ atomic = true
+ chart = "hcloud-cloud-controller-manager"
+ cleanup_on_fail = true
+ create_namespace = false
+ dependency_update = false
+ disable_crd_hooks = false
+ disable_openapi_validation = false
+ disable_webhooks = false
+ force_update = false
+ id = (known after apply)
+ lint = false
+ manifest = (known after apply)
+ max_history = 0
+ metadata = (known after apply)
+ name = "hccm"
+ namespace = "kube-system"
+ pass_credentials = false
+ recreate_pods = false
+ render_subchart_notes = true
+ replace = false
+ repository = "https://charts.hetzner.cloud"
+ reset_values = true
+ reuse_values = false
+ skip_crds = false
+ status = "deployed"
+ timeout = 300
+ values = [
+ <<-EOT
networking:
enabled: true
env:
HCLOUD_LOAD_BALANCERS_ENABLED:
value: "false"
EOT,
]
+ verify = false
+ version = "1.21.0"
+ wait = true
+ wait_for_jobs = false
+ set {
# At least one attribute in this block is (or was) sensitive,
# so its contents will not be displayed.
}
+ set {
# At least one attribute in this block is (or was) sensitive,
# so its contents will not be displayed.
}
}
# helm_release.hcloud_csi will be created
+ resource "helm_release" "hcloud_csi" {
+ atomic = true
+ chart = "hcloud-csi"
+ cleanup_on_fail = true
+ create_namespace = false
+ dependency_update = false
+ disable_crd_hooks = false
+ disable_openapi_validation = false
+ disable_webhooks = false
+ force_update = false
+ id = (known after apply)
+ lint = false
+ manifest = (known after apply)
+ max_history = 0
+ metadata = (known after apply)
+ name = "hcsi"
+ namespace = "kube-system"
+ pass_credentials = false
+ recreate_pods = false
+ render_subchart_notes = true
+ replace = false
+ repository = "https://charts.hetzner.cloud"
+ reset_values = true
+ reuse_values = false
+ skip_crds = false
+ status = "deployed"
+ timeout = 300
+ verify = false
+ version = "2.11.0"
+ wait = true
+ wait_for_jobs = false
+ set {
# At least one attribute in this block is (or was) sensitive,
# so its contents will not be displayed.
}
+ set {
# At least one attribute in this block is (or was) sensitive,
# so its contents will not be displayed.
}
+ set {
# At least one attribute in this block is (or was) sensitive,
# so its contents will not be displayed.
}
+ set {
# At least one attribute in this block is (or was) sensitive,
# so its contents will not be displayed.
}
+ set {
# At least one attribute in this block is (or was) sensitive,
# so its contents will not be displayed.
}
}
# kubernetes_config_map_v1.metallb will be created
+ resource "kubernetes_config_map_v1" "metallb" {
+ data = (known after apply)
+ id = (known after apply)
+ immutable = false
+ metadata {
+ generation = (known after apply)
+ name = "nodes"
+ namespace = "metallb-system"
+ resource_version = (known after apply)
+ uid = (known after apply)
}
}
# kubernetes_namespace_v1.argocd will be created
+ resource "kubernetes_namespace_v1" "argocd" {
+ id = (known after apply)
+ wait_for_default_service_account = false
+ metadata {
+ generation = (known after apply)
+ name = "argocd"
+ resource_version = (known after apply)
+ uid = (known after apply)
}
}
# kubernetes_namespace_v1.external_secrets will be created
+ resource "kubernetes_namespace_v1" "external_secrets" {
+ id = (known after apply)
+ wait_for_default_service_account = false
+ metadata {
+ generation = (known after apply)
+ name = "external-secrets"
+ resource_version = (known after apply)
+ uid = (known after apply)
}
}
# kubernetes_namespace_v1.metallb will be created
+ resource "kubernetes_namespace_v1" "metallb" {
+ id = (known after apply)
+ wait_for_default_service_account = false
+ metadata {
+ generation = (known after apply)
+ name = "metallb-system"
+ resource_version = (known after apply)
+ uid = (known after apply)
}
}
# kubernetes_secret_v1.hcloud will be created
+ resource "kubernetes_secret_v1" "hcloud" {
+ data = (sensitive value)
+ id = (known after apply)
+ type = "Opaque"
+ wait_for_service_account_token = true
+ metadata {
+ generation = (known after apply)
+ name = "hcloud"
+ namespace = "kube-system"
+ resource_version = (known after apply)
+ uid = (known after apply)
}
}
# kubernetes_secret_v1.infisical will be created
+ resource "kubernetes_secret_v1" "infisical" {
+ data = (sensitive value)
+ id = (known after apply)
+ type = "opaque"
+ wait_for_service_account_token = true
+ metadata {
+ generation = (known after apply)
+ name = "infisical"
+ namespace = "external-secrets"
+ resource_version = (known after apply)
+ uid = (known after apply)
}
}
# kubernetes_secret_v1.oidc_secret will be created
+ resource "kubernetes_secret_v1" "oidc_secret" {
+ data = (sensitive value)
+ id = (known after apply)
+ type = "Opaque"
+ wait_for_service_account_token = true
+ metadata {
+ generation = (known after apply)
+ labels = {
+ "app.kubernetes.io/part-of" = "argocd"
}
+ name = "oidc"
+ namespace = "argocd"
+ resource_version = (known after apply)
+ uid = (known after apply)
}
}
Plan: 10 to add, 0 to change, 0 to destroy.
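For context on the PR title: the resource limits land through the chart values shown above, one requests/limits block per component. A minimal sketch of how such a block can be passed via `helm_release`; the `repoServer` figures mirror the plan output, everything else here is illustrative:

```hcl
# Sketch only: passing per-component resource requests/limits to a chart
# through helm_release values. Figures for repoServer match the plan above.
resource "helm_release" "argocd" {
  name       = "argocd"
  namespace  = "argocd"
  repository = "https://argoproj.github.io/argo-helm"
  chart      = "argo-cd"

  values = [
    yamlencode({
      repoServer = {
        resources = {
          requests = { cpu = "100m", memory = "128Mi" }
          limits   = { cpu = "500m", memory = "256Mi" }
        }
      }
    })
  ]
}
```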
Description
Related Issue(s)
Fixes #
How to test