(Experimental) Functional backend setup for frontend dev

Install dependencies

# build dependencies for admin-ui
snap install rockcraft --classic
snap install yq
apt install make
# Ensure skopeo is in the path
sudo ln -s /snap/rockcraft/current/bin/skopeo /usr/local/bin/skopeo

# cluster dependencies
snap install microk8s --channel=1.28-strict/stable
microk8s status --wait-ready
microk8s enable registry
# ensure kubectl is configured to use microk8s
microk8s.kubectl config view --raw > $HOME/.kube/config
# Alias kubectl so that it can be used by Skaffold
snap alias microk8s.kubectl kubectl
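
# Optional sanity check: the node should be Ready and the registry addon's
# pods (which run in the container-registry namespace) should be running
kubectl get nodes
kubectl get pods -n container-registry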

# juju IAM deployment dependencies
snap install juju
mkdir -p ~/.local/share/juju
microk8s enable metallb:10.64.140.43-10.64.140.49
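
# Optionally confirm that MetalLB came up; its pods should appear in the
# metallb-system namespace
microk8s.kubectl get pods -n metallb-system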

Install skaffold

curl -Lo skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64 && \
sudo install skaffold /usr/local/bin/
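
You can check that the binary is on your PATH with:

skaffold version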

Install container-structure-test

curl -LO https://github.com/GoogleContainerTools/container-structure-test/releases/latest/download/container-structure-test-linux-amd64 && chmod +x container-structure-test-linux-amd64 && sudo mv container-structure-test-linux-amd64 /usr/local/bin/container-structure-test
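
Likewise, verify the installation with:

container-structure-test version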

Install docker

# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to Apt sources:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
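
You can verify that the Docker engine works by running the hello-world image:

sudo docker run hello-world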

Deploy IAM stack with Juju

This spins up several dependencies, including Hydra, Kratos and PostgreSQL, inside a k8s cluster under the iam namespace.

juju bootstrap # choose option microk8s
juju add-model iam
juju deploy identity-platform --trust --channel latest/edge
juju status --watch 1s
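
The deployment is done once every application in the juju status output reports an active status. If an application gets stuck in waiting or error, the model logs may help:

juju debug-log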

Deploy admin-ui

1. Check k8s object deployed by Juju

Once the Juju deployment is done, you should be able to list the deployed k8s services with the command below (example output shown). Take note of the names of the Kratos and Hydra ClusterIP services, e.g. kratos and hydra:

kubectl get svc -n iam

NAME                                            TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)                       AGE
modeloperator                                   ClusterIP      10.152.183.169   <none>         17071/TCP                     19h
hydra-endpoints                                 ClusterIP      None             <none>         <none>                        19h
identity-platform-login-ui-operator-endpoints   ClusterIP      None             <none>         <none>                        19h
kratos-endpoints                                ClusterIP      None             <none>         <none>                        19h
kratos-external-idp-integrator                  ClusterIP      10.152.183.129   <none>         65535/TCP                     19h
kratos-external-idp-integrator-endpoints        ClusterIP      None             <none>         <none>                        19h
postgresql-k8s-endpoints                        ClusterIP      None             <none>         <none>                        19h
self-signed-certificates                        ClusterIP      10.152.183.124   <none>         65535/TCP                     19h
self-signed-certificates-endpoints              ClusterIP      None             <none>         <none>                        19h
traefik-admin-endpoints                         ClusterIP      None             <none>         <none>                        19h
traefik-public-endpoints                        ClusterIP      None             <none>         <none>                        19h
postgresql-k8s-primary                          ClusterIP      10.152.183.199   <none>         8008/TCP,5432/TCP             19h
postgresql-k8s-replicas                         ClusterIP      10.152.183.84    <none>         8008/TCP,5432/TCP             19h
patroni-postgresql-k8s-config                   ClusterIP      None             <none>         <none>                        19h
kratos                                          ClusterIP      10.152.183.136   <none>         65535/TCP,4434/TCP,4433/TCP   19h
identity-platform-login-ui-operator             ClusterIP      10.152.183.178   <none>         65535/TCP,8080/TCP            19h
hydra                                           ClusterIP      10.152.183.110   <none>         65535/TCP,4445/TCP,4444/TCP   19h
traefik-public                                  LoadBalancer   10.152.183.130   10.64.140.43   80:32726/TCP,443:30324/TCP    19h
traefik-admin                                   LoadBalancer   10.152.183.152   10.64.140.44   80:30718/TCP,443:30137/TCP    19h
postgresql-k8s                                  ClusterIP      10.152.183.132   <none>         5432/TCP,8008/TCP             19h
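
If you only care about the Kratos and Hydra ClusterIP services, you can filter the listing, for example:

kubectl get svc -n iam | grep -E '^(kratos|hydra) '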

2. Update and apply admin-ui configMap

Next, update the identity-platform-admin-ui configMap located at deployments/kubectl/configMap.yaml to match what is shown below:

apiVersion: v1
kind: ConfigMap
metadata:
  name: identity-platform-admin-ui
data:
  PORT: "8000"
  LOG_LEVEL: DEBUG
  TRACING_ENABLED: "false"
  # KRATOS_PUBLIC_URL: http://kratos-public.default.svc.cluster.local
  # KRATOS_ADMIN_URL: http://kratos-public.default.svc.cluster.local
  # HYDRA_ADMIN_URL: http://hydra-admin.default.svc.cluster.local:4445
  # OATHKEEPER_PUBLIC_URL: http://oathkeeper-api.default.svc.cluster.local:4456
  KRATOS_PUBLIC_URL: http://kratos.iam.svc.cluster.local:4433
  KRATOS_ADMIN_URL: http://kratos.iam.svc.cluster.local:4434
  HYDRA_ADMIN_URL: http://hydra.iam.svc.cluster.local:4445
  OATHKEEPER_PUBLIC_URL: http://oathkeeper-api.iam.svc.cluster.local:4456
  IDP_CONFIGMAP_NAME: idps
  # IDP_CONFIGMAP_NAMESPACE: default
  IDP_CONFIGMAP_NAMESPACE: iam
  SCHEMAS_CONFIGMAP_NAME: identity-schemas
  # SCHEMAS_CONFIGMAP_NAMESPACE: default
  SCHEMAS_CONFIGMAP_NAMESPACE: iam
  RULES_CONFIGMAP_NAME: oathkeeper-rules
  RULES_CONFIGMAP_FILE_NAME: access-rules.json
  # RULES_CONFIGMAP_NAMESPACE: default
  RULES_CONFIGMAP_NAMESPACE: iam
  OPENFGA_API_SCHEME: http
  OPENFGA_API_HOST: openfga.default.svc.cluster.local:8080
  OPENFGA_API_TOKEN: "42"
  AUTHORIZATION_ENABLED: "false"

Note that the commented-out environment variables are the original values. The URLs for the Kratos and Hydra services follow the structure http://[service-name].[namespace].svc.cluster.local. Once configMap.yaml is updated, apply it to the cluster with:

kubectl apply -f deployments/kubectl/configMap.yaml -n iam
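
You can confirm that the new values are in place with:

kubectl get configmap identity-platform-admin-ui -n iam -o yaml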

3. Build admin-ui image with skaffold

Next, build the image using Skaffold in preparation for setting up the identity-platform-admin-ui k8s deployment. Before building the image, remove the Helm-related dependencies from skaffold.yaml, so that your file looks something like this:

apiVersion: skaffold/v4beta6
kind: Config
build:
  artifacts:
  - image: "identity-platform-admin-ui"
    sync:
      infer:
      - "internal/"
      - "pkg/"
      - "cmd/main.go"
      - "go.mod"
      - "go.sum"
    custom:
      buildCommand: ./build.sh
      dependencies:
        paths:
          - rockcraft.yaml
    platforms: ["linux/amd64"]
  local:
    push: true

test:
  - image: "identity-platform-admin-ui"
    structureTests:
      - './structure-tests.yaml'


manifests:
  rawYaml:
    - "deployments/kubectl/*"

Then run the following command at the root level of the project to build the image:

SKAFFOLD_DEFAULT_REPO=localhost:32000 skaffold build

Once the image is built, take note of the image name printed in your terminal; it should look something like localhost:32000/identity-platform-admin-ui:ef798c4-dirty.
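
If you want to double-check that the image was pushed to the local microk8s registry, the registry's HTTP API can list its tags (assuming the registry addon listens on localhost:32000, its default):

curl http://localhost:32000/v2/identity-platform-admin-ui/tags/list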

4. Deploy admin-ui

To deploy admin-ui, make the following changes to deployments/kubectl/deployment.yaml:

  • Remove the OpenFGA dependencies from the deployment, since the Juju-deployed IAM stack does not include OpenFGA (authorization checks are disabled anyway, as AUTHORIZATION_ENABLED is set to "false" in configMap.yaml)
  • Use the correct image name for the admin-ui Deployment object: set spec.template.spec.containers[0].image to the image name observed in step 3.

Once done, your deployment.yaml file should look like the one shown below:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: identity-platform-admin-ui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: identity-platform-admin-ui
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: identity-platform-admin-ui
      annotations:
        prometheus.io/path: /api/v0/metrics
        prometheus.io/scrape: "true"
        prometheus.io/port: "8000"
    spec:
      containers:
      - image: localhost:32000/identity-platform-admin-ui:ef798c4-dirty
        name: identity-platform-admin-ui
        command:  ["/usr/bin/identity-platform-admin-ui", "serve"]
        envFrom:
          - configMapRef:
              name: identity-platform-admin-ui
        ports:
        - name: http
          containerPort: 8000
        readinessProbe:
          httpGet:
            path: "/api/v0/status"
            port: 8000
          initialDelaySeconds: 1
          failureThreshold: 10
          timeoutSeconds: 5
          periodSeconds: 30
        livenessProbe:
          httpGet:
            path: "/api/v0/status"
            port: 8000
          initialDelaySeconds: 1
          failureThreshold: 10
          timeoutSeconds: 5
          periodSeconds: 30

Then you can deploy the admin-ui to the k8s cluster with the following command:

kubectl apply -f deployments/kubectl/deployment.yaml -n iam

You can verify that the admin-ui deployment is running (the pod should be running without errors) with:

kubectl get pods -n iam
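
If the pod is crash-looping or failing its probes, the container logs usually reveal the cause (for example, a wrong service URL in the configMap):

kubectl logs -n iam deployment/identity-platform-admin-ui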

Then create the admin-ui service for networking (this assigns a ClusterIP service to the admin-ui deployment, exposing port 80):

kubectl apply -f deployments/kubectl/service.yaml -n iam

To allow the host to communicate with admin-ui, port-forward to the port exposed by the admin-ui service:

kubectl port-forward -n iam services/identity-platform-admin-ui 8000:80

The above command lets you connect to admin-ui via localhost:8000. You can try hitting an API endpoint as shown below:

curl http://localhost:8000/api/v0/idps
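
The status endpoint used by the readiness and liveness probes also makes a handy smoke test:

curl http://localhost:8000/api/v0/status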

Open Issues

1. Missing dependencies in Juju deployment

Currently the Juju deployment does not have the following dependencies:

  • OpenFGA
  • Ory Oathkeeper

2. Communicate with host machine

The setup is done in an LXD VM for environment isolation. The steps above manage to deploy the admin-ui backend in a k8s cluster inside the VM; however, ideally for frontend development we would spin up a web server running on the host machine and proxy traffic to the backend running within the VM. We haven't quite managed to get that working.

3. Endpoints still return no data after posting to them

  • /schemas