AUTO: Sync Helm Charts docs to ScalarDB Enterprise docs site repo

josh-wong committed Jun 14, 2024 · 1 parent f4c53b0 · commit 410e38c

Showing 8 changed files with 107 additions and 83 deletions.

### Image configurations

You must set `api.image.repository` and `web.image.repository`. Be sure to specify the Scalar Manager container image so that you can pull the image from the container repository.

```yaml
api:
  image:
    repository: <SCALAR_MANAGER_API_IMAGE>
web:
  image:
    repository: <SCALAR_MANAGER_WEB_IMAGE>
```

This section explains the required configurations when setting up a custom values file.

### Database configurations

To access databases via ScalarDB Analytics with PostgreSQL, you must set the `scalardbAnalyticsPostgreSQL.databaseProperties` parameter by following the same syntax that you use to configure the `database.properties` file. For details about configurations, see [ScalarDB Configurations](https://scalardb.scalar-labs.com/docs/latest/configurations/).

```yaml
scalardbAnalyticsPostgreSQL:
  databaseProperties: |
    # Illustrative properties; set the values for your database
    scalar.db.contact_points=<CONTACT_POINTS>
    scalar.db.storage=<STORAGE>
```

```yaml
scalardbCluster:
  tls:
    overrideAuthority: "cluster.scalardb.example.com"
```

##### Set a root CA certificate for Prometheus Operator

If you set `scalardbCluster.serviceMonitor.enabled=true` and `scalardbCluster.tls.enabled=true` (in other words, if you monitor ScalarDB Cluster with TLS configuration by using Prometheus Operator), you must set the secret name to `scalardbCluster.tls.caRootCertSecretForServiceMonitor`.

```yaml
scalardbCluster:
  tls:
    enabled: true
    caRootCertSecretForServiceMonitor: "scalardb-cluster-tls-ca-for-prometheus"
```

In this case, you must create a secret resource that includes the root CA certificate for ScalarDB Cluster in the same namespace as Prometheus, as follows:

```console
kubectl create secret generic scalardb-cluster-tls-ca-for-prometheus --from-file=ca.crt=/path/to/your/ca/certificate/file -n <NAMESPACE_SAME_AS_PROMETHEUS>
```

### Replica configurations (optional based on your environment)

You can specify the number of ScalarDB Cluster replicas (pods) by using `scalardbCluster.replicaCount`.
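
For example, a minimal values sketch that runs three replicas might look like the following (the count is illustrative):

```yaml
scalardbCluster:
  replicaCount: 3  # illustrative count
```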

```console
kubectl create secret generic scalardl-auditor-tls-ca-for-ledger --from-file=ca.crt=/path/to/your/ca/certificate/file
```

You can set the custom authority for TLS communications by using `auditor.tls.overrideAuthority`. This value doesn't change what host is actually connected to. This value is intended for testing but may safely be used outside of tests as an alternative to DNS overrides. For example, you can specify the hostname presented in the certificate chain file that you set by using `auditor.tls.certChainSecret`. This chart uses this value for `startupProbe` and `livenessProbe`.
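
For example, a hedged snippet that sets the authority to a hostname from your certificate chain might look like this (the hostname is illustrative):

```yaml
auditor:
  tls:
    overrideAuthority: "auditor.scalardl.example.com"  # illustrative hostname
```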

##### Set a root CA certificate for Prometheus Operator

If you set `auditor.serviceMonitor.enabled=true` and `auditor.tls.enabled=true` (in other words, if you monitor ScalarDL Auditor with TLS configuration by using Prometheus Operator), you must set the secret name to `auditor.tls.caRootCertSecretForServiceMonitor`.

```yaml
auditor:
  tls:
    enabled: true
    caRootCertSecretForServiceMonitor: "scalardl-auditor-tls-ca-for-prometheus"
```

In this case, you must create a secret resource that includes the root CA certificate for ScalarDL Auditor in the same namespace as Prometheus, as follows:

```console
kubectl create secret generic scalardl-auditor-tls-ca-for-prometheus --from-file=ca.crt=/path/to/your/ca/certificate/file -n <NAMESPACE_SAME_AS_PROMETHEUS>
```

### Replica configurations (optional based on your environment)

You can specify the number of replicas (pods) of ScalarDL Auditor by using `auditor.replicaCount`.
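
For example, a minimal values sketch (the count is illustrative):

```yaml
auditor:
  replicaCount: 3  # illustrative count
```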

```yaml
ledger:
  tls:
    overrideAuthority: "ledger.scalardl.example.com"
```

##### Set a root CA certificate for Prometheus Operator

If you set `ledger.serviceMonitor.enabled=true` and `ledger.tls.enabled=true` (in other words, if you monitor ScalarDL Ledger with TLS configuration by using Prometheus Operator), you must set the secret name to `ledger.tls.caRootCertSecretForServiceMonitor`.

```yaml
ledger:
  tls:
    enabled: true
    caRootCertSecretForServiceMonitor: "scalardl-ledger-tls-ca-for-prometheus"
```

In this case, you must create a secret resource that includes the root CA certificate for ScalarDL Ledger in the same namespace as Prometheus, as follows:

```console
kubectl create secret generic scalardl-ledger-tls-ca-for-prometheus --from-file=ca.crt=/path/to/your/ca/certificate/file -n <NAMESPACE_SAME_AS_PROMETHEUS>
```

### Replica configurations (optional based on your environment)

You can specify the number of replicas (pods) of ScalarDL Ledger by using `ledger.replicaCount`.
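
A similar sketch for ScalarDL Ledger (again with an illustrative count):

```yaml
ledger:
  replicaCount: 3  # illustrative count
```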

First, you need to prepare a Kubernetes cluster.
* Set `kubeStateMetrics.enabled`, `nodeExporter.enabled`, and `kubelet.enabled` to `true`.

* If you want to use Scalar Manager, you'll need to set the following configurations to enable Scalar Manager to embed Grafana:
* Set `grafana.ini.security.allow_embedding` and `grafana.ini.auth.anonymous.enabled` to `true`.
* Set `grafana.ini.auth.anonymous.org_name` to the organization you are using. If you're using the sample custom values, the value is `Main Org.`.
* Set `grafana.ini.auth.anonymous.org_role` to `Editor`.

# Getting Started with Helm Charts (Scalar Manager)

Scalar Manager is a centralized management and monitoring solution for ScalarDB and ScalarDL within Kubernetes cluster environments that allows you to:

* Check the availability of ScalarDB or ScalarDL.
* Schedule or execute pausing jobs that create transactionally consistent periods in the databases used by ScalarDB or ScalarDL.
* Check the time-series metrics and logs of ScalarDB or ScalarDL through Grafana dashboards.

For more details, refer to [Scalar Manager Overview](../scalar-manager/overview.mdx).

This guide will show you how to deploy and access Scalar Manager on a Kubernetes cluster.

## Assumption

This guide assumes that you are aware of how to deploy ScalarDB or ScalarDL with the [monitoring](getting-started-monitoring.mdx) and [logging](getting-started-logging.mdx) tools to a Kubernetes cluster.

## Requirement

* You must deploy `kube-prometheus-stack` according to the instructions in [Getting Started with Helm Charts (Monitoring using Prometheus Operator)](getting-started-monitoring.mdx).
* You must deploy `loki-stack` according to the instructions in [Getting Started with Helm Charts (Logging using Loki Stack)](getting-started-logging.mdx).

## What we create

We will deploy the following components on a Kubernetes cluster as follows.

1. Add or revise the following values in the custom values file (e.g., `scalar-prometheus-custom-values.yaml`) for `kube-prometheus-stack`:

   ```yaml
   kubeStateMetrics:
     enabled: true
   nodeExporter:
     enabled: true
   kubelet:
     enabled: true
   grafana:
     grafana.ini:
       users:
         default_theme: light
       security:
         allow_embedding: true
         cookie_samesite: disabled
       auth.anonymous:
         enabled: true
         org_name: "Main Org."
         org_role: Editor
   ```
1. Upgrade the Helm installation
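
   The exact upgrade command depends on your environment; as a sketch, assuming the release name `scalar-monitoring` and the namespace `monitoring` (both illustrative):

   ```console
   # release name "scalar-monitoring" and namespace "monitoring" are illustrative
   helm upgrade scalar-monitoring prometheus-community/kube-prometheus-stack -n monitoring -f scalar-prometheus-custom-values.yaml
   ```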

## Step 2. Prepare a custom values file for Scalar Manager

1. Create an empty .yaml file named `scalar-manager-custom-values.yaml` for `scalar-manager`.

1. Set the service type to access Scalar Manager. The default value is `ClusterIP`, but if you access Scalar Manager by using the `minikube tunnel` command or some load balancer, you can set the type to `LoadBalancer`.

   ```yaml
   service:
     type: LoadBalancer
     port: 8000
   ```
## Step 3. Deploy `scalar-manager`
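
A sketch of the deployment command, assuming you have already added the `scalar-labs` chart repository (https://scalar-labs.github.io/helm-charts) and using illustrative release and file names:

```console
# assumes the scalar-labs chart repository has been added beforehand
helm install scalar-manager scalar-labs/scalar-manager -f scalar-manager-custom-values.yaml
```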

### If you use a Kubernetes cluster other than minikube

If you're using a Kubernetes cluster other than minikube, you'll need to access the `LoadBalancer` service according to the manner of each Kubernetes cluster. For example, you'll need to use a load balancer provided by your cloud services provider or use the `kubectl port-forward` command.
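
For example, a hedged `kubectl port-forward` sketch, assuming the Scalar Manager service is named `scalar-manager` and exposes port 8000 (both depend on your installation):

```console
# service name and port depend on your installation
kubectl port-forward svc/scalar-manager 8000:8000
```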

:::note

Scalar Manager will try to detect the external IP of Grafana and then embed Grafana based on the IP. Therefore, you must configure the Grafana service type as `LoadBalancer`, and the external IP must be accessible from your browser.

:::
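
As a sketch of the setting that the note above describes, assuming the Grafana subchart of kube-prometheus-stack, you can set the service type in the custom values file:

```yaml
grafana:
  service:
    type: LoadBalancer  # exposes Grafana so that its external IP is reachable
```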

## Step 5. Delete Scalar Manager

1. Uninstall `scalar-manager`
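
   A minimal sketch of the uninstall command, assuming the release name `scalar-manager`:

   ```console
   # release name "scalar-manager" is illustrative
   helm uninstall scalar-manager
   ```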

When you use Scalar Manager, you must deploy kube-prometheus-stack and loki-stack.
When you deploy kube-prometheus-stack, you must set the following configuration in the custom values file for kube-prometheus-stack.

```yaml
kubeStateMetrics:
  enabled: true
nodeExporter:
  enabled: true
kubelet:
  enabled: true
grafana:
  grafana.ini:
    users:
      default_theme: light
    security:
      allow_embedding: true
      cookie_samesite: disabled
    auth.anonymous:
      enabled: true
      org_name: "Main Org."
      org_role: Editor
```

If you already have a deployment of kube-prometheus-stack, please upgrade the configuration using the following command.
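
The exact command depends on your release; a sketch, again assuming the illustrative release name `scalar-monitoring` and namespace `monitoring`:

```console
# release name and namespace are illustrative
helm upgrade scalar-monitoring prometheus-community/kube-prometheus-stack -n monitoring -f scalar-prometheus-custom-values.yaml
```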
