Tweaks to docs
doriordan committed Oct 7, 2023
1 parent c648aad commit ea932b8
Showing 2 changed files with 149 additions and 86 deletions.
199 changes: 138 additions & 61 deletions README.md
[![Maven Central](https://maven-badges.herokuapp.com/maven-central/io.skuber/skuber_2.12/badge.svg)](http://search.maven.org/#search|ga|1|g:%22io.skuber%22a:%22skuber_2.12%22)
[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://github.com/doriordan/skuber/blob/master/LICENSE.txt)


# Skuber

Skuber is a Scala client library for [Kubernetes](http://kubernetes.io). It provides a fully featured, high-level and strongly typed Scala API for managing Kubernetes cluster resources (such as Pods, Services, Deployments, ReplicaSets, Ingresses etc.) via the Kubernetes REST API server.
- Full support for converting resources between the case class and standard JSON representations
- Client API for creating, reading, updating, removing, listing and watching resources on a Kubernetes cluster
- The API is asynchronous and strongly typed e.g. `k8s.get[Deployment]("nginx")` returns a value of type `Future[Deployment]`
- The API is offered as both Pekko and Akka based variants (from Skuber 3.0)
- Fluent API for creating and updating specifications of Kubernetes resources
- Uses standard `kubeconfig` files for configuration - see the [configuration guide](docs/Configuration.md) for details
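
As an illustration of the fluent API, a deployment spec might be built up like this (a sketch based on the Skuber 2.x programming guide; method names such as `exposePort`, `addContainer` and `withTemplate` should be checked against the Skuber version you use):

```scala
import skuber.model._
import skuber.model.apps.v1.Deployment

// Build a container spec fluently, then wrap it in a pod template and a deployment
val nginxContainer  = Container(name = "nginx", image = "nginx").exposePort(80)
val nginxTemplate   = Pod.Template.Spec.named("nginx").addContainer(nginxContainer).addLabel("app" -> "nginx")
val nginxDeployment = Deployment("nginx").withReplicas(3).withTemplate(nginxTemplate)
```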

See the [programming guide](docs/GUIDE.md) for more details.

## Prerequisites

- Java 8
- Kubernetes cluster

A Kubernetes cluster is needed at runtime. For local development purposes, `kind` is recommended.

## Skuber 3 - For Pekko (And Akka) Users

Skuber 2 depends on Akka (up to version 2.6.x) for its underlying HTTP client functionality, as well as exposing Akka Streams types in some streaming API operations (for example `watch` operations). Due to the migration of the Akka license to BSL, the community requires an alternative with a more permissive open-source license.

In response to this requirement, from version 3.0 Skuber supports both Pekko and Akka based clients, which offer full feature equivalence to each other. The Pekko based Skuber client has no Akka dependencies. This change has been implemented by splitting Skuber client functionality into three modules / libraries:
- `skuber-core`: core Skuber model and API (without implementation), including:
  - the base Skuber client API definition (the `skuber.api.client.KubernetesClient` trait)
  - other core API types
  - the case class based data model
  - JSON formatters for the data model

*Note some core packages have changed as part of this 3.x refactor, but generally that only requires changing a few `import` statements when migrating from Skuber 2.x, as demonstrated in the simple examples below.*

- `skuber-pekko`: implements the Skuber API using Pekko HTTP, adding streaming operations based on Pekko Streams.

- `skuber-akka`: implements the Skuber API using Akka HTTP, adding streaming operations based on Akka Streams.

Migrating from Skuber 2 or between the two new clients is generally straightforward, requiring some minimal changes to your build (adding the new Skuber core dependency and one of the Skuber Pekko or Akka dependencies) and a few changes to `import` statements in your code.

You can try out the latest Skuber 3 beta release (for Scala 2.12 or 2.13) by adding to your build (replacing the Skuber 2 `skuber` library dependency if necessary):

#### Pekko Client

```sbt
libraryDependencies += "io.skuber" %% "skuber-core" % "3.0.0-beta2"
libraryDependencies += "io.skuber" %% "skuber-pekko" % "3.0.0-beta2"
```

#### Akka Client

```sbt
libraryDependencies += "io.skuber" %% "skuber-core" % "3.0.0-beta2"
libraryDependencies += "io.skuber" %% "skuber-akka" % "3.0.0-beta2"
```

See the simple examples below for both Pekko and Akka based clients in Skuber 3.x - note how only the imports differ between the Pekko and Akka based code.

### Examples

#### Basic Pekko Client Example

This example lists pods in `kube-system` namespace using the Pekko based client:

```scala
// Pekko specific imports
import org.apache.pekko.actor.ActorSystem
import skuber.pekkoclient._

// Core skuber imports
import skuber.model._
import skuber.json.format._

import scala.util.{Success, Failure}

implicit val system = ActorSystem()
implicit val dispatcher = system.dispatcher

val k8s = k8sInit // initializes Skuber Pekko client
val listPodsRequest = k8s.listInNamespace[PodList]("kube-system")
listPodsRequest.onComplete {
case Success(pods) => pods.items.foreach { p => println(p.name) }
case Failure(e) => throw(e)
}
```
#### Basic Akka Client Example

```scala
// Akka specific imports
import akka.actor.ActorSystem
import skuber.akkaclient._

// Core skuber imports
import skuber.model._
import skuber.json.format._

import scala.util.{Success, Failure}

implicit val system = ActorSystem()
implicit val dispatcher = system.dispatcher

val k8s = k8sInit // initializes Skuber Akka client
val listPodsRequest = k8s.listInNamespace[PodList]("kube-system")
listPodsRequest.onComplete {
case Success(pods) => pods.items.foreach { p => println(p.name) }
case Failure(e) => throw(e)
}
```

#### Pekko Client Streaming Operation Example

```scala
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.KillSwitches
import org.apache.pekko.stream.scaladsl.{Keep, Sink}
import skuber.pekkoclient._

import skuber.model.{Container, LabelSelector, Pod}
import skuber.model.apps.v1.{Deployment, DeploymentList}
import skuber.api.client.EventType

implicit val system = ActorSystem()
implicit val dispatcher = system.dispatcher

val k8s = k8sInit // initializes Skuber Pekko client, which includes added Pekko Streams based ops like `watchAllContinuously`

// start watching a couple of deployments
val deploymentOneName = ...
val deploymentTwoName = ...
val stream = k8s.list[DeploymentList].map { l =>
k8s.watchAllContinuously[Deployment](Some(l.resourceVersion))
.viaMat(KillSwitches.single)(Keep.right)
.filter(event => event._object.name == deploymentOneName || event._object.name == deploymentTwoName)
.filter(event => event._type == EventType.ADDED || event._type == EventType.DELETED)
.toMat(Sink.collection)(Keep.both)
.run()
}
```
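
Because the watch source is materialized with `Keep.both`, the value inside the `Future` returned by `list` pairs the kill switch with the future of collected events. One possible way to stop the watch cleanly (a hypothetical usage sketch, not from the original docs):

```scala
// `stream` above has type Future[(UniqueKillSwitch, Future[Seq[...]])]
stream.foreach { case (killSwitch, collectedEvents) =>
  // ... later, when finished watching, shut the stream down cleanly:
  killSwitch.shutdown()
}
```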

```scala
import akka.actor.ActorSystem
import akka.stream.KillSwitches
import akka.stream.scaladsl.{Keep, Sink}
import skuber.akkaclient._

import skuber.model.{Container, LabelSelector, Pod}
import skuber.model.apps.v1.{Deployment, DeploymentList}
import skuber.api.client.EventType

implicit val system = ActorSystem()
implicit val dispatcher = system.dispatcher

val k8s = k8sInit // initializes Skuber Akka client, which includes added Akka Streams based ops like `watchAllContinuously`

// start watching a couple of deployments
val deploymentOneName = ...
val deploymentTwoName = ...
val stream = k8s.list[DeploymentList].map { l =>
k8s.watchAllContinuously[Deployment](Some(l.resourceVersion))
.viaMat(KillSwitches.single)(Keep.right)
.filter(event => event._object.name == deploymentOneName || event._object.name == deploymentTwoName)
.filter(event => event._type == EventType.ADDED || event._type == EventType.DELETED)
.toMat(Sink.collection)(Keep.both)
.run()
}
```

### Interactive quickstart with sbt

The best way to get quickly started is to run some of the integration tests against a cluster. There are equivalent integration tests for both the Pekko and Akka clients.

- Clone this repository.

- Configure the `KUBECONFIG` environment variable to point at your cluster configuration file, per normal Kubernetes conventions (check that your cluster is up and running using `kubectl cluster-info`). Read more about Skuber configuration [here](docs/Configuration.md).

- Run `sbt`, then select either the `pekko` or `akka` project and run one or more of the integration tests, for example:

```bash
sbt:root> project pekko
sbt:skuber-pekko> IntegrationTest/testOnly skuber.DeploymentSpec
```
In this case the test code simply manipulates deployments, but there are a variety of other tests that demonstrate more of the Skuber API for both the [Pekko client](pekko/src/it/scala/skuber) and the [Akka client](akka/src/it/scala/skuber).

For other Kubernetes setups, see the [configuration guide](docs/Configuration.md) for details on how to tailor the configuration for your cluster's security, namespace and connectivity requirements.

## Skuber 2.0

You can use the latest 2.0 release (for Scala 2.12 or 2.13) by adding to your build:

```sbt
libraryDependencies += "io.skuber" %% "skuber" % "2.6.7"
```

For Scala 2.11:

```sbt
libraryDependencies += "io.skuber" % "skuber_2.11" % "1.7.1"
```

NOTE: Skuber 2 has supported Scala 2.13 since v2.4.0; support for Scala 2.11 was removed in v2.6.0.

## Migrating from V1 to V2

If you have an application using the legacy version v1 of Skuber and want to move to v2, then check out the [migration guide](docs/MIGRATION_1-to-2.md).

Building the library from source is very straightforward. Simply run `sbt test` in the root directory of the project.

This code is licensed under the Apache V2.0 license, a copy of which is included [here](LICENSE.txt).


36 changes: 11 additions & 25 deletions docs/Configuration.md
Skuber supports both out-of-cluster and in-cluster configurations.
The configuration algorithm can be described as follows:

Initially Skuber tries out-of-cluster methods in sequence (stopping at the first that succeeds):
- Read the `SKUBER_URL` environment variable and use it as the kubectl proxy URL. If not set then:
- Read the `SKUBER_CONFIG` environment variable and, if it is equal to:
    * `file` - Skuber will read `~/.kube/config` and use it as the configuration source
    * `proxy` - Skuber will assume that a kubectl proxy is running on `localhost:8080` and will use it as the endpoint to the cluster API
    * otherwise treat the contents of `SKUBER_CONFIG` as a file path to a kubeconfig file (use this if your kubeconfig file is in a custom location). If not present then:
- Read the `KUBECONFIG` environment variable and use its contents as the path to a kubeconfig file (similar to `SKUBER_CONFIG`)

If all of the above fail, Skuber tries the [in-cluster configuration method](https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#accessing-the-api-from-a-pod). If your client runs inside a pod, this is the preferred configuration.
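
The lookup order above can be illustrated with shell exports (the kubeconfig path used here is hypothetical):

```shell
# 1. Highest precedence: a kubectl proxy URL
export SKUBER_URL=http://localhost:8001

# 2. If SKUBER_URL is unset: SKUBER_CONFIG may be "file", "proxy",
#    or a custom path to a kubeconfig file
unset SKUBER_URL
export SKUBER_CONFIG=/etc/skuber/kubeconfig

# 3. If SKUBER_CONFIG is also unset: the standard kubectl variable
unset SKUBER_CONFIG
export KUBECONFIG="$HOME/.kube/config"

echo "kubeconfig path: $KUBECONFIG"
```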

### Cluster URL

If proxying via a [kubectl proxy](https://kubernetes.io/docs/tasks/extend-kubernetes/http-proxy-access-api/) then you can configure Skuber to connect through that proxy by setting the `SKUBER_URL` environment variable to point at it, e.g.

export SKUBER_URL=http://localhost:8001

If not using a `kubectl proxy` then most clients will be configured using a [kubeconfig file](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/), which supports the following configuration items:
- [Namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) - this allows the client to read/write Kubernetes resources in different cluster namespaces just by changing runtime configuration, which supports partitioning by team / organization.
- Cluster address - i.e the URL to which the client connects to communicate with the Kubernetes API server. This can be either a non-TLS (http) or TLS (https) URL.

Skuber generally follows the same conventions for configuring the client as other Kubernetes client tools such as `kubectl` - to use a specific kubeconfig file, set the `KUBECONFIG` environment variable to the location of the config file e.g.

export KUBECONFIG=~/.kube_config


If none of these environment variables are set then the kubeconfig file is loaded from its default location.

The use of the kubeconfig format means that `kubectl` can be used to modify configuration settings for Skuber clients, without requiring direct editing of the configuration file - and this is the recommended approach.

Kubeconfig files can contain multiple contexts, each encapsulating the full details required to configure a client - Skuber always configures itself (when initialised by a `k8sInit` call) from the [current context](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/), as set by the `kubectl config use-context` command.

Because all of the above configuration items in the configuration file are the same as used by other Kubernetes clients such as kubectl, you (or rather the organization deploying the Skuber application) can share configuration with such other clients or set up separate configuration files for applications depending on organizational security policies, deployment processes and other requirements.

Note that - unlike the Go language client - Skuber does not attempt to merge different sources of configuration.

*(Configuration can alternatively be passed programmatically to the `k8sInit` call, see the programming guide for details.)*

When using kubeconfig files, Skuber supports standard security configuration as described below.

If the current context specifies a **TLS** connection (i.e. a `https://` URL) to the cluster server, Skuber will utilise the configured **certificate authority** to verify the server (unless the `insecure-skip-tls-verify` flag is set to true, in which case Skuber will trust the server without verification).


For client authentication **client certificates** (cert and private key pairs) can be specified for the case where TLS is in use. In addition to client certificates Skuber will use any **bearer token** or **basic authentication** credentials specified. Token or basic auth can be configured as an alternative to or in conjunction with client certificates.

*(Skuber loads configured server and client certificates / keys directly from the kubeconfig file (or from another location in the file system in the case where the configuration specifies a path rather than embedded data). This means there is no need to store them in the Java trust or key stores.)*
