diff --git a/site/content/how-to/monitoring/troubleshooting.md b/site/content/how-to/monitoring/troubleshooting.md
index efb2457011..c168c16a86 100644
--- a/site/content/how-to/monitoring/troubleshooting.md
+++ b/site/content/how-to/monitoring/troubleshooting.md
@@ -436,6 +436,25 @@ To **resolve** this, you can do one of the following:
 
 - Adjust the IPFamily of your Service to match that of the NginxProxy configuration.
 
+##### Policy cannot be applied to target
+
+If you `describe` your Policy and see the following error:
+
+```text
+  Conditions:
+    Last Transition Time:  2024-08-20T14:48:53Z
+    Message:               Policy cannot be applied to target "default/route1" since another Route "default/route2" shares a hostname:port/path combination with this target
+    Observed Generation:   3
+    Reason:                TargetConflict
+    Status:                False
+    Type:                  Accepted
+```
+
+this means you are attempting to attach a Policy to a Route that has an overlapping hostname:port/path combination with another Route. To work around this, you can do one of the following:
+
+- Combine the Route rules for the overlapping path into a single Route.
+- If the Policy allows it, specify both Routes in the `targetRefs` list.
+
 ### Further reading
 
 You can view the [Kubernetes Troubleshooting Guide](https://kubernetes.io/docs/tasks/debug/debug-application/) for more debugging guidance.
diff --git a/tests/README.md b/tests/README.md
index 10b1049904..62531de3e2 100644
--- a/tests/README.md
+++ b/tests/README.md
@@ -358,7 +358,7 @@ Finally, run
 make stop-longevity-test
 ```
 
-This will tear down the test and collect results into a file, where you can add the PNGs of the dashboard.
+This will tear down the test and collect results into a file, where you can add the PNGs of the dashboard. The results collection also creates separate files (a logs file and a traffic output file) that you will need to combine manually as needed.
 
 ### Common test amendments
diff --git a/tests/results/longevity/1.4.0/oss-cpu.png b/tests/results/longevity/1.4.0/oss-cpu.png
new file mode 100644
index 0000000000..722adc8c0c
Binary files /dev/null and b/tests/results/longevity/1.4.0/oss-cpu.png differ
diff --git a/tests/results/longevity/1.4.0/oss-memory.png b/tests/results/longevity/1.4.0/oss-memory.png
new file mode 100644
index 0000000000..119f9a43ee
Binary files /dev/null and b/tests/results/longevity/1.4.0/oss-memory.png differ
diff --git a/tests/results/longevity/1.4.0/oss-ngf-memory.png b/tests/results/longevity/1.4.0/oss-ngf-memory.png
new file mode 100644
index 0000000000..647d1272cc
Binary files /dev/null and b/tests/results/longevity/1.4.0/oss-ngf-memory.png differ
diff --git a/tests/results/longevity/1.4.0/oss-reload-time.png b/tests/results/longevity/1.4.0/oss-reload-time.png
new file mode 100644
index 0000000000..8fe4d18682
Binary files /dev/null and b/tests/results/longevity/1.4.0/oss-reload-time.png differ
diff --git a/tests/results/longevity/1.4.0/oss-reloads.png b/tests/results/longevity/1.4.0/oss-reloads.png
new file mode 100644
index 0000000000..0e442fe7c4
Binary files /dev/null and b/tests/results/longevity/1.4.0/oss-reloads.png differ
diff --git a/tests/results/longevity/1.4.0/oss-stub-status.png b/tests/results/longevity/1.4.0/oss-stub-status.png
new file mode 100644
index 0000000000..656ac8fe1f
Binary files /dev/null and b/tests/results/longevity/1.4.0/oss-stub-status.png differ
diff --git a/tests/results/longevity/1.4.0/oss.md b/tests/results/longevity/1.4.0/oss.md
new file mode 100644
index 0000000000..bf5348f86c
--- /dev/null
+++ b/tests/results/longevity/1.4.0/oss.md
@@ -0,0 +1,94 @@
+# Results
+
+## Test environment
+
+NGINX Plus: false
+
+NGINX Gateway Fabric:
+
+- Commit: f765b79b6cb76bf18affd138604ca0ee12f57a19
+- Date: 2024-08-15T21:46:21Z
+- Dirty: false
+
+GKE Cluster:
+
+- Node count: 12
+- k8s version: v1.29.7-gke.1008000
+- vCPUs per node: 2
+- RAM per node: 4019160Ki
+- Max pods per node: 110
+- Zone: us-west2-a
+- Instance Type: e2-medium
+
+## Traffic
+
+HTTP:
+
+```text
+Running 5760m test @ http://cafe.example.com/coffee
+  2 threads and 100 connections
+  Thread Stats   Avg      Stdev     Max   +/- Stdev
+    Latency   139.11ms  117.12ms   2.00s    91.14%
+    Req/Sec   396.68    259.55     2.08k    65.79%
+  268947649 requests in 5760.00m, 92.04GB read
+  Socket errors: connect 0, read 344704, write 0, timeout 565
+Requests/sec:    778.20
+Transfer/sec:    279.25KB
+```
+
+HTTPS:
+
+```text
+Running 5760m test @ https://cafe.example.com/tea
+  2 threads and 100 connections
+  Thread Stats   Avg      Stdev     Max   +/- Stdev
+    Latency   131.64ms   94.48ms   1.99s    70.49%
+    Req/Sec   395.26    259.07     2.08k    65.47%
+  267943878 requests in 5760.00m, 90.15GB read
+  Socket errors: connect 0, read 338530, write 0, timeout 9
+Requests/sec:    775.30
+Transfer/sec:    273.51KB
```
+
+### Logs
+
+No error logs in nginx-gateway.
+
+No error logs in nginx.
+
+### Key Metrics
+
+#### Containers memory
+
+![oss-memory.png](oss-memory.png)
+
+#### NGF Container Memory
+
+![oss-ngf-memory.png](oss-ngf-memory.png)
+
+#### Containers CPU
+
+![oss-cpu.png](oss-cpu.png)
+
+#### NGINX metrics
+
+![oss-stub-status.png](oss-stub-status.png)
+
+#### Reloads
+
+Rate of reloads - successful and errors:
+
+![oss-reloads.png](oss-reloads.png)
+
+
+Reload spikes correspond to 1-hour periods of backend re-rollouts.
+
+No reloads finished with an error.
+
+Reload time distribution - counts:
+
+![oss-reload-time.png](oss-reload-time.png)
+
+## Comparison with previous runs
+
+Graphs look similar to the 1.3.0 results.
diff --git a/tests/results/longevity/1.4.0/plus-cpu.png b/tests/results/longevity/1.4.0/plus-cpu.png
new file mode 100644
index 0000000000..2b9b4b90e0
Binary files /dev/null and b/tests/results/longevity/1.4.0/plus-cpu.png differ
diff --git a/tests/results/longevity/1.4.0/plus-memory.png b/tests/results/longevity/1.4.0/plus-memory.png
new file mode 100644
index 0000000000..e7c45ec6db
Binary files /dev/null and b/tests/results/longevity/1.4.0/plus-memory.png differ
diff --git a/tests/results/longevity/1.4.0/plus-ngf-memory.png b/tests/results/longevity/1.4.0/plus-ngf-memory.png
new file mode 100644
index 0000000000..220ad36480
Binary files /dev/null and b/tests/results/longevity/1.4.0/plus-ngf-memory.png differ
diff --git a/tests/results/longevity/1.4.0/plus-reloads.png b/tests/results/longevity/1.4.0/plus-reloads.png
new file mode 100644
index 0000000000..31bc96ea42
Binary files /dev/null and b/tests/results/longevity/1.4.0/plus-reloads.png differ
diff --git a/tests/results/longevity/1.4.0/plus-status.png b/tests/results/longevity/1.4.0/plus-status.png
new file mode 100644
index 0000000000..52b819066c
Binary files /dev/null and b/tests/results/longevity/1.4.0/plus-status.png differ
diff --git a/tests/results/longevity/1.4.0/plus.md b/tests/results/longevity/1.4.0/plus.md
new file mode 100644
index 0000000000..53e4e30f65
--- /dev/null
+++ b/tests/results/longevity/1.4.0/plus.md
@@ -0,0 +1,99 @@
+# Results
+
+## Test environment
+
+NGINX Plus: true
+
+NGINX Gateway Fabric:
+
+- Commit: 16a95222a968aef46277a77070f79bea9b87da12
+- Date: 2024-08-16T15:29:44Z
+- Dirty: false
+
+GKE Cluster:
+
+- Node count: 12
+- k8s version: v1.29.7-gke.1008000
+- vCPUs per node: 2
+- RAM per node: 4019160Ki
+- Max pods per node: 110
+- Zone: us-west2-a
+- Instance Type: e2-medium
+
+## Traffic
+
+HTTP:
+
+```text
+Running 5760m test @ http://cafe.example.com/coffee
+  2 threads and 100 connections
+  Thread Stats   Avg      Stdev     Max   +/- Stdev
+    Latency   121.06ms   88.32ms   1.69s    70.69%
+    Req/Sec   439.03    284.47     2.18k    65.18%
+  297838416 requests in 5760.00m, 101.94GB read
+  Non-2xx or 3xx responses: 9
+Requests/sec:    861.80
+Transfer/sec:    309.28KB
+```
+
+HTTPS:
+
+```text
+Running 5760m test @ https://cafe.example.com/tea
+  2 threads and 100 connections
+  Thread Stats   Avg      Stdev     Max   +/- Stdev
+    Latency   121.25ms   88.41ms   1.56s    70.69%
+    Req/Sec   438.02    283.40     2.19k    65.30%
+  297157634 requests in 5760.00m, 100.04GB read
+  Non-2xx or 3xx responses: 1
+Requests/sec:    859.83
+Transfer/sec:    303.52KB
+```
+
+Note: the non-2xx or 3xx responses correspond to the errors in the NGINX log; see below.
+
+### Logs
+
+nginx-gateway:
+
+Many expected "usage reporting not enabled" errors.
+
+nginx:
+
+```text
+2024/06/01 21:34:09 [error] 104#104: *115862644 no live upstreams while connecting to upstream, client: 10.128.0.112, server: cafe.example.com, request: "GET /tea HTTP/1.1", upstream: "http://longevity_tea_80/tea", host: "cafe.example.com"
+2024/06/03 12:01:07 [error] 105#105: *267137988 no live upstreams while connecting to upstream, client: 10.128.0.112, server: cafe.example.com, request: "GET /coffee HTTP/1.1", upstream: "http://longevity_coffee_80/coffee", host: "cafe.example.com"
+```
+
+Similar to the last release.
+
+### Key Metrics
+
+#### Containers memory
+
+![plus-memory.png](plus-memory.png)
+
+#### NGF Container Memory
+
+![plus-ngf-memory.png](plus-ngf-memory.png)
+
+#### Containers CPU
+
+![plus-cpu.png](plus-cpu.png)
+
+#### NGINX Plus metrics
+
+![plus-status.png](plus-status.png)
+
+#### Reloads
+
+Rate of reloads - successful and errors:
+
+![plus-reloads.png](plus-reloads.png)
+
+Note: compared to NGINX OSS, there are far fewer reloads here, because NGF uses the NGINX Plus API to reconfigure NGINX
+for endpoint changes.
+
+## Comparison with previous runs
+
+Graphs look similar to the 1.3.0 results.
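
For the `targetRefs` workaround described in the troubleshooting change above, a single Policy attached to both conflicting Routes might look like the following sketch. This is an illustration only, not part of the patch: the `ObservabilityPolicy` kind, its `apiVersion`, the `tracing` settings, and the Route names `route1`/`route2` are assumptions chosen for the example.

```yaml
# Hypothetical sketch: one Policy listing both overlapping Routes in
# targetRefs, so neither Route conflicts with a Policy on the other.
# Kind, apiVersion, and tracing values are illustrative assumptions.
apiVersion: gateway.nginx.org/v1alpha1
kind: ObservabilityPolicy
metadata:
  name: example-policy
  namespace: default
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: route1
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: route2
  tracing:
    strategy: ratio
    ratio: 10
```

Whether two Routes can share one Policy this way depends on the specific Policy kind; check its CRD for whether `targetRefs` accepts multiple entries.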