ZDM Helm charts for Kubernetes
lukasz-antoniak committed Aug 21, 2024
1 parent 08142af commit 2c84428
Showing 23 changed files with 10,478 additions and 0 deletions.
9,283 changes: 9,283 additions & 0 deletions grafana-dashboards/ZDM Proxy Dashboard v2.json


56 changes: 56 additions & 0 deletions kubernetes/README.md
@@ -0,0 +1,56 @@
Usage:

1. Create a dedicated namespace for the ZDM Proxy.

```kubectl create ns zdm-proxy```

2. Update the `values.yaml` file to reflect the configuration of your ZDM Proxy environment. All referenced files should be placed
in the same directory as the values file. The Helm chart will automatically create Kubernetes secrets for passwords,
TLS certificates and the Secure Connect Bundle.
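
A minimal sketch of overrides is shown below. Only `primaryCluster`, `count`, and `resources` are referenced elsewhere in this README; treat the key names for credentials, contact points, and bundle files as defined by the chart's own `values.yaml`, not by this example.

```
# Illustrative overrides only -- consult zdm/values.yaml for the authoritative key names.
primaryCluster: ORIGIN   # cluster currently treated as primary (ORIGIN or TARGET)
count: 3                 # number of ZDM Proxy pods
resources:
  requests:
    cpu: 1000m
    memory: 2000Mi
  limits:
    cpu: 1000m
    memory: 2000Mi
```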

3. Install the Helm chart in the desired Kubernetes namespace.

```helm -n zdm-proxy install zdm-proxy zdm```

The default resource allocations (CPU and memory) are designed for a production environment.
If pods stay in `Pending` because of insufficient resources, try the following command instead:

```
helm -n zdm-proxy install --set resources.requests.cpu=1000m --set resources.requests.memory=2000Mi \
--set resources.limits.cpu=1000m --set resources.limits.memory=2000Mi zdm-proxy zdm
```

4. Verify that all components are up and running.

```kubectl -n zdm-proxy get svc,ep,po,secret -o wide --show-labels```

You can also run `kubectl -n zdm-proxy logs pod/zdm-proxy-0` and check for the following log entries,
which indicate that the ZDM Proxy is working as expected:

```
time="2022-12-14T21:19:57Z" level=info msg="Proxy connected and ready to accept queries on 172.25.132.116:9042"
time="2022-12-14T21:19:57Z" level=info msg="Proxy started. Waiting for SIGINT/SIGTERM to shutdown."
```
5. Optionally, install the monitoring components defined in the `monitoring` subfolder (see `monitoring/README.md`).
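
The manifests in that folder already set the `zdm-proxy` namespace, so (as in `monitoring/README.md`) the whole folder can be applied directly from the `kubernetes` directory:

```
kubectl apply -f ./monitoring
```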
6. To generate example load, you can use the [NoSQLBench](https://docs.nosqlbench.io/) tool; see the deployment scripts in the `nosqlbench` subfolder.
7. Basic ZDM Proxy operations:
- Switch primary cluster to target (all proxy pods will automatically roll-restart after the change).
```helm -n zdm-proxy upgrade zdm-proxy zdm --set primaryCluster=TARGET```
- Scale to a different number of proxy pods.
```helm -n zdm-proxy upgrade zdm-proxy ./zdm --set count=5```
Note: if you have already switched the primary cluster to the target, make sure you also add `--set primaryCluster=TARGET`
to this command line, otherwise the setting reverts to the chart default. An alternative is to edit `zdm/values.yaml` directly and then run `helm upgrade`.
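
For example, to scale to five pods while keeping the primary cluster on the target:

```
helm -n zdm-proxy upgrade zdm-proxy zdm --set count=5 --set primaryCluster=TARGET
```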
8. When you are done, run `helm uninstall` to remove all objects.
```helm -n zdm-proxy uninstall zdm-proxy```
![Demo](zdm-k8s-ccm-astra.gif)
104 changes: 104 additions & 0 deletions kubernetes/demo.tape
@@ -0,0 +1,104 @@
# record with: $ vhs demo.tape

Output zdm-k8s-ccm-astra.gif

Set FontSize 12
Set Width 800
Set Height 600

Type "ls"
Enter
Sleep 500ms

Type "cd zdm"
Enter
Sleep 500ms

Type "cp ~/Downloads/secure-connect-test.zip ./secure-connect-bundle-target.zip"
Enter
Sleep 200ms

Type "echo 'Change username and password of C* clusters in values.yaml'"
Enter
Sleep 1s

Type "echo 'Change contact point of origin cluster to host IP address of Minikube (host.minikube.internal), e.g. 192.168.65.254'"
Enter
Sleep 1s

Type "cd .."
Enter
Sleep 200ms

Type "ccm create demo -v 4.1.5 -n 1"
Enter
Sleep 2s

Type "ccm start"
Enter
Sleep 5s

Type "ccm updateconf authenticator:PasswordAuthenticator"
Enter
Sleep 1s

Type "ccm updateconf broadcast_rpc_address:192.168.65.254"
Enter
Sleep 1s

Type "ccm stop && ccm start"
Enter
Sleep 10s

Type "ccm node1 cqlsh -u cassandra -p cassandra"
Enter
Sleep 500ms

Type "CREATE KEYSPACE IF NOT EXISTS test WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '1'} AND durable_writes = true;"
Enter
Sleep 700ms

Type "CREATE ROLE 'KjaeRUazLmLjPYYYjfikZyIB' with SUPERUSER = true and LOGIN = true and PASSWORD = '1GDI7naKuFY.SSdWu4j,4OB79-sPtLDZAzr.Gjsb5_135n1SjJz+y1Zs6jKGdem7-XZWknhWJzHlSFrZ631U.3X1iloZt.ZNP971yHC4Dfi6o1qvbDKs0_zdUhZ_-KDF';"
Enter
Sleep 700ms

Type "exit"
Enter
Sleep 200ms

Type "kubectl create ns zdm-proxy"
Enter
Sleep 500ms

Type "helm -n zdm-proxy install zdm-proxy zdm"
Enter
Sleep 3s

Type "kubectl get pods -n zdm-proxy"
Enter
Sleep 500ms

Type "kubectl logs zdm-proxy-0 -n zdm-proxy"
Enter
Sleep 3s

Type "kubectl apply -f ./monitoring"
Enter
Sleep 2s

Type "kubectl get pods -n zdm-proxy"
Enter
Sleep 500ms

Type "echo 'Change username and password of C* in nosqlbench.yml'"
Enter
Sleep 1s

Type "kubectl apply -f ./nosqlbench"
Enter
Sleep 3s

Type "kubectl get pods -n zdm-proxy"
Enter
Sleep 5s

22 changes: 22 additions & 0 deletions kubernetes/monitoring/README.md
@@ -0,0 +1,22 @@
Usage:

1. Install Prometheus and Grafana in the `zdm-proxy` namespace to monitor the ZDM Proxy instances.
Please note that Grafana dashboards are not imported automatically; a manual import sketch follows the commands below.

```
kubectl create ns zdm-proxy
kubectl apply -f ./monitoring
```
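
To load the bundled dashboard (`grafana-dashboards/ZDM Proxy Dashboard v2.json`), import it through the Grafana UI, or post it to Grafana's dashboard API. The snippet below is only a sketch: it assumes Grafana's default `admin`/`admin` credentials, a shell in the `kubernetes` directory, and an export that can be wrapped and posted to `/api/dashboards/db` as-is; depending on how the dashboard was exported, the UI import may be required instead.

```
GRAFANA_URL=$(minikube service grafana -n zdm-proxy --url)
jq -n --slurpfile dash '../grafana-dashboards/ZDM Proxy Dashboard v2.json' \
  '{dashboard: $dash[0], overwrite: true}' \
  | curl -s -u admin:admin -H 'Content-Type: application/json' \
      -X POST "$GRAFANA_URL/api/dashboards/db" -d @-
```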

2. If you are running on Minikube, you can obtain the Prometheus and Grafana URLs with:

```
minikube service prometheus -n zdm-proxy --url
minikube service grafana -n zdm-proxy --url
```
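
If you are not running on Minikube, `kubectl port-forward` works against the same Services (NodePort access otherwise depends on your cluster's networking):

```
kubectl -n zdm-proxy port-forward svc/prometheus 9090:9090
kubectl -n zdm-proxy port-forward svc/grafana 3000:3000
```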

3. To remove all Prometheus and Grafana components, execute:

```
kubectl delete -f ./monitoring
```
20 changes: 20 additions & 0 deletions kubernetes/monitoring/grafana-config.yml
@@ -0,0 +1,20 @@
kind: ConfigMap
metadata:
  name: grafana-datasources
  namespace: zdm-proxy
apiVersion: v1
data:
  ZDM-Prometheus.yaml: |+
    apiVersion: 1
    deleteDatasources:
      - name: "Prometheus"
    datasources:
      - name: Prometheus
        type: prometheus
        access: proxy
        url: http://prometheus:9090
        isDefault: true
        version: 1
        editable: true
13 changes: 13 additions & 0 deletions kubernetes/monitoring/grafana-service.yml
@@ -0,0 +1,13 @@
apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: zdm-proxy
spec:
  type: NodePort
  ports:
    - port: 3000
      targetPort: 3000
      name: client
  selector:
    app: grafana
38 changes: 38 additions & 0 deletions kubernetes/monitoring/grafana.yml
@@ -0,0 +1,38 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
  namespace: zdm-proxy
spec:
  selector:
    matchLabels:
      app: grafana
  replicas: 1
  template:
    metadata:
      labels:
        app: grafana
    spec:
      containers:
        - name: grafana
          env:
            - name: GF_AUTH_ANONYMOUS_ENABLED
              value: "true"
          image: grafana/grafana
          ports:
            - containerPort: 3000
              name: client
          resources:
            requests:
              cpu: 500m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 256Mi
          volumeMounts:
            - name: grafana-datasources
              mountPath: /etc/grafana/provisioning/datasources/
      volumes:
        - name: grafana-datasources
          configMap:
            name: grafana-datasources
14 changes: 14 additions & 0 deletions kubernetes/monitoring/prometheus-config.yml
@@ -0,0 +1,14 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: zdm-proxy
data:
  prometheus.yml: |-
    global:
      scrape_interval: 15s
    scrape_configs:
      - job_name: 'zdm_proxy'
        scrape_interval: 5s
        static_configs:
          - targets: ['zdm-proxy-metrics-0:14001', 'zdm-proxy-metrics-1:14001', 'zdm-proxy-metrics-2:14001']
13 changes: 13 additions & 0 deletions kubernetes/monitoring/prometheus-service.yml
@@ -0,0 +1,13 @@
apiVersion: v1
kind: Service
metadata:
  name: prometheus
  namespace: zdm-proxy
spec:
  type: NodePort
  ports:
    - port: 9090
      targetPort: 9090
      name: prometheus
  selector:
    app: prometheus
35 changes: 35 additions & 0 deletions kubernetes/monitoring/prometheus.yml
@@ -0,0 +1,35 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus
  namespace: zdm-proxy
spec:
  selector:
    matchLabels:
      app: prometheus
  replicas: 1
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
        - name: prometheus
          image: prom/prometheus
          ports:
            - containerPort: 9090
              name: client
          resources:
            requests:
              cpu: 200m
              memory: 256Mi
            limits:
              cpu: 300m
              memory: 256Mi
          volumeMounts:
            - name: prometheus-config
              mountPath: /etc/prometheus/
      volumes:
        - name: prometheus-config
          configMap:
            name: prometheus-config
18 changes: 18 additions & 0 deletions kubernetes/nosqlbench/README.md
@@ -0,0 +1,18 @@
> Deployment scripts support only username and password authentication against the ZDM Proxy.
> Adjust the credentials in the _nosqlbench.yml_ file.

Usage:

1. Install [NoSQLBench](https://docs.nosqlbench.io/) and run a simple workload against the ZDM Proxy. Note that
the pod is not terminated after the test completes; you can follow its output as shown below.

```
kubectl create ns zdm-proxy
kubectl apply -f ./nosqlbench
```
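
To follow the benchmark while it runs, tail the pod's logs; the pod name below is a placeholder, use the one reported by `kubectl get pods`:

```
kubectl -n zdm-proxy get pods
kubectl -n zdm-proxy logs -f <nosqlbench-pod-name>
```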

2. To remove all deployed artifacts, execute:

```
kubectl delete -f ./nosqlbench
```