This repository has been archived by the owner on Oct 27, 2021. It is now read-only.

fixed image locations in summit labs #24

Open · wants to merge 7 commits into main

Changes from 3 commits
10 changes: 5 additions & 5 deletions rh-summit-2018/module-01.adoc
@@ -282,29 +282,29 @@ Now we can set up the Prometheus data source and the Kafka dashboard.

Access the Grafana UI using the `admin/admin` credentials.

-image::grafana_login.png[grafana login]
+image::images/grafana_login.png[grafana login]
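
If Grafana is exposed through an OpenShift route, its externally reachable hostname can be looked up from the command line; a minimal sketch, assuming the route is named `grafana` (the actual name may differ in your project):

[source,sh]
----
# Print the host of the Grafana route (route name "grafana" is an assumption)
$ oc get route grafana -o jsonpath='{.spec.host}'
----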

Click the "Add data source" button on the Grafana home page in order to add Prometheus as a data source.

-image::grafana_home.png[grafana home]
+image::images/grafana_home.png[grafana home]

Fill in the information about the Prometheus data source, specifying a name and "Prometheus" as the type.
In the URL field, enter `http://prometheus:9090`, the address of the Prometheus server.
After "Add" is clicked, Grafana will test the connection to the data source.

-image::grafana_prometheus_data_source.png[grafana prometheus data source]
+image::images/grafana_prometheus_data_source.png[grafana prometheus data source]
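
The same data source can also be created through Grafana's HTTP API instead of the UI; a hedged sketch, assuming Grafana is reachable at `http://grafana:3000` (a hypothetical address; substitute your route or port-forward) and the `admin/admin` credentials from above:

[source,sh]
----
# Create the Prometheus data source via the Grafana HTTP API
# (the Grafana address is an assumption; adjust it to your environment)
$ curl -s -u admin:admin -H "Content-Type: application/json" \
    -X POST http://grafana:3000/api/datasources \
    -d '{"name":"Prometheus","type":"prometheus","url":"http://prometheus:9090","access":"proxy"}'
----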

From the top left menu, click on "Dashboards" and then "Import" to open the "Import Dashboard" window.
Open a browser tab and navigate to `https://raw.githubusercontent.com/strimzi/strimzi/0.3.0/metrics/examples/grafana/kafka-dashboard.json`.
You should see JSON content as the response.
Copy and paste it into the appropriate field in the form.

-image::grafana_import_dashboard.png[grafana import dashboard]
+image::images/grafana_import_dashboard.png[grafana import dashboard]
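
If copying from the browser is inconvenient, the dashboard JSON can also be downloaded with `curl` and pasted from a local file; a small sketch using the URL above:

[source,sh]
----
# Download the Kafka dashboard JSON so it can be pasted into the import form
$ curl -s -o kafka-dashboard.json \
    https://raw.githubusercontent.com/strimzi/strimzi/0.3.0/metrics/examples/grafana/kafka-dashboard.json
----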

After importing the dashboard, the Grafana home page should show some initial metrics about CPU and JVM memory usage.
When the Kafka cluster is used (creating topics and exchanging messages), the other metrics, such as messages in and bytes in/out per topic, will be shown.

-image::grafana_kafka_dashboard.png[grafana kafka dashboard]
+image::images/grafana_kafka_dashboard.png[grafana kafka dashboard]
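
To see the per-topic metrics populate, some traffic has to flow through the cluster; a hedged sketch of producing a few test messages from inside a broker pod, where the pod name, the Kafka install path and the listener port are assumptions that depend on how the cluster was deployed:

[source,sh]
----
# Start a console producer inside a broker pod and type a few messages
# (pod name, path and port are assumptions; adjust them to your cluster)
$ oc exec -it my-cluster-kafka-0 -- /opt/kafka/bin/kafka-console-producer.sh \
    --broker-list localhost:9092 --topic test-topic
----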

=== Handling cluster and topics

14 changes: 7 additions & 7 deletions rh-summit-2018/module-02.adoc
@@ -2,7 +2,7 @@

The IoT demo is made up of different components with the following architecture:

-image::iot-demo.png[iot-demo]
+image::images/iot-demo.png[iot-demo]

* one or more device simulators which send temperature values to the `iot-temperature` topic;
* a stream application which uses the Kafka Streams API in order to get data from the `iot-temperature` topic and process it to compute the maximum value (for each device) over the last 5 seconds; it writes the results to the `iot-temperature-max` topic;
@@ -19,7 +19,7 @@ Running the following command, a file containing two topic ConfigMaps is deployed
[source,sh]
$ oc create -f https://raw.githubusercontent.com/strimzi/strimzi-lab/master/iot-demo/stream-app/resources/topics.yml

-image::topics.png[topics]
+image::images/topics.png[topics]
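
That the two ConfigMaps were actually created can be verified quickly from the command line:

[source,sh]
----
# The two topic ConfigMaps defined in topics.yml should appear in this list
$ oc get configmaps
----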

In order to check that the topics are properly created on the Kafka cluster, it's possible to use the `kafka-topics.sh` script (distributed with Kafka), running it on one of the brokers.
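
For example, the check could look like the following hedged sketch, where the broker pod name, the Kafka install path and the ZooKeeper address are assumptions that depend on the deployment:

[source,sh]
----
# List the topics known to the cluster from inside one of the broker pods
# (pod name, path and ZooKeeper address are assumptions; adjust as needed)
$ oc exec -it my-cluster-kafka-0 -- /opt/kafka/bin/kafka-topics.sh \
    --zookeeper my-cluster-zookeeper:2181 --list
----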

@@ -42,9 +42,9 @@ $ oc create -f https://raw.githubusercontent.com/strimzi/strimzi-lab/master/iot-

A route is provided in order to access the related Web UI.

-image::route.png[route]
+image::images/route.png[route]

-image::web_ui.png[web ui]
+image::images/web_ui.png[web ui]
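
The hostname assigned to that route can be listed with `oc`:

[source,sh]
----
# Show the routes in the current project, including the one exposing the Web UI
$ oc get routes
----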

=== Deploy the stream application

@@ -63,16 +63,16 @@ $ oc create -f https://raw.githubusercontent.com/strimzi/strimzi-lab/master/iot-

Once deployed, it starts just one pod simulating one device.

-image::one_device_gauge.png[one device gauge]
+image::images/one_device_gauge.png[one device gauge]

It's possible to scale up the number of pods in order to simulate more devices sending temperature values (each one with a different, randomly generated id).

-image::scale_up_device.png[scale up device]
+image::images/scale_up_device.png[scale up device]
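
Scaling can also be done from the CLI; a hedged sketch, where the deployment config name `device-app` is an assumption and should be replaced with the actual name used in the demo (check it with `oc get dc`):

[source,sh]
----
# Run three simulated devices instead of one
# (the deployment config name "device-app" is an assumption)
$ oc scale dc/device-app --replicas=3
----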

Opening the consumer Web UI, it's possible to see, on the left side, the "gauge" charts showing the processed max temperature values for all the active devices.
The right side shows the log of the incoming messages from devices, with the device id alongside the max temperature value processed by the stream application for that device.

-image::more_device_gauges.png[more device gauges]
+image::images/more_device_gauges.png[more device gauges]

=== Clean up

2 changes: 1 addition & 1 deletion rh-summit-2018/module-03.adoc
@@ -20,7 +20,7 @@ In this module of the lab you'll learn the following things:

The overall architecture of this lab module looks like this:

-image::debezium-demo.png[debezium-demo]
+image::images/debezium-demo.png[debezium-demo]

=== Setting Up the Example Application
