Observing microservice meshes with Kiali
At some point while developing your microservice architecture, you will need to visualize what is happening in your service mesh. You will have questions like “Which service is connected to which other service?” and “How much traffic goes to each microservice?” But because of the loosely coupled nature of microservice architectures, these questions can be difficult to answer.
Those are the kinds of questions that Kiali can answer, by giving you a big picture of the mesh and showing the whole flow of your requests and data.
Kiali builds upon the same concepts as Istio, and you can check the glossary for a refresher.
Kiali taps into the data provided by Istio and OpenShift to generate its visualizations. It fetches ingress data (such as request tracing with Jaeger), the listing and data of the services, health indexes, and so on.
Kiali runs as a service alongside Istio, and does not require any changes to the Istio or OpenShift configuration (besides those required to install Istio).
A prerequisite for installing Kiali is that you must have OpenShift and Istio installed and configured.
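Before proceeding, you can confirm that both prerequisites are in place. A minimal check, assuming Minishift is your OpenShift environment and Istio is installed in the istio-system namespace:

```shell
# Verify that Minishift is running
minishift status

# Verify that Istio's control-plane pods are up in the istio-system namespace
oc get pods -n istio-system
```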
Update Kiali's configuration with the following commands:
# URLs for Jaeger and Grafana
export JAEGER_URL="http://tracing-istio-system.$(minishift ip).nip.io"
export GRAFANA_URL="http://grafana-istio-system.$(minishift ip).nip.io"

echo "apiVersion: v1
kind: ConfigMap
metadata:
  name: kiali
  namespace: istio-system
  labels:
    app: kiali
    chart: kiali
    heritage: Tiller
    release: istio
data:
  config.yaml: |
    istio_namespace: istio-system
    server:
      port: 20001
    external_services:
      istio:
        url_service_version: http://istio-pilot:8080/version
      jaeger:
        url: $JAEGER_URL
      grafana:
        url: $GRAFANA_URL
" | kubectl apply -f -

oc delete pod -l app=kiali -n istio-system
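After the ConfigMap is applied and the old pod is deleted, the Kiali Deployment recreates the pod with the new configuration. One way to watch this happen, assuming the pod carries the same app=kiali label used in the delete command above:

```shell
# Watch the Kiali pod being recreated with the new configuration;
# press Ctrl+C once the new pod reports Ready
oc get pods -n istio-system -l app=kiali -w
```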
Now we can access Kiali at kiali-istio-system.$(minishift ip).nip.io, so let’s do it:
The default credentials are "admin/admin", but it’s recommended that you change them before using Kiali in production.
To show the capabilities of Kiali, you’ll need an Istio-enabled application to be running. For this, we can use the customer-tutorial application we created earlier. To generate data for it, we can curl the application’s route.
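A sketch of a traffic-generation loop, assuming the customer-tutorial route from the earlier exercise (the exact hostname depends on your setup):

```shell
# Send a request every second so Kiali has telemetry to display
# (assumes the customer-tutorial route created earlier; adjust if yours differs)
for i in $(seq 1 100); do
  curl "http://customer-tutorial.$(minishift ip).nip.io"
  sleep 1
done
```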
After you log in, you should see the Service Graph page:
It shows a graph of all the microservices, connected by the requests going through them. On this page, you can see how the services interact with each other.
Next, click the Applications link in the left navigation. On this page you can view a listing of all the applications that are running in the cluster, along with additional information about them, such as their health status.
Click on the "customer" application to see its details:
By hovering over the icon in the Health section, you can see the health of a service (a service is considered healthy when it’s online and responding to requests without errors):
By clicking on Outbound Metrics or Inbound Metrics, you can also see the metrics for an application, like so:
Next, click the Workloads link in the left navigation. On this page you can view a listing of all the workloads that are present in your applications.
Click on the customer workload. Here you can see details for the workload, such as the pods and services that are included in it:
By clicking on Outbound Metrics and Inbound Metrics, you can check the metrics for the workload. These metrics are the same kinds shown on the application's detail page.