Knative Tutorial

Download Tutorial Sources

Before we start setting up the environment, let’s clone the tutorial sources and set the TUTORIAL_HOME environment variable to point to the root directory of the tutorial:

git clone -b release/0.7.x https://github.com/redhat-developer-demos/knative-tutorial &&\
export TUTORIAL_HOME="$(pwd)/knative-tutorial"

This tutorial was developed and tested with:

  • Knative v0.7.1

  • Minikube v1.4.0

  • OpenShift v4.1+

Choose your Kubernetes Cluster

Knative can be installed only on a Kubernetes cluster. The following sections show how to install Knative on vanilla Kubernetes or on Red Hat OpenShift, an enterprise-grade Kubernetes platform:

  • Kubernetes

  • OpenShift

Configure and Start Minikube

Before installing Knative and its components, we need to create a Minikube virtual machine and deploy Kubernetes into it.

Download minikube and add it to your path.
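
You can verify the install with:

minikube version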

minikube profile knativetutorial (1)

minikube start -p knativetutorial --memory=8192 --cpus=6 \
  --kubernetes-version=v1.14.0 \
  --vm-driver=hyperkit \(2)
  --disk-size=50g \
  --extra-config='apiserver.enable-admission-plugins=LimitRanger,NamespaceExists,NamespaceLifecycle,ResourceQuota,ServiceAccount,DefaultStorageClass,MutatingAdmissionWebhook' \
  --insecure-registry='10.0.0.0/24' (3)

1 Sets the Minikube profile to knativetutorial so that all subsequent Minikube commands are executed within this profile's context
2 On macOS. On Linux, please use --vm-driver=kvm2; on Windows, --vm-driver=hyperv
3 Marks the internal registry as an insecure registry

Enable Internal Registry

minikube addons enable registry

It will take a few minutes for the registry to be enabled. You can watch the status using the command:

kubectl get pods -n kube-system -w | grep registry

A successful registry enablement will show the following pods when running the command:

kubectl get pods -n kube-system | grep registry

registry-7c5hg                             1/1     Running   0          29m
registry-proxy-cj6dj                       1/1     Running   0          29m

Navigate to registry helper folder:

cd $TUTORIAL_HOME/apps/minikube-registry-helper

Deploy Registry Helper

As part of some exercises in this tutorial, we need to push and pull images to the local internal registry. To make pushes and pulls smoother, we will deploy a helper so that we can use common names such as dev.local and example.com as registry aliases for the internal registry.
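
For example, once the helper is in place, you can push an image via the dev.local alias from within the Minikube Docker daemon (the greeter image name below is hypothetical):

# Point the local docker CLI at the Minikube Docker daemon
eval $(minikube docker-env -p knativetutorial)

# Tag and push a locally built image using the dev.local registry alias
docker tag greeter:0.0.1 dev.local/rhdevelopers/greeter:0.0.1
docker push dev.local/rhdevelopers/greeter:0.0.1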

Add entries to minikube host file

kubectl apply -n kube-system -f registry-aliases-config.yaml &&\
kubectl apply -n kube-system -f node-etc-hosts-update.yaml

Wait for the DaemonSet to be running before proceeding to the next step. The status of the DaemonSet can be viewed via kubectl get pods -n kube-system -w; you can press CTRL+c to end the watch.

Check that the entries have been added to the Minikube node's host file:

minikube ssh -- sudo cat /etc/hosts

A successful DaemonSet run will show the Minikube node's /etc/hosts file with the following entries:

127.0.0.1       localhost
127.0.1.1 demo
10.111.151.121  dev.local
10.111.151.121  example.com

The IP for dev.local and example.com will match the CLUSTER-IP of the internal registry. To find the Cluster IP, run the command:

kubectl get svc registry -n kube-system

NAME       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
registry   ClusterIP   10.111.151.121   <none>        80/TCP                  178m

Update CoreDNS

./patch-coredns.sh

To verify the patch succeeded, run the following command:

kubectl get cm -n kube-system coredns -o yaml

A successful patch will show the updated ConfigMap coredns in the kube-system namespace:

apiVersion: v1
data:
  Corefile: |-
    .:53 {
        errors
        health
        rewrite name dev.local registry.kube-system.svc.cluster.local
        rewrite name example.com registry.kube-system.svc.cluster.local
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           upstream
           fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  name: coredns
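
You can optionally verify the rewrites from inside the cluster with a throwaway lookup pod. A minimal sketch; busybox:1.28 is used here because its nslookup is known to behave well in Kubernetes:

kubectl run -it --rm --restart=Never dnstest --image=busybox:1.28 -- nslookup dev.local

The lookup should return the CLUSTER-IP of the internal registry.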

Install Istio


  • As Knative needs only the Istio ingress gateway and Pilot, we will use the Knative Istio lean install, which installs just those required components (the install command follows this list).

  • Installation of the Istio components will take some time, and it is highly recommended that you start the Knative components installation only after you have verified that all Istio component pods are running.
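
Install Istio using the Knative lean manifests. The asset URLs below are an assumption based on the istio-crds.yaml and istio-lean.yaml files shipped with the Knative Serving v0.7.1 release:

kubectl apply -f https://github.com/knative/serving/releases/download/v0.7.1/istio-crds.yaml && \
kubectl apply -f https://github.com/knative/serving/releases/download/v0.7.1/istio-lean.yaml

The Istio pods can be watched using the command: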

kubectl -n istio-system get pods -w

You can use CTRL+c to terminate the watch

A successful Istio install will have the pods running in the istio-system namespace as shown below:

NAME                                     READY   STATUS      RESTARTS   AGE
cluster-local-gateway-579cfd9fdd-9hb7p   0/1     Running     0          62s
istio-ingressgateway-776c54f7c4-m9qz2    1/2     Running     0          62s
istio-init-crd-10-mc7h5                  0/1     Completed   0          68s
istio-init-crd-11-86lsf                  0/1     Completed   0          68s
istio-pilot-75b876b994-w8x7t             0/1     Running     0          62s
istio-pilot-75b876b994-w8x7t             1/1     Running     0          67s
istio-ingressgateway-776c54f7c4-m9qz2    2/2     Running     0          69s

Install Custom Resource Definitions

kubectl apply --selector knative.dev/crd-install=true \
  --filename https://github.com/knative/serving/releases/download/v0.7.1/serving.yaml \
  --filename https://github.com/knative/eventing/releases/download/v0.7.1/release.yaml

The first time you run the above command, it will show some warnings and errors like the ones below. You can safely ignore them; re-running the command will make the errors disappear.

unable to recognize "https://github.com/knative/serving/releases/download/v0.7.1/serving.yaml": no matches for kind "Image" in version "caching.internal.knative.dev/v1alpha1"
unable to recognize "https://github.com/knative/eventing/releases/download/v0.7.1/release.yaml": no matches for kind "ClusterChannelProvisioner" in version "eventing.knative.dev/v1alpha1"

Install Knative Serving

kubectl apply --selector networking.knative.dev/certificate-provider!=cert-manager \
  --filename https://github.com/knative/serving/releases/download/v0.7.1/serving.yaml

As the Knative serving components are getting installed, you can watch their status using the following command:

kubectl -n knative-serving get pods -w

You can use CTRL+c to terminate the watch

A successful Knative serving installation will have its pods running in the knative-serving namespace as shown below:

NAME                                READY   STATUS    RESTARTS   AGE
activator-bc968b649-r6l82           1/1     Running   0          36s
autoscaler-66f5cd8774-kjs9k         1/1     Running   0          36s
controller-68977d579c-phltz         1/1     Running   0          36s
networking-istio-5d8d9d574d-5lfsl   1/1     Running   0          36s
webhook-894c8cb4d-7hl97             1/1     Running   0          36s
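
You can also confirm that the Knative Serving API group is registered:

kubectl api-resources --api-group='serving.knative.dev'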

Install Knative Eventing

kubectl apply --selector networking.knative.dev/certificate-provider!=cert-manager \
  --filename https://github.com/knative/eventing/releases/download/v0.7.1/release.yaml

As the Knative eventing components are getting installed, you can watch their status using the following command:

kubectl -n knative-eventing get pods -w

You can use CTRL+c to terminate the watch

A successful Knative eventing installation will have the following pods running in the knative-eventing namespace:

knative-eventing namespace
NAME                                           READY   STATUS    RESTARTS   AGE
eventing-controller-5c5f664ddb-9bmq9           1/1     Running   0          3m40s
eventing-webhook-7ccb674dc6-qjtwp              1/1     Running   0          3m40s
imc-controller-9bcf67784-j7m2h                 1/1     Running   0          3m40s
imc-dispatcher-7c6c7d798c-kvzf7                1/1     Running   0          3m40s
in-memory-channel-controller-8dfd9c8df-cpsph   1/1     Running   0          3m40s
in-memory-channel-dispatcher-8776cb68f-wntmb   1/1     Running   0          3m39s
sources-controller-6f4d494fb9-8wkj4            1/1     Running   0          3m40s
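
You can also confirm that the Knative Eventing API group is registered:

kubectl api-resources --api-group='eventing.knative.dev'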

Configuring Kubernetes namespace

We will use a non-default Kubernetes namespace called knativetutorial for all the tutorial exercises.

kubectl create namespace knativetutorial

The kubens utility, installed as part of kubectx, allows for easy switching between Kubernetes namespaces.

kubens knativetutorial
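
You can confirm the active namespace with:

kubectl config view --minify --output 'jsonpath={..namespace}'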

OpenShift

Installing Knative Serving and Eventing needs OpenShift 4; you can provision a cluster using try.openshift.com or use any existing OpenShift 4 cluster.

Once you have your cluster, download the latest OpenShift client (oc) and add it to your path.

oc version

The output should show an oc version of 4.1 or above:

oc version
Client Version: version.Info{Major:"4", Minor:"1+", GitVersion:"v4.1.0+b4261e0", GitCommit:"b4261e07ed", GitTreeState:"clean", BuildDate:"2019-07-06T03:16:01Z", GoVersion:"go1.12.6", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.6+73b5d76", GitCommit:"73b5d76", GitTreeState:"clean", BuildDate:"2019-09-23T16:18:51Z", GoVersion:"go1.12.8", Compiler:"gc", Platform:"linux/amd64"}

We will be using Kubernetes Operators to install Serverless (Knative Serving) and Knative Eventing components on OpenShift.

  • The Knative Serving community component is deprecated on OpenShift, so we will be using the Red Hat Serverless operator.

  • The Knative Eventing components are currently available as the knative-eventing community operator, which is not yet supported by Red Hat.

Login as admin

Log in to the OpenShift console using the cluster admin credentials.

Create projects

OpenShift is integrated with OperatorHub, which allows you to install components (using Kubernetes Operators) from within the OpenShift web console. For this installation, however, we will use the command line via oc:

oc adm new-project istio-system && \
oc adm new-project knative-serving && \
oc adm new-project knative-eventing && \
oc adm new-project knativetutorial
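
You can confirm the projects were created:

oc get projects | grep -E 'istio-system|knative'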

Install catalog sources

oc apply -f "$TUTORIAL_HOME/install/redhat-operators-csc.yaml" \
  -f "$TUTORIAL_HOME/install/community-operators-csc.yaml"

It will take a few minutes for the operators to be installed and reconciled. Check the status using the command:

oc -n openshift-marketplace get csc

A successful reconciliation should show an output like:

NAME                           STATUS      MESSAGE                                       AGE
community-operators-packages   Succeeded   The object has been successfully reconciled   62s
redhat-operators-packages      Succeeded   The object has been successfully reconciled   62s

Open a new terminal and start a watch on the command oc get csv -n openshift-operators. For further reference in this setup, we will call this terminal the WATCH_WINDOW.

watch 'oc get csv -n openshift-operators -ocustom-columns-file=$TUTORIAL_HOME/install/csv-columns.txt'

You can terminate the watch using the command Ctrl+c

Install Servicemesh

Subscribe to Servicemesh via OperatorHub

oc apply -f "$TUTORIAL_HOME/install/servicemesh/subscription.yaml"

It will take a few minutes for the servicemesh and its dependent components to be installed. Watch the status in the WATCH_WINDOW.

A successful servicemesh subscription install should show output in the WATCH_WINDOW like:

NAME                                         VERSION               PHASE
elasticsearch-operator.4.1.20-201910102034   4.1.20-201910102034   Succeeded
jaeger-operator.v1.13.1                      1.13.1                Succeeded
kiali-operator.v1.0.6                        1.0.6                 Succeeded
servicemeshoperator.v1.0.1                   1.0.1                 Succeeded

The servicemesh operator needs to be copied to the istio-system project before we can create the ServiceMeshControlPlane and ServiceMeshMemberRoll custom resources. You can watch the status using the command:

watch 'oc get csv -n istio-system -ocustom-columns-file=$TUTORIAL_HOME/install/csv-columns.txt'

It will take a few seconds for the operator to be copied.

Once you see the following status, you can terminate the watch using Ctrl+c:

NAME                                         VERSION               PHASE
elasticsearch-operator.4.1.20-201910102034   4.1.20-201910102034   Succeeded
jaeger-operator.v1.13.1                      1.13.1                Succeeded
kiali-operator.v1.0.6                        1.0.6                 Succeeded
servicemeshoperator.v1.0.1                   1.0.1                 Succeeded

Once the operators are in place, create the ServiceMeshControlPlane and ServiceMeshMemberRoll custom resources:

oc create -f "$TUTORIAL_HOME/install/servicemesh/smcp.yaml" && \
oc create -f "$TUTORIAL_HOME/install/servicemesh/smmr.yaml"

It will take a few minutes for the servicemesh components to be installed. You can watch the status using the command:

oc get pods -n istio-system -w

A successful servicemesh install should show the following pods:

NAME                                     READY   STATUS    RESTARTS   AGE
cluster-local-gateway-7795cc7956-mqmq7   1/1     Running   0          92s
istio-citadel-f88bdd688-c52z8            1/1     Running   0          2m58s
istio-galley-f8f96c6bf-x7f4k             1/1     Running   0          2m48s
istio-ingressgateway-65bf84457c-7rh5t    1/1     Running   0          92s
istio-pilot-7f57f8bb5b-cr2qr             1/1     Running   0          110s

These are the minimal set of servicemesh components required for serverless.

Install Serverless

Subscribe to Serverless via OperatorHub

oc apply -f "$TUTORIAL_HOME/install/knative-serving/subscription.yaml"

Wait for the serverless-operator CSV PHASE to be Succeeded before proceeding to the next step. You can watch the status in the WATCH_WINDOW.

A successful Knative Serving subscription install should show output in the WATCH_WINDOW like:

NAME                                         VERSION               PHASE
elasticsearch-operator.4.1.20-201910102034   4.1.20-201910102034   Succeeded
jaeger-operator.v1.13.1                      1.13.1                Succeeded
kiali-operator.v1.0.6                        1.0.6                 Succeeded
serverless-operator.v1.0.0                   1.0.0                 Succeeded
servicemeshoperator.v1.0.1                   1.0.1                 Succeeded

The serverless operator needs to be copied to the knative-serving project before we can create the KnativeServing custom resource. You can watch the status using the command:

watch 'oc get csv -n knative-serving -ocustom-columns-file=$TUTORIAL_HOME/install/csv-columns.txt'

It will take a few seconds for the operator to be copied.

Once you see the following status, you can terminate the watch using Ctrl+c:

NAME                                         VERSION               PHASE
elasticsearch-operator.4.1.20-201910102034   4.1.20-201910102034   Succeeded
jaeger-operator.v1.13.1                      1.13.1                Succeeded
kiali-operator.v1.0.6                        1.0.6                 Succeeded
serverless-operator.v1.0.0                   1.0.0                 Succeeded
servicemeshoperator.v1.0.1                   1.0.1                 Succeeded

Once the operators are in place, create the KnativeServing custom resource:

oc apply -f "$TUTORIAL_HOME/install/knative-serving/cr.yaml"

It will take a few minutes for the Knative Serving components to be installed. You can watch the status using:

oc get pods -n knative-serving -w

You can terminate the watch using the command Ctrl+c

A successful serverless install will show the following pods in the knative-serving namespace:

NAME                                    READY   STATUS    RESTARTS   AGE
activator-78464cc84-vq9wp               1/1     Running   1          103s
autoscaler-57479674d6-hlvx7             1/1     Running   0          102s
controller-6fcb5b4b78-flq8d             1/1     Running   0          97s
networking-certmanager-8c6d68d4-cmf7x   1/1     Running   0          97s
networking-istio-644984496f-db58w       1/1     Running   0          96s
webhook-84b96fdc6f-vbpxm                1/1     Running   1          96s

Install Knative Eventing

oc apply -f "$TUTORIAL_HOME/install/knative-eventing/subscription.yaml"

Wait for the knative-eventing-operator CSV PHASE to be Succeeded before proceeding to the next step. You can watch the status in the WATCH_WINDOW.

A successful Knative Eventing subscription install should show output in the WATCH_WINDOW like:

NAME                                         VERSION               PHASE
elasticsearch-operator.4.1.20-201910102034   4.1.20-201910102034   Succeeded
jaeger-operator.v1.13.1                      1.13.1                Succeeded
kiali-operator.v1.0.6                        1.0.6                 Succeeded
knative-eventing-operator.v0.8.0             0.8.0                 Succeeded
serverless-operator.v1.0.0                   1.0.0                 Succeeded
servicemeshoperator.v1.0.1                   1.0.1                 Succeeded

The knative-eventing operator needs to be copied to the knative-eventing project before we can create the KnativeEventing custom resource. You can watch the status using the command:

watch 'oc get csv -n knative-eventing -ocustom-columns-file=$TUTORIAL_HOME/install/csv-columns.txt'

It will take a few seconds for the operator to be copied.

Once you see the following status, you can terminate the watch using Ctrl+c:

NAME                                         VERSION               PHASE
elasticsearch-operator.4.1.20-201910102034   4.1.20-201910102034   Succeeded
jaeger-operator.v1.13.1                      1.13.1                Succeeded
kiali-operator.v1.0.6                        1.0.6                 Succeeded
knative-eventing-operator.v0.8.0             0.8.0                 Succeeded
serverless-operator.v1.0.0                   1.0.0                 Succeeded
servicemeshoperator.v1.0.1                   1.0.1                 Succeeded
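
Once the operators are in place, create the KnativeEventing custom resource. The path below is an assumption that mirrors the knative-serving layout of the tutorial sources:

oc apply -f "$TUTORIAL_HOME/install/knative-eventing/cr.yaml"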

It will take a few minutes for the Knative Eventing components to be installed. You can watch the status using:

oc get pods -n knative-eventing -w

You can terminate the watch using the command Ctrl+c

A successful Knative Eventing install will show the following pods in the knative-eventing namespace:

NAME                                            READY   STATUS    RESTARTS   AGE
eventing-controller-758d785bf7-jr7bh            1/1     Running   0          2m4s
eventing-webhook-7ff46cd45f-w8d9v               1/1     Running   0          2m3s
imc-controller-75d7f598df-4knn4                 1/1     Running   0          113s
imc-dispatcher-77f565585c-z6d8x                 1/1     Running   0          113s
in-memory-channel-controller-6b4967d97b-x2hcj   1/1     Running   0          2m
in-memory-channel-dispatcher-8bbcd4f9-kxmcw     1/1     Running   0          118s
sources-controller-788874d5fc-8jqrz             1/1     Running   0          2m4s

Congratulations! You have now installed all the required components to run the tutorial exercises. You can terminate the WATCH_WINDOW using Ctrl+c.

Navigate to tutorial project
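
Switch to the knativetutorial project so that subsequent commands run against it:

oc project knativetutorial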


Development tutorial

You have two options for your development environment:

CLI tools

The following CLI tools are required for running the exercises in this tutorial. Please have them installed and configured before you get started with any of the tutorial chapters.

Tool                  macOS                  Fedora                               Windows
Git                   Download               Download                             Download
Docker                Docker for Mac         dnf install docker                   Docker for Windows
kubectl               Download               Download                             Download
stern                 brew install stern     Download                             Download
yq                    brew install yq        Download                             Download
httpie                brew install httpie    dnf install httpie                   https://httpie.org/doc#windows-etc
hey                   Download               Download                             Download
watch                 brew install watch     dnf install procps-ng                -
kubectx and kubens    brew install kubectx   https://github.com/ahmetb/kubectx    https://github.com/ahmetb/kubectx

All in one Environment

If you don't wish to install the aforementioned tools, you can get started with the tutorial using the all-in-one environment based on Eclipse Che, a browser-based IDE.

Install chectl

To get started in setting up this environment, we need chectl. Follow the chectl installation instructions to install it locally.

You can verify the install using the command:

chectl version

The command above should return output like:

chectl/7.2.0 darwin-x64 node-v10.16.3

The above output is an example from macOS (darwin); the OS information in the output may vary based on your OS.

Navigate to $TUTORIAL_HOME:
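
cd $TUTORIAL_HOME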


Create Che server

chectl server:start --platform minikube

It will take a few minutes for the Che server to start; watch the output of the chectl command for the status.

A successful Che install should show an output like:

chectl server:start --platform minikube
  βœ” Verify Kubernetes API...OK
  βœ” πŸ‘€  Looking for an already existing Che instance
    βœ” Verify if Che is deployed into namespace "che"...it is not
  βœ” ✈️  Minikube preflight checklist
    βœ” Verify if kubectl is installed
    βœ” Verify if minikube is installed
    βœ” Verify if minikube is running
    ↓ Start minikube [skipped]
      β†’ Minikube is already running.
    βœ” Verify if minikube ingress addon is enabled
    ↓ Enable minikube ingress addon [skipped]
      β†’ Ingress addon is already enabled.
    βœ” Retrieving minikube IP and domain for ingress URLs...192.168.64.166.nip.io.
  βœ” πŸƒβ€  Running Helm to install Che
    βœ” Verify if helm is installed
    βœ” Create Tiller Role Binding...done.
    βœ” Create Tiller Service Account...done.
    βœ” Create Tiller RBAC
    βœ” Create Tiller Service...done.
    βœ” Preparing Che Helm Chart...done.
    βœ” Updating Helm Chart dependencies...done.
    βœ” Deploying Che Helm Chart...done.
  βœ” βœ…  Post installation checklist
    βœ” Devfile registry pod bootstrap
      βœ” scheduling...done.
      βœ” downloading images...done.
      βœ” starting...done.
    βœ” Plugin registry pod bootstrap
      βœ” scheduling...done.
      βœ” downloading images...done.
      βœ” starting...done.
    βœ” Che pod bootstrap
      βœ” scheduling...done.
      βœ” downloading images...done.
      βœ” starting...done.
    βœ” Retrieving Che Server URL...http://che-che.192.168.64.166.nip.io
    βœ” Che status check
Command server:start has completed successfully.

The Che server URL will match your Minikube IP (minikube ip).

Configure Che server

The default Che server Kubernetes service account has limited privileges. For this tutorial we will use a different service account called tutorial-tools, which is a cluster-admin within your Kubernetes cluster.

Create the tutorial-tools service account and ConfigMap:

kubectl apply -n che -f $TUTORIAL_HOME/install/tutorial-tools.yaml

Patch the Che ConfigMap to use the tutorial-tools service account:

kubectl patch configmaps -n che che \
  --patch '{"data": {"CHE_INFRA_KUBERNETES_SERVICE__ACCOUNT__NAME": "tutorial-tools"} }'
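
You can verify the patch took effect:

kubectl get configmap -n che che -o yaml | grep CHE_INFRA_KUBERNETES_SERVICE__ACCOUNT__NAME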

We need to restart the Che pod so that it picks up the new service account from the ConfigMap (assuming the Che server pod carries the app=che label set by the Helm chart):

kubectl delete pod -n che -l app=che

It will take a few minutes for the new Che pod to come up. You can watch the status using the command kubectl get pods -n che -w; use Ctrl+c to terminate the watch.

Create workspace

Create a Che workspace and clone the Knative tutorial sources into it:

chectl workspace:start -f $TUTORIAL_HOME/devfile.yaml

A successful workspace creation will show the following output:

chectl workspace:start -f devfile.yaml
  βœ” Retrieving Che Server URL...http://che-che.192.168.64.166.nip.io
  βœ” Verify if Che server is running...RUNNING (auth disabled)
  βœ” Create workspace from Devfile devfile.yaml

Workspace IDE URL:
http://che-che.192.168.64.166.nip.io/dashboard/#/ide/che/knative-tutorial

The IP used in the Che workspace URL will be your Minikube IP (minikube ip).

Open your environment using the Workspace IDE URL listed in the output to start the workspace.

It will take a few minutes for the workspace to be up and running.

Familiarize yourself with your workspace

Opening your Che workspace using the Workspace IDE URL will show a browser window like this:

[Image: Che environment overview]

You can open a New Terminal to start running the tutorial exercises.

The editor comes pre-configured with common and popular VS Code plugins:

  • Java

  • XML

  • YAML

  • Kubernetes