Minikube
Configure and Start Minikube
Before installing Knative and its components, we need to create a Minikube virtual machine and deploy Kubernetes into it.
Download minikube and add it to your path.
$TUTORIAL_HOME/bin/start-minikube.sh
😄 [knativetutorial] minikube v1.28.0 on Darwin 11.1
✨ Using the hyperkit driver based on user configuration
👍 Starting control plane node knativetutorial in cluster knativetutorial
🔥 Creating hyperkit VM (CPUs=6, Memory=8192MB, Disk=51200MB) ...
🐳 Preparing Kubernetes v1.23.0 on Docker 20.10.2 ...
> kubelet.sha256: 65 B / 65 B [--------------------------] 100.00% ? p/s 0s
> kubectl.sha256: 65 B / 65 B [--------------------------] 100.00% ? p/s 0s
> kubeadm.sha256: 65 B / 65 B [--------------------------] 100.00% ? p/s 0s
> kubectl: 41.98 MiB / 41.98 MiB [---------------] 100.00% 1.10 MiB p/s 38s
> kubeadm: 37.96 MiB / 37.96 MiB [-----------] 100.00% 450.47 KiB p/s 1m26s
> kubelet: 108.01 MiB / 108.01 MiB [---------] 100.00% 822.97 KiB p/s 2m14s
🔎 Verifying Kubernetes components...
🌟 Enabled addons: default-storageclass, storage-provisioner
🏄 Done! kubectl is now configured to use "knativetutorial" by default
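If you prefer not to use the helper script, a roughly equivalent manual start is sketched below; the driver, CPU, memory, and disk values are assumptions inferred from the output above, so adjust them for your host.
# Sketch of a manual equivalent of start-minikube.sh (values are assumptions)
minikube start -p knativetutorial \
  --driver=hyperkit \
  --cpus=6 --memory=8192 --disk-size=50g \
  --kubernetes-version=v1.23.0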
Install Knative
Deploy Custom Resource Definitions
kubectl apply \
--filename https://github.com/knative/serving/releases/download/knative-v1.8.3/serving-crds.yaml \
--filename https://github.com/knative/eventing/releases/download/knative-v1.8.5/eventing-crds.yaml
Now that you have installed the Knative Serving and Eventing CRDs, the following sections verify them by querying the api-resources.
All Knative Serving resources will be under the API group called serving.knative.dev.
kubectl api-resources --api-group='serving.knative.dev'
NAME             SHORTNAMES      APIVERSION                    NAMESPACED   KIND
configurations   config,cfg      serving.knative.dev/v1        true         Configuration
domainmappings   dm              serving.knative.dev/v1beta1   true         DomainMapping
revisions        rev             serving.knative.dev/v1        true         Revision
routes           rt              serving.knative.dev/v1        true         Route
services         kservice,ksvc   serving.knative.dev/v1        true         Service
All Knative Eventing resources will be under one of the following API groups:
- messaging.knative.dev
- eventing.knative.dev
- sources.knative.dev
kubectl api-resources --api-group='messaging.knative.dev'
NAME            SHORTNAMES   APIVERSION                 NAMESPACED   KIND
channels        ch           messaging.knative.dev/v1   true         Channel
subscriptions   sub          messaging.knative.dev/v1   true         Subscription
kubectl api-resources --api-group='eventing.knative.dev'
NAME         SHORTNAMES   APIVERSION                     NAMESPACED   KIND
brokers                   eventing.knative.dev/v1        true         Broker
eventtypes                eventing.knative.dev/v1beta1   true         EventType
triggers                  eventing.knative.dev/v1        true         Trigger
kubectl api-resources --api-group='sources.knative.dev'
NAME               SHORTNAMES   APIVERSION               NAMESPACED   KIND
apiserversources                sources.knative.dev/v1   true         ApiServerSource
containersources                sources.knative.dev/v1   true         ContainerSource
pingsources                     sources.knative.dev/v1   true         PingSource
sinkbindings                    sources.knative.dev/v1   true         SinkBinding
Knative has two main infrastructure components: the controller and the webhook. They translate the Knative CRDs, which are usually written as YAML files, into Kubernetes objects such as Deployments and Services. Apart from the controller and webhook, Knative Serving and Eventing also install their respective functional components, which are listed in the upcoming sections.
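For illustration, here is a minimal Knative Service manifest of the kind those components reconcile; the greeter name and the helloworld-go sample image are assumptions, and it can only be applied once Knative Serving (next section) is installed. The controller expands a manifest like this into a Configuration, a Route, a Revision, and ultimately a Kubernetes Deployment and Service.
# Hypothetical sample; apply with kubectl only after Knative Serving is installed
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: greeter
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go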
Install Knative Serving
kubectl apply \
--filename \
https://github.com/knative/serving/releases/download/knative-v1.8.3/serving-core.yaml
Wait for the Knative Serving deployment to complete:
kubectl rollout status deploy controller -n knative-serving
kubectl rollout status deploy activator -n knative-serving
kubectl rollout status deploy autoscaler -n knative-serving
kubectl rollout status deploy webhook -n knative-serving
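Alternatively, a single kubectl wait can block until every deployment in the namespace reports Available (the timeout value here is an arbitrary choice):
kubectl wait deployment --all -n knative-serving \
  --for=condition=Available --timeout=300s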
A successful deployment should show the following pods in the knative-serving namespace:
kubectl get pods -n knative-serving
NAME                                     READY   STATUS    RESTARTS   AGE
activator-6fb68fff7b-24bsp               1/1     Running   0          23s
autoscaler-54f48f5bb7-mvvrd              1/1     Running   0          23s
controller-66cb4b556b-n8n99              1/1     Running   0          23s
domain-mapping-66f8d5bc4c-9f9c7          1/1     Running   0          23s
domainmapping-webhook-55bdf595dd-4t4w2   1/1     Running   0          23s
webhook-6d5c77f989-lf2sm                 1/1     Running   0          23s
Install Kourier Ingress Gateway
kubectl apply \
--filename \
https://github.com/knative/net-kourier/releases/download/knative-v1.8.1/kourier.yaml
Wait for the Ingress Gateway deployment to complete:
kubectl rollout status deploy net-kourier-controller -n knative-serving
kubectl rollout status deploy 3scale-kourier-gateway -n kourier-system
A successful Kourier Ingress Gateway deployment should show the following pod:
kubectl get pods --all-namespaces -l 'app in(3scale-kourier-gateway,3scale-kourier-control)'
NAMESPACE        NAME                                      READY   STATUS    RESTARTS   AGE
kourier-system   3scale-kourier-gateway-79898dffd4-qsc65   1/1     Running   1          3d10h
Now configure Knative serving to use Kourier as the ingress:
kubectl patch configmap/config-network \
-n knative-serving \
--type merge \
-p '{"data":{"ingress.class":"kourier.ingress.networking.knative.dev"}}'
Install and Configure Ingress Controller
To access Knative Serving services from the minikube host, it is easier to have an Ingress deployed and configured.
The following section will install and configure Contour as the Ingress Controller.
kubectl apply \
--filename https://projectcontour.io/quickstart/contour.yaml
Wait for the Ingress to be deployed and running:
kubectl rollout status ds envoy -n projectcontour
kubectl rollout status deploy contour -n projectcontour
A successful rollout should list the following pods in the projectcontour namespace:
kubectl get pods -n projectcontour
NAME                       READY   STATUS    RESTARTS   AGE
contour-76bb4ff6cc-8666g   1/1     Running   0          89s
contour-76bb4ff6cc-wvtpx   1/1     Running   0          89s
envoy-dszgl                2/2     Running   0          89s
Now create an Ingress to Kourier Ingress Gateway:
cat <<EOF | kubectl apply -n kourier-system -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kourier-ingress
  namespace: kourier-system
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kourier
            port:
              number: 80
EOF
ingress.networking.k8s.io/kourier-ingress created
Configure Knative to use the kourier-ingress Gateway:
ksvc_domain="\"data\":{\""$(minikube -p knativetutorial ip)".nip.io\": \"\"}"
kubectl patch configmap/config-domain \
-n knative-serving \
--type merge \
-p "{$ksvc_domain}"
configmap/config-domain patched
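To verify, dump the ConfigMap and check that the data section now carries the nip.io domain derived from your minikube IP; for example, if minikube ip returned 192.168.64.2, you would see a key 192.168.64.2.nip.io with an empty value (the IP here is purely illustrative).
kubectl get configmap/config-domain -n knative-serving -o yaml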
Install Knative Eventing
kubectl apply \
--filename \
https://github.com/knative/eventing/releases/download/knative-v1.8.5/eventing-core.yaml \
--filename \
https://github.com/knative/eventing/releases/download/knative-v1.8.5/in-memory-channel.yaml \
--filename \
https://github.com/knative/eventing/releases/download/knative-v1.8.5/mt-channel-broker.yaml
Like the Knative Serving deployment, the Knative Eventing deployment will also take a few minutes to complete. Check the status of the deployment using:
kubectl rollout status deploy eventing-controller -n knative-eventing
kubectl rollout status deploy eventing-webhook -n knative-eventing
kubectl rollout status deploy imc-controller -n knative-eventing
kubectl rollout status deploy imc-dispatcher -n knative-eventing
kubectl rollout status deploy mt-broker-controller -n knative-eventing
kubectl rollout status deploy mt-broker-filter -n knative-eventing
kubectl rollout status deploy mt-broker-ingress -n knative-eventing
A successful deployment should show the following pods in the knative-eventing namespace:
kubectl get pods -n knative-eventing
NAME                                    READY   STATUS    RESTARTS   AGE
eventing-controller-56ccd89cd8-w9wmm    1/1     Running   0          26s
eventing-webhook-76b66cd56c-n2dqj       1/1     Running   0          26s
imc-controller-6c8cfbb558-zvglm         1/1     Running   0          25s
imc-dispatcher-b8bf96b6d-mpltl          1/1     Running   0          25s
mt-broker-controller-7dfb75f5cc-zv5jr   1/1     Running   0          25s
mt-broker-filter-8799894b4-fhk5z        1/1     Running   0          25s
mt-broker-ingress-685f5554c-mbpbd       1/1     Running   0          25s
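As an optional end-to-end smoke test, the sketch below deploys the sample Service from earlier and calls it through Contour, Kourier, and the nip.io domain. The greeter name and helloworld-go image are assumptions, the Service is created in the default namespace, and the URL assumes the default {name}.{namespace}.{domain} template.
# Sketch: deploy a sample Knative Service and call it via the configured domain
cat <<EOF | kubectl apply -f -
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: greeter
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
EOF
kubectl wait ksvc greeter --for=condition=Ready --timeout=300s
curl http://greeter.default.$(minikube -p knativetutorial ip).nip.io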