Introduction
This blog covers deploying the Contour ingress controller and demonstrates its use with Ingress and IngressRoute examples. I will deploy Contour on my local system, which does not have access to a service of type LoadBalancer, so the Contour service is modified to be of type NodePort.
Deploy Contour
There are a few different methods for deploying Contour, but I like to just clone the GitHub repository and deploy from the YAMLs provided.
$ git clone https://github.com/heptio/contour.git
$ cd contour/deployment/deployment-grpc-v2
$ ls -1
01-common.yaml
02-contour.yaml
02-rbac.yaml
02-service.yaml
As you can see there is a 02-service.yaml file. We will modify this file to make the service of type NodePort rather than LoadBalancer. If you are on a Kubernetes implementation like PKS with NSX-T, or one of the public cloud providers like GKE, you would not need to do this.
apiVersion: v1
kind: Service
metadata:
  name: contour
  namespace: heptio-contour
  annotations:
    # This annotation puts the AWS ELB into "TCP" mode so that it does not
    # do HTTP negotiation for HTTPS connections at the ELB edge.
    # The downside of this is the remote IP address of all connections will
    # appear to be the internal address of the ELB. See docs/proxy-proto.md
    # for information about enabling the PROXY protocol on the ELB to recover
    # the original remote IP address.
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    # Scrape metrics for the contour container
    # The envoy container is scraped by annotations on the pod spec
    prometheus.io/port: "8000"
    prometheus.io/scrape: "true"
spec:
  ports:
  - port: 80
    nodePort: 30080 # or whatever NodePort value you want within your K8S allowed range
    name: http
    protocol: TCP
    targetPort: 8080
  - port: 443
    nodePort: 30443 # or whatever NodePort value you want within your K8S allowed range
    name: https
    protocol: TCP
    targetPort: 8443
  selector:
    app: contour
  type: NodePort
---
Now save and apply the resources to your Kubernetes environment.
$ kubectl apply -f .
Verify Contour is Running
The deployment above will create a new heptio-contour namespace with various Kubernetes resources, one of which is a service of type NodePort pointing to Contour pods that each run two containers: the Contour container and the Envoy container. Make sure both containers are in the ready state. Since we have a service of type NodePort, you should be able to hit any of the nodes in your cluster to access it. Note: Contour does not serve any ingress traffic until an Ingress or IngressRoute is actually deployed.
$ kubectl get pods -n heptio-contour
NAME READY STATUS RESTARTS AGE
contour-66bc464fb5-f4b7b 2/2 Running 2 7d4h
contour-66bc464fb5-nkwgd 2/2 Running 2 7d4h
$ kubectl get svc -n heptio-contour
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
contour NodePort 10.107.138.251 <none> 80:30080/TCP,443:30443/TCP 7d4h
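Since the service is of type NodePort, any node address paired with port 30080 (or 30443) will work. If you are not sure what your node addresses are, you can list them with a standard kubectl command:

$ kubectl get nodes -o wide

The INTERNAL-IP column (or EXTERNAL-IP, if populated) gives you the addresses to use.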
Deploy Examples
Contour supports the basic Ingress resource type as well as a CustomResourceDefinition (CRD) type called IngressRoute. Below are some basic examples of each. I encourage you to look into the IngressRoute type; it provides some cool features like weighted service traffic, sketched just below.
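As a quick taste of that feature (not used in the walkthrough below), here is a minimal sketch of a weighted IngressRoute; the web-v1 and web-v2 service names are hypothetical stand-ins for two versions of a backing service:

apiVersion: contour.heptio.com/v1beta1
kind: IngressRoute
metadata:
  name: weighted-example
spec:
  virtualhost:
    fqdn: cluster1.corp.local
  routes:
  - match: /
    services:
    # 80/20 traffic split across two hypothetical services
    - name: web-v1
      port: 80
      weight: 80
    - name: web-v2
      port: 80
      weight: 20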
Deploy a Service
The first thing we will need is a service that our ingress traffic can connect to. I am using the term service here in the generic sense, like a web service. The simplest way is to create a pod running the nginx image and then expose it using a Kubernetes Service resource.
$ kubectl run --generator=run-pod/v1 --image=nginx --labels=run=web web
pod/web created
Next, expose the pod as a Service.
$ kubectl expose pod web --port=80
service/web exposed
That will create a Kubernetes Service of type ClusterIP which is fine for our examples.
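If you prefer declarative manifests, something roughly equivalent to the two commands above looks like this (a sketch; the names and the run=web label match what kubectl run and kubectl expose generate):

apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    run: web
spec:
  containers:
  - name: web
    image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  # Matches the run=web label on the pod above
  selector:
    run: web
  ports:
  - port: 80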
Deploy Ingress Resource
Now we want to create a Kubernetes resource of type Ingress. The example below routes traffic to our nginx web service.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ing1
  annotations:
    kubernetes.io/ingress.class: contour
spec:
  rules:
  - host: s1.cluster1.corp.local
    http:
      paths:
      - path: /
        backend:
          serviceName: web
          servicePort: 80
Save the YAML to a file and apply it to your cluster. Once it is deployed, execute the following curl command.
$ curl -v -H 'Host: s1.cluster1.corp.local' worker1:30080
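If everything is wired up, the response should be the default nginx welcome page. Trimmed, illustrative output (exact headers will vary; note that Envoy rewrites the Server header):

< HTTP/1.1 200 OK
< content-type: text/html
< server: envoy
...
<title>Welcome to nginx!</title>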
A few items to point out with the Ingress resource and curl command.
- The host is just a name I made up, since this is on my local system and I set the Host header explicitly when making the curl call. This makes it easy for testing.
- worker1 is one of the nodes in my Kubernetes cluster and has an entry in /etc/hosts (see the example after this list).
- The path is the root of the host, "/".
- The backend is the service that we exposed previously.
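For reference, an /etc/hosts entry along these lines lets you curl the hostname directly instead of overriding the Host header; the IP address below is a made-up lab address, so substitute one of your own node IPs:

192.168.1.101  worker1 s1.cluster1.corp.local cluster1.corp.local

$ curl -v s1.cluster1.corp.local:30080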
Deploy IngressRoute Resource
This second example uses a CustomResourceDefinition (CRD) to route traffic to our service. It is a slight modification of the Ingress example: this time traffic goes to a subpath, '/s1', rather than to the root path with s1 as a prefix on the hostname.
apiVersion: contour.heptio.com/v1beta1
kind: IngressRoute
metadata:
  name: ingroute1
spec:
  virtualhost:
    fqdn: cluster1.corp.local
  routes:
  - match: /s1
    prefixRewrite: "/"
    services:
    - name: web
      port: 80
Save the YAML to a file and apply it to your cluster. You can check the IngressRoute with the following command.
$ kubectl get ingressroute
NAME FQDN TLS SECRET FIRST ROUTE STATUS STATUS DESCRIPTION
ingroute1 cluster1.corp.local /s1 valid valid IngressRoute
Once it is deployed, execute the following curl command.
$ curl -v -H 'Host: cluster1.corp.local' worker1:30080/s1
Items of note:
- Changed the host to cluster1.corp.local, removing the s1 prefix.
- Added s1 as a subpath; the prefixRewrite in the IngressRoute strips it back off before the request reaches nginx, which you can confirm as shown below.
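A quick way to confirm the rewrite (assuming nginx's default access logging) is to check the pod's logs after the curl call:

$ kubectl logs web

The access-log line for your request should show "GET / HTTP/1.1" rather than "GET /s1 HTTP/1.1", since Envoy rewrote the path before forwarding it to nginx.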
Conclusion
This blog introduced Heptio Contour, a Kubernetes ingress controller. It covered deploying Contour and demonstrated two simple examples. I would encourage you to check out the documentation on Contour's IngressRoute resource at https://github.com/heptio/contour/blob/master/docs/ingressroute.md.