- How does Ingress work in Kubernetes?
- How can you see it?
- How can you configure it?
- How does it load-balance?
- How does it implement SSL?
Without Ingress, you would probably use a reverse proxy or a load-balancing solution such as Nginx, HAProxy, or Traefik. You would deploy it on the Kubernetes cluster and configure it to route traffic to other services. The configuration involves defining URL routes, configuring certificates, etc. Ingress is implemented by Kubernetes in much the same way: you first deploy a supported solution, which happens to be any of those solutions (Nginx, HAProxy, Traefik), and then specify a set of rules to configure ingress. The solution you deploy is called an ingress controller, and the set of rules you configure is called an ingress resource.
Ingress resources are created using definition files like the ones we use to create pods, deployments, and services. A Kubernetes cluster does not come with an ingress controller by default. If you set up a cluster following the demos in the course, you will not have an ingress controller built into it. So you cannot simply create ingress resources and expect them to work.
You do not have an ingress controller on Kubernetes by default. So, what do you deploy?
There are a number of solutions available for ingress, a few of them being GCE (Google's HTTPS load balancer), Nginx, Contour, HAProxy, Traefik, and Istio. Of these, GCE and Nginx are currently supported and maintained by the Kubernetes project.
An ingress controller is not just another load balancer or Nginx server; the load-balancer component is only part of it. Ingress controllers have additional intelligence built into them to monitor the Kubernetes cluster for new ingress resources and configure the underlying Nginx server accordingly. An Nginx ingress controller is deployed as just another Deployment in Kubernetes.
So we start with a Deployment definition named nginx-ingress-controller, with one replica and a simple pod template. We label it nginx-ingress, and the image used is the nginx-ingress-controller image with the right version. This is a special build of Nginx, built specifically to be used as an ingress controller in Kubernetes, so it has its own set of requirements. Within the image, the Nginx program is stored at the location /nginx-ingress-controller, so you must pass that as the command to start the controller.
If you have worked with Nginx before, you know it has a set of configuration options such as the path to store logs, SSL settings, session timeout, etc. To decouple this configuration data from the Nginx controller image, you must create a ConfigMap object and pass it in. The ConfigMap need not have any entries at this point; a blank object will do. But creating one makes it easy to modify a configuration setting in the future: you just add it to the ConfigMap and do not have to worry about modifying the Nginx configuration files directly.

You must also pass in two environment variables that carry the pod's name and the namespace it is deployed to; the Nginx service requires these to read the configuration data from within the pod. Finally, specify the ports used by the ingress controller, which happen to be 80 and 443.

Next, we need a service to expose the ingress controller to the external world, so we create a NodePort service with the nginx-ingress label selector to link the service to the deployment.

Ingress controllers have additional intelligence built into them to monitor the Kubernetes cluster for ingress resources and reconfigure the underlying Nginx server when something changes. For the ingress controller to do this, it requires a service account with the right set of permissions, so we create a service account with the correct roles and role bindings.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      name: nginx-ingress
  template:
    metadata:
      labels:
        name: nginx-ingress
    spec:
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.21.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
```
```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
    - port: 443
      targetPort: 443
      protocol: TCP
      name: https
  selector:
    name: nginx-ingress
```
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
```
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
```
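The notes mention that this service account needs the correct roles and role bindings, but those manifests are not shown. As a minimal sketch (the names `nginx-ingress-role` and `nginx-ingress-role-binding` and the exact resource list are illustrative assumptions; real ingress-nginx deployments need broader ClusterRole rules, so consult the controller's own manifests), a Role and RoleBinding might look like:

```yaml
# Illustrative sketch only: grants the service account read/watch access
# to the objects the controller monitors in its own namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: nginx-ingress-role            # hypothetical name
rules:
  - apiGroups: [""]
    resources: ["configmaps", "pods", "secrets", "endpoints"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-binding    # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
```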
You can check the nginx deployment, service, ConfigMap, and service account with Minikube.
```shell
$ minikube addons enable ingress

$ kubectl get deployment -n kube-system | grep nginx
ingress-nginx-controller   1/1     1            1           131m

$ kubectl get service -n kube-system | grep nginx
ingress-nginx-controller-admission   ClusterIP   10.99.140.249   <none>   443/TCP   132m

$ kubectl get configmap -n kube-system | grep nginx
nginx-load-balancer-conf   1      96m

$ kubectl get serviceaccount -n kube-system | grep nginx
ingress-nginx             1         129m
ingress-nginx-admission   1         129m
```
So, an ingress controller consists of:
- A Deployment running the special Nginx build, plus a Service to expose it.
- A ConfigMap to feed Nginx configuration data.
- A ServiceAccount with the right permissions to access all of these objects.
Now we are ready with an ingress controller in its simplest form.
An ingress resource is a set of rules and configurations applied to the ingress controller. You can configure rules to simply forward all incoming traffic to a single application, or to route traffic to different applications based on the URL. The ingress resource is created with a Kubernetes definition file.

The backend section defines where the traffic is routed to. So if you have a single backend, you do not really have any rules.
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-web
spec:
  backend:
    serviceName: web-nodeport-service
    servicePort: 80
```
```shell
$ kubectl create -f ingress-web.yaml
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
ingress.extensions/ingress-web created

$ kubectl get ingress
NAME          CLASS    HOSTS   ADDRESS        PORTS   AGE
ingress-web   <none>   *       192.168.64.5   80      25s
```
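As the deprecation warning suggests, the same single-backend ingress can be written against the `networking.k8s.io/v1` API instead. A sketch, reusing the service name from the example above:

```yaml
# Same single-backend ingress, expressed in the non-deprecated v1 API:
# spec.backend/serviceName/servicePort becomes spec.defaultBackend.service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-web
spec:
  defaultBackend:
    service:
      name: web-nodeport-service
      port:
        number: 80
```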
The new ingress is created and routes all incoming traffic directly to the web service. If you want to use rules, you can route traffic based on different conditions, for example by creating one rule for traffic originating from each domain or hostname. That means when users reach your cluster using a domain name:
- www.myapplication.com: handle the traffic using rule 1.
- www.apps.myapplication.com: handle the traffic using rule 2.
- Everything else: rule 3.
Within each rule, you can handle different paths. A request that matches none of the paths gets:
- 404 Not Found
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-web-db
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-nodeport-service
                port:
                  number: 80
    - http:
        paths:
          - path: /mongodb
            pathType: Prefix
            backend:
              service:
                name: mongodb-clusterip-service
                port:
                  number: 80
```
```shell
$ kubectl create -f ingress-web-db.yaml
ingress.networking.k8s.io/ingress-web-db created

$ kubectl get ingress
NAME             CLASS    HOSTS   ADDRESS        PORTS   AGE
ingress-web-db   <none>   *       192.168.64.5   80      80s
```
```shell
# curl test
$ curl http://192.168.64.5/
This is Node.js for Testing Kubernetes MongoDB and Redis Connection. Also, ConfigMap and Secrets can be tested.

$ curl http://192.168.64.5/mongodb
It looks like you are trying to access MongoDB over HTTP on the native driver port.
```
```shell
$ kubectl describe ingress ingress-web-db
Name:             ingress-web-db
Namespace:        default
Address:          192.168.64.5
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host        Path      Backends
  ----        ----      --------
  *           /         web-nodeport-service:80 (172.17.0.4:4000)
  *           /mongodb  mongodb-clusterip-service:80 (172.17.0.3:27017)
Annotations:  <none>
Events:
  Type    Reason  Age    From                      Message
  ----    ------  ----   ----                      -------
  Normal  CREATE  4m24s  nginx-ingress-controller  Ingress default/ingress-web-db
  Normal  UPDATE  3m44s  nginx-ingress-controller  Ingress default/ingress-web-db
```
- Default backend: if a request does not match any of the rules, it is directed to the service specified as the default backend.
- default-http-backend: this is that service's name.

You also need the pods and services referenced by the ingress rules.
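In the v1 API, the default backend can also be declared explicitly on the ingress itself via `spec.defaultBackend`, alongside the rules. A sketch, assuming a service named `default-http-backend` (the name shown by `kubectl describe`; you would have to deploy such a service yourself, e.g. one serving a custom error page):

```yaml
# Sketch: rules plus an explicit default backend for unmatched requests.
# "default-http-backend" must actually exist as a Service in the cluster.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-web-db-default     # hypothetical name
spec:
  defaultBackend:
    service:
      name: default-http-backend
      port:
        number: 80
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-nodeport-service
                port:
                  number: 80
```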
Lastly, the third type of configuration routes traffic by domain name, using the host field. The host field in each rule matches the specified value against the domain name in the request URL and routes traffic to the appropriate backend. If you do not specify the host field, it is simply treated as a `*`: the rule accepts all incoming traffic without matching the hostname.
Rules with a single path each:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-web-db-domain
spec:
  rules:
    - host: web.myapplication.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-nodeport-service
                port:
                  number: 80
    - host: db.myapplication.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: mongodb-clusterip-service
                port:
                  number: 80
```
- You can still have multiple path specifications under each host.
```shell
$ kubectl create -f ingress-web-db-domain.yaml
ingress.networking.k8s.io/ingress-web-db-domain created

$ kubectl get ingress
NAME                    CLASS    HOSTS                                        ADDRESS        PORTS   AGE
ingress-web-db          <none>   *                                            192.168.64.5   80      16m
ingress-web-db-domain   <none>   web.myapplication.com,db.myapplication.com                  80      9s
```
```shell
# add hosts
$ cat /etc/hosts
192.168.64.5 db.myapplication.com
192.168.64.5 web.myapplication.com

# curl test
$ curl web.myapplication.com
This is Node.js for Testing Kubernetes MongoDB and Redis Connection. Also, ConfigMap and Secrets can be tested.

$ curl db.myapplication.com
It looks like you are trying to access MongoDB over HTTP on the native driver port.
```