CKAD exam Preparation Notes - Kubernetes Services and Ingress - Part 6

In the last part, we covered Labels, Selectors, Annotations, Jobs, and CronJobs. This part covers Services, the types of Services (ClusterIP, NodePort, and LoadBalancer), and Ingress in Kubernetes.

Services

  • Pods in Kubernetes are ephemeral, and their IP addresses change when they restart.

  • So using a Pod's IP for internal or external communication doesn't make sense.

  • A Service is a Kubernetes object that provides a stable, static IP for a set of Pods.

  • A Service also works as a load balancer if you have multiple replicas of a Pod.

  • In technical terms, a Service is an abstraction that exposes an application running on a set of Pods as a network service.

  • Services enable application access from both internal and external sources.
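As a quick sketch of the idea, the commands below (assuming a working cluster and a hypothetical `nginx` Deployment) create a Deployment and expose it behind a Service; the Service IP stays stable even when the Pods behind it are recreated:

```shell
# Create a Deployment with 3 nginx replicas
kubectl create deployment nginx --image=nginx --replicas=3

# Expose it behind a Service; traffic is load-balanced across the replicas
kubectl expose deployment nginx --port=80 --target-port=80

# The Service IP stays stable even when the Pods are recreated
kubectl get service nginx
kubectl delete pod -l app=nginx   # Pods come back with new IPs...
kubectl get service nginx         # ...but the Service IP is unchanged
```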

Types of Services

ClusterIP

  • ClusterIP is the default Service type in Kubernetes: whenever you create a Service without specifying a type, Kubernetes creates a ClusterIP Service.

  • As its name suggests, the ClusterIP service is only reachable inside the cluster.

  • You can't make a request to the Pods or ReplicaSets behind it from outside the cluster.

  • ClusterIP is generally used for internal communication between Pods, for example, between a backend and a database.

Imperative command for ClusterIP

kubectl create service clusterip NAME --tcp=<port>:<targetPort>

Here, we have two types of ports

  • port: the port on which the Service exposes the application.

  • targetPort: the port on which the application runs inside the Pod.

ClusterIP example: kubectl create service clusterip nginx-service --tcp=8080:80

In the above example, 8080 is the port on which the Service will be accessible, and 80 is the container port on which the application runs.
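To sanity-check a ClusterIP Service from inside the cluster, a throwaway busybox Pod works well. This is a sketch that assumes the nginx-service above exists and is reachable on port 8080; Service names resolve via cluster DNS as `<service-name>.<namespace>.svc.cluster.local`:

```shell
# Run a one-off busybox Pod and fetch the nginx page through the Service;
# the Pod is deleted automatically when the command exits (--rm)
kubectl run test-client --image=busybox --rm -it --restart=Never \
  -- wget -qO- http://nginx-service:8080
```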

YAML manifest for ClusterIP

apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-service
  name: nginx-service
spec:
  ports:
  - name: nginx-service
    port: 8080
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: ClusterIP
status:
  loadBalancer: {}

NodePort

  • NodePort is used to open a specific port on all cluster nodes.

  • Whenever we create a NodePort Service, a ClusterIP Service is automatically created to route the requests.

  • With a NodePort Service, you can access the application at node-ip:node-port from outside the cluster.

  • If you do not specify the port number, Kubernetes will automatically choose a port from the range of 30000–32767.

  • The port you choose must not already be in use by another NodePort Service.

  • One common use case for NodePort is quickly exposing an application for testing.

  • NodePort is not an ideal service for production. We should prefer using a Load Balancer, which we will discuss in the next section.

Imperative command for NodePort

kubectl create service nodeport nginx-service --tcp=80:8080 --node-port=30080

Here we have specified the node-port 30080. Kubernetes will automatically assign a port if you skip this flag.
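Once the NodePort Service exists, a quick way to find a node address and test the endpoint is sketched below (which IP column applies depends on your environment, and `<node-ip>` is a placeholder you must fill in):

```shell
# List nodes with their INTERNAL-IP / EXTERNAL-IP columns
kubectl get nodes -o wide

# From outside the cluster, hit any node on the NodePort
curl http://<node-ip>:30080
```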

YAML manifest for NodePort

apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-service
  name: nginx-service
spec:
  ports:
  - name: nginx-service
    nodePort: 30080
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: nginx
  type: NodePort
status:
  loadBalancer: {}

Load Balancer

  • LoadBalancer is an extension of the NodePort Service. Kubernetes automatically creates the NodePort and ClusterIP Services whenever you create a LoadBalancer Service.

  • Load Balancer integrates the NodePorts with the external Load Balancers provided by the cloud service providers.

  • The NodePort created by Kubernetes is intended to be accessed only through the Load Balancer.

  • The Load Balancer provided by cloud providers like AWS, Azure, GCP, etc., comes with an external endpoint that can be added as an A or CNAME DNS record.

  • Load Balancers managed by cloud providers are expensive, so they should be used when you are sure that NodePort isn't enough to fulfill the requirements.

  • Kubernetes provisions a new external load balancer for each LoadBalancer Service you create; you can't share one load balancer between two LoadBalancer Services.

Imperative command for Load Balancer

kubectl create service loadbalancer NAME [--tcp=port:targetPort]

kubectl create service loadbalancer nginx-service --tcp=80:8080
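On a cloud cluster, the external endpoint takes a minute or two to provision; you can watch for it and then test, as sketched below (assumes the nginx-service above; `<external-ip>` is a placeholder for the address your cloud provider assigns):

```shell
# Watch until EXTERNAL-IP switches from <pending> to a real address
kubectl get service nginx-service --watch

# Then access the application through the cloud load balancer
curl http://<external-ip>:80
```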

YAML manifest for Load Balancer

apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-service
  name: nginx-service
spec:
  ports:
  - name: nginx-service
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: nginx
  type: LoadBalancer
status:
  loadBalancer: {}

Ingress

  • Ingress is not a Service but an API object that routes requests to applications running in the Kubernetes cluster.

  • Ingress maps internal applications to different external endpoints, such as HTTP and HTTPS URLs.

  • In a complex application, we can have multiple public endpoints. We can create a load balancer for each endpoint, but that will cost a lot.

  • With Ingress, we can utilize a single load balancer for incoming requests and then route each request to the right internal component based on its host or path.

  • Ingress can also manage the SSL certificates for all the external URLs.

  • Configuring Ingress requires an Ingress controller to be installed in the cluster, as Kubernetes doesn't ship with one by default. Many third-party options, like NGINX, Traefik, HAProxy, Kong, etc., are available.

  • One of the most common Ingress controllers is NGINX, and you can refer to this blog to see how you can install it in your cluster.

  • Once the Ingress controller is installed, we can start configuring the Ingress Resources.
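As one hedged example, the ingress-nginx project publishes a single manifest you can apply; the controller version in the URL below is illustrative and the exact path varies by release and environment, so check the ingress-nginx documentation for the current one:

```shell
# Install the NGINX Ingress controller (version in the URL is illustrative)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/cloud/deploy.yaml

# Verify the controller Pods are running
kubectl get pods -n ingress-nginx
```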

Ingress Resource Configuration

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: service-ingress
spec:
  rules:
  - host: "service1.example.com"
    http:
      paths:
      - pathType: Prefix
        path: "/service1"
        backend:
          service:
            name: service1
            port:
              number: 80
  - host: "service2.example.com"
    http:
      paths:
      - pathType: Prefix
        path: "/service2"
        backend:
          service:
            name: service2
            port:
              number: 80
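After applying the manifest, you can exercise the host-based routing by pointing curl at the controller's address with an explicit Host header. This is a sketch: it assumes the manifest above is saved as `service-ingress.yaml`, and `<ingress-address>` is a placeholder for your Ingress controller's external address:

```shell
# Apply the Ingress resource
kubectl apply -f service-ingress.yaml

# Route to each backend via the Host header and path
curl -H "Host: service1.example.com" http://<ingress-address>/service1
curl -H "Host: service2.example.com" http://<ingress-address>/service2
```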

External Resources

  1. Kubernetes Services explained | ClusterIP vs NodePort vs LoadBalancer vs Headless Service by Nana Janashia

  2. Kubernetes Ingress Tutorial for Beginners by Nana Janashia

  3. Kubernetes expose command documentation

Practice Questions

  1. Create a pod with an Nginx image and expose it with a ClusterIP service. Create another busybox container and access the Nginx default page over the ClusterIP.

  2. Create an Nginx pod and expose it on NodePort 30080. Try accessing it on the node-ip:node-port from your local browser.

  3. Create an Nginx pod and expose it with the external load balancer. (Do it if you are okay with spending a few bucks)

  4. Install the Nginx Ingress controller in the cluster by referring to the Nginx documentation I shared above. Now run a container with an Nginx image and create an Ingress resource to access the same on port 80. Check if you can access the Nginx default page over the load balancer URL.

That's all for this part. In the next part, we will cover Volumes, Persistent Volumes, Persistent Volume Claims, Storage Classes, etc.

Bye

To be Continued..!!
