Services in k8s

What if there were no services in k8s

If there were no concept of services in Kubernetes, it would be much more difficult to manage communication between different parts of a distributed application running on a Kubernetes cluster.

Services are an abstraction that provides a stable IP address and DNS name for a set of pods that perform the same function. This allows other pods to communicate with them without needing to know their specific IP addresses or port numbers. Without services, developers would need to manually manage the IP addresses and port numbers for each pod, which could quickly become unwieldy as the number of pods grows.
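
As a minimal sketch, a Service selects pods by label and gives them a single stable virtual IP and DNS name; the names used here (my-service, port 8080) are purely illustrative:

apiVersion: v1
kind: Service
metadata:
  name: my-service          # stable DNS name: my-service.<namespace>.svc.cluster.local
spec:
  selector:
    app: myapp              # matches any pod labeled app=myapp
  ports:
  - port: 80                # port the Service exposes
    targetPort: 8080        # port the pods actually listen on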

In addition, services provide other important features such as load balancing, service discovery, and automatic removal of unhealthy pods from rotation (driven by readiness probes). Without these features, it would be much harder to ensure that traffic is routed to healthy pods and that applications remain available even if some pods fail.

Overall, services are a crucial concept in Kubernetes that make it easier to build and manage distributed applications, and without them, the platform would be much less powerful and flexible.

What is load balancing in k8s

Load balancing in Kubernetes is the process of distributing network traffic across multiple instances of an application running in a cluster, to optimize performance, ensure availability, and prevent overload of any individual instance.

In Kubernetes, load balancing is achieved through the use of a Kubernetes Service, which acts as a stable endpoint for accessing a group of pods that perform the same function. When traffic is sent to the Service's IP address, it is automatically routed to one of the available pods using a load-balancing algorithm.

Kubernetes does not let you pick a load-balancing algorithm per Service; the behavior depends on the kube-proxy mode. The default iptables mode selects a backend effectively at random, while IPVS mode supports algorithms such as round-robin, least connections, and source hashing, which can be chosen based on specific application requirements.
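
As a hedged sketch, switching kube-proxy to IPVS mode and choosing a scheduler is done through its configuration file (where this file lives and how it is applied depends on how your cluster was provisioned):

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"          # use IPVS instead of the default iptables mode
ipvs:
  scheduler: "rr"     # round-robin; alternatives include lc (least connections) and sh (source hashing)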

In addition, Kubernetes provides built-in support for automatic load balancing of Services across multiple nodes in the cluster, using a component called kube-proxy, which runs on every node. This ensures that traffic is automatically directed to healthy pods, even if they are running on different nodes in the cluster.

Overall, load balancing is a critical aspect of building and managing scalable applications in Kubernetes, and the platform provides a powerful and flexible set of tools for achieving optimal load-balancing performance.

The process behind service discovery in k8s

Service discovery in Kubernetes is the process of automatically discovering the network location of services within a Kubernetes cluster so that clients can access them without needing to know the specific IP addresses or ports of the individual instances.

When a Service is created in Kubernetes, it is assigned a unique IP address and DNS name within the cluster. The DNS name is based on the name and namespace of the Service (for example, myapp-service.default.svc.cluster.local) and is automatically registered with the Kubernetes DNS service.

When a client needs to access the Service, it uses the DNS name, which resolves to the Service's stable ClusterIP. kube-proxy then forwards connections sent to that virtual IP to one of the healthy pods backing the Service. (For headless Services, which have no ClusterIP, DNS instead returns the pod IPs directly.)
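
A quick way to see this in action, assuming a Service named myapp-service exists in the default namespace, is to resolve its name from a throwaway pod:

kubectl run -it --rm dns-test --image=busybox:1.36 --restart=Never -- nslookup myapp-service.default.svc.cluster.local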

This process is facilitated by several Kubernetes components, including kube-proxy, which programs the traffic-forwarding rules on each node in the cluster, and the kube-dns or CoreDNS service, which provides DNS resolution for Kubernetes Services and pods.

In addition, Kubernetes provides several features for customizing service discovery behavior, such as external DNS integration, custom DNS nameservers, and service discovery for non-Kubernetes resources.

Overall, service discovery is a critical component of building and managing distributed applications in Kubernetes, and the platform provides a powerful set of tools for automating and customizing this process.

Exposing services to the outside world in k8s

In Kubernetes, exposing an application or service to the outside world involves creating an external endpoint that can be accessed by external clients, such as users or other services outside of the Kubernetes cluster.

There are several ways to expose applications and services outside of a Kubernetes cluster, including:

  1. Kubernetes Services with NodePort type: This method exposes a Kubernetes Service on a static port on each node in the cluster and forwards traffic arriving on that port to the Service. Clients can then access the Service by connecting to any node's IP address on that port.

  2. Kubernetes Services with LoadBalancer type: This method creates a cloud provider-specific load balancer that routes traffic to the Service. The load balancer is provisioned by the cloud provider, and clients can access the Service using the load balancer's IP address.

  3. Kubernetes Services with external IPs: This method attaches one or more externally routable IP addresses to a Service via its externalIPs field, and traffic arriving at those addresses on the Service port is routed to the backing pods. Note that routing those addresses to cluster nodes is your responsibility, which makes this option most useful when you manage your own network infrastructure.

  4. Kubernetes Ingress: This is a more advanced method of exposing applications and services to the outside world, which provides a way to route traffic to different Services based on the URL path or hostname of the incoming request. Ingress can be used to expose multiple Services through a single IP address and can provide features such as TLS termination. Note that Ingress rules only take effect if an Ingress controller (such as ingress-nginx) is running in the cluster; a minimal Ingress manifest is sketched after this list.
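
As a minimal sketch (assuming an Ingress controller is installed, and reusing the hypothetical myapp-service from the examples later in this article), an Ingress that routes a hostname to a Service looks like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
spec:
  rules:
  - host: myapp.example.com          # hypothetical hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-service      # route matching requests to this Service
            port:
              number: 80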

Overall, Kubernetes provides a range of options for exposing applications and services to the outside world, and the best approach will depend on the specific requirements of the application and the underlying infrastructure.

Examples of load balancing, service discovery, and exposing services to the outside world in k8s

Load balancing

  1. First, create a Kubernetes Deployment with multiple replicas of your application. For example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:latest
        ports:
        - containerPort: 8080

In this example, the Deployment named "myapp" is created with three replicas, and each replica runs a container based on the "myapp" image, listening on port 8080.

  2. Apply the Kubernetes Deployment manifest to the cluster using the kubectl apply command.

  3. Next, create a Kubernetes Service with the type of LoadBalancer to expose the Deployment externally. For example:

apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080

In this example, a Service named "myapp-service" is created with the type of LoadBalancer, and port 80 is mapped to the targetPort 8080. The Service selects the Pods with the label "app=myapp".

  4. Apply the Kubernetes Service manifest to the cluster using the kubectl apply command.

  5. The cloud provider will automatically provision a load balancer and direct incoming traffic to the replicas of the Deployment. You can check the status of the load balancer using the kubectl get svc command.

  6. Now, you can access your application by navigating to the external IP address of the load balancer in your web browser. You can find the external IP address of the load balancer by running kubectl get svc myapp-service.
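
On a cloud-hosted cluster, the output of kubectl get svc myapp-service will look roughly like the following once provisioning finishes (the addresses here are placeholders; EXTERNAL-IP reads <pending> until the cloud provider is done):

NAME            TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)        AGE
myapp-service   LoadBalancer   10.96.45.10   203.0.113.42   80:31234/TCP   2m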

Note that this is just one example of load balancing in Kubernetes. Other types of Services can be used for different use cases, such as ClusterIP for internal communication between Pods, or NodePort for exposing a Service on a specific port on each node in the cluster.

Service discovery

  1. First, create a Kubernetes Deployment with your application. For example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:latest
        ports:
        - containerPort: 8080

In this example, the Deployment named "myapp" is created with three replicas, and each replica runs a container based on the "myapp" image, listening on port 8080.

  2. Apply the Kubernetes Deployment manifest to the cluster using the kubectl apply command.

  3. Next, create a Kubernetes Service to expose the Deployment internally using a ClusterIP. For example:

apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp
  type: ClusterIP
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080

In this example, a Service named "myapp-service" is created with type ClusterIP, and port 80 is mapped to the targetPort 8080. The Service selects the Pods with the label "app=myapp".

  4. Apply the Kubernetes Service manifest to the cluster using the kubectl apply command.

  5. Now, other workloads can discover the Service within the cluster. The ClusterIP is a virtual IP address assigned to the Service; you can find it by running the command kubectl get service myapp-service. In practice, clients usually use the Service's DNS name (myapp-service, or fully qualified, myapp-service.<namespace>.svc.cluster.local), which resolves to the ClusterIP.

  6. You can use this to access the Service from within other Pods in the cluster. For example, if you have another Deployment that needs to communicate with the "myapp" Deployment, you can pass the Service name as an environment variable in the other Deployment, like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: otherapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: otherapp
  template:
    metadata:
      labels:
        app: otherapp
    spec:
      containers:
      - name: otherapp
        image: otherapp:latest
        env:
        - name: MYAPP_HOST
          value: myapp-service
        ports:
        - containerPort: 8080

In this example, the "otherapp" Deployment is created with one replica, and the MYAPP_HOST environment variable is set to the name of the "myapp" Service. The container in the "otherapp" Deployment can use this variable as a hostname; cluster DNS resolves the Service name to its ClusterIP, and kube-proxy forwards the traffic to one of the "myapp" Pods.
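
For instance, assuming the myapp containers serve HTTP, the otherapp container could reach them like this (wget is used because it ships with many minimal images; the path / is illustrative):

# Inside the otherapp container: the Service name resolves via cluster DNS,
# and the Service forwards port 80 to the pods' port 8080.
wget -qO- http://$MYAPP_HOST:80/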

Note that this is just one example of service discovery in Kubernetes. Kubernetes also injects discovery environment variables into Pods automatically, and there are more advanced options like Service Meshes that can provide additional functionality for service discovery and communication.
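
For reference, here is a sketch of what those automatically injected variables look like inside a Pod started after the Service exists (values are placeholders; the prefix is the Service name uppercased with dashes turned into underscores):

MYAPP_SERVICE_SERVICE_HOST=10.96.45.10   # the Service's ClusterIP
MYAPP_SERVICE_SERVICE_PORT=80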

Exposing to the outside world in k8s

Here's an example of exposing a Kubernetes Service to the outside world using Amazon Elastic Kubernetes Service (EKS):

  1. First, create a Kubernetes Service with the LoadBalancer type, and specify the targetPort and nodePort as appropriate for your application. For example:
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: 8080
      nodePort: 30080
  selector:
    app: myservice

In this example, the Service named "myservice" is created with the LoadBalancer type, and port 80 is mapped to the targetPort 8080. The nodePort is set to 30080, which is the port opened on every node; the cloud load balancer forwards incoming traffic to this port on the nodes, and kube-proxy routes it on to the Pods. (If you omit nodePort, Kubernetes picks one automatically from the 30000-32767 range.)

  2. Apply the Kubernetes Service manifest to the EKS cluster using the kubectl apply command.

  3. Once the Service is created, EKS will automatically create an Amazon Elastic Load Balancer (ELB) to route traffic to the Kubernetes Service. To get the external DNS name of the ELB, you can run the following command:

kubectl get svc myservice -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'

This will display the DNS name of the ELB.

  4. Now, you can access your application from the outside world by connecting to the DNS name of the ELB on port 80. For example, if the DNS name of the ELB is myservice-1234567890.us-west-2.elb.amazonaws.com, you can access the application by navigating to http://myservice-1234567890.us-west-2.elb.amazonaws.com in your web browser.

Note that in order to use the LoadBalancer type with EKS, you need to have the appropriate AWS credentials set up in your environment, and your EKS cluster needs to have the correct IAM permissions to create and manage load balancers. Additionally, there may be additional configuration options for the ELB, such as SSL termination or health checks, that you can configure using annotations on the Service manifest.
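
As one hedged example, TLS termination at the ELB can be configured with annotations on the Service's metadata; the certificate ARN below is a placeholder you would replace with your own ACM certificate:

metadata:
  name: myservice
  annotations:
    # Terminate TLS at the ELB using an ACM certificate (placeholder ARN);
    # you would also add a 443 entry to the Service's ports list.
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:123456789012:certificate/example-id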
