What is Kubernetes? An Easy Explanation for Beginners

Kubernetes (also known as K8s) is an open-source container orchestration system for automating the deployment, scaling, and management of containerized applications. It was originally designed by Google, and is now maintained by the Cloud Native Computing Foundation (CNCF).

Kubernetes provides a platform for deploying, managing, and scaling applications that are packaged in containers, such as Docker containers. It abstracts the underlying infrastructure and provides a common layer for deploying applications in a variety of environments, including on-premises, in the cloud, or in a hybrid environment.

Key features of Kubernetes include:

  1. Scalability: Kubernetes allows you to scale your application horizontally (add more replicas) or vertically (increase resources) as needed.

  2. High availability: Kubernetes provides features such as self-healing and automatic restarts to ensure that your application is always available.

  3. Resource allocation: Kubernetes allows you to allocate resources such as CPU, memory, and storage to your containers.

  4. Networking: Kubernetes provides a network policy framework to control traffic between containers and external networks.

  5. Security: Kubernetes provides network policies and secret management to secure your applications.

  6. Deployment management: Kubernetes provides a way to deploy and manage applications in containers, including rolling updates, rollbacks, and self-healing.
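
For example, the horizontal scaling described in point 1 is a single command once an application is deployed (the deployment name myapp is an illustrative assumption):

kubectl scale deployment myapp --replicas=5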

Kubernetes uses a control plane/worker architecture (historically described as master-slave), where:

  • Master nodes (also known as control plane nodes) manage the cluster and make decisions about deploying and scaling applications.

  • Worker nodes (also known as minion nodes) run the containers and provide the compute resources for the applications.

Kubernetes has become a popular choice for deploying and managing containerized applications in production environments, thanks to its:

  • Flexibility: Kubernetes supports any container runtime that implements the Container Runtime Interface (CRI), such as containerd and CRI-O (Docker and rkt were supported in earlier releases).

  • Extensibility: Kubernetes has a large ecosystem of plugins and extensions that can be used to add new features and functionality.

  • Portability: Kubernetes allows you to deploy applications in a variety of environments, including on-premises, in the cloud, or in a hybrid environment.

Some common use cases for Kubernetes include:

    • Cloud-native applications: Kubernetes is well-suited for deploying cloud-native applications that are built using microservices architecture.

    • Containerized legacy applications: Kubernetes can be used to containerize and modernize legacy applications.

    • Big data and analytics: Kubernetes provides a scalable and flexible platform for deploying big data and analytics workloads.

Many companies use Kubernetes in their production environments. Here are some well-known companies that use Kubernetes:

  1. Airbnb: Airbnb uses Kubernetes to manage their microservices-based architecture, with over 1,000 services deployed across multiple clusters.

  2. Google: Google offers Google Kubernetes Engine (GKE) on Google Cloud Platform; Kubernetes itself grew out of Google's internal cluster management systems.

  3. Microsoft: Microsoft uses Kubernetes to power Azure Kubernetes Service (AKS), as well as their own internal infrastructure.

  4. Amazon: Amazon uses Kubernetes to power Amazon Elastic Kubernetes Service (EKS).

  5. Expedia: Expedia uses Kubernetes to manage their containerized applications, with over 100 services deployed across multiple clusters.

  6. The New York Times: The New York Times uses Kubernetes to power their content delivery network, handling millions of requests per day.

  7. Spotify: Spotify uses Kubernetes to manage their microservices-based architecture, with over 1,000 services deployed across multiple clusters.

  8. Pearson: Pearson uses Kubernetes to power their educational software platforms, handling millions of users worldwide.

  9. Lufthansa: Lufthansa uses Kubernetes to manage their containerized applications, including their airline operational systems.

  10. SAP: SAP uses Kubernetes to power their cloud-based enterprise software, including their SAP Cloud Platform.

  11. JP Morgan Chase: JP Morgan Chase uses Kubernetes to manage their containerized applications, including their investment banking and trading platforms.

  12. Ford: Ford uses Kubernetes to power their connected car platform, handling millions of vehicle data points per day.

  13. Verizon: Verizon uses Kubernetes to manage their 5G network infrastructure, including their edge computing and IoT platforms.

  14. Yahoo!: Yahoo! uses Kubernetes to power their advertising platforms, handling billions of ad impressions per day.

  15. eBay: eBay uses Kubernetes to manage their containerized applications, including their e-commerce platform and auction services.

Docker Swarm vs Kubernetes

Docker Swarm and Kubernetes are both container orchestration tools, but they have different design goals, architectures, and use cases. Here's a comparison of Docker Swarm and Kubernetes:

Docker Swarm

Docker Swarm is a container orchestration tool developed by Docker. It allows you to deploy, manage, and scale containerized applications across multiple hosts. Swarm provides a simple, easy-to-use interface for deploying and managing containers.

Key Features:

  1. Simple architecture: Swarm has a simple, decentralized architecture that makes it easy to set up and manage.

  2. Easy to use: Swarm has a user-friendly interface and a simple CLI that makes it easy to deploy and manage containers.

  3. Native Docker integration: Swarm is tightly integrated with Docker, making it a natural choice for Docker users.

  4. Scalability: Swarm supports horizontal scaling, allowing you to add or remove nodes as needed.

  5. High availability: Swarm provides features like node failure detection and automatic container restarts to ensure high availability.

Limitations:

  1. Limited advanced features: Swarm lacks some advanced features available in Kubernetes, such as built-in autoscaling, network policies, and a rich controller ecosystem (Swarm offers basic rolling updates and restarts, but with far less control than Kubernetes).

  2. Limited support for non-Docker containers: Swarm is designed primarily for Docker containers, and support for other container runtimes is limited.

Kubernetes

Kubernetes (also known as K8s) is an open-source container orchestration system developed by Google. It provides a platform for deploying, managing, and scaling containerized applications. Kubernetes is designed to be highly scalable, flexible, and fault-tolerant.

Key Features:

  1. Advanced features: Kubernetes provides advanced features like fine-grained rolling updates, self-healing, autoscaling, and network policies that go well beyond what Swarm offers.

  2. Highly scalable: Kubernetes is designed to handle large-scale deployments and can scale horizontally and vertically.

  3. Flexible: Kubernetes supports any container runtime that implements the Container Runtime Interface (CRI), such as containerd and CRI-O.

  4. Extensive ecosystem: Kubernetes has a large and active community, with many plugins and extensions available.

  5. Multi-cloud support: Kubernetes supports deployment on multiple cloud providers, including AWS, GCP, and Azure.

Complexity:

Kubernetes has a more complex architecture than Swarm, with a larger number of components and a steeper learning curve.

When to choose Docker Swarm:

  1. Small to medium-sized deployments: Swarm is suitable for smaller deployments where simplicity and ease of use are more important than advanced features.

  2. Docker-only environments: If you're already invested in the Docker ecosystem, Swarm is a natural choice.

  3. Dev and testing environments: Swarm is a good fit for dev and testing environments where simplicity and ease of use are more important than advanced features.

When to choose Kubernetes:

  1. Large-scale deployments: Kubernetes is better suited for large-scale deployments that require advanced features like rolling updates and self-healing.

  2. Multi-cloud environments: Kubernetes is a good choice for deployments that span multiple cloud providers.

  3. Complex applications: Kubernetes is better suited for complex, distributed applications that require advanced features like service meshes and ingress controllers.

Control Plane

In Kubernetes, the control plane refers to the components that manage and orchestrate the deployment, scaling, and management of containerized applications. The control plane is responsible for making decisions about how to allocate resources, schedule tasks, and maintain the overall health and performance of the system.

In Kubernetes, the control plane consists of several components that work together to manage the cluster:

  1. API Server: The API server is the primary entry point for users and clients to interact with the Kubernetes cluster. It provides a RESTful API that allows users to create, update, and delete resources such as pods, services, and deployments.

  2. Controller Manager: The controller manager is responsible for running and managing the control plane controllers. Controllers are responsible for ensuring that the desired state of the cluster is maintained, such as scaling deployments, running jobs, and managing node resources.

  3. Scheduler: The scheduler is responsible for assigning pods to nodes in the cluster. It takes into account factors such as resource availability, node constraints, and pod affinity when making scheduling decisions.

  4. etcd: etcd is a distributed, consistent key-value store that stores the state of the cluster. It holds information about pods, services, and other objects in the cluster.

The control plane components work together to provide the following functionality:

  • Resource allocation: The control plane determines how to allocate resources such as CPU, memory, and storage to pods and services.

  • Scheduling: The control plane schedules pods on nodes in the cluster, taking into account factors such as resource availability and node constraints.

  • Self-healing: The control plane detects and responds to node failures, automatically rescheduling pods on healthy nodes.

  • Scaling: The control plane scales deployments up or down in response to changes in workload or resource utilization.

  • Networking: The control plane provides network policies and services that enable communication between pods and services.

In a Kubernetes cluster, the control plane components are typically deployed on a set of master nodes, which are responsible for managing the cluster. The control plane components communicate with each other and with the worker nodes to manage the cluster and schedule workloads.
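
For example, on many clusters the control plane components themselves run as pods in the kube-system namespace, so you can inspect them with kubectl (exact component names vary by distribution):

kubectl get pods -n kube-system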

Worker Nodes

In Kubernetes, a worker node (also known, in older documentation, as a minion node) is a machine that runs containers and provides computing resources to the cluster. Worker nodes are responsible for executing the tasks assigned to them by the control plane, such as running containers, providing network services, and storing data.

In a Kubernetes cluster, worker nodes are typically Linux machines that run the Kubernetes node component, which includes:

  1. Kubelet: The kubelet is an agent that runs on each worker node and is responsible for managing the node's resources and executing tasks assigned by the control plane.

  2. Container runtime: The container runtime is responsible for running containers on the worker node. Kubernetes supports runtimes that implement the Container Runtime Interface (CRI), such as containerd and CRI-O (Docker was supported in earlier releases via dockershim).

  3. Network plugin: The network plugin is responsible for providing networking services to the containers running on the worker node.
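
You can confirm what each worker node is running, including its kubelet version and container runtime, with the wide output of kubectl:

kubectl get nodes -o wide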

Namespace

In Kubernetes, a namespace is a way to partition a cluster into isolated virtual clusters. Namespaces provide a mechanism for dividing the resources of a cluster into logical groups, allowing multiple teams or applications to share the same cluster without interfering with each other.

Key characteristics of namespaces:

  1. Logical isolation: Namespaces provide a logical isolation between resources, allowing multiple teams or applications to share the same cluster without interfering with each other.

  2. Resource segregation: Namespaces segregate resources, such as pods, services, and deployments, into separate groups, making it easier to manage and allocate resources.

  3. Access control: Namespaces provide a way to control access to resources, allowing administrators to define who can create, update, or delete resources within a namespace.

  4. Scalability: Namespaces allow for horizontal scaling, making it possible to add or remove resources as needed, without affecting other namespaces.

Use cases for namespaces:

  1. Multi-tenancy: Namespaces provide a way to partition a cluster into multiple virtual clusters, allowing multiple teams or organizations to share the same cluster without interfering with each other.

  2. Application isolation: Namespaces provide a way to isolate applications from each other, making it easier to manage and deploy applications without affecting other applications in the cluster.

  3. Development and testing environments: Namespaces provide a way to create separate environments for development, testing, and production, allowing for easier management and deployment of applications.

  4. Resource allocation: Namespaces provide a way to allocate resources, such as CPU, memory, and storage, to specific teams or applications, making it easier to manage and optimize resource utilization.

Creating and managing namespaces:

  1. Create a namespace: You can create a namespace using the kubectl create namespace command or through the Kubernetes API.

  2. Manage namespace resources: You can manage resources within a namespace using the kubectl command-line tool or through the Kubernetes API.

  3. Update namespace settings: You can update namespace settings, such as resource quotas and limits, using the kubectl command-line tool or through the Kubernetes API.

  4. Delete a namespace: You can delete a namespace using the kubectl delete namespace command or through the Kubernetes API.
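
As a minimal sketch, a namespace and a resource quota for it can be declared in YAML and applied with kubectl (the name team-a and the quota values are illustrative assumptions):

apiVersion: v1
kind: Namespace
metadata:
  name: team-a               # hypothetical team namespace
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "4"        # total CPU the namespace may request
    requests.memory: 8Gi     # total memory the namespace may request
    pods: "20"               # maximum number of pods

kubectl apply -f namespace.yaml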

Pods

A pod is the basic execution unit of a containerized application. It is a logical host for one or more containers. Pods are ephemeral, meaning they can be created, scaled, and deleted as needed.

Characteristics of pods:

  1. Logical host: Pods are a logical host for one or more containers.

  2. Ephemeral: Pods are ephemeral, meaning they can be created, scaled, and deleted as needed.

  3. Isolated: Pods are isolated from each other, meaning they have their own network stack, process space, and file system.

  4. Containerized: Pods are containerized, meaning their workloads run in a container runtime, such as containerd or CRI-O.

  5. Scalable: Pods can be scaled up or down by adding or removing replicas.

Pod components:

  1. Containers: Pods can have one or more containers, which are the smallest units of deployment, scaling, and management in Kubernetes.

  2. Volumes: Pods can have one or more volumes, which are directories that are accessible to the containers in the pod.

  3. Networking: Pods have their own IP address and network stack, which allows them to communicate with other pods and services in the cluster.
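
To make these components concrete, here is a minimal pod manifest with one container and an emptyDir volume (the name myapp-pod and the nginx image are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
    - name: web
      image: nginx:latest    # illustrative image
      ports:
        - containerPort: 80
      volumeMounts:
        - name: cache
          mountPath: /cache
  volumes:
    - name: cache
      emptyDir: {}           # scratch volume shared by the pod's containers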

Pod lifecycle:

  1. Creation: Pods are created by the Kubernetes control plane in response to a deployment or replication controller.

  2. Running: Pods are running when they are executing the containers and providing services to the application.

  3. Scaling: Pods can be scaled up or down by adding or removing replicas.

  4. Deletion: Pods can be deleted when they are no longer needed or when the application is updated.

Pod controllers (pods are usually created through these higher-level objects rather than directly):

  1. Deployments: Pods can be created as part of a deployment, which manages the rollout of new versions of an application.

  2. ReplicaSets: Pods can be created as part of a replica set, which ensures a specified number of replicas are running at any given time.

  3. StatefulSets: Pods can be created as part of a stateful set, which manages the deployment and scaling of stateful applications.

  4. DaemonSets: Pods can be created as part of a daemon set, which ensures that a copy of a pod runs on each node (or a selected subset of nodes) in the cluster.

Pod networking:

  1. Pod IP address: Each pod has its own IP address, which is used to communicate with other pods and services in the cluster.

  2. Service discovery: Pods can use service discovery mechanisms, such as DNS or environment variables, to find and communicate with other pods and services in the cluster.

  3. Network policies: Pods can be configured with network policies, which control incoming and outgoing traffic to and from the pod.
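
As a sketch of the last point (labels are illustrative assumptions, and the policy only takes effect if the cluster's network plugin enforces NetworkPolicy), this allows ingress to pods labeled app: myapp only from pods labeled role: frontend, on port 80:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
spec:
  podSelector:
    matchLabels:
      app: myapp             # pods this policy applies to
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend # only frontend pods may connect
      ports:
        - protocol: TCP
          port: 80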

Pod Lifecycle

A pod's lifecycle refers to the stages a pod goes through from its creation to its eventual deletion. Here are the main stages of a pod's lifecycle:

1. Pending

  • The pod is created and the Kubernetes control plane begins to process it.

  • The pod is in a pending state until it is scheduled to a node.

2. Running

  • The pod is scheduled to a node and its containers are started.

  • The pod is running and providing services to the application.

3. Succeeded

  • All containers in the pod have terminated successfully and will not be restarted.

  • The pod's record remains until it is garbage collected or explicitly deleted.

4. Failed

  • All containers in the pod have terminated, and at least one terminated in failure.

  • The pod's record remains until it is garbage collected or explicitly deleted.

5. Unknown

  • The pod's status cannot be determined, typically because the node hosting it is unreachable.

6. Terminating

  • The pod is being terminated, usually due to a deletion request.

  • The pod remains in a terminating state until its containers are stopped and deleted.

7. Deleted

  • The pod has been deleted and its resources are released.

Here are some additional notes on pod lifecycle:

  • Probes and deadlines: Pods do not have a single generic timeout, but containers can define liveness, readiness, and startup probes, and a pod can set activeDeadlineSeconds. If a probe keeps failing, the container is restarted; if the deadline is exceeded, the pod is terminated.

  • Pod restart policy: Pods have a restart policy, which determines what happens when a container exits. The policy can be set to Always, OnFailure, or Never.

  • Pod garbage collection: Pods that are no longer needed or have failed can be garbage collected, which means they are deleted and their resources are released.
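
A brief sketch of the restart policy in practice (the pod name and the activeDeadlineSeconds value are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: batch-task
spec:
  restartPolicy: OnFailure     # restart containers only when they exit with an error
  activeDeadlineSeconds: 600   # terminate the pod if it runs longer than 10 minutes
  containers:
    - name: task
      image: busybox:latest
      command: ["sh", "-c", "echo working && sleep 30"]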

Services in Kubernetes

A Service in Kubernetes is an abstraction over a set of Pods that defines a stable network endpoint and a policy for accessing them. It lets you expose your application inside or outside the cluster and provides load balancing and optional session affinity. (TLS/SSL termination is typically handled by an Ingress controller rather than by the Service itself.)

1. Cluster IP

  • ClusterIP: Exposes the Service internally, within the cluster, using a virtual IP address.

  • Accessible: Only accessible from within the cluster, not from outside.

  • Use cases: Internal communication between Pods, or for testing and debugging purposes.

2. Node Port

  • NodePort: Exposes the Service on a specific port on each Node in the cluster.

  • Accessible: From outside the cluster, via the Node's IP address and the exposed port.

  • Use cases: Development, testing, or debugging purposes, where you want to access the Service from outside the cluster.

3. Load Balancer

  • LoadBalancer: Exposes the Service externally, using a cloud provider's load balancer.

  • Accessible: From outside the cluster, via the load balancer's IP address.

  • Use cases: Production environments, where you want to expose the Service to the outside world and distribute traffic across multiple Pods.

Here's a summary:

Service Type | Accessible From | Use Cases
ClusterIP | Within the cluster | Internal communication, testing, debugging
NodePort | Outside the cluster, via Node IP | Development, testing, debugging
LoadBalancer | Outside the cluster, via load balancer IP | Production, traffic distribution, external access
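
For example, a NodePort Service routing traffic to pods labeled app: myapp might look like this (names and ports are illustrative; nodePort must fall in the cluster's configured range, 30000-32767 by default):

apiVersion: v1
kind: Service
metadata:
  name: myapp-svc
spec:
  type: NodePort
  selector:
    app: myapp             # pods backing this Service
  ports:
    - port: 80             # port exposed inside the cluster
      targetPort: 80       # container port traffic is forwarded to
      nodePort: 30080      # port opened on every node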

Replication Controller

A Replication Controller in Kubernetes is a built-in controller that ensures a specified number of replica pods are running at any given time. It achieves this by continuously monitoring the state of the pods and creating or deleting pods as necessary to maintain the desired replica count. This helps to ensure high availability and scalability of applications running on a Kubernetes cluster.

ReplicaSet

A ReplicaSet in Kubernetes is the successor to the Replication Controller. It performs the same function of ensuring a specified number of replica pods are running at any given time, but it supports set-based label selectors in addition to equality-based ones. This allows ReplicaSets to express richer label-matching patterns and makes it easier to manage more complex deployments. In practice, ReplicaSets are usually created indirectly by Deployments rather than managed by hand.
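
A minimal sketch of a ReplicaSet using a set-based selector (the labels, values, and image are assumptions for illustration):

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-rs
spec:
  replicas: 3
  selector:
    matchExpressions:            # set-based selector
      - key: tier
        operator: In
        values: [frontend, canary]
  template:
    metadata:
      labels:
        tier: frontend           # satisfies the selector above
    spec:
      containers:
        - name: web
          image: nginx:latest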

DaemonSet

A DaemonSet in Kubernetes is a type of controller that ensures a copy of a specified pod runs on each node in the cluster (or on a selected subset of nodes). It is useful for running system daemons or background services that need to run on every node, such as logging agents, monitoring agents, or network plugins. When a new node joins the cluster, the DaemonSet automatically schedules a pod on that node. When a node is removed from the cluster, the DaemonSet automatically cleans up the associated pod.
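
As an illustrative sketch, a DaemonSet that runs a log-collection agent on every node (the fluentd image and the names are assumptions):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
spec:
  selector:
    matchLabels:
      name: log-agent
  template:
    metadata:
      labels:
        name: log-agent
    spec:
      containers:
        - name: agent
          image: fluentd:latest    # illustrative logging agent
          volumeMounts:
            - name: varlog
              mountPath: /var/log
      volumes:
        - name: varlog
          hostPath:
            path: /var/log         # read node-level logs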

In Kubernetes, there are several ways to update the application pods in a rolling and controlled manner:

  1. Recreate: This strategy involves taking down all of the existing pods at once and replacing them with the new version of the pods. While this strategy can result in downtime for the application, it is the simplest and fastest way to update pods.

  2. Rolling Update: This strategy involves progressively replacing the old version of the pods with the new version, one pod at a time. This allows the application to continue running and serving traffic during the update process. Kubernetes automatically manages the rollout of the new version of the pods while ensuring that the number of healthy pods remains above a certain threshold.

  3. Blue-green Deployment: This strategy involves creating a new version of the deployment alongside the existing version, without any traffic being routed to the new version. Once the new version is fully up and running, traffic is gradually shifted from the old version to the new version. This allows for a no-downtime deployment and easy rollback in case of issues with the new version.

Kubernetes provides built-in support for the Recreate and RollingUpdate strategies on Deployments, and blue-green deployments can be built on top of Kubernetes primitives such as labels and Services, making it easy to deploy and update applications in a controlled and consistent manner.
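
For instance, a Deployment can declare a RollingUpdate strategy with maxSurge and maxUnavailable to control how many pods are replaced at a time (the values here are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1           # at most one extra pod during the rollout
      maxUnavailable: 0     # never drop below the desired replica count
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: web
          image: nginx:1.25 # bump this tag to trigger a rolling update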

Config map

A ConfigMap in Kubernetes is an object that allows you to store and manage configuration data for your applications as key-value pairs. ConfigMaps can be created from local files or directories using kubectl, or from data provided as part of a YAML or JSON manifest.

Once created, ConfigMaps can be mounted as volumes in pods or used as environment variables for the containers in those pods. This allows the configuration data to be easily accessed and used by the application at runtime. ConfigMaps are useful for storing configuration data that is not sensitive, such as application settings or configuration files, and that may need to be updated or modified without rebuilding the application container image.

ConfigMaps are lightweight and flexible, making them a popular choice for managing configuration data in Kubernetes. They can be used in combination with other objects, such as Secrets, to store sensitive data that requires stronger security measures.
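
A brief sketch (key names and values are illustrative assumptions): a ConfigMap declared in YAML and consumed as environment variables by a container:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"            # illustrative settings
  FEATURE_FLAG: "true"
---
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
    - name: web
      image: nginx:latest
      envFrom:
        - configMapRef:
            name: app-config   # inject all keys as environment variables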

Helm Chart

Helm is a package manager for Kubernetes that allows you to easily install, upgrade, and manage applications on Kubernetes clusters. A Helm Chart is a collection of files that define the resources and configuration needed to deploy an application on a Kubernetes cluster. Helm Charts are similar to Docker Compose files but are specifically designed for use with Kubernetes.

Helm Charts define the application's Kubernetes resources, such as pods, services, config maps, and secrets, as well as the configuration values needed to customize the application for a specific environment. Helm Charts can be created and shared by developers, making it easy to deploy and manage applications on Kubernetes across different clusters and environments.

Helm Charts can be installed and managed using the Helm command-line tool, which simplifies the process of deploying and managing applications on Kubernetes. Helm provides features such as versioning, rollbacks, and upgrades, making it an ideal solution for managing applications on Kubernetes.
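
As a usage sketch (the bitnami repository and the nginx chart are common public examples, used here as assumptions):

helm repo add bitnami https://charts.bitnami.com/bitnami    # add a chart repository
helm install my-release bitnami/nginx                       # install a chart as a release
helm upgrade my-release bitnami/nginx --set replicaCount=3  # upgrade with an overridden value
helm rollback my-release 1                                  # roll back to revision 1
helm uninstall my-release                                   # remove the release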

RBAC

RBAC (Role-Based Access Control) in Kubernetes is a feature that enables you to control access to the Kubernetes API and resources based on user roles. RBAC allows you to define roles that specify the permissions a user or group of users has to perform certain actions on specific Kubernetes resources.

RBAC uses the concept of roles and role bindings to define access control policies in Kubernetes. A role is a collection of permissions that specify what actions can be performed on which resources. A role binding binds a role to a user or group of users, granting them the permissions defined in the role.

RBAC can be used to manage access to Kubernetes resources at both the cluster and namespace levels. With RBAC, you can grant or deny access to specific resources, actions, and paths, providing a fine-grained level of control over who can access and modify resources in your Kubernetes cluster.

RBAC is a powerful feature of Kubernetes that helps you secure your cluster and ensure that only authorized users have access to the resources and data in your cluster. RBAC can be used in combination with other security features, such as network policies and secrets, to provide a comprehensive security solution for your applications running on Kubernetes.
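
A minimal sketch of a Role and RoleBinding granting read-only access to pods (the namespace dev and the user jane are illustrative assumptions):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-reader
rules:
  - apiGroups: [""]             # "" means the core API group
    resources: ["pods"]
    verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: dev
subjects:
  - kind: User
    name: jane                  # illustrative user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io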

Argo CD

Argo CD is a declarative continuous delivery (CD) tool for Kubernetes that allows you to automate deployments, manage configuration drift, and enforce GitOps best practices. Argo CD uses a GitOps approach, where the desired state of your Kubernetes cluster is stored in a Git repository and synchronized with the cluster by Argo CD.

Argo CD integrates with your existing CI/CD pipeline and can be configured to automatically deploy and update applications as changes are pushed to the Git repository. Argo CD supports blue-green and canary deployments (via its companion project Argo Rollouts), allowing you to roll out updates to your applications in a controlled and safe manner.

Argo CD also includes features such as policy-based automated sync, automatic drift detection, and automated rollbacks, providing a comprehensive solution for managing and deploying applications on Kubernetes.

Argo CD provides a web-based UI for managing and visualizing your application deployments, making it easy to monitor the deployment status and roll back changes if necessary. Argo CD supports multiple Kubernetes clusters, allowing you to manage and deploy applications across different clusters and environments.

Argo CD is an open-source tool that is built on top of Kubernetes and is designed to integrate easily with your existing Kubernetes infrastructure. It is a popular tool for managing and deploying applications on Kubernetes, and it provides a comprehensive solution for automating deployments and enforcing GitOps best practices.
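
As an illustrative sketch, an Argo CD Application resource that syncs a path in a Git repository to a cluster namespace (the repository URL, path, and namespace are assumptions):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/myapp-config   # illustrative repo
    targetRevision: main
    path: k8s                                          # manifests live here
  destination:
    server: https://kubernetes.default.svc
    namespace: myapp
  syncPolicy:
    automated:
      prune: true        # delete resources removed from Git
      selfHeal: true     # revert manual drift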

COMMANDS

Cluster and Node Management

kubectl cluster-info: Displays information about the Kubernetes cluster, including the control plane endpoint and core service URLs.
kubectl cluster-info

kubectl get nodes: Lists all nodes in the cluster.
kubectl get nodes

kubectl get node <node-name>: Displays details about a specific node.
kubectl get node node-1

kubectl delete node <node-name>: Deletes a node from the cluster.
kubectl delete node node-1

kubectl edit node <node-name>: Edits a node configuration.
kubectl edit node node-1

Pod Management

kubectl get pods: Lists all pods in the current namespace.
kubectl get pods

kubectl get pods -n <namespace>: Lists all pods in a specific namespace.
kubectl get pods -n default

kubectl create deployment <deployment-name> --image=<image-name>: Creates a deployment with a specified image.
kubectl create deployment myapp --image=nginx:latest

kubectl rollout restart deployment <deployment-name>: Restarts a deployment.
kubectl rollout restart deployment myapp

kubectl delete pod <pod-name>: Deletes a pod.
kubectl delete pod myapp-123456

kubectl describe pod <pod-name>: Displays detailed information about a pod.
kubectl describe pod myapp-123456

Service and Ingress Management

kubectl get svc: Lists all services in the current namespace.
kubectl get svc

kubectl get svc -n <namespace>: Lists all services in a specific namespace.
kubectl get svc -n default

kubectl create service <type> <service-name> --tcp=<port>:<target-port>: Creates a service of the given type, mapping a service port to a target port.
kubectl create service loadbalancer myapp --tcp=80:80

kubectl delete svc <service-name>: Deletes a service.
kubectl delete svc myapp

kubectl get ing: Lists all ingress resources in the current namespace.
kubectl get ing

kubectl create ingress <ingress-name> --rule=<host/path=service:port>: Creates an ingress resource with a specified rule.
kubectl create ingress myapp-ing --rule="myapp.local/=myapp:80"

Secret and ConfigMap Management

kubectl get secret: Lists all secrets in the current namespace.
kubectl get secret

kubectl get secret -n <namespace>: Lists all secrets in a specific namespace.
kubectl get secret -n default

kubectl create secret generic <secret-name> --from-literal=<key>=<value>: Creates a secret with a specified key-value pair.
kubectl create secret generic mysecret --from-literal=DB_PASSWORD=mysecretpassword

kubectl get configmap: Lists all configmaps in the current namespace.
kubectl get configmap

kubectl create configmap <configmap-name> --from-file=<file>: Creates a configmap from a file.
kubectl create configmap myconfigmap --from-file=app.properties

Rollouts and Deployments

kubectl rollout status deployment <deployment-name>: Displays the status of a deployment rollout.
kubectl rollout status deployment myapp

kubectl rollout history deployment <deployment-name>: Displays the history of a deployment rollout.
kubectl rollout history deployment myapp

kubectl apply -f <manifest-file>: Applies a manifest file to the cluster.
kubectl apply -f deployment.yaml

kubectl rollout undo deployment <deployment-name>: Undoes a deployment rollout.
kubectl rollout undo deployment myapp

Monitoring and Logging

kubectl logs -f <pod-name>: Streams the logs of a pod in real time.
kubectl logs -f myapp-123456

kubectl get csv: Lists ClusterServiceVersions (CSVs); this resource exists only on clusters running the Operator Lifecycle Manager.
kubectl get csv

kubectl describe node <node-name> | grep -A 5 Allocatable: Displays the allocatable resources for a node.
kubectl describe node node-1 | grep -A 5 Allocatable

kubectl get events: Lists all events in the cluster.
kubectl get events

Debugging and Troubleshooting

kubectl exec -it <pod-name> -- /bin/bash: Opens an interactive Bash shell inside a pod.
kubectl exec -it myapp-123456 -- /bin/bash

kubectl attach -it <pod-name>: Attaches to a pod's stdin, stdout, and stderr streams.
kubectl attach -it myapp-123456

kubectl port-forward <pod-name> <local-port>:<container-port>: Forwards traffic from a local port to a container port.
kubectl port-forward myapp-123456 8080:80

kubectl describe pod <pod-name> | grep -A 10 Events: Displays recent events related to a pod.
kubectl describe pod myapp-123456 | grep -A 10 Events

kubectl get pod <pod-name> -o yaml: Displays a pod's configuration in YAML format.
kubectl get pod myapp-123456 -o yaml