
Pod, Service, and Deployment

1. Big Picture

  • In Docker, you run containers directly.
    • Docker Swarm manages containers
  • In Kubernetes, you run pods, which wrap one or more containers.
    • A node is a machine (physical or virtual) where pods live.

2. Pods: Containers and Node Abstraction

What is a Pod?
  • A pod includes one or more containers plus additional shared resources.
    • Shared resources include, but are not limited to, a shared IP address (internal/external) and storage volumes.
    • Example pod: one nginx webserver container plus one log-shipper sidecar container plus a shared volume for log data (sketched below).
  • Pods are scheduled onto nodes by the kube-scheduler.
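A hedged sketch of such a two-container pod, assuming an emptyDir volume for the shared log data and a busybox container standing in for the log shipper (all names here are illustrative, not from the notes):
apiVersion: v1
kind: Pod
metadata:
  name: web-with-logs              # illustrative name
spec:
  volumes:
  - name: log-data                 # shared volume mounted by both containers
    emptyDir: {}
  containers:
  - name: nginx
    image: nginx:latest
    volumeMounts:
    - name: log-data
      mountPath: /var/log/nginx    # nginx writes its access/error logs here
  - name: log-shipper              # stand-in sidecar; a real one might run fluent-bit
    image: busybox
    command: ["sh", "-c", "touch /logs/access.log && tail -f /logs/access.log"]
    volumeMounts:
    - name: log-data
      mountPath: /logs             # same volume, read by the sidecar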
Pod and physical (virtual) node
  • Each node runs a kubelet agent.
  • Kubelet talks to the container runtime (containerd in Rancher Desktop).
  • When a pod is deployed, the kubelet pulls the associated container image(s) and runs them.
    graph TD
        subgraph Node1["Node (VM)"]
            Kubelet1["kubelet + runtime"]
            subgraph Pod1["Pod"]
                Nginx["Container: Nginx"]
            end
        end

        subgraph Node2["Node (VM)"]
            Kubelet2["kubelet + runtime"]
            subgraph Pod2["Pod"]
                Redis["Container: Redis"]
            end
        end

        Kubelet1 --> Pod1
        Kubelet2 --> Pod2
Hands-on with Rancher Desktop
  • Verify that your Rancher Desktop is up and running
kubectl get nodes -o wide
  • Create a file called nginx-pod.yaml with the following content
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:latest   # pulls Docker/OCI image
    ports:
    - containerPort: 80
  • Run kubectl apply and provide the path to your nginx-pod.yaml. In the example below, I am in the same directory as the file.
kubectl apply -f nginx-pod.yaml
kubectl get pods -o wide
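If the pod hangs in Pending or ContainerCreating, two standard kubectl commands (not part of the original steps) show scheduling/image-pull events and container output:
kubectl describe pod nginx
kubectl logs nginx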


3. Services: Stable Access to Pods

At this point, if we try to access the above pod from outside the cluster on containerPort 80, it will fail.

Problem

  • Pods are ephemeral
    • Pods can restart and be rescheduled onto different nodes.
    • Each pod gets a random IP inside the cluster.
    • How does a client reliably connect to nginx if its internal IP changes?
  • Docker's -P and -p port publishing is not adequate for this.

Solution

  • Kubernetes Service
  • A Service provides Pods with a stable virtual IP and DNS name.
  • A Service load-balances traffic across all Pods matching its label selector.

Connection to Physical Node

  • A ClusterIP service gives access only inside the cluster.
  • A NodePort service opens a port on every node's IP.
  • A LoadBalancer (on cloud) provisions an external IP (if available).
kubectl get nodes
Hands-on with Rancher Desktop: Adding service to pod
  • Create a file called nginx-svc.yaml with the following content
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30007
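To apply and smoke-test the Service (the curl check is an extra step, assuming your Rancher Desktop setup exposes NodePorts on localhost):
kubectl apply -f nginx-svc.yaml
kubectl get svc nginx
curl http://localhost:30007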

12: Your Goal This Week

  • Rebuild the RAMCoin miner in Kubernetes
  • Learn:
    • YAML object creation
    • Pod and service exposure
    • Logging and troubleshooting

From design to deployment: this is cloud engineering in action.


13: Coming Up

  • Kubernetes Dashboard
  • Namespaces
  • CI/CD pipelines and Helm
  • Securing services

14: Key Takeaways

  • Design is only the beginning
  • Deployment is the real engineering
  • Kubernetes is not just a tool — it's your platform

You don’t run containers. You orchestrate services.


3. Kubernetes: Pods, Service, Deployment

Info

  • Pod: group of containers sharing IP and volumes
  • Service: stable access to dynamic pod groups
  • Deployment: declarative state, rolling updates
  • NodePort and Ingress: access from outside the cluster
Overview
  • Cluster state is managed via Kubernetes Objects, described through YAML files.
  • Kubernetes objects are persistent entities in the Kubernetes system, which represent the state of your cluster.
    • What containerized applications are running (and on which nodes)
    • The resources available to those applications
    • The policies around how those applications behave, such as restart policies, upgrades, and fault-tolerance
  • A Kubernetes object is a "record of intent"--once you create the object, the Kubernetes system will constantly work to ensure that object exists. By creating an object, you're effectively telling the Kubernetes system what you want your cluster's workload to look like; this is your cluster's desired state.
  • Documentation
Terminologies: Pods
  • Pod: Smallest deployable unit of computing that can be created and managed in Kubernetes
  • A pod can contain one or more containers with shared storage and network resources and specifications for how to run the containers.
  • Create the following file called nginx.yml
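The contents of nginx.yml are not reproduced here; assuming it matches the nginx-pod.yaml from the hands-on section above, a minimal version would be:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80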

  • Deploy the pod:
kubectl apply -f nginx.yml
kubectl get pods
kubectl describe pods nginx
  • How to access this nginx container?
Terminologies: Service
  • Service: A method of exposing a network application running as one or more Pods without making any changes to the application.
  • Each Service object defines a logical set of endpoints (usually endpoints are pods) along with a policy about how to make those pods accessible.
  • For exposing Service to an external IP address, there are several types:
    • ClusterIP
    • NodePort
    • LoadBalancer
    • ExternalName
  • ClusterIP: exposes the service on a cluster-internal (Kubernetes) IP. For access from outside the cluster, an Ingress needs to be used.
  • NodePort: exposes the service on each Node's (worker computer's) IP at a static port.
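The nginx-svc.yml used below is likewise not reproduced here; assuming it matches the nginx-svc.yaml from the hands-on section above (NodePort 30007), it would look like:
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30007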

  • Deploy the service:
kubectl apply -f nginx-svc.yml
kubectl get svc
kubectl describe svc nginx
  • Visit the headnode of your Kubernetes cluster at the nodePort 30007.
Terminologies: Deployment
  • Deployment: provides declarative updates for Pods (and ReplicaSets).
    • A desired state is described in a Deployment.
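The nginx-deployment.yml used below is not reproduced here; a plausible sketch, assuming it bundles a Deployment with the same NodePort Service on 30007 (the replica count of 3 is an arbitrary choice):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3                  # desired state: three nginx pods
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx                 # matches the pods created by the Deployment
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30007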

  • Deploy the application:
kubectl delete pods nginx
kubectl delete svc nginx
kubectl apply -f nginx-deployment.yml
kubectl get deployment nginx-deployment
kubectl get svc
kubectl describe svc nginx
kubectl get pods
kubectl describe pods nginx
  • Visit the headnode of your Kubernetes cluster again at the nodePort 30007.
  • Delete the deployment
kubectl delete deployment nginx-deployment

4. Kubernetes Dashboard

Deploy the dashboard
  • Run the following commands
cd /local/repository
bash launch_dashboard.sh
  • Go to the head node URL at port 30082 for kubernetes-dashboard
  • Hit skip to omit security (don't do that at your job!).
Kubernetes namespace
  • Provides a mechanism for isolating groups of resources within a single cluster.
  • Uniqueness is enforced only within a single namespace for namespaced objects (e.g., Deployments and Services).
  • Uniqueness of other cluster-wide objects (StorageClass, Nodes, PersistentVolumes, etc.) is enforced across namespaces.
  • Run the following commands from inside the ram_coin directory
  • The namespace resource can be referred to as namespaces, namespace, or ns.
kubectl get namespaces
kubectl get ns
kubectl get namespace
  • Using --namespace or -n lets you specify a namespace and look at objects within that namespace.
  • Without any specification, commands operate on the default namespace (default).
kubectl get ns
kubectl get pods -n kubernetes-dashboard
kubectl get pods
kubectl get services --namespace kubernetes-dashboard
kubectl get services

5. More Kubernetes Deployment

Launch ram_coin on Kubernetes
  • First, we deploy a registry service. This is equivalent to a local version of Docker Hub.
    • Only one member of the team does this
cd
kubectl create deployment registry --image=registry
kubectl expose deploy/registry --port=5000 --type=NodePort
kubectl get deployment
kubectl get pods
kubectl get svc

Pod registry

  • We can patch configurations of deployed services to set a specific external port.
kubectl patch service registry --type='json' --patch='[{"op": "replace", "path": "/spec/ports/0/nodePort", "value":30000}]'
kubectl get svc
  • You can see the external port has now been changed (patched)
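To print just the patched port, one option is a jsonpath query (standard kubectl output formatting, not part of the original steps):
kubectl get svc registry -o jsonpath='{.spec.ports[0].nodePort}'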

Patched pod registry

Building and pushing images for ramcoin
  • We test our local registry by pulling busybox from Docker Hub and then tag/push it to our local registry.
docker pull busybox
docker tag busybox 127.0.0.1:30000/busybox
docker push 127.0.0.1:30000/busybox
curl 127.0.0.1:30000/v2/_catalog
  • Next, we clone the ramcoin repository
cd
git clone https://github.com/CSC468-WCU/ram_coin.git
cd ram_coin
docker-compose -f docker-compose.images.yml build
docker-compose -f docker-compose.images.yml push
curl 127.0.0.1:30000/v2/_catalog
kubectl create deployment redis --image=redis
for SERVICE in hasher rng webui worker; do kubectl create deployment $SERVICE --image=127.0.0.1:30000/$SERVICE:v0.1; done
kubectl expose deployment redis --port 6379
kubectl expose deployment rng --port 80
kubectl expose deployment hasher --port 80
kubectl expose deploy/webui --type=NodePort --port=80
kubectl get svc
  • Identify the port mapped to port 80/TCP for the webui service. You can use this port and the hostname of the head node from CloudLab to access the now-operational ramcoin service.
kubectl get svc
kubectl get pods
Exercise
  • Patch the webui service so that it uses port 30080 as the external port
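One possible solution mirrors the registry patch above with the service name and port swapped; try it yourself before peeking:
kubectl patch service webui --type='json' --patch='[{"op": "replace", "path": "/spec/ports/0/nodePort", "value":30080}]'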

6. Automated Kubernetes Deployment

Ramcoin deployment
kubectl create namespace ramcoin
kubectl create -f ramcoin.yaml --namespace ramcoin
kubectl get pods -n ramcoin
kubectl create -f ramcoin-service.yaml --namespace ramcoin
kubectl get services --namespace ramcoin
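The exact contents of ramcoin.yaml and ramcoin-service.yaml live in the ram_coin repository and are not shown here; purely as an illustration, one Deployment entry in ramcoin.yaml presumably resembles the webui deployment created manually in the previous section:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webui
  template:
    metadata:
      labels:
        app: webui
    spec:
      containers:
      - name: webui
        image: 127.0.0.1:30000/webui:v0.1   # pulled from the local registry
        ports:
        - containerPort: 80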
Automated recovery
  • Check status and deployment locations of all pods on the head node
kubectl get pods -n ramcoin -o wide
  • SSH into worker-1 and reset the Kubelet. Enter y when asked.
sudo kubeadm reset
  • Run the following commands on head to observe the events
    • After a few minutes, kubectl get nodes will show worker-1 as NotReady.
    • After about five minutes, kubectl get pods -n ramcoin -o wide will show the pods on worker-1 being terminated and replacement pods being launched on worker-2 to recover the desired state of ramcoin.
    • The five-minute duration can be set by the --pod-eviction-timeout parameter.
kubectl get nodes
kubectl get pods -n ramcoin -o wide