Pod, Service, and Deployment
1. Big Picture
- In Docker, you run containers directly.
- Docker Swarm manages containers across multiple Docker hosts.
- In Kubernetes, you run pods, which wrap one or more containers.
- A node is a machine (physical or virtual) where pods live.
2. Pods: Containers and Node Abstraction
What is a Pod?
- A pod includes one or more containers plus additional shared resources.
- Shared resources include, but are not limited to, a shared IP (internal/external) and storage volumes.
- Example pod: one `nginx` web-server container plus one `log-shipper` sidecar container, plus a shared volume for log data.
- Pods are scheduled onto nodes by the kube-scheduler.
Pods and physical (or virtual) nodes
- Each node runs a kubelet agent.
- The kubelet talks to the container runtime (containerd in Rancher Desktop).
- When a pod is deployed, the kubelet pulls the associated container image(s) and runs the container(s).
```mermaid
graph TD
    subgraph Node1["Node (VM)"]
        Kubelet1["kubelet + runtime"]
        subgraph Pod1["Pod"]
            Nginx["Container: Nginx"]
        end
    end
    subgraph Node2["Node (VM)"]
        Kubelet2["kubelet + runtime"]
        subgraph Pod2["Pod"]
            Redis["Container: Redis"]
        end
    end
    Kubelet1 --> Pod1
    Kubelet2 --> Pod2
```
Hands-on with Rancher Desktop
- Verify that your Rancher Desktop is up and running.
- Create a file called `nginx-pod.yaml` with the following content:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:latest # pulls Docker/OCI image
    ports:
    - containerPort: 80
```
- Run `kubectl` and provide the path to your `nginx-pod.yaml`. In the example below, I am in the same directory as my file.
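A minimal sketch of the command sequence, assuming `kubectl apply` and the current working directory:

```bash
kubectl apply -f nginx-pod.yaml   # create the pod from the manifest
kubectl get pods                  # the nginx pod should reach the Running state
kubectl describe pod nginx        # details, including the node it landed on
```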
3. Services: Stable Access to Pods
At this point, if we try to access the above pod using containerPort 80, it will fail.
Problem
- Pods are ephemeral.
- Pods can restart and be rescheduled onto different nodes.
- Each pod gets a random IP inside the cluster.
- How does a client reliably connect to `nginx` if its internal IP changes?
- Docker's `-P` and `-p` port publishing is not adequate for this.
Solution
- Kubernetes Service
- A `Service` provides Pods with a stable virtual IP and DNS name.
- The Service load-balances traffic to all matching Pods via label selectors.
Connection to Physical Node
- A `ClusterIP` service gives access only inside the cluster.
- A `NodePort` service opens a port on every node's IP.
- A `LoadBalancer` service (on cloud) provisions an external IP (if available).
Hands-on with Rancher Desktop: Adding a service to the pod
- Create a file called `nginx-svc.yaml` with the following content:
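A minimal sketch, assuming a `NodePort` service that selects the pod by its `app: nginx` label (the service name and type are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx        # must match the pod's label above
  ports:
  - port: 80          # service port inside the cluster
    targetPort: 80    # containerPort of the nginx pod
```

Apply it with `kubectl apply -f nginx-svc.yaml`; `kubectl get svc nginx` then shows the node port assigned for access from outside the cluster.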
12: Your Goal This Week
- Rebuild the RAMCoin miner in Kubernetes
- Learn:
  - YAML object creation
  - Pod and service exposure
  - Logging and troubleshooting
From design to deployment: this is cloud engineering in action.
13: Coming Up
- Kubernetes Dashboard
- Namespaces
- CI/CD pipelines and Helm
- Securing services
14: Key Takeaways
- Design is only the beginning
- Deployment is the real engineering
- Kubernetes is not just a tool — it's your platform
You don’t run containers. You orchestrate services.
3. Kubernetes: Pods, Service, Deployment
Info
- Pod: group of containers sharing IP and volumes
- Service: stable access to dynamic pod groups
- Deployment: declarative state, rolling updates
- NodePort and Ingress: access from outside the cluster
Overview
- Managing pods, services, and deployments is done via Kubernetes objects, described through YAML files.
- Kubernetes objects are persistent entities in the Kubernetes system, which represent the state of your cluster:
  - What containerized applications are running (and on which nodes)
  - The resources available to those applications
  - The policies around how those applications behave, such as restart policies, upgrades, and fault-tolerance
- A Kubernetes object is a "record of intent"--once you create the object, the Kubernetes system will constantly work to ensure that object exists. By creating an object, you're effectively telling the Kubernetes system what you want your cluster's workload to look like; this is your cluster's desired state.
- Documentation
Terminology: Pods
- `Pod`: the smallest deployable unit of computing that can be created and managed in Kubernetes.
- A pod can contain one or more containers, with shared storage and network resources and a specification for how to run the containers.
- Create the following file, called `nginx.yml`:
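A minimal sketch, essentially the same shape as `nginx-pod.yaml` earlier (image and labels are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80
```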
- Deploy the pod:
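For example (a sketch):

```bash
kubectl apply -f nginx.yml
kubectl get pods -o wide   # -o wide shows which node the pod was scheduled onto
```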
- How to access this nginx container?
Terminology: Service
- `Service`: a method of exposing a network application running as one or more Pods without making any changes to the application.
- Each `Service` object defines a logical set of endpoints (usually the endpoints are pods) along with a policy about how to make those pods accessible.
- For exposing a `Service` to an external IP address, there are several types:
  - ClusterIP
  - NodePort
  - LoadBalancer
  - ExternalName
- `ClusterIP`: exposes the service on a cluster-internal (Kubernetes) IP. For access from the outside world, an `Ingress` needs to be used.
- `NodePort`: exposes the service on each `Node` (worker computer) at a static port.
- Deploy the service:
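A minimal sketch, assuming a NodePort service named `nginx` on nodePort 30007 (the name and port match the commands that follow; the file name `nginx-service.yml` is an assumption):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30007   # static port opened on every node
```

```bash
kubectl apply -f nginx-service.yml
kubectl get svc nginx
```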
- Visit the headnode of your Kubernetes cluster at the nodePort 30007.
Terminology: Deployment
- `Deployment`: provides declarative updates for `Pods` (and `ReplicaSets`).
- A desired state is described in a Deployment.
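A minimal sketch of `nginx-deployment.yml` consistent with the commands below: a Deployment named `nginx-deployment` plus a NodePort Service named `nginx` on nodePort 30007 (the replica count, labels, and image are assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2                 # desired state: two nginx pods
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30007
```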
- Deploy the application:
```bash
kubectl delete pods nginx
kubectl delete svc nginx
kubectl apply -f nginx-deployment.yml
kubectl get deployment nginx-deployment
kubectl get svc
kubectl describe svc nginx
kubectl get pods
kubectl describe pods nginx
```
- Visit the headnode of your Kubernetes cluster again at the nodePort 30007.
- Delete the deployment
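To delete it, for example (a sketch; either form works):

```bash
kubectl delete -f nginx-deployment.yml
# or, equivalently, delete the objects by name:
# kubectl delete deployment nginx-deployment
# kubectl delete svc nginx
```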
4. Kubernetes Dashboard
Deploy the dashboard
- Run the following commands
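A sketch of one common approach, assuming the upstream recommended manifest plus a NodePort patch to 30082; the exact manifest, version, and flags used in the course may differ (for instance, the `skip` button in the next step requires the dashboard's `--enable-skip-login` argument):

```bash
# deploy the dashboard (the version in the URL is an assumption)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

# expose it on every node at port 30082
kubectl -n kubernetes-dashboard patch service kubernetes-dashboard --type='json' \
  --patch='[{"op": "replace", "path": "/spec/type", "value": "NodePort"},
            {"op": "add", "path": "/spec/ports/0/nodePort", "value": 30082}]'

kubectl -n kubernetes-dashboard get svc   # the dashboard serves HTTPS: browse to https://<head node>:30082
```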
- Go to the `head` node URL at port `30082` for `kubernetes-dashboard`.
- Hit `skip` to omit security (don't do that at your job!).
Kubernetes namespace
- Provides a mechanism for isolating groups of resources within a single cluster.
- Uniqueness is enforced only within a single namespace for namespaced objects (`Deployments` and `Services`).
- Uniqueness of other cluster-wide objects (`StorageClass`, `Nodes`, `PersistentVolumes`, etc.) is enforced across namespaces.
- Run the following commands from inside the `ram_coin` directory (see the sketch after this list).
- `kubectl get` accepts `namespaces`, `namespace`, or `ns` interchangeably.
- Using `--namespace` or `-n` lets you specify a namespace and look at objects within that namespace.
- Without any specification, the default namespace (`default`) is used.
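A sketch of the kinds of commands involved (the `ramcoin` namespace name is an assumption, based on the `-n ramcoin` flag used in section 6):

```bash
kubectl get namespaces            # "namespace" or "ns" work as well
kubectl create namespace ramcoin  # a namespace for the ram_coin objects
kubectl get pods -n ramcoin       # only objects in the ramcoin namespace
kubectl get pods                  # no -n flag: the "default" namespace
```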
5. More Kubernetes Deployment
Launch ram_coin on Kubernetes
- First, we deploy a registry service. This is equivalent to a local version of Docker Hub.
- Only one member of the team does this.
```bash
cd
kubectl create deployment registry --image=registry
kubectl expose deploy/registry --port=5000 --type=NodePort
kubectl get deployment
kubectl get pods
kubectl get svc
```
- We can patch configurations of deployed services to set a specific external port.
```bash
kubectl patch service registry --type='json' --patch='[{"op": "replace", "path": "/spec/ports/0/nodePort", "value":30000}]'
kubectl get svc
```
- You can see the external port has now been changed (patched)
Building and pushing images for ramcoin
- We test our local registry by pulling `busybox` from Docker Hub and then tagging and pushing it to our local registry.
```bash
docker pull busybox
docker tag busybox 127.0.0.1:30000/busybox
docker push 127.0.0.1:30000/busybox
curl 127.0.0.1:30000/v2/_catalog
```
- Next, we clone the ramcoin repository, build and push its images to the local registry, and create the deployments and services:
```bash
cd
git clone https://github.com/CSC468-WCU/ram_coin.git
cd ram_coin
docker-compose -f docker-compose.images.yml build
docker-compose -f docker-compose.images.yml push
curl 127.0.0.1:30000/v2/_catalog
kubectl create deployment redis --image=redis
for SERVICE in hasher rng webui worker; do kubectl create deployment $SERVICE --image=127.0.0.1:30000/$SERVICE:v0.1; done
kubectl expose deployment redis --port 6379
kubectl expose deployment rng --port 80
kubectl expose deployment hasher --port 80
kubectl expose deploy/webui --type=NodePort --port=80
kubectl get svc
```
- Identify the port mapped to port 80/TCP for the webui service. You can use this port and the hostname of the `head` node from CloudLab to access the now-operational ram coin service.
Exercise
- Patch the webui service so that it uses port 30080 as the external port
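One possible solution (a sketch mirroring the registry patch above):

```bash
kubectl patch service webui --type='json' --patch='[{"op": "replace", "path": "/spec/ports/0/nodePort", "value":30080}]'
kubectl get svc webui
```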
6. Automated Kubernetes Deployment
Ramcoin deployment
Automated recovery
- Check the status and deployment locations of all pods on the `head` node:
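A sketch (the `ramcoin` namespace is an assumption):

```bash
kubectl get nodes
kubectl get pods -n ramcoin -o wide   # -o wide shows the node each pod is running on
```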
- SSH into `worker-1` and reset the kubelet. Enter `y` when asked.
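A sketch of the reset step; the exact command used in the course is an assumption (`kubeadm reset` prompts for a `y`/`N` confirmation):

```bash
# run on worker-1
sudo kubeadm reset   # answer "y" at the confirmation prompt
```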
- Run the following commands on `head` to observe the events (see the sketch after this list):
  - After a few minutes, `worker-1` becomes `NotReady` via `kubectl get nodes`.
  - After five minutes, `kubectl get pods -n ramcoin -o wide` will show the pods on `worker-1` being terminated and replacements being launched on `worker-2` to recover the desired state of ramcoin.
  - The five-minute duration can be set by the `--pod-eviction-timeout` parameter.
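A sketch of commands for watching the recovery (the `ramcoin` namespace matches the bullets above):

```bash
kubectl get nodes                     # worker-1 eventually reports NotReady
kubectl get pods -n ramcoin -o wide   # pods on worker-1 terminate; replacements appear on worker-2
kubectl get events -n ramcoin --sort-by=.metadata.creationTimestamp
```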