Kubernetes Distilled, Part 1: Deployments and Services
The Kubernetes documentation is thorough, and you should read it when you have the time. Until then, this is a condensed overview aimed at developers using a Kubernetes platform. It should get you comfortable enough, quickly enough, to experiment on your own and to navigate the official documentation when you need more detail.
I'm not an expert, so YMMV.
Overview
Kubernetes (or "k8s", which is to Kubernetes as a11y is to accessibility) runs on top of a cluster of nodes, where a node is a physical or virtual machine, and a cluster is a datacenter or part of one. Nodes are the resources k8s uses to run its control plane and the workloads it schedules. You'll interact with k8s through the control plane API, usually from a command line client like kubectl, or from client libraries in a language of your choice. On top of k8s, there are often services like OpenShift which provide yet another layer of abstraction and can, for example, handle provisioning the nodes and clusters running k8s for you.
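For example, with kubectl pointed at your cluster (usually via a kubeconfig file your platform hands you), a couple of commands are enough to confirm you can reach the control plane and see the cluster's nodes:
# Show the control plane endpoint kubectl is talking to.
kubectl cluster-info
# List the nodes the cluster has available for scheduling workloads.
kubectl get nodes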
Objects
The k8s APIs are declarative. You do not say exactly how your application will run. Instead, you describe your needs in terms of objects (sometimes referred to as "resources", such as in kubectl help), each with a kind, a specification (or simply "spec"), and metadata. At its core, k8s is a basic, generic framework for storing these objects and reacting to changes in their specs or statuses. Upon this framework, k8s builds its abstractions as decoupled extensions. There are low level kinds of objects like Pods, usually managed by higher level objects like Deployments. Objects can manage other objects by means of controllers. Controller-backed objects like Deployments and Services are usually where developers spend their time interfacing with k8s, as they provide a high level of abstraction over common needs. Specs are usually provided via the kubectl command line client and YAML files that look something like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
Controllers constantly watch the status and the spec of the objects they manage and try to keep them in sync. This is how your updates are recognized and how failures are recovered. For this reason, you may find that if you go "below" an abstraction and change a lower level object's spec directly, your changes are quickly undone as k8s "recovers" the objects that strayed from their specs. It is also technically possible to create situations where the same object has conflicting states specified by multiple other objects, causing controllers to constantly flip its state back and forth between the differing specs.
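A quick way to see this reconciliation in action, once you have a Deployment like the one above running (the pod name below is a placeholder; yours will have a generated suffix):
# List the pods managed by the example Deployment via its label.
kubectl get pods -l app=nginx
# Delete one of them directly...
kubectl delete pod nginx-deployment-<generated-suffix>
# ...and the controller immediately creates a replacement to match the spec.
kubectl get pods -l app=nginx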
All objects' metadata includes a name (a lowercase string of alphanumerics, dashes, and dots, unique among objects of the same kind) and a uid (unique among all objects over the lifetime of the cluster). Names are required; uids are assigned automatically. Other metadata requirements vary by object kind.
Most of your kubectl usage will be via the create, get, and replace subcommands which work with objects, their specs and statuses (for example kubectl get -o yaml deployments my-deployment).
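As a sketch of that workflow, assuming the Deployment spec above is saved in a file called nginx-deployment.yaml (a name picked just for this example):
# Create the object from its spec file.
kubectl create -f nginx-deployment.yaml
# Read the object back, including its current status, as YAML.
kubectl get -o yaml deployment nginx-deployment
# After editing the file, swap in the updated spec.
kubectl replace -f nginx-deployment.yaml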
Pods
A pod defines a single deployable unit as one or more containers that share networking and storage. This is where your code runs. A pod is to a container what a VM is to your application's process(es). Most pods run one container, and most containers run a single main process. Each pod gets its own IP address. Like VMs, pods are your unit of horizontal scaling: pods are replicated by a kind of controller, such as a ReplicaSet. Unlike VMs, pods are always ephemeral: they are short lived, and they don't maintain state or keep their IP addresses after they are destroyed. Non-volatile, persistent storage is provided by a different object, a PersistentVolume, and a load balanced virtual IP is provided by a Service. Pods created directly are not maintained by any controller, so you will likely spec and create pods indirectly through templates inside other objects' specs. Templates tell controllers, like the DeploymentController (which uses a PodTemplateSpec inside a DeploymentSpec), how to define PodSpecs for the pods they manage.
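For reference, a bare Pod spec (which, per the above, you'll rarely create directly) is essentially the template you'll see embedded in a Deployment, with its own apiVersion and kind. A minimal sketch:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
    ports:
    - containerPort: 80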
Deployments
Deployments handle deploying and updating your application as a set of containers, with their resource requirements, across a number of scheduled pods. Generally, your first steps into k8s will be defining a DeploymentSpec. Technically, a Deployment manages ReplicaSets, and each ReplicaSet manages its own set of Pods. In addition to the usual spec requirements (apiVersion, kind, metadata), a basic Deployment spec includes...
- spec.template
- A PodTemplateSpec, which defines the containers and volumes of a pod. A container spec includes the image to use and the ports to expose, like so:
template:
  metadata:
    labels:
      app: nginx
  spec:
    containers:
    - name: nginx
      image: nginx:1.7.9
      ports:
      - containerPort: 80
Changing the template results in a rollout: a new ReplicaSet is created with pods using the updated template and scaled up to the desired number of replicas, while the old ReplicaSet is scaled down to 0. Deployments have a DeploymentStrategy which defaults to RollingUpdate, keeping at least 75% and at most 125% of the desired replicas up at all times (rounded); these bounds correspond to the strategy fields sketched just after this list.
- spec.selector
- An immutable label selector that groups the pods to be managed by a single Deployment. Multiple Deployments should never select the same pod(s). Generally this will match the labels in the pod template:
selector:
  matchLabels:
    app: nginx
- spec.replicas
- The number of pods to run ("replicas") among pods matching the selector.
replicas: 3
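As mentioned under spec.template, the rolling update bounds come from two fields of the Deployment's spec.strategy. A minimal sketch spelling out what I understand the defaults to be (maxUnavailable and maxSurge of 25%):
strategy:
  type: RollingUpdate
  rollingUpdate:
    # Allow at most 25% of desired replicas to be unavailable (so at least 75% stay up)...
    maxUnavailable: 25%
    # ...and allow at most 25% extra replicas above the desired count (so at most 125% total).
    maxSurge: 25%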
Services, Endpoints, and discovery
Deploying your application may be all you need if it does purely background work. However, if your application provides a remote API, you can use a Service object to define a virtual IP (with a resolvable domain name, if you're using KubeDNS) that load balances among the Service's selected pods. A Service spec selects pods the same way Deployments do: via label selectors. Under the hood, a controller maintains an Endpoints object for each Service that lists the IPs and ports of its healthy pods. Nodes in the cluster are configured to load balance connections to the single virtual IP (called the "cluster IP") among those pods, via simple round robin (at least by default).
kind: Service
apiVersion: v1
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
Services can be discovered using docker-style environment variables or via DNS.
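For the environment variable route, containers get variables derived from each Service's name (though, as I understand it, only for Services that already existed when the pod was created). For the "nginx" Service above, that looks something like this from a shell inside a pod:
# Show the injected discovery variables for the "nginx" Service.
env | grep ^NGINX_SERVICE
# NGINX_SERVICE_HOST holds the cluster IP, NGINX_SERVICE_PORT holds the port.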
To get domain names, you must use KubeDNS. KubeDNS is an addon service and deployment that runs on k8s like any other, additionally configuring pods to use it for name resolution, and it "watches the Kubernetes API for new Services and creates a set of DNS records for each". KubeDNS assigns a domain name in the format "${service.name}.${service.namespace}.svc.${cluster.DNSDomain}", with an A record pointing at the cluster IP. The service name and namespace come from metadata; if no explicit namespace is provided, "default" is used. The cluster DNSDomain comes from the KubeDNS config map (more on config maps later) and defaults to "cluster.local". With defaults, the example above would be resolvable from pods within the cluster at "nginx.default.svc.cluster.local". Pods' DNS resolution has some additional defaults configured, so pods in the same namespace and cluster can simply use the "nginx" domain name.
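To check resolution, you can run a lookup from inside a pod (nslookup is just an example; use whatever DNS tool your image ships with):
# Fully qualified name, assuming the default "cluster.local" cluster domain:
nslookup nginx.default.svc.cluster.local
# From a pod in the same namespace, the short name resolves too:
nslookup nginx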
Services have different types. By default, the ClusterIP type is used, which does nothing more than assign a cluster IP and expose it to the cluster, and only the cluster. To expose a service outside of the cluster, use the LoadBalancer type. Don't let the name confuse you: even though only one type is called LoadBalancer, most service types do some kind of load balancing.
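As a sketch, exposing the nginx Service externally is mostly a matter of setting the type (assuming your platform can provision an external load balancer, which managed offerings generally handle for you):
kind: Service
apiVersion: v1
metadata:
  name: nginx
spec:
  # LoadBalancer builds on ClusterIP: a cluster IP is still assigned,
  # and an external load balancer is provisioned to route to it.
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80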
Summary
To recap the basics:
- Kubernetes uses a framework of "objects" with "metadata" and "specifications."
- Many objects are managed by "controllers": processes running within the Kubernetes control plane that watch objects' statuses and specifications, automating the work necessary to keep the resources described by the objects in sync with their specifications.
- Your application runs as a set of containers inside replicated, ephemeral Pods. The PodSpec specifies which image to use and which ports to expose.
- You can deploy and replicate your application using a Deployment and its PodTemplateSpec.
- You can expose your application to other pods using a Service which creates a persistent, virtual IP routable within the cluster and, if KubeDNS is used, a domain name resolvable within the cluster's Pods.