From a ‘vanilla’ Kubernetes perspective, where all nodes in the cluster run Linux, only containers based on Linux images can run. As of version 1.9 of Kubernetes (we are currently on 1.12 at the time of writing), the ability to run Windows containers is both stable and supported. However, as is often the case in life, the more detailed answer comes with “it depends” style caveats.
To recap, at a high level the Kubernetes architecture consists of master node(s), which act as the brain of the cluster, and worker nodes, which actually run the containers:
The “control plane” (the master node(s) and etcd instance(s), the key-value store databases that hold the cluster state) always has to run on Linux. The worker nodes can run on Linux and/or Windows. The minimum version of Windows that is supported is Windows Server 2016, but the Kubernetes documentation states that version 1709 is preferred.
Mixed Worker Nodes and Pod Scheduling
If the cluster contains mixed worker nodes, pod specifications for Windows containers need to use what is known as a “node selector”. A node selector is an instruction that tells the scheduler that the pod can only run on a node carrying the label specified by the selector:
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-volume-pod
spec:
  containers:
  - name: my-hostpath-volume-pod
    image: microsoft/windowsservercore:1709
    volumeMounts:
    - name: foo
      mountPath: "C:\\etc\\foo"
      readOnly: true
  nodeSelector:
    beta.kubernetes.io/os: windows
  volumes:
  - name: foo
    hostPath:
      path: "C:\\etc\\foo"
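For completeness, the inverse also applies in a mixed cluster: pods that must run on Linux can be pinned with the same label key, which the kubelet sets automatically on every node. A minimal sketch (the pod name and image are illustrative, not taken from the example above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: linux-only-pod            # illustrative name
spec:
  containers:
  - name: my-linux-container
    image: nginx:1.15             # any Linux-based image will do here
  nodeSelector:
    beta.kubernetes.io/os: linux  # schedule only onto Linux worker nodes
```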
With the caveat that this information is correct at the time of writing, the following points should be noted:
- The control plane and worker nodes should always be on the same release of Kubernetes.
- The control plane can only run on Linux.
- Windows Server 2016 RTM is the minimum version required for worker nodes, but version 1709 is preferred.
A full list of restrictions can be found here.
But here is something particularly significant for anyone wishing to deploy highly available SQL Server infrastructures to Kubernetes via availability groups:
The StatefulSet functionality for stateful applications is not supported
The bottom line being that availability groups are not supported with Windows Server containers, or Hyper-V containers for that matter.
In my first blog post in the series, readers may recall that I tried to foster an “entering the world of open source with one’s eyes wide open” type of attitude. For on-premises Kubernetes installations that require certainty over the delivery of bug fixes, a PaaS built on top of Kubernetes, such as the OpenShift Container Platform, should be preferred. And this is where another gotcha comes in: Windows container support for OpenShift is on the roadmap, but yet to be delivered. And if you want to play around with SQL Server 2019 big data clusters . . . you guessed it, you will need to use Linux container images.
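To make the supported route concrete, a stateful SQL Server deployment on Kubernetes today means a Linux-based image running in a StatefulSet, pinned to Linux nodes. The sketch below is illustrative only: the object names, the image tag, and the pre-created secret `mssql-secret` (holding the sa password) are assumptions, and a default storage class is assumed to exist for the volume claim.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mssql                      # illustrative name
spec:
  serviceName: mssql
  replicas: 1
  selector:
    matchLabels:
      app: mssql
  template:
    metadata:
      labels:
        app: mssql
    spec:
      nodeSelector:
        beta.kubernetes.io/os: linux   # SQL Server containers require Linux nodes
      containers:
      - name: mssql
        image: microsoft/mssql-server-linux:2017-latest  # assumed tag
        env:
        - name: ACCEPT_EULA
          value: "Y"
        - name: SA_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mssql-secret   # assumed pre-created secret
              key: sa-password
        ports:
        - containerPort: 1433
        volumeMounts:
        - name: mssql-data
          mountPath: /var/opt/mssql  # SQL Server's data directory on Linux
  volumeClaimTemplates:            # per-pod persistent storage, the point of a StatefulSet
  - metadata:
      name: mssql-data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 8Gi
```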
The TL;DR version of this is that in the world of containers and Kubernetes, Linux is a first-class citizen and Windows is not.
Some of the people I meet during the course of my day job work in IT departments split down the lines of Linux and Windows. This may (or may not) present a challenge when working with a platform whose head (the control plane) is looked after by the Linux/Unix functional silo, whilst its body (the worker nodes) is looked after by the Windows functional silo. The inference being that a platform whose body is looked after in its entirety by the same team may work better.