Deploy your docker-compose stack with Helm.
If you have ever asked yourself what those thousand lines of k8s manifests or that monstrous Helm chart do behind the scenes, this chart may be what you have been waiting for.
helm repo add link https://linktohack.github.io/helm-stack/
kubectl create namespace your-stack
# docker stack deploy -c docker-compose.yaml your_stack
helm -n your-stack upgrade --install your-stack link/stack -f docker-compose.yaml
While inter-container communication in Swarm is enabled either by network or by link, in k8s, if you have more than one service and they need to communicate with each other, you will need to expose the ports explicitly with --set services.XXX.expose={YYYY}.
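For example, to let other services in the stack reach a hypothetical db service on port 3306 (the service name and port are illustrative), the quickstart command above could become:

helm -n your-stack upgrade --install your-stack link/stack -f docker-compose.yaml --set services.db.expose={3306:3306}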
The chart is feature complete and I was able to deploy complex stacks with it, including traefik and kubernetes-dashboard. In all cases, there is a mechanism to override the generated manifests with the full possibilities of the k8s API (see below).
Acceptable configurations can be found in the test directory.
Supported features include:

- deploy.placement.constraints, including:
  - node.role
  - node.hostname
  - node.labels (==, !=, has)
- deploy.resources.reservations maps to requests and deploy.resources.limits maps to limits (accepts both cpus and cpu keys)
- deploy.placement.tolerations with kubectl taint syntax
- ports expose LoadBalancer services by default
- expose exposes ClusterIP services
- nodePorts expose NodePort services
- traefik (1.7) labels (deploy.labels) as input with annotations support, including basic auth, PathPrefixStrip, customRequestHeaders, customResponseHeaders…
- CertManager's Issuer and ClusterIssuer via extra labels traefik.issuer and traefik.cluster-issuer
- Ingress class via extra label traefik.ingress-class
- segment labels for services that expose multiple ports: traefik.port, traefik.first.port, traefik.second.port…
- volumeClaimTemplates for StatefulSet (really useful if combined with a cloud provider's dynamic provisioner)
- volumes.XXX.driver_opts.type maps directly to storageClassName, including treatments for:
  - none (default storage class), nfs, emptyDir for the dynamic provisioner
  - none (maps to hostPath if volumes.XXX.driver_opts.device is present) and nfs (if addr is present in volumes.XXX.driver_opts.o and volumes.XXX.driver_opts.device is present) for the static provisioner
  - the readOnly attribute (volume:/path:ro style)
- configs: the data external key, file set to null. See Advanced: full override to see how to insert more than one file
- secrets: the data and stringData external keys, file set to null. See Advanced: full override to see how to insert more than one file
- healthcheck: shell and exec form. For advanced features, e.g. httpGet, please use the full override below
- CronJob schedule is set as */1 * * * * by default but can easily be overwritten with CronJob.spec.schedule
- stop_grace_period → terminationGracePeriodSeconds
- extra_hosts → hostAliases
- read_only → securityContext.readOnlyRootFilesystem
- user → securityContext.runAsUser/runAsGroup (supports uid or uid:gid format)
- working_dir → workingDir
- tmpfs → emptyDir with medium: Memory (supports a size limit via tmpfs: /path:size=100M)
- deploy.endpoint_mode: dnsrr → clusterIP: None
- deploy.resources.reservations.devices with driver: nvidia → resources.limits.nvidia.com/gpu

Tested in a K3s cluster with the local-path provisioner.
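A few of these mappings sketched in a compose file (the service and volume names, image, and values are illustrative, not taken from the test files):

services:
  web:
    image: nginx:alpine
    read_only: true                 # -> securityContext.readOnlyRootFilesystem
    stop_grace_period: 30s          # -> terminationGracePeriodSeconds
    tmpfs:
      - /tmp:size=100M              # -> emptyDir with medium: Memory
    volumes:
      - data:/var/lib/data:ro       # -> readOnly volume mount
    deploy:
      placement:
        constraints:
          - node.role == manager    # -> node selection
      resources:
        limits:
          cpus: '0.50'              # -> resources.limits.cpu
          memory: 256M              # -> resources.limits.memory
volumes:
  data:
    driver_opts:
      type: nfs                     # -> nfs static provisioner (addr in o, device present)
      o: addr=10.0.0.10
      device: ":/exports/data"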
❯ helm -n com-linktohack-docker-on-compose upgrade --install sample link/stack -f test/docker-compose-dockersamples.yaml
Release "sample" does not exist. Installing it now.
NAME: sample
LAST DEPLOYED: Tue Jan 14 18:38:42 2020
NAMESPACE: com-linktohack-docker-on-compose
STATUS: deployed
REVISION: 1
TEST SUITE: None
❯ kubectl get all -n com-linktohack-docker-on-compose
NAME READY STATUS RESTARTS AGE
pod/svclb-web-loadbalancer-tcp-hk9sb 1/1 Running 0 2m2s
pod/web-57bbd888fb-dvqxj 1/1 Running 0 2m2s
pod/db-769769498d-6zqx8 1/1 Running 0 2m2s
pod/words-6465f956d-kmk9c 1/1 Running 0 2m2s
pod/words-6465f956d-sw9t2 1/1 Running 0 2m2s
pod/words-6465f956d-vchlm 1/1 Running 0 2m2s
pod/words-6465f956d-l9lnd 1/1 Running 0 2m2s
pod/words-6465f956d-2lsbz 1/1 Running 0 2m2s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/web-loadbalancer LoadBalancer 10.43.235.241 2.56.99.175 33000:31908/TCP 2m4s
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/svclb-web-loadbalancer 1 1 1 1 1 <none> 2m4s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/web 1/1 1 1 2m4s
deployment.apps/db 1/1 1 1 2m4s
deployment.apps/words 5/5 5 5 2m4s
NAME DESIRED CURRENT READY AGE
replicaset.apps/web-57bbd888fb 1 1 1 2m4s
replicaset.apps/db-769769498d 1 1 1 2m4s
replicaset.apps/words-6465f956d 5 5 5 2m4s
Please see below.
These keys either do not exist in the docker-compose format or have a different meaning here. They should be set via --set or a second values.yaml.
- services.XXX.kind (string, overrides automatic kind detection: Deployment, DaemonSet, StatefulSet)
- services.XXX.imagePullSecrets (string, name of the secret)
- services.XXX.imagePullPolicy (string)
- services.XXX.serviceAccountName (string)
- services.XXX.expose (array, ports to be exposed to other services via ClusterIP)
- services.XXX.ports (array, ports to be exposed via LoadBalancer)
- services.XXX.nodePorts (ports to be exposed as NodePort)
- services.XXX.containers (array, same spec as services.XXX, additional containers to run in the same Pod)
- services.XXX.initContainers (array, same spec as services.XXX.containers, populates pod.spec.initContainers)
- services.XXX.volumes[].subPath (string, subPath support)
- volumes.XXX.storage (string, default 1Gi for the dynamic provisioner)
- volumes.XXX.subPath (string, use the services.XXX.volumes long syntax with the extra key subPath if you want multiple subPaths)
- configs.XXX.file (string | null, required by swarm, can be set to null to mount the config as a directory)
- configs.XXX.data (string)
- secrets.XXX.file (string | null, required by swarm, can be set to null to mount the secret as a directory)
- secrets.XXX.data (string)
- secrets.XXX.stringData (string)
- deploy.placement.tolerations (string[], see kubectl taint -h for syntax)
- chdir (string, required when using relative paths in volumes)
- Raw (array, manifests that should be deployed as is)

The Raw property allows us to deploy arbitrary manifests, but most of the time there is a better way.
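As a sketch (the service name, volume name, and values are illustrative, not taken from the test files), a second values file combining a few of these extra keys might look like:

services:
  db:
    kind: StatefulSet            # extra key: override automatic kind detection
    imagePullPolicy: IfNotPresent
    expose:
      - "5432:5432"              # extra key: ClusterIP ports for other services
volumes:
  db-data:
    storage: 5Gi                 # extra key: PVC size for the dynamic provisioner

It would be passed alongside the compose file, e.g. helm -n your-stack upgrade --install your-stack link/stack -f docker-compose.yaml -f values-extra.yaml.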
The properties of the manifests can be overridden (merged) with the values from services.XXX.Kind and volumes.XXX.Kind…
You now have full control over the output manifests. This is a deep merge operation: items already present in lists are merged, and new items are inserted.
The full list of all the Kinds can be found in the listing below; note that services.XXX.imagePullPolicy, volumes.XXX.storage, configs.XXX.data and secrets.XXX.stringData are already recognized as extra keys. A small override example follows the listing.
services:
redmine:
ClusterIP: {}
NodePort: {}
LoadBalancer: {}
Ingress:
    default: {} # default segment
seg1: {} # segment seg1
Auth:
default: {}
    seg1: {} # segment seg1
Deployment:
spec:
template:
spec:
containers:
- name: override-name
imagePullPolicy: Always # supported as an extra key already
DaemonSet:
spec:
StatefulSet:
spec:
Job:
spec:
CronJob:
spec:
      schedule: '*/1 * * * *' # mostly required
volumes:
db:
PV:
spec:
capacity:
storage: 10Gi # supported as an extra key already
persistentVolumeReclaimPolicy: Retain
PVC:
spec:
resources:
requests:
storage: 10Gi # supported as an extra key already
configs:
redmine_config:
ConfigMap:
data:
hello.yaml: there
secrets:
with_string_data:
Secret:
stringData: ""
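As an illustrative sketch (the file name and service name are assumptions), a second values file overriding the generated Deployment for the redmine service could add a nodeSelector like this:

services:
  redmine:
    Deployment:
      spec:
        template:
          spec:
            nodeSelector:
              kubernetes.io/os: linux

Applied with something like helm -n com-linktohack-redmine upgrade --install redmine link/stack -f test/docker-compose-redmine.yaml -f override.yaml, the nodeSelector is deep-merged into the generated Deployment.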
Golang templates + Sprig are quite a pleasure to work with as a full-featured language.
Blog post https://linktohack.com/posts/evaluate-options-to-migrate-from-swarm-to-k8s/
The same technique could be applied with a proper programming language instead of Helm templates, but why not stand on the shoulders of giants? By using Helm (the de facto package manager) we get rollback and more for free.
- traefik headers
- docker-compose and extra keys

This example contains almost all the possible configurations of this stack.
helm -n com-linktohack-redmine upgrade --install redmine link/stack -f test/docker-compose-redmine.yaml -f test/docker-compose-redmine-override.yaml \
--set services.db.expose={3306:3306} \
--set services.db.ports={3306:3306} \
--set services.db.deploy.placement.constraints={node.role==manager} \
--set services.redmine.deploy.placement.constraints={node.role==manager} \
--set chdir=/stack --debug --dry-run
- services.XXX.ports will be exposed as LoadBalancer (if needed)
- services.XXX.expose will be exposed as ClusterIP ports

helm -n kube-system upgrade --install traefik link/stack -f test/docker-compose-traefik.yml -f test/docker-compose-traefik-override.yml
- kubernetes-dashboard service account
- cluster-admin role

helm -n kubernetes-dashboard upgrade --install dashboard link/stack -f test/docker-compose-kubernetes-dashboard.yml
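The actual setup lives in the test file; as a hedged sketch of how the documented serviceAccountName and Raw keys could provide such a service account binding (names and placement are assumptions, not the real test file):

services:
  dashboard:
    serviceAccountName: kubernetes-dashboard   # extra key
Raw:
  - apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: dashboard-admin
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
      - kind: ServiceAccount
        name: kubernetes-dashboard
        namespace: kubernetes-dashboard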
helm -n com-linktohack-redmine template redmine link/stack -f test/docker-compose-redmine.yaml -f test/docker-compose-redmine-override.yaml \
--set services.db.expose={3306:3306} \
--set services.db.ports={3306:3306} \
--set services.db.deploy.placement.constraints={node.role==manager} \
--set services.redmine.deploy.placement.constraints={node.role==manager} \
--set chdir=/stack --debug > test/docker-compose-redmine.manifest.yaml
kubectl -n com-linktohack-redmine apply -f test/docker-compose-redmine.manifest.yaml
- stop_grace_period → terminationGracePeriodSeconds
- extra_hosts → hostAliases
- read_only → securityContext.readOnlyRootFilesystem
- user → securityContext.runAsUser/runAsGroup
- working_dir → workingDir
- tmpfs → emptyDir with medium: Memory
- deploy.endpoint_mode: dnsrr → headless service (clusterIP: None)
- deploy.resources.reservations.devices → nvidia.com/gpu
- xxx-loadbalancer-tcp/xxx-loadbalancer-udp to xxx-loadbalancer
- LoadBalancer: {} instead of LoadBalancer: { tcp: {}, udp: {} }
- v1 (requires k8s v1.25)
- v1 (requires k8s v1.22)
- networking.k8s.io/v1beta1
- ingressClassName
- chdir + constraints quotation
- traefik.frontend.rule=PathPrefixStrip behavior for ingress-nginx
- PathPrefix support for ingress-nginx
- deploy.placement.tolerations using kubectl taint style
- services.XXX.deploy.resources
- initContainers
- hostNetwork: true via docker-compose's network_mode: host
- volumes mount
- subPath mount
- external: true (does not create a volumeClaimTemplate)
- containers key, with mergeDeepOverwrite
- Job & CronJob
- StatefulSet
- CertManager
- Raw property
- xxxx-yyyy:zzzz-tttt/udp

This project is licensed under the Fair Source License with a revenue-based threshold.
See LICENSE.md for full terms.
TL;DR: