As mentioned in my first post on this new blog setup, I run this blog on a small private Kubernetes cluster. And as usual, when you run stuff on Kubernetes, there are some caveats, mainly around data persistence, deployments and backups.
The basics are explained in the "How to run Ghost in Kubernetes" post by Luis Mendoza, so I won't repeat them here.
First Deployment
I want to be able to deploy my locally developed or changed theme from the command line, without having to go through the Ghost web admin, make a zip, upload it and so on. I also want to use the ghost storage adapter for rokka, which needs to live somewhere in the file system, together with the rokka npm package. And maybe I'll add further enhancements in the future. All this meant I needed my own custom Docker image, created with the following Dockerfile:
FROM ghost:3.3-alpine
# install the rokka client library and ghost-ignition, needed by the storage adapter
RUN cd /var/lib/ghost/current && yarn add rokka ghost-ignition && yarn cache clean
# add the storage adapter and my custom theme
ADD ghost-storage-rokka /var/lib/ghost/current/core/server/adapters/storage/ghost-storage-rokka
ADD chregu /var/lib/ghost/current/content/themes/chregu
I copy the chregu theme and the ghost-storage-rokka files into the build directory and then build the image with the following script:
# copy the storage adapter and the theme (without node_modules) into the build context
rsync -avr ../content/adapters/storage/ghost-storage-rokka .
rsync -avr --exclude=node_modules/ --delete ../content/themes/chregu .
# build the image
docker build -t docker.gitlab.liip.ch/chregu/repo/ghost-chregu-tv:3.3 .
Push the image to our private Docker repo, deploy to Kubernetes, and all should be fine (see my full Kubernetes deployment config below).
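In commands, that first deployment looks something like this (assuming the config shown at the end of this post is saved as deployment.yml):

docker push docker.gitlab.liip.ch/chregu/repo/ghost-chregu-tv:3.3
kubectl apply -f deployment.yml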
Next deployments
It is fine, until you try to redeploy changes with the Kubernetes default settings. Since the rook-ceph storage I am using for volumes doesn't support multiple read/write mounts (I'd have to use something like NFS for that), the new container won't start with the default strategy "RollingUpdate". That strategy usually makes total sense, since it can guarantee a downtime-free deployment. But in this case the new container can't start, because the old container is still running with the volume mounted. Switching to the strategy "Recreate" first shuts down the old container and then creates the new one, which can now mount the volume. Good enough, even if it means a downtime of a few seconds. I can live with that (if someone has a better idea, I'm all ears).
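The relevant excerpt from the Deployment spec (the full config is at the end of this post):

spec:
  replicas: 1
  strategy:
    type: Recreate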
All I have to do now for a new deployment is push the freshly built image to the Docker repo and then run the following to trigger a redeployment:
kubectl patch deployment ghost-chregu-tv -n chregu-tv -p "{\"spec\": {\"template\": {\"metadata\": { \"labels\": { \"redeploy\": \"$(date +%s)\"}}}}}"
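Newer kubectl versions (1.15 and up) also have a built-in command that achieves the same by patching a timestamp annotation into the pod template:

kubectl rollout restart deployment ghost-chregu-tv -n chregu-tv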
Backup
I prefer having backups of my content, you never know what happens, especially when your data lives on some esoteric distributed filesystem. Usually I'd just run a Kubernetes CronJob, do whatever has to be done (a database dump, for example) and push the result to an S3 bucket. But here again I couldn't mount the volume that holds all my content, for the same reason as above (again, all ears if this can be done somehow; in this case I'd only need read-only access).
The solution was to add a sidecar container to the deployment, which mounts the same content volume as the Ghost container, archives it and uploads the archive to S3. I used jobber as a cron replacement, since there are easy-to-use Docker images available for it. The Dockerfile for this:
FROM jobber:1.4-alpine3.10
USER root
# install the AWS CLI, then remove pip again to keep the image small
RUN apk -Uuv add --no-cache groff less python py-pip && \
    pip install awscli && \
    apk --purge -v del py-pip && \
    rm -f /var/cache/apk/*
# the jobber schedule and the actual backup script
ADD jobberfile /home/jobberuser/.jobber
ADD backup.sh /home/jobberuser/backup.sh
RUN chown jobberuser:jobberuser /home/jobberuser/.jobber
RUN chown jobberuser:jobberuser /home/jobberuser/backup.sh
USER jobberuser
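The jobberfile contains the schedule. A minimal one for a daily run could look like this (the job name and the 4 a.m. schedule here are just examples; jobber's time format is 'sec min hour mday mon wday'):

version: 1.4
jobs:
  GhostBackup:
    cmd: sh /home/jobberuser/backup.sh
    time: '0 0 4'
    onError: Continue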
That image runs the following script once a day, first creating a .tgz archive and then uploading it to S3:
#!/bin/sh
cd /tmp
TIMESTAMP="$(date +%Y%m%d%H%M%S)"
# archive the mounted content volume and push it to S3
tar -czf "/tmp/backup-ghost-blog-$TIMESTAMP.tgz" /content
aws s3 cp "/tmp/backup-ghost-blog-$TIMESTAMP.tgz" s3://YOUR_BUCKET/daily/
rm "/tmp/backup-ghost-blog-$TIMESTAMP.tgz"
I use the SQLite storage for now; if I used MySQL, for example, I'd also have to dump the database and include it in the backup before uploading to S3.
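With MySQL that would be a few more lines in backup.sh, roughly like this (DB_HOST, DB_USER, DB_PASSWORD and the database name ghost are placeholders, not from my setup):

mysqldump -h "$DB_HOST" -u "$DB_USER" -p"$DB_PASSWORD" ghost > "/tmp/backup-ghost-db-$TIMESTAMP.sql"
aws s3 cp "/tmp/backup-ghost-db-$TIMESTAMP.sql" s3://YOUR_BUCKET/daily/
rm "/tmp/backup-ghost-db-$TIMESTAMP.sql"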
The deployment.yml
And here comes my full deployment.yml, for completeness' sake. It includes some Ghost config settings for the rokka storage adapter and also sets up Let's Encrypt certificates for the HTTPS endpoints, in case you have that configured in your cluster.
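A side note on the storage__* environment variables: Ghost maps double underscores in environment variable names to nested config keys, so they're equivalent to this snippet in a config.production.json:

{
  "storage": {
    "active": "ghost-storage-rokka",
    "ghost-storage-rokka": {
      "key": "YOUR_KEY",
      "organization": "YOUR_ORG",
      "defaultStack": "SomeStack"
    }
  }
}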
With all that, I'm quite happy with how Ghost runs and how easily I can adapt it to my needs.
apiVersion: v1
kind: Namespace
metadata:
  name: chregu-tv
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ghost-chregu-tv
  labels:
    app: ghost-chregu-tv
  namespace: chregu-tv
spec:
  selector:
    matchLabels:
      app: ghost-chregu-tv
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: ghost-chregu-tv
    spec:
      containers:
        - name: ghost-chregu-tv
          imagePullPolicy: Always
          image: docker.gitlab.liip.ch/chregu/repo/ghost-chregu-tv:3.3
          ports:
            - containerPort: 2368
          env:
            - name: url
              value: https://chregu.tv/
            - name: storage__active
              value: ghost-storage-rokka
            - name: storage__ghost-storage-rokka__key
              value: YOUR_KEY
            - name: storage__ghost-storage-rokka__organization
              value: YOUR_ORG
            - name: storage__ghost-storage-rokka__defaultStack
              value: SomeStack
            - name: NODE_ENV
              value: production
          livenessProbe:
            tcpSocket:
              port: 2368
            initialDelaySeconds: 30
            periodSeconds: 60
            timeoutSeconds: 5
          volumeMounts:
            - mountPath: /var/lib/ghost/content
              name: ghost-chregu-tv-storage
        - image: docker.gitlab.liip.ch/chregu/repo/ghost-chregu-tv-jobber:1.4
          imagePullPolicy: Always
          name: backup
          env:
            - name: AWS_ACCESS_KEY_ID
              value: YOUR_KEY
            - name: AWS_SECRET_ACCESS_KEY
              value: YOUR_SECRET_KEY
            - name: AWS_DEFAULT_REGION
              value: eu-central-1
          volumeMounts:
            - mountPath: /content
              name: ghost-chregu-tv-storage
              readOnly: true
      imagePullSecrets:
        - name: regcred
      volumes:
        - name: ghost-chregu-tv-storage
          persistentVolumeClaim:
            claimName: ghost-chregu-tv-claim
---
apiVersion: v1
kind: Service
metadata:
  name: ghost-chregu-tv-service
  namespace: chregu-tv
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 2368
      protocol: TCP
  selector:
    app: ghost-chregu-tv
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ghost-chregu-tv-claim
  namespace: chregu-tv
spec:
  storageClassName: rook-ceph-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/issuer: letsencrypt-prod
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/issuer-kind: ClusterIssuer
    external-dns.alpha.kubernetes.io/cloudflare-proxied: "false"
    nginx.org/client-max-body-size: "50m"
  name: ghost-chregu-tv
  namespace: chregu-tv
spec:
  rules:
    - host: www.chregu.tv
      http:
        paths:
          - backend:
              serviceName: ghost-chregu-tv-service
              servicePort: 80
    - host: chregu.tv
      http:
        paths:
          - backend:
              serviceName: ghost-chregu-tv-service
              servicePort: 80
  tls:
    - hosts:
        - chregu.tv
        - www.chregu.tv
      secretName: ghost-chregu-tv-cert