
updated README, added docker files and configmap

fillkalpa 6 months ago

.gitignore

@@ -0,0 +1 @@

README.md

@@ -39,7 +39,7 @@ Continue reading for an overview of the deployment and basic usage.

## Overview of deployment

This is a multi-tier deployment for After Dark, meant to run on k3s or any other Kubernetes cluster in a microservices fashion. It is currently intended for a single-node cluster, but you can still test a few things (see below) on a multi-node cluster. Your site is built by Hugo and served by a separate nginx web server, which is exposed as a service inside your cluster. Combined with a Traefik ingress host rule, it can face the web independently of its backend. The deployment consists of the following manifests:

* ``after-dark-k3s-hugo.yaml`` deploys a pod using two containers: first, an ephemeral init container tasked with downloading After Dark from the [source repo]( and, finally, the actual Hugo container, which kicks in and installs the site. When done, it runs [`hugo server`]( in _watch mode_ so [Hugo]( rebuilds the After Dark site as files change.
* ``after-dark-nginx.yaml`` deploys an [nginx]( web server that serves the content of the rendered site. As file changes occur, Hugo rebuilds the After Dark site and nginx picks up the changes.
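Assuming the two manifest filenames above sit in the repo root, bringing the stack up is just a pair of ``kubectl apply`` calls. The sketch below only prints the commands so you can review them first; drop the ``echo`` to execute them:

```shell
# Print the apply commands for the two manifests listed above.
MANIFESTS="after-dark-k3s-hugo.yaml after-dark-nginx.yaml"
for m in $MANIFESTS; do
  echo "kubectl apply -f $m"
done
```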
@@ -49,15 +49,18 @@ Multi-tier deployment consisting of:

### Operating your k3s After Dark site

* First make sure the pods have status ``Running`` with ``$ kubectl get po``
* Now retrieve your pod name and store it in a variable for convenience: ``$ AD_POD=$(kubectl get pods -l app=after-dark-hugo -o jsonpath='{.items[0].metadata.name}')``
* Create a new post
``$ kubectl exec $AD_POD -- hugo new post/``
* Apply custom styling
``$ kubectl cp custom.css $AD_POD:/after-dark/flying-toasters/assets/css/custom.css``
* Build your draft posts
``$ kubectl exec $AD_POD -- hugo -D -d /output``
* Fully rebuild your site, e.g. after new styles are applied: ``$ kubectl exec $AD_POD -- hugo -d /output`` Note this will also revert drafts.
* Edit your posts with ``$ kubectl exec -it $AD_POD -- vi content/post/`` or write your post locally and then push it to your pod with ``$ kubectl cp <your-post.md> $AD_POD:content/post/``. Make sure you've got your [front matter]( right.
* Back up your posts. Create a backup folder locally on your host, then ``$ kubectl cp $AD_POD:content/post backup/``
* Check the stream of logs for troubleshooting ``$ kubectl logs -f $AD_POD``
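The bullet steps above can be collected into one sketch. The post name is a placeholder, and the commands are only printed as a plan, so nothing touches your cluster until you pipe ``$PLAN`` to ``sh``:

```shell
# Assemble the day-to-day workflow as a reviewable command plan.
POST=post/my-first-post.md   # placeholder name, pick your own
PLAN=$(cat <<EOF
AD_POD=\$(kubectl get pods -l app=after-dark-hugo -o jsonpath='{.items[0].metadata.name}')
kubectl exec \$AD_POD -- hugo new $POST
kubectl exec \$AD_POD -- hugo -D -d /output
kubectl cp \$AD_POD:content/post backup/
EOF
)
echo "$PLAN"
```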

### Browsing to your site

@@ -68,3 +71,12 @@ To view your site run `kubectl get svc` to locate the NodePort exposed to nginx.
| NAME | TYPE | CLUSTER-IP | EXTERNAL-IP | PORT(S) | AGE |
| --- | --- | --- | --- | --- | --- |
| after-dark-service | NodePort | | `<none>` | 8080:32146/TCP | 1h |

Grab the port number next to `8080:` and use it to browse to your site using the node IP or, locally on the host, using the loopback e.g. `http://localhost:32146`. (Note: Ignore the <samp>Cluster-IP</samp> and the <samp>External-IP</samp>, you want the real IP of your host.)
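The port extraction can be scripted. The service line below is a captured sample matching the table above; substitute the live ``kubectl get svc`` output from your own cluster:

```shell
# Pull the NodePort out of a captured `kubectl get svc` line.
SVC_LINE='after-dark-service NodePort <none> 8080:32146/TCP 1h'
NODE_PORT=$(echo "$SVC_LINE" | sed -n 's/.*8080:\([0-9]*\)\/TCP.*/\1/p')
echo "http://localhost:$NODE_PORT"
```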

### Other notable features

* Except for the ``/rendered-site`` folder, which you will find locally on your server node and which holds the built site, everything else lives inside the pod's overlay filesystem, which is [ephemeral]( Therefore make sure you back up your posts.
* In case you delete your hugo pod, or it crashes for other reasons, your site will still be served by nginx, which picks up the last known build from the output folder mentioned above. Whenever you delete your hugo pod, the K8s scheduler will recreate it from scratch, so you will start fresh and will need to copy your content back.
* If you are running a multi-node cluster (not currently recommended, unless you have refactored the deployments to use shared [persistent volumes](, both hugo and nginx will always be scheduled on the same node to ensure nginx is fed properly.
* The [nodeport]( this deployment uses to expose the web server is bound on all worker nodes of your cluster if you have more than one. This means you can reach your site from any node ([k8s service discovery and load balancing]( In the event you lose the node that actually holds the site, your deployment will still be up, as it will be rescheduled on an available worker, but you will have to restore your posts.
* Edit the provided ingress manifest in folder x to match your domain name so that your site is publicly available at a nice URL instead of the nodeport.
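As a sketch of that ingress edit (the domain and the YAML line below are stand-ins for illustration, not values from this repo), a one-line ``sed`` over the host rule is enough:

```shell
# Swap the ingress host rule for your own domain (both values are examples).
RULE='- host: CHANGEME'
echo "$RULE" | sed 's/host: .*/host: blog.example.com/'
```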

after-dark-k3s-hugo.yaml

@@ -25,8 +25,6 @@ spec:
- name: HUGO_WATCH # changed to real time build
value: "true"
- name: site-content
mountPath: /my-content
- name: rendered-site
mountPath: /output
- name: repo
@@ -45,10 +43,6 @@ spec:
- name: repo
emptyDir: {}
- name: site-content
path: /my-content
type: DirectoryOrCreate
- name: rendered-site
path: /rendered-site

after-dark-nginx.yaml

@@ -27,7 +27,7 @@ spec:
- after-dark-hugo
topologyKey: ""
- image: jojomi/nginx-static
- image: tkalpakid/after-dark-nginx-64
- containerPort: 80
name: after-dark-nginx-container
@@ -37,8 +37,8 @@ spec:
- mountPath: /var/www
name: output
- mountPath: /etc/nginx/sites-enabled
name: default-site
#- mountPath: /etc/nginx/sites-enabled
# name: default-site
- name: output
@@ -46,6 +46,6 @@ spec:
path: /rendered-site
# this field is optional
type: Directory
- name: default-site
name: default-site
#- name: default-site
# configMap:
# name: default-site

nginx-cm.yaml → configmaps/nginx-cm.yaml

dockerfiles/hugo/Dockerfile

@@ -0,0 +1,35 @@
# Use Alpine Linux as our base image so that we minimize the overall size of our final container, and minimize the surface area of packages that could be out of date.
FROM alpine:3.9@sha256:b3dbf31b77fd99d9c08f780ce6f5282aba076d70a513a8be859d8d3a4d0c92b8

LABEL description="Docker container for building static sites with the Hugo static site generator."
LABEL maintainer="Johannes Mitlmeier <>"

# config
ENV HUGO_TYPE=_extended

COPY ./ /
ADD${HUGO_VERSION}/${HUGO_ID}_Linux-64bit.tar.gz /tmp
RUN tar -xf /tmp/${HUGO_ID}_Linux-64bit.tar.gz -C /tmp \
&& mkdir -p /usr/local/sbin \
&& mv /tmp/hugo /usr/local/sbin/hugo \
&& rm -rf /tmp/${HUGO_ID}_linux_amd64 \
&& rm -rf /tmp/${HUGO_ID}_Linux-64bit.tar.gz \
&& rm -rf /tmp/ \
&& rm -rf /tmp/

RUN apk add --update git asciidoctor libc6-compat libstdc++ \
&& apk upgrade \
&& apk add --no-cache ca-certificates \
&& chmod 0777 /

# VOLUME /src
#VOLUME /after-dark/flying-toasters
VOLUME /output

WORKDIR /after-dark/flying-toasters
CMD ["/"]

#EXPOSE 1313

dockerfiles/hugo/renovate.json

@@ -0,0 +1,5 @@
"extends": [

dockerfiles/hugo/

@@ -0,0 +1,31 @@

echo "ARGS" $@

echo "Hugo path: $HUGO"
/after-dark/bin/install .
while [ true ]
do
    if [[ $HUGO_WATCH != 'false' ]]; then
        echo "Watching..."
        $HUGO server --watch=true --source="/after-dark/flying-toasters" --theme="$HUGO_THEME" --destination="$HUGO_DESTINATION" --baseURL="$HUGO_BASEURL" --bind="0.0.0.0" "$@" || exit 1
    else
        echo "Building one time..."
        $HUGO --source="/after-dark/flying-toasters" --theme="$HUGO_THEME" --destination="$HUGO_DESTINATION" --baseURL="$HUGO_BASEURL" "$@" || exit 1
    fi

    if [[ $HUGO_REFRESH_TIME == -1 ]]; then
        exit 0
    fi
    echo "Sleeping for $HUGO_REFRESH_TIME seconds..."
    sleep $HUGO_REFRESH_TIME
done

dockerfiles/hugo/

@@ -0,0 +1,56 @@
# set -o xtrace



# set version in Dockerfile
sed -i "s/HUGO_VERSION=[0-9.]\+/HUGO_VERSION=$VERSION/g" Dockerfile

# cleanup container
docker stop "$NAME"
docker rm "$NAME"
rm -rf test-output

# build image
docker build --no-cache=true --pull --tag jojomi/hugo:latest .

# verify image build
docker images | grep jojomi/hugo | grep latest

# run container
mkdir --parents test-output
docker run \
--env HUGO_WATCH=true \
--env HUGO_BASEURL=http://localhost:$PORT \
--name "$NAME" \
--volume "$(pwd)/test-src:/src" \
--volume "$(pwd)/test-output:/output" \
--publish "$PORT:1313" \
--detach \
jojomi/hugo:latest
docker ps | grep "$NAME"

# verify output
xdg-open http://localhost:$PORT > /dev/null

# ask for continuation
read -r -p "Does it work? [y/N] " prompt

# cleanup container
docker stop "$NAME"
docker rm "$NAME"
sudo rm -rf test-output

if [[ $prompt == "y" || $prompt == "Y" || $prompt == "yes" || $prompt == "Yes" ]]; then
    # git: commit, tag, push
    git add Dockerfile && git commit -m "version $VERSION" && git tag $VERSION && git push && git push --tags
    # open
    xdg-open > /dev/null
    xdg-open > /dev/null
fi

dockerfiles/nginx/Dockerfile

@@ -0,0 +1,17 @@
FROM armhf/alpine:latest

RUN apk add --update \
nginx \
&& rm -rf /var/cache/apk/*
RUN adduser www-data -G www-data -H -s /bin/false -D && mkdir /tmp/nginx/

ADD nginx.conf /etc/nginx/nginx.conf
ADD default.conf /etc/nginx/sites-enabled/default


VOLUME ["/var/www", "/var/log/nginx"]
WORKDIR /etc/nginx

CMD ["nginx"]

dockerfiles/nginx/default.conf

@@ -0,0 +1,15 @@
server {
listen 80 default_server;
listen [::]:80 default_server ipv6only=on;

root /var/www;
index index.html index.htm;

location / {
# First attempt to serve request as file, then
# as directory, then fall back to displaying a 404.
try_files $uri $uri/ /404.html;
# Uncomment to enable naxsi on this location
# include /etc/nginx/naxsi.rules
	}
}
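The ``try_files`` line above falls through file → directory → ``/404.html``. A small shell model of that lookup order (the docroot and filenames are made up for illustration, and the directory step is omitted for brevity):

```shell
# Model nginx's try_files: serve the exact file if it exists, else /404.html.
resolve() {
  if [ -e "docroot$1" ]; then
    echo "$1"
  else
    echo "/404.html"
  fi
}
mkdir -p docroot
: > docroot/index.html   # pretend Hugo rendered a homepage here
resolve /index.html       # present, served directly
resolve /missing.html     # absent, falls back to the 404 page
```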

dockerfiles/nginx/nginx.conf

@@ -0,0 +1,27 @@
daemon off;
user www-data;
worker_processes 1;

error_log /var/log/nginx/error.log;
pid /var/run/;

events {
worker_connections 1024;
}

http {
include /etc/nginx/mime.types;
default_type application/octet-stream;

access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;

sendfile on;

keepalive_timeout 65;
tcp_nodelay on;

gzip on;

include /etc/nginx/sites-enabled/*;
}