

After Dark K3s

Run After Dark in a lightweight K3s cluster providing the power of Kubernetes container orchestration for a fraction of the resources.

Installation

Installation takes two steps: first set up k3s, then deploy After Dark to your cluster.

Step 1: Download and install k3s via script

Download and install k3s via script as suggested on the K3s website:

curl -sfL https://get.k3s.io | sh -
# Check for Ready node, takes maybe 30 seconds
kubectl get node

Run this on the host machine you wish to use to manage your cluster. This could be a local development machine, an RPi, or even a VPS with 512 MB RAM. The K3s installation script gives you access to the kubectl command, letting you perform container orchestration straight away.
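If kubectl complains about permissions when run as a regular user, note that k3s writes its kubeconfig to /etc/rancher/k3s/k3s.yaml by default (path per the k3s docs; adjust if your install differs):

```shell
# Point kubectl at the k3s-generated kubeconfig (default k3s location).
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kubectl get node
```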

Step 2: Deploy After Dark to your k3s cluster

Pull down Deployment files to host machine:

git clone https://git.habd.as/teowood/after-dark-k3s-amd64

Use kubectl to apply the deployment files and get things kicked off:

cd after-dark-k3s-amd64 && kubectl apply -f .

Your cluster is now running a containerized version of After Dark, accessible via browser.
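To confirm everything came up, you can ask kubectl to wait until the builder pod reports Ready (the app=after-dark-hugo label is assumed from the deployment manifests; adjust if yours differ):

```shell
# Block until the Hugo builder pod is Ready, or give up after two minutes.
kubectl wait --for=condition=Ready pod -l app=after-dark-hugo --timeout=120s
kubectl get pods
```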

Continue reading for an overview of the deployment and basic usage.

Overview of deployment

This is a multi-tier deployment of After Dark designed to run on k3s or any other Kubernetes cluster in a microservices fashion. It is currently meant for a single-node cluster, though you can still test a few things (see below) on a multi-node cluster. Your site is built by Hugo and served by a separate nginx web server, which is exposed as a service inside your cluster. Combined with a Traefik ingress host rule, it can face the web while abstracting its backend. The deployment consists of the following manifests:

  • after-dark-k3s-hugo.yaml deploys a pod containing Hugo that first downloads After Dark from its source repo, installs it, and finally starts Hugo in watch mode. This pod serves as our builder, since Hugo rebuilds the After Dark site as files change.
  • after-dark-nginx.yaml deploys an nginx web server that serves the content of the rendered site. As file changes occur, Hugo rebuilds the After Dark site and nginx picks up the changes.
  • after-dark-service.yaml exposes nginx via a NodePort so we can actually reach the site from a browser.
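Once the manifests are applied, you can see the tiers side by side (resource names follow the manifests above):

```shell
# Builder deployment, web server deployment, and the NodePort service in one view.
kubectl get deployments,services -o wide
```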

Usage

Operating your k3s After Dark site

  • First make sure your pods have status Running by issuing the command
 kubectl get po
  • Now retrieve your pod name and store it in a variable for your convenience
AD_POD=$(kubectl get pods -l app=after-dark-hugo -o jsonpath='{.items[0].metadata.name}')
  • Create a new post
kubectl exec $AD_POD -- hugo -c flying-toasters/content/post new new-post.md
  • Apply custom styling
kubectl exec $AD_POD -- mkdir -p flying-toasters/assets/css
kubectl cp custom.css $AD_POD:/after-dark/flying-toasters/assets/css/
  • Build your draft posts
kubectl exec  $AD_POD -- hugo -D -d /output
  • Fully rebuild your site, e.g. after new styles are applied
kubectl exec $AD_POD -- hugo --source=flying-toasters/ -d /output

Note: this will also revert drafts. Also, if you recreate your Hugo pod, this will clear the output from previous builds.

  • Edit your posts
kubectl exec -it $AD_POD -- vi flying-toasters/content/post/new-post.md

or write your post locally then push it to your pod.

kubectl cp hello-world.md $AD_POD:flying-toasters/content/post

Make sure you’ve got your front matter right.

  • Edit Hugo’s config.toml
kubectl exec -it $AD_POD -- vi flying-toasters/config.toml
  • Backup your posts. Create a backup folder locally on your host then
kubectl cp $AD_POD:flying-toasters/content/post backup/
  • Check the stream of logs for troubleshooting
kubectl logs -f $AD_POD
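The everyday publish loop above can be wrapped in a small script. This is a hypothetical helper, not part of the repo; the site name (flying-toasters) and label (app=after-dark-hugo) are the ones used throughout this README:

```shell
#!/bin/sh
# publish.sh — hypothetical helper: scaffold a post, push a local copy if one
# exists, then build drafts. Usage: publish.sh [post-name.md]
POST="${1:-new-post.md}"
AD_POD=$(kubectl get pods -l app=after-dark-hugo -o jsonpath='{.items[0].metadata.name}')
kubectl exec "$AD_POD" -- hugo -c flying-toasters/content/post new "$POST"      # scaffold
kubectl cp "$POST" "$AD_POD:flying-toasters/content/post/" 2>/dev/null || true  # or push a local copy
kubectl exec "$AD_POD" -- hugo -D -d /output                                    # build, drafts included
```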

Browsing to your site

To view your site, run kubectl get svc to locate the NodePort exposing nginx. You should see output like:

NAME                 TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
after-dark-service   NodePort   10.43.129.249   <none>        8080:32146/TCP   1h

Grab the port number next to 8080: and use it to browse to your site using the node IP or, locally on the host, the loopback address, e.g. http://localhost:32146. (Note: ignore the Cluster-IP and External-IP; you want the real IP of your host.)
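If you would rather grab the port in a script, the NodePort is the part of the PORT(S) column between the colon and /TCP. A small sketch (service name as defined in after-dark-service.yaml):

```shell
# Pull the service line without headers and cut the NodePort out of column 5.
SVC_LINE=$(kubectl get svc after-dark-service --no-headers)
NODE_PORT=$(echo "$SVC_LINE" | awk '{split($5, p, "[:/]"); print p[2]}')
echo "http://localhost:${NODE_PORT}"
```

Alternatively, kubectl's jsonpath output (-o jsonpath='{.spec.ports[0].nodePort}') gives you the port directly without any text parsing.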

Other notable features

  • Except for the /rendered-site folder, which you will find locally on your server node and which holds the built site, everything else lives inside the pod’s overlay filesystem, which is ephemeral. Therefore, make sure you back up your posts.
  • In case you delete your Hugo pod, or it crashes for other reasons, your site will still be served by nginx, which picks up the last known build from the output folder mentioned above. Whenever you delete your Hugo pod, the K8s scheduler recreates it from scratch, so you will start fresh and need to copy your content back.
  • If you are running a multi-node cluster (not currently recommended, unless you have refactored the deployments to use shared persistent volumes), both Hugo and nginx will always be scheduled on the same node to ensure nginx is fed properly.
  • The NodePort this deployment uses to expose the web server is bound on all worker nodes of your cluster if you have more than one. This means you can reach your site from any node (k8s service discovery and load balancing). In the event you lose the node that actually holds the site, your deployment will still be up, as it will be rescheduled on an available worker, but you will have to restore your posts.
  • Edit the provided ingress-after-dark.yaml manifest in the ingress folder to match your domain name, e.g. example.com, so that your site is publicly available at a nice URL instead of via the NodePort.
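For orientation, a host rule roughly takes the shape below. This is an illustrative sketch using the networking.k8s.io/v1 Ingress API, not the manifest shipped in this repo, which may target an older API group; only the service name and port are taken from this deployment:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: after-dark-ingress        # illustrative name
spec:
  rules:
  - host: example.com             # replace with your domain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: after-dark-service   # NodePort service from after-dark-service.yaml
            port:
              number: 8080
```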