
15th August 2023



Bash snippets

Configure Docker to host the application

  • New: Add healthcheck to your dockers.

    Health checks allow a container to expose its workload’s availability. This stands apart from whether the container is running. If your database goes down, your API server won’t be able to handle requests, even though its Docker container is still running.

    This makes for unhelpful experiences during troubleshooting. A simple docker ps would report the container as available. Adding a health check extends the docker ps output to include the container’s true state.

    You configure container health checks in your Dockerfile with the HEALTHCHECK instruction. It accepts a command which the Docker daemon will execute every 30 seconds by default. Docker uses the command’s exit code to determine your container’s healthiness:

    • 0: The container is healthy and working normally.
    • 1: The container is unhealthy; the workload may not be functioning.

    Healthiness isn’t checked straightaway when containers are created. The status will show as starting before the first check runs. This gives the container time to execute any startup tasks. A container with a passing health check will show as healthy; an unhealthy container displays unhealthy.
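
    On the Dockerfile side, a minimal sketch could look like this (the base image, endpoint, and timings are illustrative, not from the original):

```dockerfile
FROM nginx:latest

# Probe the workload every 30s; curl -f makes HTTP error codes count as failures
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD curl -f http://localhost/ || exit 1
```

    After the container starts, `docker ps` will show `(health: starting)`, then `(healthy)` or `(unhealthy)` depending on the probe's exit code.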

    In docker-compose you can write the health check like this:

    version: '3.4'

    services:
      jellyfin:
        image: linuxserver/jellyfin:latest
        container_name: jellyfin
        restart: unless-stopped
        healthcheck:
          test: curl -f http://localhost:8096/health || exit 1
          interval: 10s
          retries: 5
          start_period: 5s
          timeout: 10s
  • New: List the dockers of a registry.

    List all repositories (effectively images):

    $: curl -X GET https://myregistry:5000/v2/_catalog
    > {"repositories":["redis","ubuntu"]}

    List all tags for a repository:

    $: curl -X GET https://myregistry:5000/v2/ubuntu/tags/list
    > {"name":"ubuntu","tags":["14.04"]}

    If the registry needs authentication, you have to specify the username and password in the curl command:

    curl -X GET -u <user>:<pass> https://myregistry:5000/v2/_catalog
    curl -X GET -u <user>:<pass> https://myregistry:5000/v2/ubuntu/tags/list
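
    If you want just the repository names, one per line, you can pipe the response through jq (assuming it's installed). The sample response is inlined here so you can try it offline:

```shell
# Sample registry response, as returned by /v2/_catalog
response='{"repositories":["redis","ubuntu"]}'

# Print one repository per line
echo "$response" | jq -r '.repositories[]'
```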




  • New: Remove tags.

    To delete a local tag you can run:

    git tag -d {{ tag_name }}

    To remove it from the remote:

    git push --delete origin {{ tag_name }}
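
    If you need to clean up many tags at once, a hypothetical sketch (the `v0.*` pattern and the `origin` remote name are placeholders):

```shell
# Delete every tag matching a pattern, on the remote and then locally
tags=$(git tag -l 'v0.*')
for tag in $tags; do
  git push --delete origin "$tag"   # remote tag
  git tag -d "$tag"                 # local tag
done
```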


Infrastructure as Code

Ansible Snippets

  • New: Run command on a working directory.

    - name: Change the working directory to somedir/ and run the command as db_owner
      ansible.builtin.command: /usr/bin/make_database.sh db_user db_name
      become: yes
      become_user: db_owner
      args:
        chdir: somedir/
        creates: /path/to/database
  • New: Run handlers in the middle of the tasks file.

    If you need handlers to run before the end of the play, add a task to flush them using the meta module, which executes Ansible actions:

      - name: Some tasks go here ...
      - name: Flush handlers
        meta: flush_handlers
      - name: Some other tasks ...

    The meta: flush_handlers task triggers any handlers that have been notified at that point in the play.

    Once handlers are executed, either automatically after each mentioned section or manually by the flush_handlers meta task, they can be notified and run again in later sections of the play.
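
    A fuller hypothetical sketch, where a config change must be applied before a later task depends on it (the nginx service and paths are illustrative):

```yaml
tasks:
  - name: Deploy the nginx config
    ansible.builtin.template:
      src: nginx.conf.j2
      dest: /etc/nginx/nginx.conf
    notify: Restart nginx

  - name: Flush handlers so nginx is restarted right now
    meta: flush_handlers

  - name: Smoke test the freshly restarted service
    ansible.builtin.uri:
      url: http://localhost/health

handlers:
  - name: Restart nginx
    ansible.builtin.service:
      name: nginx
      state: restarted
```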

  • New: Run command idempotently.

    - name: Register the runner in gitea
      become: true
      command: act_runner register --config config.yaml --no-interactive --instance {{ gitea_url }} --token {{ gitea_docker_runner_token }}
      args:
        creates: /var/lib/gitea_docker_runner/.runner
  • New: Get the correct architecture string.

    If you have an amd64 host you'll get x86_64, but sometimes you need the amd64 string. In those cases you can use the next snippet:

    vars:
      deb_architecture:
        aarch64: arm64
        x86_64: amd64

    tasks:
      - name: Download the act runner binary
        become: True
        ansible.builtin.get_url:
          # Placeholder url: the original download url was lost
          url: "https://example.com/act_runner-linux-{{ deb_architecture[ansible_architecture] }}"
          dest: /usr/bin/act_runner
          mode: '0755'
  • New: Check the instances that are going to be affected by playbook run.

    Useful to list the instances of a dynamic inventory

    ansible-inventory -i aws_ec2.yaml --list
  • New: Check if variable is defined or empty.

    In Ansible playbooks it is often good practice to test whether a variable exists and what its value is.

    In particular, this helps to avoid the different “VARIABLE IS NOT DEFINED” errors in Ansible playbooks.

    In this context there are several useful tests that you can apply using Jinja2 filters in Ansible.

  • New: Check if Ansible variable is defined (exists).

    - shell: echo "The variable 'foo' is defined: '{{ foo }}'"
      when: foo is defined
    - fail: msg="The variable 'bar' is not defined"
      when: bar is undefined
  • New: Check if Ansible variable is empty.

    - fail: msg="The variable 'bar' is empty"
      when: bar|length == 0
    - shell: echo "The variable 'foo' is not empty: '{{ foo }}'"
      when: foo|length > 0
  • New: Check if Ansible variable is defined and not empty.

    - shell: echo "The variable 'foo' is defined and not empty"
      when: (foo is defined) and (foo|length > 0)
    - fail: msg="The variable 'bar' is not defined or empty"
      when: (bar is not defined) or (bar|length == 0)
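
    A related pattern, if you want a fallback value instead of failing: Jinja2's default filter. Passing true as the second argument also treats empty strings as undefined (the fallback value here is illustrative):

```yaml
- name: Use a fallback when 'foo' is undefined or empty
  ansible.builtin.debug:
    msg: "{{ foo | default('fallback_value', true) }}"
```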
  • New: Download a file.

    - name: Download foo.conf
      ansible.builtin.get_url:
        url: http://example.com/path/file.conf
        dest: /etc/foo.conf
        mode: '0440'


  • Correction: Configure the gitea actions.

    So far there is only one possible runner which is based on docker and act. Currently, the only way to install act runner is by compiling it yourself, or by using one of the pre-built binaries. There is no Docker image or other type of package management yet. At the moment, act runner should be run from the command line. Of course, you can also wrap this binary in something like a system service, supervisord, or Docker container.

    You can create the default configuration of the runner with:

    ./act_runner generate-config > config.yaml

    There you can tweak, for example, the capacity, so that you are able to run more than one workflow in parallel.

    Before running a runner, you should first register it to your Gitea instance using the following command:

    ./act_runner register --config config.yaml --no-interactive --instance <instance> --token <token>

    Finally, it’s time to start the runner.

    ./act_runner --config config.yaml daemon

    If you want to create your own act Docker image, you can start with this Dockerfile:

    FROM node:16-bullseye
    LABEL prune=false
    RUN mkdir /root/.aws
    COPY files/config /root/.aws/config
    COPY files/credentials /root/.aws/credentials
    RUN apt-get update && apt-get install -y \
      python3 \
      python3-pip \
      python3-venv \
      screen \
      vim \
      && python3 -m pip install --upgrade pip \
      && rm -rf /var/lib/apt/lists/*
    RUN pip install \
      molecule==5.0.1 \
      ansible==8.0.0 \
      ansible-lint \
      yamllint \
      molecule-plugins[ec2,docker,vagrant] \
      boto3 \
      botocore \
      testinfra
    RUN wget https://download.docker.com/linux/static/stable/x86_64/docker-24.0.2.tgz \
      && tar xvzf docker-24.0.2.tgz \
      && cp docker/* /usr/bin \
      && rm -r docker docker-*

    It's prepared for:

    • Working within an AWS environment
    • Run Ansible and molecule
    • Build dockers
  • New: Build a docker within a gitea action.

    Assuming you're using the custom gitea_runner docker proposed above you can build and upload a docker to a registry with this action:

    name: Publish Docker image

    "on": [push]

    jobs:
      publish:
        runs-on: ubuntu-latest
        steps:
          - name: Checkout code
            uses: actions/checkout@v3
          - name: Login to Docker Registry
            uses: docker/login-action@v2
            with:
              registry: registry.example.com  # placeholder registry
              username: ${{ secrets.REGISTRY_USERNAME }}
              password: ${{ secrets.REGISTRY_PASSWORD }}
          - name: Set up QEMU
            uses: docker/setup-qemu-action@v2
          - name: Set up Docker Buildx
            uses: docker/setup-buildx-action@v2
          - name: Extract metadata (tags, labels) for Docker
            id: meta
            uses: docker/metadata-action@v4
            with:
              images: registry.example.com/my-image  # placeholder image name
          - name: Build and push
            uses: docker/build-push-action@v2
            with:
              context: .
              platforms: linux/amd64,linux/arm64
              push: true
              # The cache refs were lost in the original; placeholders below
              cache-from: type=registry,ref=registry.example.com/my-image:buildcache
              cache-to: type=registry,ref=registry.example.com/my-image:buildcache,mode=max
              tags: ${{ steps.meta.outputs.tags }}
              labels: ${{ steps.meta.outputs.labels }}

    It uses a pair of nice features:

    • Multi-arch builds
    • Cache to speed up the builds

    As it reacts to all events it will build and push:

    • A tag with the branch name on each push to that branch
    • A tag with the tag on tag push
  • New: Bump the version of a repository on commits on master.

    • Create a SSH key for the CI to send commits to protected branches.
    • Upload the private key to a repo or organization secret called DEPLOY_SSH_KEY.
    • Upload the public key to the repo configuration deploy keys
    • Create the bump.yaml file with the next contents:

      name: Bump version

      "on":
        push:
          branches:
            - main

      jobs:
        bump:
          if: "!startsWith(github.event.head_commit.message, 'bump:')"
          runs-on: ubuntu-latest
          name: "Bump version and create changelog"
          steps:
            - name: Check out
              uses: actions/checkout@v3
              with:
                fetch-depth: 0  # Fetch all history
            - name: Configure SSH
              env:
                SSH_AUTH_SOCK: /tmp/ssh_agent.sock
              run: |
                mkdir -p ~/.ssh
                echo "${{ secrets.DEPLOY_SSH_KEY }}" > ~/.ssh/deploy_key
                chmod 600 ~/.ssh/deploy_key
                dos2unix ~/.ssh/deploy_key
                ssh-agent -a $SSH_AUTH_SOCK > /dev/null
                ssh-add ~/.ssh/deploy_key
            - name: Bump the version
              run: cz bump --changelog --no-verify
            - name: Push changes
              env:
                SSH_AUTH_SOCK: /tmp/ssh_agent.sock
              run: |
                # Placeholder remote: the original url was lost
                git remote add ssh git@gitea.example.com:org/repo.git
                git pull ssh main
                git push ssh main
                git push ssh --tags

      It assumes that you have cz (commitizen) and dos2unix installed in your runner.

  • New: Skip gitea actions job on changes of some files.

    There are some expensive CI pipelines that don't need to run on every change, for example if you only touched the documentation. To skip a pipeline on changes of certain files you can use the paths-ignore directive:

    name: Ansible Testing

    "on":
      push:
        paths-ignore:
          - 'meta/**'
          - Makefile
          - renovate.json
          - .cz.toml
          - '.gitea/workflows/**'

    jobs:
      test:
        name: Test
        runs-on: ubuntu-latest

    The only downside is that if you set this pipeline as required in the branch protection, the merge button will look yellow instead of green when the pipeline is skipped.

  • New: Molecule doesn't find the molecule.yaml file.

    This is expected default behavior since Molecule searches for scenarios using the molecule/*/molecule.yml glob. But if you would like to change the suffix to yaml, you can do that if you set the MOLECULE_GLOB environment variable like this:

    export MOLECULE_GLOB='molecule/*/molecule.yaml'


  • New: Create a list of resources based on a list of strings.

    variable "subnet_ids" {
      type = list(string)
    }

    resource "aws_instance" "server" {
      # Create one instance for each subnet
      count = length(var.subnet_ids)

      ami           = "ami-a1b2c3d4"
      instance_type = "t2.micro"
      subnet_id     = var.subnet_ids[count.index]

      tags = {
        Name = "Server ${count.index}"
      }
    }

    If you want to use this generated list on another resource, extracting for example the id, you can use the splat expression aws_instance.server[*].id.
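
    For instance, a hypothetical output collecting all the ids with a splat expression:

```hcl
output "server_ids" {
  # One id per created instance, in subnet order
  value = aws_instance.server[*].id
}
```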


Infrastructure Solutions

Kubectl Commands

  • New: Run a pod in a defined node.

    Get the node hostnames with kubectl get nodes, then override the node with:

    kubectl run mypod --image ubuntu:18.04 --overrides='{"apiVersion": "v1", "spec": {"nodeSelector": { "kubernetes.io/hostname": "my-node.internal" }}}' --command -- sleep 100000000000000
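
    The override is equivalent to this pod manifest (a sketch; the label value must match what kubectl get nodes reports for your node):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  nodeSelector:
    kubernetes.io/hostname: my-node.internal
  containers:
    - name: mypod
      image: ubuntu:18.04
      command: ["sleep", "100000000000000"]
```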

Automating Processes


  • New: Introduce copier.

    Copier is a library and CLI app for rendering project templates.

    • Works with local paths and Git URLs.
    • Your project can include any file and Copier can dynamically replace values in any kind of text file.
    • It generates a beautiful output and takes care of not overwriting existing files unless instructed to do so.




  • New: Rename or move a dataset.

    NOTE: if you want to rename the topmost dataset look at rename the topmost dataset instead.

    File systems can be renamed by using the zfs rename command. You can perform the following operations:

    • Change the name of a file system.
    • Relocate the file system within the ZFS hierarchy.
    • Change the name of a file system and relocate it within the ZFS hierarchy.

    The following example uses the rename subcommand to rename a file system from kustarz to kustarz_old:

    zfs rename tank/home/kustarz tank/home/kustarz_old

    The following example shows how to use zfs rename to relocate a file system:

    zfs rename tank/home/maybee tank/ws/maybee

    In this example, the maybee file system is relocated from tank/home to tank/ws. When you relocate a file system through rename, the new location must be within the same pool and it must have enough disk space to hold this new file system. If the new location does not have enough disk space, possibly because it has reached its quota, the rename operation fails.

    The rename operation attempts an unmount/remount sequence for the file system and any descendent file systems. The rename command fails if the operation is unable to unmount an active file system. If this problem occurs, you must forcibly unmount the file system.

    You'll lose the snapshots though, as explained below.

  • New: Rename the topmost dataset.

    If you want to rename the topmost dataset you need to rename the pool too as these two are tied.

    $: zpool status -v
      pool: tets
     state: ONLINE
     scrub: none requested
            NAME        STATE     READ WRITE CKSUM
            tets        ONLINE       0     0     0
              c0d1      ONLINE       0     0     0
              c1d0      ONLINE       0     0     0
              c1d1      ONLINE       0     0     0
    errors: No known data errors

    To fix this, first export the pool:

    $ zpool export tets

    Then import it with the correct name:

    $ zpool import tets test

    After the import completes, the pool shows the correct name:

    $ zpool status -v
      pool: test
     state: ONLINE
     scrub: none requested
            NAME        STATE     READ WRITE CKSUM
            test        ONLINE       0     0     0
              c0d1      ONLINE       0     0     0
              c1d0      ONLINE       0     0     0
              c1d1      ONLINE       0     0     0
    errors: No known data errors

    Now you may need to fix the ZFS mountpoints for each dataset:

    zfs set mountpoint=/opt/zones/[new_mountpoint] [zpool]/[dataset]
  • New: Rename or move snapshots.

    If the dataset has snapshots you need to rename them too. They must be renamed within the same pool and dataset from which they were created though. For example:

    zfs rename tank/home/cindys@083006 tank/home/cindys@today

    In addition, the following shortcut syntax is equivalent to the preceding syntax:

    zfs rename tank/home/cindys@083006 today

    The following snapshot rename operation is not supported because the target pool and file system name are different from the pool and file system where the snapshot was created:

    $: zfs rename tank/home/cindys@today pool/home/cindys@saturday
    cannot rename to 'pool/home/cindys@today': snapshots must be part of same dataset

    You can recursively rename snapshots by using the zfs rename -r command. For example:

    $: zfs list
    NAME                         USED  AVAIL  REFER  MOUNTPOINT
    users                        270K  16.5G    22K  /users
    users/home                    76K  16.5G    22K  /users/home
    users/home@yesterday            0      -    22K  -
    users/home/markm              18K  16.5G    18K  /users/home/markm
    users/home/markm@yesterday      0      -    18K  -
    users/home/marks              18K  16.5G    18K  /users/home/marks
    users/home/marks@yesterday      0      -    18K  -
    users/home/neil               18K  16.5G    18K  /users/home/neil
    users/home/neil@yesterday       0      -    18K  -
    $: zfs rename -r users/home@yesterday @2daysago
    $: zfs list -r users/home
    NAME                        USED  AVAIL  REFER  MOUNTPOINT
    users/home                   76K  16.5G    22K  /users/home
    users/home@2daysago            0      -    22K  -
    users/home/markm             18K  16.5G    18K  /users/home/markm
    users/home/markm@2daysago      0      -    18K  -
    users/home/marks             18K  16.5G    18K  /users/home/marks
    users/home/marks@2daysago      0      -    18K  -
    users/home/neil              18K  16.5G    18K  /users/home/neil
    users/home/neil@2daysago       0      -    18K  -
  • New: See the differences between two backups.

    To identify the differences between two snapshots, use syntax similar to the following:

    $ zfs diff tank/home/tim@snap1 tank/home/tim@snap2
    M       /tank/home/tim/
    +       /tank/home/tim/fileB

    The following table summarizes the file or directory changes that are identified by the zfs diff command.

    File or Directory Change                                              Identifier
    File or directory has been modified or its link has changed           M
    Present in the older snapshot but not in the more recent snapshot     -
    Present in the more recent snapshot but not in the older snapshot     +
    File or directory has been renamed                                    R
  • New: Create a cold backup of a series of datasets.

    If you've used the -o keyformat=raw -o keylocation=file:///etc/zfs/keys/home.key arguments to encrypt your datasets, you can't use a keyformat=passphrase encryption on the cold storage device. You need to copy those keys onto the disk. One way of doing it is to:

    • Create a 100M LUKS partition protected with a passphrase where you store the keys.
    • The rest of the space is left for a partition for the zpool.
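
    A hypothetical sketch of that layout (the device name, partition sizes and key path are placeholders, and these commands are destructive, so double check before running them):

```shell
# /dev/sdX is the cold storage disk: a 100M partition for the keys,
# the rest for the backup zpool
parted --script /dev/sdX mklabel gpt \
  mkpart keys 1MiB 101MiB \
  mkpart pool 101MiB 100%

# Passphrase-protected LUKS partition holding the raw dataset keys
cryptsetup luksFormat /dev/sdX1
cryptsetup open /dev/sdX1 cold_keys
mkfs.ext4 /dev/mapper/cold_keys
mount /dev/mapper/cold_keys /mnt
cp /etc/zfs/keys/home.key /mnt/
umount /mnt
cryptsetup close cold_keys

# The backup pool goes on the second partition
zpool create cold_backup /dev/sdX2
```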
  • New: Clear a permanent ZFS error in a healthy pool.

    Sometimes when you do a zpool status you may see that the pool is healthy but that there are "Permanent errors" that may point to files themselves or directly to memory locations.

    You can read this long discussion on what these permanent errors mean, but what solved the issue for me was to run a new scrub:

    zpool scrub my_pool

    It takes a long time to run, so be patient.

  • New: ZFS pool is in suspended mode.

    Probably because you've unplugged a device without unmounting it.

    If you want to remount the device you can follow these steps to symlink the new devfs entries to where zfs thinks the vdev is. That way you can regain access to the pool without a reboot.

    So if zpool status says the vdev is /dev/disk2s1, but the reattached drive is at disk4, then do the following:

    cd /dev
    sudo rm -f disk2s1
    sudo ln -s disk4s1 disk2s1
    sudo zpool clear -F WD_1TB
    sudo zpool export WD_1TB
    sudo rm disk2s1
    sudo zpool import WD_1TB

    If you don't care about the zpool anymore, sadly your only solution is to reboot the server. Real ugly, so be careful when you umount zpools.

  • New: Prune snapshots.

    If you want to manually prune the snapshots after you tweaked sanoid.conf you can run:

    sanoid --prune-snapshots
  • New: Send encrypted backups to a encrypted dataset.

    syncoid's default behaviour is to create the destination dataset without encryption so the snapshots are transferred and can be read without encryption. You can check this with the zfs get encryption,keylocation,keyformat command both on source and destination.

    To prevent this from happening you have to pass --sendoptions='w' to syncoid so that it tells zfs to send a raw stream. If you do so, you also need to transfer the key file to the destination server so that it can do a zfs load-key and then mount the dataset. For example:

    server-host:$ sudo zfs list -t filesystem
    NAME                    USED  AVAIL     REFER  MOUNTPOINT
    server_data             232M  38.1G      230M  /var/server_data
    server_data/log         111K  38.1G      111K  /var/server_data/log
    server_data/mail        111K  38.1G      111K  /var/server_data/mail
    server_data/nextcloud   111K  38.1G      111K  /var/server_data/nextcloud
    server_data/postgres    111K  38.1G      111K  /var/server_data/postgres
    server-host:$ sudo zfs get keylocation server_data/nextcloud
    NAME                   PROPERTY     VALUE                                    SOURCE
    server_data/nextcloud  keylocation  file:///root/zfs_dataset_nextcloud_pass  local
    server-host:$ sudo syncoid --recursive --skip-parent --sendoptions=w server_data root@
    INFO: Sending oldest full snapshot server_data/log@autosnap_2021-06-18_18:33:42_yearly (~ 49 KB) to new target filesystem:
    17.0KiB 0:00:00 [1.79MiB/s] [=================================================>                                                                                                  ] 34%
    INFO: Updating new target filesystem with incremental server_data/log@autosnap_2021-06-18_18:33:42_yearly ... syncoid_caedrium.com_2021-06-22:10:12:55 (~ 15 KB):
    41.2KiB 0:00:00 [78.4KiB/s] [===================================================================================================================================================] 270%
    INFO: Sending oldest full snapshot server_data/mail@autosnap_2021-06-18_18:33:42_yearly (~ 49 KB) to new target filesystem:
    17.0KiB 0:00:00 [ 921KiB/s] [=================================================>                                                                                                  ] 34%
    INFO: Updating new target filesystem with incremental server_data/mail@autosnap_2021-06-18_18:33:42_yearly ... syncoid_caedrium.com_2021-06-22:10:13:14 (~ 15 KB):
    41.2KiB 0:00:00 [49.4KiB/s] [===================================================================================================================================================] 270%
    INFO: Sending oldest full snapshot server_data/nextcloud@autosnap_2021-06-18_18:33:42_yearly (~ 49 KB) to new target filesystem:
    17.0KiB 0:00:00 [ 870KiB/s] [=================================================>                                                                                                  ] 34%
    INFO: Updating new target filesystem with incremental server_data/nextcloud@autosnap_2021-06-18_18:33:42_yearly ... syncoid_caedrium.com_2021-06-22:10:13:42 (~ 15 KB):
    41.2KiB 0:00:00 [50.4KiB/s] [===================================================================================================================================================] 270%
    INFO: Sending oldest full snapshot server_data/postgres@autosnap_2021-06-18_18:33:42_yearly (~ 50 KB) to new target filesystem:
    17.0KiB 0:00:00 [1.36MiB/s] [===============================================>                                                                                                    ] 33%
    INFO: Updating new target filesystem with incremental server_data/postgres@autosnap_2021-06-18_18:33:42_yearly ... syncoid_caedrium.com_2021-06-22:10:14:11 (~ 15 KB):
    41.2KiB 0:00:00 [48.9KiB/s] [===================================================================================================================================================] 270%
    server-host:$ sudo scp /root/zfs_dataset_nextcloud_pass
    backup-host:$ sudo zfs set keylocation=file:///root/zfs_dataset_nextcloud_pass  backup_pool/nextcloud
    backup-host:$ sudo zfs load-key backup_pool/nextcloud
    backup-host:$ sudo zfs mount backup_pool/nextcloud

    If you also want to keep the encryptionroot you need to let zfs take care of the recursion instead of syncoid. In this case you can't use syncoid's options like --exclude. From the manpage of zfs:

    -R, --replicate
       Generate a replication stream package, which will replicate the specified file system, and all descendent file systems, up to the named snapshot. When received, all properties, snapshots, descendent file systems, and clones are preserved.

       If the -i or -I flags are used in conjunction with the -R flag, an incremental replication stream is generated. The current values of properties, and current snapshot and file system names are set when the stream is received. If the -F flag is specified when this stream is received, snapshots and file systems that do not exist on the sending side are destroyed. If the -R flag is used to send encrypted datasets, then -w must also be specified.

    In this case this should work:

    /sbin/syncoid --recursive --force-delete --sendoptions="Rw" zpool/backups zfs-recv@

ZFS Prometheus exporter

  • Correction: Update the alerts to the more curated version.
  • New: Useful inhibits.

    Sometimes you may want to inhibit some of these rules for some of your datasets. These subsections should be added to the alertmanager.yml file under the inhibit_rules field.

    Ignore snapshots on some datasets: Sometimes you don't want to do snapshots on a dataset

    - target_matchers:
        - alertname = ZfsDatasetWithNoSnapshotsError
        - hostname = my_server_1
        - filesystem = tmp

    Ignore snapshot growth: Sometimes you don't mind if the size of the data saved in a filesystem doesn't change much between snapshots, especially in the most frequent backups, because you prefer to keep the backup cadence. It's still interesting to have the alert, though, so that you get notified of the datasets that don't change much and can tweak your backup policy (even if zfs snapshots are almost free).

      - target_matchers:
        - alertname =~ "ZfsSnapshotType(Frequently|Hourly)SizeError"
        - filesystem =~ "(media/(docs|music))"



  • New: Disregard monitoring.

    I've skimmed through the prometheus metrics exposed at :9300/metrics in the core and they aren't that useful :(

Operating Systems


Linux Snippets

  • New: Get the current git branch.

    git branch --show-current
  • New: Install latest version of package from backports.

    Add the backports repository:

    vi /etc/apt/sources.list.d/bullseye-backports.list
    deb http://deb.debian.org/debian bullseye-backports main contrib
    deb-src http://deb.debian.org/debian bullseye-backports main contrib

    Configure the package to be pulled from backports

    vi /etc/apt/preferences.d/90_zfs
    Package: src:zfs-linux
    Pin: release n=bullseye-backports
    Pin-Priority: 990
  • New: Rename multiple files matching a pattern.

    There is rename that looks nice, but you need to install it. Using only find you can do:

    find . -name '*yml' -exec bash -c 'echo mv "$0" "${0/yml/yaml}"' {} \;

    If it shows what you expect, remove the echo.
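
    If you want to try it safely first, here's a sketch against throwaway files in a temp dir:

```shell
# Create a scratch directory with two files to rename
dir=$(mktemp -d)
touch "$dir/one.yml" "$dir/two.yml"

# Same pattern as above, without the echo.
# Note: the substitution replaces the first 'yml' in the whole path.
find "$dir" -name '*yml' -exec bash -c 'mv "$0" "${0/yml/yaml}"' {} \;

ls "$dir"
```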

  • New: Force ssh to use password authentication.

    ssh -o PreferredAuthentications=password -o PubkeyAuthentication=no
  • New: Do a tail -f with grep.

    tail -f file | grep --line-buffered my_pattern
  • New: Check if a program exists in the user's PATH.

    command -v <the_command>

    Example use:

    if ! command -v <the_command> &> /dev/null; then
        echo "<the_command> could not be found"
        exit 1
    fi
  • New: Add interesting tools to explore.

    • qbittools: a feature rich CLI for the management of torrents in qBittorrent.
    • qbit_manage: tool will help manage tedious tasks in qBittorrent and automate them.


  • New: Introduce DiffView.

    Diffview is a single tabpage interface for easily cycling through diffs for all modified files for any git rev.


    If you're using it with NeoGit and Packer use:

      use {
        'TimUntersberger/neogit',
        requires = {
          'nvim-lua/plenary.nvim',
          'sindrets/diffview.nvim',
        },
      }

    Calling :DiffviewOpen with no args opens a new Diffview that compares against the current index. You can also provide any valid git rev to view only changes for that rev.


    • :DiffviewOpen
    • :DiffviewOpen HEAD~2
    • :DiffviewOpen HEAD~4..HEAD~2
    • :DiffviewOpen d4a7b0d
    • :DiffviewOpen d4a7b0d^!
    • :DiffviewOpen d4a7b0d..519b30e
    • :DiffviewOpen origin/main...HEAD

    You can also provide additional paths to narrow down what files are shown :DiffviewOpen HEAD~2 -- lua/diffview plugin.

    Additional commands for convenience:

    • :DiffviewClose: Close the current diffview. You can also use :tabclose.
    • :DiffviewToggleFiles: Toggle the file panel.
    • :DiffviewFocusFiles: Bring focus to the file panel.
    • :DiffviewRefresh: Update stats and entries in the file list of the current Diffview.

    With a Diffview open and the default key bindings, you can:

    • Cycle through changed files with <tab> and <s-tab>
    • You can stage changes with -
    • Restore a file with X
    • Refresh the diffs with R
    • Go to the file panel with <leader>e


  • New: Guide on how to start using it.

    You’ll want to set a few basic options before you start using beets. The configuration is stored in a text file. You can show its location by running beet config -p, though it may not exist yet. Run beet config -e to edit the configuration in your favorite text editor. The file will start out empty, but here’s a good place to start:

    directory: ~/music
    library: ~/data/musiclibrary.db

    The default configuration assumes you want to start a new organized music folder (that directory above) and that you’ll copy cleaned-up music into that empty folder using beets’ import command. But you can configure beets to behave many other ways:

    • Start with a new empty directory, but move new music in instead of copying it (saving disk space). Put this in your config file:

          move: yes
    • Keep your current directory structure; importing should never move or copy files but instead just correct the tags on music. Put the line copy: no under the import: heading in your config file to disable any copying or renaming. Make sure to point directory at the place where your music is currently stored.

    • Keep your current directory structure and do not correct files’ tags: leave files completely unmodified on your disk. (Corrected tags will still be stored in beets’ database, and you can use them to do renaming or tag changes later.) Put this in your config file:

          copy: no
          write: no

      to disable renaming and tag-writing.

  • New: Importing your library.

    The next step is to import your music files into the beets library database. Because this can involve modifying files and moving them around, data loss is always a possibility, so now would be a good time to make sure you have a recent backup of all your music. We’ll wait.

    There are two good ways to bring your existing library into beets. You can either: (a) quickly bring all your files with all their current metadata into beets’ database, or (b) use beets’ highly-refined autotagger to find canonical metadata for every album you import. Option (a) is really fast, but option (b) makes sure all your songs’ tags are exactly right from the get-go. The point about speed bears repeating: using the autotagger on a large library can take a very long time, and it’s an interactive process. So set aside a good chunk of time if you’re going to go that route.

    If you’ve got time and want to tag all your music right once and for all, do this:

    beet import /path/to/my/music

    (Note that by default, this command will copy music into the directory you specified above. If you want to use your current directory structure, set the import.copy config option.) To take the fast, un-autotagged path, just say:

    beet import -A /my/huge/mp3/library

    Note that you just need to add -A for “don’t autotag”.



  • New: Introduce grafana.

    Grafana is a web application to create dashboards.

    Installation: We're going to install it with docker-compose and connect it to Authentik.

    Create the Authentik connection:

    Assuming that you have the terraform authentik provider configured, use the next terraform code:

    variable "grafana_name" {
      type        = string
      description = "The name shown in the Grafana application."
      default     = "Grafana"
    }

    variable "grafana_redirect_uri" {
      type        = string
      description = "The redirect url configured on Grafana."
    }

    variable "grafana_icon" {
      type        = string
      description = "The icon shown in the Grafana application"
      default     = "/application-icons/grafana.svg"
    }

    resource "authentik_application" "grafana" {
      name              = var.grafana_name
      slug              = "grafana"
      protocol_provider = authentik_provider_oauth2.grafana.id
      meta_icon         = var.grafana_icon
      lifecycle {
        ignore_changes = [
          # The terraform provider is continuously changing the attribute even though it's set
          meta_icon,
        ]
      }
    }

    resource "authentik_provider_oauth2" "grafana" {
      name               = var.grafana_name
      client_id          = "grafana"
      authorization_flow = data.authentik_flow.default-authorization-flow.id
      # The concrete property mappings were lost in the original snippet
      property_mappings = []
      redirect_uris = [
        var.grafana_redirect_uri,
      ]
      signing_key           = data.authentik_certificate_key_pair.default.id
      access_token_validity = "minutes=120"
    }

    data "authentik_certificate_key_pair" "default" {
      name = "authentik Self-signed Certificate"
    }

    data "authentik_flow" "default-authorization-flow" {
      slug = "default-provider-authorization-implicit-consent"
    }

    output "grafana_oauth_id" {
      value = authentik_provider_oauth2.grafana.client_id
    }

    output "grafana_oauth_secret" {
      value     = authentik_provider_oauth2.grafana.client_secret
      sensitive = true
    }
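
    On the Grafana side, a hypothetical docker-compose sketch wiring it to Authentik through Grafana's generic OAuth settings (the domain names and the secret variable are placeholders):

```yaml
version: "3"

services:
  grafana:
    image: grafana/grafana-oss:latest
    ports:
      - 3000:3000
    environment:
      GF_SERVER_ROOT_URL: https://grafana.example.com
      GF_AUTH_GENERIC_OAUTH_ENABLED: "true"
      GF_AUTH_GENERIC_OAUTH_NAME: Authentik
      GF_AUTH_GENERIC_OAUTH_CLIENT_ID: grafana
      GF_AUTH_GENERIC_OAUTH_CLIENT_SECRET: ${GRAFANA_OAUTH_SECRET}
      GF_AUTH_GENERIC_OAUTH_SCOPES: openid profile email
      GF_AUTH_GENERIC_OAUTH_AUTH_URL: https://authentik.example.com/application/o/authorize/
      GF_AUTH_GENERIC_OAUTH_TOKEN_URL: https://authentik.example.com/application/o/token/
      GF_AUTH_GENERIC_OAUTH_API_URL: https://authentik.example.com/application/o/userinfo/
    volumes:
      - grafana-data:/var/lib/grafana

volumes:
  grafana-data:
```

    The client secret comes from the grafana_oauth_secret terraform output above.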


  • New: Introduce Jellyfin Desktop.

    • Download the latest deb package from the releases page
    • Install the dependencies
    • Run dpkg -i

    If you're on a TV you may want to enable the TV mode so that the remote keys work as expected. The play/pause/next/prev won't work until this issue is solved, but it's not that bad to use the "Ok" and then navigate with the arrow keys.

  • New: Introduce Jellycon.

    JellyCon is a lightweight Kodi add-on that lets you browse and play media files directly from your Jellyfin server within the Kodi interface. It can be thought of as a thin frontend for a Jellyfin server.

    It's not very pleasant to use though.


  • New: Introduce Kodi.

    Kodi is an entertainment center software. It basically converts your device into a smart TV.



  • New: Introduce MediaTracker.

    MediaTracker is a self-hosted media tracker for movies, TV shows, video games, books and audiobooks.


    With docker compose:

    version: "3"

    services:
      mediatracker:
        container_name: mediatracker
        ports:
          - 7481:7481
        volumes:
          - /home/YOUR_HOME_DIRECTORY/.config/mediatracker/data:/storage
          - assetsVolume:/assets
        environment:
          SERVER_LANG: en
          TMDB_LANG: en
          AUDIBLE_LANG: us
          TZ: Europe/London
        image: bonukai/mediatracker:latest

    volumes:
      assetsVolume: null

    If you attach more than one docker network the container becomes unreachable :S.

    Install the jellyfin plugin:

    They created a Jellyfin plugin so that all scrobbles are sent automatically to MediaTracker:

    • Add a new Repository in Jellyfin (Dashboard -> Plugins -> Repositories -> +) pointing at the plugin repository url
    • Install MediaTracker plugin from Catalogue (Dashboard -> Plugins -> Catalogue)

    Some tips on usage:

    • Add the shows you want to watch to the watchlist so that it's easier to find them
    • When you're ending an episode, click on the episode number on the watchlist element and then rate the episode itself.

    • You can create public lists to share with the rest of the users. The way to share them is a bit archaic so far: it's only through the list link, as other users won't be able to discover them in the interface.



Video Gaming

Age of Empires