Collaborating tools

Life Management

Life planning

  • New: Introduce the month planning process.

    The objectives of the month plan are:

    • Define the month objectives according to the trimester plan and the insights gathered in the past month review.
    • Make your backlog and todo list match the month objectives.
    • Define the philosophical topics to address.
    • Define the topics to learn.
    • Define the area of habits to incorporate.
    • Define the checks you want to do at the end of the month.
    • Plan when the next review is going to be.
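
    A sketch of how such a month plan could look as an org-mode document (the headings and objectives below are made-up examples, not a prescribed template):

    ```org
    * Month plan 2023-04
    ** Objectives
    *** TODO Run three times a week
        Advances the trimester objective of improving my health.
    *** TODO Finish the bathroom renovation
    ** Topics to think about
    ** Topics to learn
    ** Habits to incorporate
    ** End of month checks
    ** Next review
       <2023-04-29 Sat>
    ```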

    It's interesting to do the planning on meaningful days, such as the first day of the month. Usually we don't have enough flexibility in our lives to do it exactly on that day, so schedule it as close as you can to that date. It's a good idea to do both the review and the planning on the same day.

    We'll divide the planning process into these phases:

    • Prepare
    • Clarify your state
    • Decide the month objectives


    Prepare:

    It's important that you prepare your environment for the planning. You need to be present and fully focused on the process itself. To do so you can:

    • Make sure you don't get interrupted:
      • Check your task manager tools to make sure that you don't have anything urgent to address in the next hour.
      • Disable all notifications
    • Set your analysis environment:
      • Put on the music that helps you get in the zone.
      • Get all the things you may need for the review:
        • The checklist that defines the process of your planning (this document in my case).
        • Somewhere to write down the insights.
        • Your task manager system
        • Your habit manager system
        • Your Objective list.
        • Your Thinking list.
        • Your Reading list.
      • Remove from your environment everything else that may distract you

    Clarify your state:

    To be able to make a good decision on your month's path you need to sort out what your current state is. To do so:

    • Clean your inbox: Refile each item until it's empty.
    • Clean your todo: Review each todo element and decide if it should still be in the todo. If it should and it belongs to a month objective, add it there. If it doesn't need to be in the todo, refile it.
    • Clean your someday: Review each relevant someday element (not the ones that are archived at levels greater than month) and decide if it should be refiled elsewhere and if it's part of a month objective that should be dealt with this month.
    • Address each of the trimester objectives by creating month objectives that get you closer to the desired outcome.

    Decide the next steps:

    For each of your month objectives:

    • Decide whether it makes sense to address it this month. If not, archive it.
    • Create a clear plan of action for this month on that objective
    • Tweak your things to think about list.
    • Tweak your reading list.
    • Tweak your habit manager system.

Task Management

Org Mode

  • New: Introduce Nvim Org Mode.

    nvim-orgmode is an Orgmode clone written in Lua for Neovim. Org-mode is a flexible note-taking system that was originally created for Emacs. It has gained widespread acclaim and was eventually ported to Neovim.

    The article includes:

  • New: Capture all your stuff.

    The focus of this process is to capture everything that has your attention, otherwise some part of you will still not totally trust that you're working with the whole picture. While you're doing it, create a list of all the sources of inputs in your world.

    What you're going to do is methodically go through each piece of your life and search for anything that doesn’t permanently belong where it is, the way it is, and put it into your in-tray. You’ll be gathering things that are incomplete or things that have some decision about potential action tied to them. They all go into your “inbox”, so they’ll be available for later processing.

    Be patient, this process may take between 1 and 6 hours, and at the end you'll have a huge pile of stuff in your inbox. You might be scared and get the thought of "what am I doing with my life?", but don't worry, you'll get everything in order soon :).

    The steps described in the section so far are:

  • New: Digital general reference.

    It is very helpful to have a visual map sorted in ways that make sense, either by indexes or data groups organized effectively, usually in an alphabetic order.

    The biggest issue for digitally oriented people is that the ease of capturing and storing has generated a write-only syndrome: all they’re doing is capturing information—not actually accessing and using it intelligently. Some consciousness needs to be applied to keep one’s potentially huge digital library functional, versus a black hole of data easily dumped in there with a couple of keystrokes.

    You need to consistently check how much room to give yourself so that the content remains meaningfully and easily accessible, without creating a black hole of an inordinate amount of information amorphously organized.

  • New: Physical general reference.

    One idea is to have one system/place where you order the content alphabetically, not multiple ones. People have a tendency to want to use their files as a personal management system, and therefore they attempt to organize them in groupings by projects or areas of focus. This magnifies geometrically the number of places something isn’t when you forget where you filed it.

  • New: Use telescope plugin for refiling.

    Refiling lets you easily move around elements of your org file, such as headings or TODOs. You can refile with <leader>r with the next snippet:

    org = {
      org_refile = '<leader>r',
    },

    When you press the refile key binding you are supposed to press <tab> to see the available options; once you select the correct file, you will be shown an autocomplete with the possible items to refile it to. Luckily there is a Telescope plugin.

    Install it by adding to your plugin config:

    use 'joaomsa/telescope-orgmode.nvim'

    Then install it with :PackerInstall.

    You can setup the extension by doing:

    require('telescope').load_extension('orgmode')

    To replace the default refile prompt:

    vim.api.nvim_create_autocmd('FileType', {
      pattern = 'org',
      group = vim.api.nvim_create_augroup('orgmode_telescope_nvim', { clear = true }),
      callback = function()
        vim.keymap.set('n', '<leader>r', require('telescope').extensions.orgmode.refile_heading)
        vim.keymap.set('n', '<leader>g', require('telescope').extensions.orgmode.search_headings)
      end,
    })

    If the autocommand doesn't override the default orgmode one, bind the default refile action to other keys you'll never use.

    The plugin also allows you to use telescope to search through the headings of the different files with search_headings, with the configuration above you'd use <leader>g.

  • New: Define the Todo list.

    This list contains all the next actions and projects you are going to actively work on. Projects are any desired result that can be accomplished within a year that requires more than one action step. This means that some rather small things you might not normally call projects are going to be on your Projects list, as well as some big ones. If one step won’t complete something, some kind of goalpost needs to be set up to remind you that there’s something still left to do. If you don’t have a placeholder to remind you about it, it will slip back into your head.

    The reason for the one-year time frame is that anything you are committed to finish within that scope needs to be reviewed weekly to feel comfortable about its status. Another way to think of this is as a list of open loops, no matter what the size. This is going to be one of the lists that you'll review more often, and it needs to be manageable; if the items start to grow you may want to track the elements you want to do in the semester, or trimester.

    Projects do not initially need to be listed in any particular order, by size, or by priority. They just need to be on a master list so you can review them regularly enough to ensure that appropriate next actions have been defined for each of them. That being said, I like to order them a little bit so that I don't need to read the whole list to choose what to do.

    There may be reasons to sort your projects into different subcategories, based upon different areas of your focus, but initially creating a single list of all of them will make it easier to customize your system appropriately as you get more comfortable with its usage. To sort them use tags instead of hierarchical structures, they are more flexible. For example you could use tags for:

    • Context: Where can you do the element: home, computer, mobile, ...
    • Area: Broad categories where the element falls in: activism, caring, self-caring, home, digital services, ...
    • Type: I like to separate the tasks that are meant to survive (maintenance) from the ones that are meant to improve things (improvement)
    • Mood, energy level, time: It's useful to have a quick way to see the tasks you can work on when you don't have that much time (small), you don't have that much mental energy (brainless), when you're sad, ...
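
    For example, a project heading could carry several of these tag families at once (the tasks and tags below are illustrative, not a prescribed set):

    ```org
    * TODO Automate the backup of family photos  :computer:digital_services:improvement:small:
    * TODO Water the plants                      :home:self_caring:maintenance:brainless:
    ```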

    For many of your projects, you will accumulate relevant information that you will want to organize by theme or topic or project name. Your Projects list will be merely an index. All of the details, plans, and supporting information that you may need as you work on your various projects should be contained in your References system.

  • New: Define the calendar.

    The reminders of actions you need to take fall into two categories: those about things that have to happen on a specific day or time, and those about things that just need to get done as soon as possible. Your calendar handles the first type of reminder.

    These things go on your calendar:

    • Time-Specific actions or appointments.
    • Day-Specific actions: These are things that you need to do sometime on a certain day, but not necessarily at a specific time.
    • Day-Specific information: Information that may be useful on a certain date. This might include directions for appointments, activities that other people will be involved in then, or events of interest. It’s helpful to put short-term tickler information here, too, such as a reminder to call someone after he or she returns from vacation. This is also where you would want to park important reminders about when something might be due, or when something needs to be started, given a determined lead time.
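
    A sketch of how the three kinds of entries could look in an org calendar file (the items and dates are invented): a time-specific action, a day-specific action, and a piece of day-specific information kept as a plain headline.

    ```org
    * TODO Dentist appointment
      <2023-05-10 Wed 09:30>
    * TODO Hand in the insurance paperwork
      <2023-05-12 Fri>
    * Marie is back from vacation, call her about the trip
      <2023-05-15 Mon>
    ```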

    Daily to-do lists don't belong to the calendar because:

    • Constant new input and shifting tactical priorities reconfigure daily work so consistently that it’s virtually impossible to nail down to-do items ahead of time. Having a working game plan as a reference point is always useful, but it must be able to be renegotiated at any moment. Trying to keep a list on the calendar, which must then be reentered on another day if items don’t get done, is demoralizing and a waste of time. The Next Actions lists will hold all of those action reminders, even the most time-sensitive ones. And they won’t have to be rewritten daily.

    • If there’s something on a daily to-do list that doesn’t absolutely have to get done that day, it will dilute the emphasis on the things that truly do. The calendar should be sacred territory. If you write something there, it must get done that day or not at all.

    That said, there’s absolutely nothing wrong with creating a quick, informal, short list of “if I have time, I’d really like to . . .” kinds of things, picked from your Next Actions inventory. It just should not be confused with your “have-tos,” and it should be treated lightly enough to discard or change quickly as the inevitable surprises of the day unfold.

  • New: Give an overview of how I'm using gtd.

    Before you start moving stuff around, it's a good idea to get a first design of your whole system. In my case I'm going to rely heavily on org-mode to track most of the stuff in a repository with the next structure:

    ├── calendar
    │   ├──
    │   │   ├── One time events
    │   │   ├── Recurring events
    │   │   ├── Birthdays
    │   │   └── Deathdays
    │   ├──
    │   ├──
    │   ├──
    │   ├──
    │   ├──
    │   ├──
    ├── inbox
    │   ├──
    │   ├──
    │   └──
    ├── reference
    │   ├── blue
    │   ├──
    │   └── red
    └── todo


    • The subtrees behind the .org files are the heading trees.
    • All org files go with their respective org_archive ones, they're not shown in the above diagram to keep it simple.
    • calendar/ is my personal calendar.
    • calendar/ is my day planner.
  • New: Define how to use the Personal calendar.

    The calendar/ file holds:

    • Appointments: Meant to be used for elements of the org file that have a defined date to occur. You either do it on that date or you don't do it at all. Avoid using dates to organize your tasks: if you don't do it that day and keep rescheduling it to another date, it's a waste of time, use next actions in your todo instead. If you need to act on it use a TODO element, otherwise a headline is enough. An example would be:
    * TODO Meet with Marie
    <2023-02-24 Fri>
    * Internet's birthday
    <2023-03-13 Mon>
    • Recurring events: Events that not only happen on the given date, but again and again after a certain interval of N hours (h), days (d), weeks (w), months (m), or years (y). The following shows up in the agenda every Wednesday:
    * TODO Go to pilates
      <2007-05-16 Wed 12:30 +1w>

    Each section has its own tag: :recurring:, :day:, :birthday:, :deathday:, and the whole file has the :event: tag for easy filtering.

    In rare cases you may want to use the DEADLINE property if you want to be warned in the agenda some days before the date arrives or the SCHEDULED one in case you want to see in the agenda when you start working on the task. Again, don't waste time postponing these dates, if you do, you're using the system wrong.
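
    As an illustration of the two properties (the tasks and dates below are made up), DEADLINE warns you in the agenda some days before the date arrives, while SCHEDULED marks when you plan to start working on the task:

    ```org
    * TODO Submit the yearly tax report
      DEADLINE: <2023-06-30 Fri -7d>
    * TODO Prepare the garden for the summer
      SCHEDULED: <2023-05-20 Sat>
    ```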

  • New: Define how to use the Day planner.

    Some of my day events are semi-regular, meaning that the recurrence options are not powerful enough. For example, I usually go to pilates on Tuesdays, but some weeks I go at 18:00 and others at 19:00. In the past I used a script that interacts with ikhal to create the elements of the day based on a questionnaire. The idea is to migrate the tool to create appointments in the day under the file using a datetree structure:

    * 2010
    ** 2010-12 December
    *** 2010-12-20 Tuesday
    **** TODO Go to pilates
        <2010-12-20 Tue 19:00-20:00>

    I also use this file to add any diary annotations for my life log. Once this issue is solved it will be really easy to add diary thoughts through the capture feature.

  • New: Define how to use the todo files.

    The todo files are where you track the todo list, which holds your projects and their next steps to work on. The todo/, todo/ and todo/ files of the above schema will be divided into these level 1 headings:

    • * Necessary: These projects need to be dealt with immediately and finished as soon as possible.
    • * Desirable: Here is where most of your elements will be; these are the ones that you think it's important to work on, but there is no hard pressure.
    • * Optional: These are the projects that it would be nice to work on, but if you don't it's fine.
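
    A minimal sketch of how one of these todo files could look (the projects are invented):

    ```org
    * Necessary
    ** TODO Fix the leaking roof
    * Desirable
    ** TODO Improve task management system
    ** TODO Automate the server backups
    * Optional
    ** TODO Reorganize the bookshelf
    ```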

    Projects are the second level headings with TODO keywords. To see the list of your projects just fold all the items in the file.

    Inside each section the elements are more or less ordered by what I want to work on first. But all projects are actionable, so if I'm not in the mood to do the first ones, I tackle the rest. As such, I try not to spend too much time ordering them.

    I find it useful to split the tasks between my life silos, so that I don't even have a chance to think of anything of work_1 when I'm doing my personal stuff or work_2 stuff.

  • New: Define how to work with projects.

    Given the broad definition of what we consider a project and how they are usually cooked, the system that represents it must be equally flexible, quick to interact with and easy to evolve.

    Every project starts with the title:

    * TODO Improve task management system

    Optionally you can add a description:

    * TODO Improve task management system
      Using Openprojects is uncomfortable, I need to find a better system.

    You may have noticed that the description doesn't follow the rules we defined for next actions; that's fine, as you don't act on projects but on their underlying actions. Nevertheless I like to start them with a verb. It may even make sense not to use TODO items but simple headings to define your projects. On one side you don't act on projects, so it would make sense to use headings; on the other, it's also interesting to know the project state, which can be easily tracked with the TODO keywords. If you could tell apart headings from TODO items in the agenda views it would make sense to use them. Right now nvim-orgmode lets you select in the agenda views only TODO items, or TODO items and headings, but you can't select only headings, so at the moment I don't see any good reason not to use TODO items for the projects.

    To define the next actions of a project you can use checklists:

    * TODO Improve task management system
      - [-] Read David Allen's GTD book
        - [x] Read chapters 6 and 7
        - [ ] Read chapters 8 and 9
      - [ ] Sum up the book in the blue book

    As your checklists grow they may start to be uncomfortable, for example if it has:

    • More than two levels of indentation: It may be hard to follow the logic of the task structure.
    • A lot of elements: You can't archive parts of checklists, so as you complete elements they will still be shown, diverting your attention from the things you can actually act upon, or making you lose time scrolling to find where they are.

    In these cases it makes sense to promote the first level of headings to subprojects:

    * TODO Improve task management system
      * DOING Read David Allen's GTD book
        - [x] Read chapters 6 and 7
        - [ ] Read chapters 8 and 9
      * TODO Sum up the book in the blue book

    That way when Read David Allen's GTD book is done, you can archive it and forget about it.

    If the project starts having many subprojects, it may help to have an "Outcomes" section to define what you want to achieve with the project. It can be accompanied by a "Next Steps" section to add any subproject or action that doesn't match the defined outcomes; once you finish the project, you can refile them into new projects.

  • New: The NEXT state.

    It's useful to have a NEXT state to track the first next action you need to deal with for each project. That way when you open the file, you can go to the top of it and search for NEXT, and it will lead you directly to what you need to work on.
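
    A sketch of how this could look (the project is the hypothetical one used above):

    ```org
    * TODO Improve task management system
    ** NEXT Read David Allen's GTD book
    ** TODO Sum up the book in the blue book
    ```

    Note that NEXT is not among nvim-orgmode's default keywords ({'TODO', '|', 'DONE'}), so you'd have to add it to org_todo_keywords in your config, with something like org_todo_keywords = {'TODO', 'NEXT', '|', 'DONE'}.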

  • New: Define how to manage tags.

    As explained in the todo list section, you can use tags to filter your tasks. I'm using the next ones:

    • Area: Broad categories where the element falls in: activism, caring, self-caring, home, digital services, ...
    • Type: I like to separate the tasks that are meant to survive (maintenance) from the ones that are meant to improve things (improvement). I use these only in the big projects.
    • :long_break:: I'm using this tag to track the small projects that can be done in the long pomodoro breaks. Depending on the kind of long break that I need I then filter for the next tags:
      • brainless: If I want to keep on thinking on what I was doing, an example could be emptying the dishwasher, watering the plants, ...
      • call: If I want to completely change context and want some social interaction. For example call mom.
  • New: Define how to manage waiting tasks.

    Waiting actions are elements that are blocked for any reason. I use the WAITING TODO keyword to track this state. Under each element add the reason it's blocked and, optionally, the process you want to follow to unblock it.

    If you need to actively track the evolution of the WAITING status, leave it on the top of your todo. Otherwise set the date you want to check its status and move it to the file.
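
    A made-up example of such an element:

    ```org
    * WAITING Receive the new hard drive
      Blocked: the package hasn't arrived yet.
      To unblock: if it takes more than a week, check the tracking number and contact the shop.
    ```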

  • New: Define how to use the tickler.

    The tickler is a system where you postpone actions to a specific date, but not with a calendar mindset where the action needs to be done on that date. With the tickler you schedule the action to enter your inbox that day, so you can decide what you are going to do with it.

    To implement this in orgmode you can add the :tickler: tag to any element that is tracked in the agenda files, and once a day you can look at the day's agenda and decide what to do with the action. It's important though that, whatever you do with it, you remove it from the agenda view in order to only keep the elements that you need to do in the day. You can follow this workflow by:

    • Opening the agenda view with gaa
    • Going to the view of the day with vd
    • Going to today with .
    • Searching by tickler with /tickler

    It can also help to review in the weeklies all the ticklers of the week to avoid surprises.

    If you want to make the project go away from your todo or someday until the tickler date, move it to the file.
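
    A sketch of a tickler entry (the task and date are invented); any date type that shows up in the agenda works, here SCHEDULED is used:

    ```org
    * TODO Decide whether to renew the domain      :tickler:
      SCHEDULED: <2023-09-01 Fri>
    ```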

  • Correction: Keep on defining steps to capture all your stuff.

    As you engage in the capturing step, you may run into one or more of the following problems:

    • An item is too big to go in the in-tray: Create a post-it that represents it or add it as an entry in your digital inbox. If you can, add the date too.
    • The pile is too big to fit the in-tray: Create visually distinct stacks around the in-tray, even on the floor.
    • Doubts whether to trash something: When in doubt keep it, you'll decide about it later when you process the in-tray. What you need to avoid is to get caught up in deciding what to do with the element. That's going to be the next step in the process, let's go one at a time.
    • Getting caught up in cleaning and organizing: If it doesn't take that long it's fine but remember the purpose of this process and the fact that we want to finish it as soon as possible. If you discover things you want to change, add them to the in-tray.
    • If you encounter stuff that is already on lists and organizers, treat them as everything else in the "in".

    Now that the process is clear, let's start.

    Start with the space where you actually do stuff: scan the place centimeter by centimeter with the mindset defined above, checking your desk, drawers, floors, walls, shelves, equipment, furniture, fixtures... Then repeat the process with each room of your home.

  • New: Explain how to capture all your mental stuff.

    Once you already have a nice pile of stuff, think of what has your attention that isn't represented by something already in your in-tray, and record each thought, each idea, each project or thing that occurs to you and add it to the inbox.

    To assist in clearing your head, you may want to review the next trigger list, item by item, to see if you've forgotten anything.

  • New: Define priorities from A to D.

    I feel more comfortable with these priorities:

    • A: Critical
    • B: High
    • C: Normal
    • D: Low

    This gives you room to usually work on priorities B-D, and if something shows up that is really important, you can use A. You can set this with the next snippet:

      org_priority_highest = 'A',
      org_priority_default = 'C',
      org_priority_lowest = 'D',
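
    With that configuration, priorities are set in the headline between the TODO keyword and the title (the tasks below are examples):

    ```org
    * TODO [#A] Pay the electricity bill before the cutoff
    * TODO [#C] Clean up the old git branches
    * TODO [#D] Sort the photo archive
    ```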
  • New: Warn against using DEADLINE.

    Using too many tasks with a DEADLINE will clutter your agenda. Use it only for the actions for which you need a reminder; try using appointment dates instead.

    If you need a different warning period for a special task, you can specify it. For example, DEADLINE: <2004-02-29 Sun -5d> sets a warning period of 5 days. To configure the default number of days add:

      org_deadline_warning_days = 10,
  • Correction: Improve how to use tags.

    When you press the tag key binding you can type:

    • tag1: It will add :tag1:.
    • tag1:tag2: It will add :tag1:tag2:.
    • Press ESC: It will remove all tags from the item.

    Tags make use of the hierarchical structure of outline trees. If a heading has a certain tag, all subheadings inherit the tag as well. For example, in the list

    * Meeting with the French group      :work:
    ** Summary by Frank                  :boss:notes:
    *** TODO Prepare slides for him      :action:

    The final heading has the tags work, boss, notes, and action even though it is not explicitly marked with them. You can also set tags that all entries in a file should inherit, just as if these tags were defined in a hypothetical level zero that surrounds the entire file, using a line like the next one:

     #+FILETAGS: :Peter:Boss:Secret:
  • New: How to use links.

    One final aspect of the org file syntax are links. Links are of the form [[link][description]], where link can be a URL or an internal reference, among others.

    A link that does not look like a URL refers to the current document. You can follow it with gx when the cursor is on the link (default <leader>oo) if you use the next configuration:

    org = {
      org_open_at_point = 'gx',
    },
  • New: Internal document links.

    Org provides several refinements to internal navigation within a document. Most notably:

    • [[*Some section]]: points to a headline with the name Some section.
    • [[#my-custom-id]]: targets the entry with the CUSTOM_ID property set to my-custom-id.

    When the link does not belong to any of the cases above, Org looks for a dedicated target: the same string in double angular brackets, like <<My Target>>.

    If no dedicated target exists, the link tries to match the exact name of an element within the buffer. Naming is done, unsurprisingly, with the NAME keyword, which has to be put in the line before the element it refers to, as in the following example:

     #+NAME: My Target
    | a  | table      |
    | of | four cells |

    Ultimately, if none of the above succeeds, Org searches for a headline that is exactly the link text but may also include a TODO keyword and tags, or initiates a plain text search.

    Note that you must make sure custom IDs, dedicated targets, and names are unique throughout the document. Org provides a linter to assist you in the process if needed, but I have not yet searched for one for nvim.
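
    Putting the internal link types together, a hypothetical document could use them like this:

    ```org
    * Some section
      :PROPERTIES:
      :CUSTOM_ID: my-custom-id
      :END:
      Here is a dedicated target: <<My Target>>

    * Another section
      Link back with [[*Some section]], [[#my-custom-id]] or [[My Target]].
    ```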

  • New: How to use properties.

    Properties are key-value pairs associated with an entry. They live in a special drawer with the name PROPERTIES. Each property is specified on a single line, with the key (surrounded by colons) first, and the value after it:

    * CD collection
    ** Classic
    *** Goldberg Variations
        :PROPERTIES:
        :Title:     Goldberg Variations
        :Composer:  J.S. Bach
        :Publisher: Deutsche Grammophon
        :NDisks:    1
        :END:

    You may define the allowed values for a particular property Xyz by setting a property Xyz_ALL. This special property is inherited, so if you set it in a level 1 entry, it applies to the entire tree. When allowed values are defined, setting the corresponding property becomes easier and is less prone to typing errors. For the example with the CD collection, we can pre-define publishers and the number of disks in a box like this:

    * CD collection
      :PROPERTIES:
      :NDisks_ALL:  1 2 3 4
      :Publisher_ALL: "Deutsche Grammophon" Philips EMI
      :END:

    If you want to set properties that can be inherited by any entry in a file, use a line like:

     #+PROPERTY: NDisks_ALL 1 2 3 4

    This can be interesting for example if you want to track when a header was created:

    *** Title of header
        :PROPERTIES:
        :CREATED: <2023-03-03 Fri 12:11>
        :END:
  • New: How to do Agenda searches.

    When using the search agenda view you can:

    • Search by TODO states with /WAITING
    • Search by tags +home. The syntax for such searches follows a simple boolean logic:
      • |: or
      • &: and
      • +: include matches
      • -: exclude matches

    Here are a few examples:

    • +computer&+urgent: Returns all items tagged both computer and urgent.
    • +computer|+urgent: Returns all items tagged either computer or urgent.
    • +computer&-urgent: Returns all items tagged computer and not urgent.

    As you may have noticed, the syntax above can be a little verbose, so org-mode offers convenient ways of shortening it. First, - and + imply and if no boolean operator is stated, so example three above could be rewritten simply as:

    +computer-urgent

    Second, inclusion of matches is implied if no + or - is present, so example three could be further shortened to:

    computer-urgent

    Example number two, meanwhile, could be shortened to:

    computer|urgent

    There is no way (as yet) to express search grouping with parentheses. The and operators (&, +, and -) always bind terms together more strongly than or (|). For instance, the following search:

    computer|work+email

    Results in all headlines tagged either with computer, or with both work and email. An expression such as (computer|work)&email is not supported at the moment, though you can construct a regular expression to get a similar effect.

    • Search by properties: You can search by properties with the PROPERTY="value" syntax. Properties with numeric values can be queried with inequalities, such as PAGES>100. For partial matches use a regular expression; for example, if the entry had :BIB_TITLE: Mysteries of the Amazon you could use BIB_TITLE={Amazon}.
  • New: How to use Capture in orgmode.

    Capture lets you quickly store notes with little interruption of your work flow. It works the following way:

    • Open the interface with ;c (default <leader>oc), which asks you what kind of element you want to capture.
    • Select the template you want to use. By default you only have the Task template, which introduces a task into the same file you're at; select it by pressing t.
    • Fill up the template.
    • Choose what to do with the captured content:
      • Save it to the configured file by pressing ;w (default <control>c).
      • Refile it to a file by pressing ;r (default <leader>or).
      • Abort the capture with ;q (default <leader>ok).
    mappings = {
      global = {
        org_capture = ';c',
      },
      capture = {
        org_capture_finalize = ';w',
        org_capture_refile = ';r',
        org_capture_kill = ';q',
      },
    },
  • New: Configure the capture templates.

    Capture lets you define different templates for the different inputs. Each template has the next elements:

    • Keybinding: Keys to press to activate the template.
    • Description: What to show in the capture menu to describe the template.
    • Template: The actual template of the capture; look below to see how to create them.
    • Target: The place where the captured element will be inserted. For example ~/org/ If you don't define it, it will go to the file configured in org_default_notes_file.
    • Headline: An optional headline of the Target file under which the element will be inserted.

    For example:

    org_capture_templates = {
      t = { description = 'Task', template = '* TODO %?\n  %u' },
    },

    For the template you can use the next variables:

    • %?: Default cursor position when template is opened
    • %t: Prints current date (Example: <2021-06-10 Thu>)
    • %T: Prints current date and time (Example: <2021-06-10 Thu 12:30>)
    • %u: Prints current date in inactive format (Example: [2021-06-10 Thu])
    • %U: Prints current date and time in inactive format (Example: [2021-06-10 Thu 12:30])
    • %<FORMAT>: Insert current date/time formatted according to lua date format (Example: %<%Y-%m-%d %A> produces 2021-07-02 Friday)
    • %x: Insert content of the clipboard via the "+" register (see :help clipboard)
    • %^{PROMPT|DEFAULT|COMPLETION...}: Prompt for input, if completion is provided an :h inputlist will be used
    • %(EXP): Runs the given lua code and inserts the result. NOTE: this will internally pass the content to the lua load() function. So the body inside %() should be the body of a function that returns a string.
    • %f: Prints the file of the buffer capture was called from.
    • %F: Like %f but inserts the full path.
    • %n: Inserts the current $USER
    • %a: File and line number from where capture was initiated (Example: [[file:/home/user/projects/myfile.txt +2]])

    For example:

    org_capture_templates = {
      T = {
        description = 'Todo',
        template = '* TODO %?\n %u',
        target = '~/org/'
      },
      j = {
        description = 'Journal',
        template = '\n*** %<%Y-%m-%d %A>\n**** %U\n\n%?',
        target = '~/sync/org/'
      },
      -- Nested key example:
      e = 'Event',
      er = {
        description = 'recurring',
        template = '** %?\n %T',
        target = '~/org/',
        headline = 'recurring'
      },
      eo = {
        description = 'one-time',
        template = '** %?\n %T',
        target = '~/org/',
        headline = 'one-time'
      },
      -- Example using a lua function
      r = {
        description = "Repo URL",
        template = "* [[%x][%(return string.match('%x', '([^/]+)$'))]]%?",
        target = "~/org/",
      },
    }
  • New: Synchronize with external calendars.

    You may want to synchronize your calendar entries with external ones shared with other people, such as nextcloud calendar or google.

    The orgmode docs have a tutorial to sync with google and suggest some orgmode packages that do that; sadly, they won't work with nvim-orgmode. We'll need to go the "ugly way" by:

  • New: Comparison with Markdown.

    What I like of Org mode over Markdown:

    • The whole interface to interact with the elements of the document through key bindings:
    • Move elements around.
    • Create elements
    • The TODO system is awesome
    • The Agenda system
    • How it handles checkboxes <3
    • Easy navigation between references in the document
    • Archiving feature
    • Refiling feature
    • # is used for comments.
    • Creating internal document links is easier: you can just copy and paste the heading, like [[*This is the heading]], while in markdown you need to edit it to [](#this-is-the-heading).

    What I like of markdown over Org mode:

    • The syntax of the headings ## Title better than ** Title. Although it makes sense to have # for comments.
    • The syntax of the links: [reference](link) is prettier to read and write than [[link][reference]], although this can be improved if only the reference is shown by your editor (nvim-orgmode doesn't do this yet)
  • New: Note the importance of isolated containers.

    It's critical that all of these containers be kept distinct from one another. They each represent a discrete type of agreement we make with ourselves, to be reminded of at a specific time and in a specific way, and if they lose their edges and begin to blend, much of the value of organizing will be lost. That's why capturing and clarifying your relationship to them is primary to getting organized.

  • New: Inbox management.

    Inbox is the container where you capture your stuff. I've found myself capturing stuff on each of my devices: computer, mobile phone and tablet. Each of them has its own org file under the inbox directory. Each of these files has the #+FILETAGS: :inbox: directive so that all elements share the tag.

    Part of the daily planning is to check the computer and mobile inboxes to see if there is anything that needs to be processed on the day. I don't check the tablet inbox as there's usually no urgent stuff there. The rest of the elements will be processed on the weekly review leaving all the inbox files empty.

  • New: Computer inbox management.

    nvim-orgmode has an awesome feature called capture which lets you capture thoughts with a keystroke. This is awesome because no matter what you are doing inside neovim you can quickly record your thought, action or idea and keep doing whatever you were doing. It's a very efficient way to record your stuff while keeping your focus.

    You can use the next capture template:

    org_capture_templates = {
      i = {
        description = "Inbox",
        template = "* TODO %?\n %U",
        target = "~/org/inbox/",
      },
    }
  • New: Mobile and tablet inbox management.

    To capture content on the go you can use orgzly and then sync it with your computer through syncthing.

  • New: Calendar management.

    You need to trust your calendar as sacred territory, reflecting the exact hard edges of your day's commitments, which should be noticeable at a glance while you're on the run.

    So for each element you encounter in the calendar ask yourself, does this element need to be done on this hard date? If the answer is no, then the calendar is not the correct place for the element to be.

    Using dates to order your tasks is a waste of time, because there will always be a thousand reasons why you can't do all the things you allocated to that day. As these undone issues start piling up, you'll get stressed about all the things you were not able to do on the dates you deceived yourself into thinking you were supposed to do them, and then you'll need to spend time defining a new date. Use next actions in your todo instead.

  • New: Priority management.

    You shouldn’t bother to create some external structuring of the priorities on your lists that you’ll then have to rearrange or rewrite as things change. Attempting to impose such scaffolding has been a big source of frustration in many people’s organizing. You’ll be prioritizing more intuitively as you see the whole list against quite a number of shifting variables. The list is just a way for you to keep track of the total inventory of active things to which you have made a commitment, and to have that inventory available for review.

    Therefore I'm going to try not to use orgmode's priorities for the tasks.

  • New: Soft recurrent tasks.

    There are some tasks that have a soft recurrence meaning that once you do them you don't want them to show up in your list of actions until a specific time has passed. You could use a recurrent DEADLINE or SCHEDULED but as we've seen earlier that will clutter your calendar pretty soon. Try following the next workflow with these tasks:

    • Add the :soft_recurrence: tag to keep them tracked.
    • Add them to the tickler file with a recurrent appointment date <2023-03-13 Mon ++1w> and the :tickler: tag so that it doesn't show up in the agenda view even if you move it to another file.
    • When the appointed day comes you'll review the tickler elements as part of your day's routine. If you think it's time to do it, refile it to the file, if not, adjust the recurrence period and set the next date. Even though this workflow is reproducing the "kick the can forward" that we want to avoid, the idea is that once you get the period right you'll never have to do it again. If you see that after some iterations the period keeps on changing, maybe this workflow is not working for that kind of task and you may need to think of a better system ¯\(°_o)/¯.
    • Once you complete the item, a new one will be spawned; once it has, refile it to the tickler file again.

    We use appointments instead of DEADLINE or SCHEDULED so that they don't clutter the tickler view if you don't do them on the appointment date.

    Another option is not to archive the DONE tasks and in the weekly reset them to TODO the ones that you want to do the next week.

  • New: Create an issue in the orgmode repository.

  • New: Refile from the capture window.

    If you refile from the capture window, until this issue is solved, your task will be refiled but the capture window won't be closed.

    Be careful: it only refiles the first task there is, so you need to close the capture before refiling the next one.

  • New: Code blocks syntax.

    Org offers two ways to structure source code in Org documents: in a source code block, and directly inline. Both specifications are shown below.

    A source code block conforms to this structure:

    #+NAME: <name>
    #+BEGIN_SRC <language> <switches> <header arguments>
      <body>
    #+END_SRC

    You need to use snippets for this to be usable.

    An inline code block conforms to this structure:

    src_<language>[<header arguments>]{<body>}


    • #+NAME: <name>: (Optional) Names the source block so it can be called, like a function, from other source blocks or inline code to evaluate or to capture the results. Code from other blocks and other files can use the name to reference it.
    • #+BEGIN_SRC … #+END_SRC: (Mandatory) They mark the start and end of a block that Org requires.
    • <language>: (Mandatory) It is the identifier of the source code language in the block. See Languages for identifiers of supported languages.
    • <switches>: (Optional) Switches provide finer control of the code execution, export, and format.
    • <header arguments>: (Optional) Heading arguments control many aspects of evaluation, export and tangling of code blocks. Using Org’s properties feature, header arguments can be selectively applied to the entire buffer or specific subtrees of the Org document.
    • <body>: Source code in the dialect of the specified language identifier.
  • New: Narrow/Widen to subtree.

    It's not yet supported to focus or zoom on one task.

  • New: Interesting things to investigate.

  • New: The orgmode repository file organization.

    How to structure the different orgmode files is something that has always confused me; everyone does it their own way, and there are no good posts on why one structure is better than another, people just state what they do.

    I've started with a typical gtd structure: a directory for the todo, another for the calendar, then another for the references. In the todo I had a file for personal stuff and another for each of my work clients. Soon making the internal links became cumbersome, so I decided to merge the personal and work files into the same file and use folds to hide the uninteresting parts of the file. The reality is that I feel that orgmode is less responsive and that I often feel lost in the file.

    I'm now more into the idea of having files per project in a flat structure, using an index file to give it some sense in the same way I do with the mkdocs repositories. Then I'd use internal links in the index file to organize the priorities of what to do next.


    The pros of this approach:

    • As we're using a flat structure at the file level, the links between files are less cumbersome: file:./<file>.org::*heading. We only need unique, easy to remember names for the files, instead of having to think of which directory the file I want to link to was in. The all-in-one-file structure makes links even easier, just *heading, but its disadvantages make it not worth it.
    • You have the liberty to have a generic link like Work on project or, if you want to fine-grain it, link the specific task of the project.
    • The todo file will get smaller.
    • It has been the natural evolution of other knowledge repositories such as blue.

    The cons:

    • Filenames must be unique. It hasn't been a problem in blue.
    • Blue won't be flattened into Vida as it's its own knowledge repository.
  • New: Synchronize orgmode repositories.

    I use orgmode both on the laptop and on the mobile, and I want to synchronize some files between both with the next requirements:

    • The files should be available on the devices when I'm not at home.
    • The synchronization will be done only on the local network.
    • The synchronization mechanism will only be able to see the files that need to be synced.
    • Different files can be synced to different devices. If I have three devices (laptop, mobile, tablet) I want to sync all mobile files to the laptop but just some to the tablet.

    Right now I'm already using syncthing to sync files between the mobile and my server, so it's tempting to use it also to solve this issue. So the first approach is to spawn a syncthing docker at the laptop that connects with the server to sync the files whenever I'm at home.

    I've investigated the next options:


  • Correction: Suggest not to use openproject.

    I've decided to use orgmode instead.

Knowledge Management

Spaced Repetition


  • New: How to install the latest version.

    Install the dependencies:

    sudo apt-get install zstd

    Download the latest release package.

    Open a terminal and run the following commands, replacing the filename as appropriate:

    tar xaf Downloads/anki-2.1.XX-linux-qt6.tar.zst
    cd anki-2.1.XX-linux-qt6
    sudo ./
  • New: How long to do study sessions.

    I have two study modes:

    • When I'm up to date with my cards, I study them until I finish, but usually less than 15 minutes.
    • If I have been lazy and haven't checked them in a while (like now) I assume I'm not going to see them all and define a limited amount of time to review them, say 10 to 20 minutes depending on the time/energy I have at the moment.

    The relieving thought you can have is that, as long as you keep a steady pace of 10/20 mins each day, you'll inevitably finish your pending cards, as you're more effective reviewing cards than entering new ones.

  • New: What to do with "hard" cards.

    If you're afraid of being stuck in a loop of reviewing "hard" cards, don't be. In reality, after you've seen that "hard" card three times in a row you won't mark it as hard again, because you will remember it. If you don't, maybe there are two reasons:

    • The card has too much information and should be subdivided into smaller cards.
    • You're not doing a good process of memorizing the contents once they show up.
  • New: What to do with unneeded cards.

    You have three options:

    • Suspend: It permanently stops it from showing up, until you reactivate it through the browser.
    • Bury: Just delays it until the next day.
    • Delete: It deletes it forever.

    Unless you're certain that you are no longer going to need it, suspend it.

  • New: Configure self hosted synchronization.

    Explain how to install anki-sync-server and how to configure Ankidroid and Anki. In the end I dropped this path and used Ankidroid alone with syncthing, as I didn't need to interact with the decks from the computer. Also, the ecosystem of synchronization in Anki as of 2023-11-10 is confusing: there are many servers available, not all are compatible with the clients, and Anki itself has released its own, so some of the community ones will eventually die.

Computer configuration management

  • New: Introduce configuration management.

    Configuring your devices is boring, disgusting and complex. Especially when your device dies and you need to reinstall. You usually don't have the time or energy to deal with it; you just want it to work.

    Having a system that allows you to recover from a disaster is expensive in both time and knowledge, and many people have different solutions.

    This article shows the latest step of how I'm doing it.

Game Theory

  • New: Add the evolution of trust game theory game.

    Evolution of trust

    Game theory shows us the three things we need for the evolution of trust:

    • Repeat interactions: Trust keeps a relationship going, but you need the knowledge of possible future repeat interactions before trust can evolve
    • Possible win-wins: You must be playing a non-zero-sum game, a game where it's at least possible that both players can be better off -- a win-win.
    • Low miscommunication: If the level of miscommunication is too high, trust breaks down. But when there's a little bit of miscommunication, it pays to be more forgiving



Bash snippets

  • New: Move a file.

    Use one of the following

    import os
    import shutil

    os.rename("path/to/current/", "path/to/new/destination/for/")   # may fail across filesystems
    os.replace("path/to/current/", "path/to/new/destination/for/")  # like rename, but overwrites an existing destination
    shutil.move("path/to/current/", "path/to/new/destination/for/") # works across filesystems
  • New: Get the root path of a git repository.

    git rev-parse --show-toplevel
  • New: Get epoch gmt time.

    date -u '+%s'
  • New: Check the length of an array with jq.

    echo '[{"username":"user1"},{"username":"user2"}]' | jq '. | length'
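    For comparison, the same length check from Python (an aside, assuming you already have the JSON as a string):

    ```python
    import json

    data = json.loads('[{"username":"user1"},{"username":"user2"}]')
    print(len(data))  # 2
    ```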
  • New: Exit the script if there is an error.

    set -eu
  • New: Prompt the user for data.

    read -p "Ask whatever" choice
  • New: How to deal with HostContextSwitching alertmanager alert.

    A context switch is described as the kernel suspending execution of one process on the CPU and resuming execution of some other process that had previously been suspended. A context switch is required for every interrupt and every task that the scheduler picks.

    Context switching can be due to multitasking, interrupt handling, or user/kernel mode switching. The interrupt rate will naturally go high if there is higher network or disk traffic. It also depends on the application, which may be invoking system calls every now and then.

    If the cores/CPUs are not sufficient to handle the load of threads created by the application, it will also result in context switching.

    It is not a cause of concern until performance breaks down. It is expected that the CPU will do context switching. One shouldn't look at this data in the first place, since there is a lot of statistical data that should be analyzed prior to looking into kernel activities. Verify the CPU, memory and network usage during this time.

    You can see which process is causing the issue with pidstat:

    10:15:24 AM     UID     PID     cswch/s         nvcswch/s       Command
    10:15:27 AM     0       1       162656.7        16656.7         systemd
    10:15:27 AM     0       9       165451.04       15451.04        ksoftirqd/0
    10:15:27 AM     0       10      158628.87       15828.87        rcu_sched
    10:15:27 AM     0       11      156147.47       15647.47        migration/0
    10:15:27 AM     0       17      150135.71       15035.71        ksoftirqd/1
    10:15:27 AM     0       23      129769.61       12979.61        ksoftirqd/2
    10:15:27 AM     0       29      2238.38         238.38          ksoftirqd/3
    10:15:27 AM     0       43      1753            753             khugepaged
    10:15:27 AM     0       443     1659            165             usb-storage
    10:15:27 AM     0       456     1956.12         156.12          i915/signal:0
    10:15:27 AM     0       465     29550           29550           kworker/3:1H-xfs-log/dm-3
    10:15:27 AM     0       490     164700          14700           kworker/0:1H-kblockd
    10:15:27 AM     0       506     163741.24       16741.24        kworker/1:1H-xfs-log/dm-3
    10:15:27 AM     0       594     154742          154742          dmcrypt_write/2
    10:15:27 AM     0       629     162021.65       16021.65        kworker/2:1H-kblockd
    10:15:27 AM     0       715     147852.48       14852.48        xfsaild/dm-1
    10:15:27 AM     0       886     150706.86       15706.86        irq/131-iwlwifi
    10:15:27 AM     0       966     135597.92       13597.92        xfsaild/dm-3
    10:15:27 AM     81      1037    2325.25         225.25          dbus-daemon
    10:15:27 AM     998     1052    118755.1        11755.1         polkitd
    10:15:27 AM     70      1056    158248.51       15848.51        avahi-daemon
    10:15:27 AM     0       1061    133512.12       455.12          rngd
    10:15:27 AM     0       1110    156230          16230           cupsd
    10:15:27 AM     0       1192    152298.02       1598.02         sssd_nss
    10:15:27 AM     0       1247    166132.99       16632.99        systemd-logind
    10:15:27 AM     0       1265    165311.34       16511.34        cups-browsed
    10:15:27 AM     0       1408    10556.57        1556.57         wpa_supplicant
    10:15:27 AM     0       1687    3835            3835            splunkd
    10:15:27 AM     42      1773    3728            3728            Xorg
    10:15:27 AM     42      1996    3266.67         266.67          gsd-color
    10:15:27 AM     0       3166    32036.36        3036.36         sssd_kcm
    10:15:27 AM     119349  3194    151763.64       11763.64        dbus-daemon
    10:15:27 AM     119349  3199    158306          18306           Xorg
    10:15:27 AM     119349  3242    15.28           5.8             gnome-shell
    pidstat -wt 3 10  > /tmp/pidstat-t.out

    Linux 4.18.0-80.11.2.el8_0.x86_64 (hostname)    09/08/2020  _x86_64_    (4 CPU)

    10:15:15 AM   UID      TGID       TID   cswch/s   nvcswch/s  Command
    10:15:19 AM     0         1         -   152656.7   16656.7   systemd
    10:15:19 AM     0         -         1   152656.7   16656.7   |__systemd
    10:15:19 AM     0         9         -   165451.04  15451.04  ksoftirqd/0
    10:15:19 AM     0         -         9   165451.04  15451.04  |__ksoftirqd/0
    10:15:19 AM     0        10         -   158628.87  15828.87  rcu_sched
    10:15:19 AM     0         -        10   158628.87  15828.87  |__rcu_sched
    10:15:19 AM     0        23         -   129769.61  12979.61  ksoftirqd/2
    10:15:19 AM     0         -        23   129769.61  12979.33  |__ksoftirqd/2
    10:15:19 AM     0        29         -   32424.5    2445      ksoftirqd/3
    10:15:19 AM     0         -        29   32424.5    2445      |__ksoftirqd/3
    10:15:19 AM     0        43         -   334        34        khugepaged
    10:15:19 AM     0         -        43   334        34        |__khugepaged
    10:15:19 AM     0       443         -   11465      566       usb-storage
    10:15:19 AM     0         -       443   6433       93        |__usb-storage
    10:15:19 AM     0       456         -   15.41      0.00      i915/signal:0
    10:15:19 AM     0         -       456   15.41      0.00      |__i915/signal:0
    10:15:19 AM     0       715         -   19.34      0.00      xfsaild/dm-1
    10:15:19 AM     0         -       715   19.34      0.00      |__xfsaild/dm-1
    10:15:19 AM     0       886         -   23.28      0.00      irq/131-iwlwifi
    10:15:19 AM     0         -       886   23.28      0.00      |__irq/131-iwlwifi
    10:15:19 AM     0       966         -   19.67      0.00      xfsaild/dm-3
    10:15:19 AM     0         -       966   19.67      0.00      |__xfsaild/dm-3
    10:15:19 AM    81      1037         -   6.89       0.33      dbus-daemon
    10:15:19 AM    81         -      1037   6.89       0.33      |__dbus-daemon
    10:15:19 AM     0      1038         -   11567.31   4436      NetworkManager
    10:15:19 AM     0         -      1038   1.31       0.00      |__NetworkManager
    10:15:19 AM     0         -      1088   0.33       0.00      |__gmain
    10:15:19 AM     0         -      1094   1340.66    0.00      |__gdbus
    10:15:19 AM   998      1052         -   118755.1   11755.1   polkitd
    10:15:19 AM   998         -      1052   32420.66   25545     |__polkitd
    10:15:19 AM   998         -      1132   0.66       0.00      |__gdbus

    Then, with the help of the PID which is causing the issue, one can get all the system call details:


    Let this command run for a few minutes while the load/context switch rates are high. It is safe to run this on a production system, so you could run it on a healthy system as well to provide a comparative baseline. Through strace, one can debug and troubleshoot the issue by looking at the system calls the process has made.
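    If you want to automate the check from a script, the per-process counters are also exposed in /proc (a Linux-only Python sketch; the `context_switches` helper name is mine):

    ```python
    from pathlib import Path


    def context_switches(pid="self"):
        """Read the voluntary/nonvoluntary context switch counters from /proc."""
        counters = {}
        for line in Path(f"/proc/{pid}/status").read_text().splitlines():
            if "ctxt_switches" in line:
                key, _, value = line.partition(":")
                counters[key] = int(value)
        return counters


    print(context_switches())  # counters for the current process
    ```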

  • New: Redirect stderr of all subsequent commands of a script to a file.

    {
      # ... the commands of the script ...
    } 2>&1 | tee -a $DEBUGLOG
  • New: Loop through a list of files found by find.

    For simple loops use the find -exec syntax:

    find . -name '*.txt' -exec process {} \;

    For more complex loops use a while read construct:

    find . -name "*.txt" -print0 | while read -r -d $'\0' file
    do
        …code using "$file"
    done

    The loop will execute while the find command is executing. Plus, this command will work even if a file name is returned with whitespace in it. And, you won't overflow your command line buffer.

    The -print0 will use the NULL as a file separator instead of a newline and the -d $'\0' will use NULL as the separator while reading.

    How not to do it:

    If you try to run the next snippet:

    for file in $(find . -name "*.txt")
    do
        …code using "$file"
    done

    You'll get the next shellcheck warning:

    SC2044: For loops over find output are fragile. Use find -exec or a while read loop.

    You should not do this for three reasons:

    • For the for loop to even start, the find must run to completion.
    • If a file name has any whitespace (including space, tab or newline) in it, it will be treated as two separate names.
    • Although now unlikely, you can overrun your command line buffer. Imagine if your command line buffer holds 32KB, and your for loop returns 40KB of text. That last 8KB will be dropped right off your for loop and you'll never know it.
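    If the loop ends up in a Python script anyway, pathlib avoids all three problems since no shell word-splitting is involved (a sketch; `find_txt_files` is a name of my own):

    ```python
    from pathlib import Path


    def find_txt_files(root="."):
        """Recursively list *.txt files; safe with whitespace or newlines in names."""
        return sorted(Path(root).rglob("*.txt"))


    for file in find_txt_files("."):
        print(file)
    ```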


  • New: Custom file generation.

    During the build, you may want to generate other files or download resources from the internet. You can achieve this by the setup-script build configuration:

    ```toml
    [tool.pdm.build]
    setup-script = "build.py"
    ```

    In the `build.py` script, pdm-pep517 looks for a build function and calls it with two arguments:

    * `src`: the path to the source directory
    * `dst`: the path to the distribution directory

    import os


    def build(src, dst):
        target_file = os.path.join(dst, "mypackage/myfile.txt")
        os.makedirs(os.path.dirname(target_file), exist_ok=True)
        # Write or download the file contents to target_file here.

    The generated file will be copied to the resulting wheel with the same hierarchy; you need to create the parent directories if necessary.

  • Correction: Custom file generation.

    Warning: this method only works if you install the package with pdm; if you use pip or any other package manager, the script won't be called. Thus, a more generic approach is to run the initialization steps in a your_command init step, or run the checks on each command.

  • New: Basic concepts of concurrency.

    Concurrency is best explained by an example stolen from Miguel Grinberg.

    Chess master Judit Polgár hosts a chess exhibition in which she plays multiple amateur players. She has two ways of conducting the exhibition: synchronously and asynchronously.


    Assume:

    • 24 opponents
    • Judit makes each chess move in 5 seconds
    • Opponents each take 55 seconds to make a move
    • Games average 30 pair-moves (60 moves total)

    Synchronous version: Judit plays one game at a time, never two at the same time, until the game is complete. Each game takes (55 + 5) * 30 == 1800 seconds, or 30 minutes. The entire exhibition takes 24 * 30 == 720 minutes, or 12 hours.

    Asynchronous version: Judit moves from table to table, making one move at each table. She leaves the table and lets the opponent make their next move during the wait time. One move on all 24 games takes Judit 24 * 5 == 120 seconds, or 2 minutes. The entire exhibition is now cut down to 120 * 30 == 3600 seconds, or just 1 hour.

    Async IO takes long waiting periods in which functions would otherwise be blocking and allows other functions to run during that downtime. (A function that blocks effectively forbids others from running from the time that it starts until the time that it returns.)
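    The speed-up can be sketched with asyncio at a much smaller scale (the timings, table count and move count below are scaled-down assumptions, not the numbers from the example):

    ```python
    import asyncio
    import time

    MOVE_TIME = 0.001   # Judit's move, scaled down
    WAIT_TIME = 0.011   # opponent's move, scaled down
    MOVES = 5           # pair-moves per game
    TABLES = 4


    def play_sync():
        """One game at a time: Judit blocks through every opponent move."""
        start = time.perf_counter()
        for _ in range(TABLES):
            for _ in range(MOVES):
                time.sleep(MOVE_TIME)   # Judit moves
                time.sleep(WAIT_TIME)   # blocked while the opponent thinks
        return time.perf_counter() - start


    async def play_table():
        for _ in range(MOVES):
            time.sleep(MOVE_TIME)           # Judit still blocks while she moves
            await asyncio.sleep(WAIT_TIME)  # but yields the event loop while waiting


    async def play_async():
        """All tables at once: the waits overlap."""
        start = time.perf_counter()
        await asyncio.gather(*(play_table() for _ in range(TABLES)))
        return time.perf_counter() - start


    print(f"sync:  {play_sync():.3f}s")
    print(f"async: {asyncio.run(play_async()):.3f}s")
    ```

    The synchronous version takes roughly TABLES * MOVES * (MOVE_TIME + WAIT_TIME), while in the asynchronous one the waits of all tables overlap, just as in the exhibition.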

  • New: Basic concepts.

  • New: Introduce pytelegrambotapi.

    pyTelegramBotAPI is a synchronous and asynchronous implementation of the Telegram Bot API. Install it with:

    pip install pyTelegramBotAPI
  • New: Create your bot.

    Use the /newbot command to create a new bot. @BotFather will ask you for a name and username, then generate an authentication token for your new bot.

    • The name of your bot is displayed in contact details and elsewhere.
    • The username is a short name, used in search, mentions and links. Usernames are 5-32 characters long and not case sensitive, but may only include Latin characters, numbers, and underscores. Your bot's username must end in 'bot', like tetris_bot or TetrisBot.
    • The token is a string, like 110201543:AAHdqTcvCH1vGWJxfSeofSAs0K5PALDsaw, which is required to authorize the bot and send requests to the Bot API. Keep your token secure and store it safely, it can be used by anyone to control your bot.

    To edit your bot, you have the next available commands:

    • /setname: change your bot's name.
    • /setdescription: change the bot's description (short text up to 512 characters). Users will see this text at the beginning of the conversation with the bot, titled 'What can this bot do?'.
    • /setabouttext: change the bot's about info, a shorter text up to 120 characters. Users will see this text on the bot's profile page. When they share your bot with someone, this text is sent together with the link.
    • /setuserpic: change the bot's profile picture.
    • /setcommands: change the list of commands supported by your bot. Users will see these commands as suggestions when they type / in the chat with your bot. See commands for more info.
    • /setdomain: link a website domain to your bot. See the login widget section.
    • /deletebot: delete your bot and free its username. Cannot be undone.
  • New: Synchronous TeleBot.

    import telebot

    API_TOKEN = '<api_token>'

    bot = telebot.TeleBot(API_TOKEN)

    # Handle '/help' and '/start'
    @bot.message_handler(commands=['help', 'start'])
    def send_welcome(message):
        bot.reply_to(message, """\
    Hi there, I am EchoBot.
    I am here to echo your kind words back to you. Just say anything nice and I'll say the exact same thing to you!\
    """)

    # Handle all other messages
    @bot.message_handler(func=lambda message: True)
    def echo_message(message):
        bot.reply_to(message, message.text)

    bot.infinity_polling()
  • New: Asynchronous TeleBot.

    import asyncio

    from telebot.async_telebot import AsyncTeleBot

    bot = AsyncTeleBot('TOKEN')

    # Handle '/help' and '/start'
    @bot.message_handler(commands=['help', 'start'])
    async def send_welcome(message):
        await bot.reply_to(message, """\
    Hi there, I am EchoBot.
    I am here to echo your kind words back to you. Just say anything nice and I'll say the exact same thing to you!\
    """)

    # Handle all other messages
    @bot.message_handler(func=lambda message: True)
    async def echo_message(message):
        await bot.reply_to(message, message.text)

    asyncio.run(bot.polling())


  • New: How to encrypt a file.

    gpg.encrypt_file('path/to/file', recipients)

    Where recipients is a List[str] of gpg Key IDs.

  • New: List the recipients that can decrypt a file.

    def list_recipients(self, path: Path) -> List['GPGKey']:
        """List the keys that can decrypt a file.

        Args:
            path: Path to the file to check.
        """
        keys = []
        for short_key in self.gpg.get_recipients_file(str(path)):
            keys.append(short_key)  # simplified: adapt if you need full GPGKey objects
        return keys
  • New: Use a proxy.

    http_proxy  = ""
    https_proxy = ""
    ftp_proxy   = ""

    proxies = {
      "http"  : http_proxy,
      "https" : https_proxy,
      "ftp"   : ftp_proxy
    }

    r = requests.get(url, headers=headers, proxies=proxies)
  • New: Introduce aiohttp.

    aiohttp is an asynchronous HTTP Client/Server for asyncio and Python. Think of it as the requests for asyncio.

  • New: Receive keys from a keyserver.

    import_result = gpg.recv_keys('server-name', 'keyid1', 'keyid2', ...)

Configure Docker to host the application

  • New: Troubleshoot Docker python not showing prints.

    Use CMD ["python","-u",""] instead of CMD ["python",""].
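    If you can't change the CMD, a workaround is to flush the output from the code itself (a sketch; setting the PYTHONUNBUFFERED=1 environment variable in the Dockerfile is the documented equivalent of -u):

    ```python
    # Flush this single print so it shows up in the container logs immediately
    print("visible immediately", flush=True)
    ```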

  • New: Get the difference of two lists.

    If we want to subtract the elements of one list from the other you can use:

    for x in b:
        if x in a:
            a.remove(x)
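    When order and duplicates don't matter, a shorter alternative (not in the original note) is to use sets:

    ```python
    a = [1, 2, 3, 4]
    b = [2, 4]

    # Elements of a that are not in b
    difference = sorted(set(a) - set(b))
    print(difference)  # [1, 3]
    ```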
  • New: Override entrypoint.

    sudo docker run -it --entrypoint /bin/bash [docker_image]
  • New: Disable ipv6.

    sysctl net.ipv6.conf.all.disable_ipv6=1
    sysctl net.ipv6.conf.default.disable_ipv6=1
  • New: Remove the apt cache after installing a package.

    RUN apt-get update && apt-get install -y \
      python3 \
      python3-pip \
      && rm -rf /var/lib/apt/lists/*
  • New: Add the contents of a directory to the docker.

    ADD ./path/to/directory /path/to/destination
  • New: Add healthcheck to your dockers.

    Health checks allow a container to expose its workload’s availability. This stands apart from whether the container is running. If your database goes down, your API server won’t be able to handle requests, even though its Docker container is still running.

    This makes for unhelpful experiences during troubleshooting. A simple docker ps would report the container as available. Adding a health check extends the docker ps output to include the container’s true state.

    You configure container health checks in your Dockerfile. This accepts a command which the Docker daemon will execute every 30 seconds. Docker uses the command’s exit code to determine your container’s healthiness:

    • 0: The container is healthy and working normally.
    • 1: The container is unhealthy; the workload may not be functioning.

    Healthiness isn’t checked straightaway when containers are created. The status will show as starting before the first check runs. This gives the container time to execute any startup tasks. A container with a passing health check will show as healthy; an unhealthy container displays unhealthy.
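In a Dockerfile the check described above might look like the following sketch (the /health endpoint and port are assumptions borrowed from the compose example in this entry):

```dockerfile
# Run the check every 30s, fail it after 5s, mark the container
# unhealthy after 3 consecutive failures
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
  CMD curl -f http://localhost:8096/health || exit 1
```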

    In docker-compose you can write the healthchecks like the next snippet:

    version: '3.4'

    services:
      jellyfin:
        image: linuxserver/jellyfin:latest
        container_name: jellyfin
        restart: unless-stopped
        healthcheck:
          test: curl http://localhost:8096/health || exit 1
          interval: 10s
          retries: 5
          start_period: 5s
          timeout: 10s
  • New: List the dockers of a registry.

    List all repositories (effectively images):

    $: curl -X GET https://myregistry:5000/v2/_catalog
    > {"repositories":["redis","ubuntu"]}

    List all tags for a repository:

    $: curl -X GET https://myregistry:5000/v2/ubuntu/tags/list
    > {"name":"ubuntu","tags":["14.04"]}

    If the registry needs authentication you have to specify username and password in the curl command

    curl -X GET -u <user>:<pass> https://myregistry:5000/v2/_catalog
    curl -X GET -u <user>:<pass> https://myregistry:5000/v2/ubuntu/tags/list
  • New: Searching by attribute and value.

    soup = BeautifulSoup(html, "html.parser")
    results = soup.findAll("td", {"valign": "top"})
  • New: Install a specific version of Docker.

    Follow these instructions

    If that doesn't install the version of docker-compose that you want use the next snippet:

    DESTINATION=/usr/local/bin/docker-compose
    VERSION=$(curl --silent https://api.github.com/repos/docker/compose/releases/latest | grep -Po '"tag_name": "\K.*\d')
    sudo curl -L https://github.com/docker/compose/releases/download/${VERSION}/docker-compose-$(uname -s)-$(uname -m) -o $DESTINATION
    sudo chmod 755 $DESTINATION

    If you don't want the latest version set the VERSION variable manually.

  • New: Dockerize a PDM application.

    It is possible to use PDM in a multi-stage Dockerfile to first install the project and dependencies into __pypackages__ and then copy this folder into the final stage, adding it to PYTHONPATH.

    FROM python:3.11-slim-bookworm AS builder
    RUN pip install pdm
    COPY pyproject.toml pdm.lock /project/
    COPY src/ /project/src
    WORKDIR /project
    RUN mkdir __pypackages__ && pdm sync --prod --no-editable

    FROM python:3.11-slim-bookworm
    ENV PYTHONPATH=/project/pkgs
    COPY --from=builder /project/__pypackages__/3.11/lib /project/pkgs
    COPY --from=builder /project/__pypackages__/3.11/bin/* /bin/
    CMD ["python", "-m", "project"]


  • New: Split stdout from stderr in tests.

    By default the runner is configured to mix stdout and stderr; if you wish to tell both sources apart use:

    def test(runner: CliRunner):
        runner.mix_stderr = False
  • New: File system isolation.

    You may need to isolate the environment variables if your application reads its configuration from them. To do that override the runner fixture:

    @pytest.fixture(name='runner')
    def fixture_runner() -> CliRunner:
        """Configure the Click cli test runner."""
        return CliRunner(
            env={
                'PASSWORD_STORE_DIR': '',
                'GNUPGHOME': '',
                'PASSWORD_AUTH_DIR': '',
            }
        )

    If you define the fixture in conftest.py you may need to use another name than runner, otherwise it may be skipped, for example cli_runner.


  • New: Import a table from another database.

    Say you have two SQLite databases, database1 with a table t1 and database2 with a table t2, and you want to import table t2 from database2 into database1. Open database1 with litecli:

    litecli database1

    Attach the other database with the command:

    ATTACH 'database2file' AS db2;

    Then create the table t2, and copy the data over with:

    INSERT INTO t2 SELECT * FROM db2.t2;
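The same import can be scripted with Python's standard sqlite3 module; a self-contained sketch using throwaway temporary files:

```python
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
db1 = os.path.join(tmp, 'database1')
db2 = os.path.join(tmp, 'database2')

# database2 holds the table we want to import
con = sqlite3.connect(db2)
con.execute('CREATE TABLE t2 (id INTEGER, name TEXT)')
con.executemany('INSERT INTO t2 VALUES (?, ?)', [(1, 'foo'), (2, 'bar')])
con.commit()
con.close()

# Attach database2 inside database1 and copy the rows over
con = sqlite3.connect(db1)
con.execute('ATTACH ? AS db2', (db2,))
con.execute('CREATE TABLE t2 (id INTEGER, name TEXT)')
con.execute('INSERT INTO t2 SELECT * FROM db2.t2')
con.commit()

rows = con.execute('SELECT * FROM t2 ORDER BY id').fetchall()
print(rows)  # [(1, 'foo'), (2, 'bar')]
```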


  • New: Add basic operations.

    Selecting series:

    • Select latest sample for series with a given metric name:
    • Select 5-minute range of samples for series with a given metric name:
    • Only series with given label values:
    • Complex label matchers (=: equality, !=: non-equality, =~: regex match, !~: negative regex match):
    • Select data from one day ago and shift it to the current time:
    process_resident_memory_bytes offset 1d

    Rates of increase for counters:

    • Per-second rate of increase, averaged over last 5 minutes:
    • Per-second rate of increase, calculated over last two samples in a 1-minute time window:
    • Absolute increase over last hour:

    Aggregating over multiple series:

    • Sum over all series:
    • Preserve the instance and job label dimensions:
    sum by(job, instance) (node_filesystem_size_bytes)
    • Aggregate away the instance and job label dimensions:
    sum without(instance, job) (node_filesystem_size_bytes)

    Available aggregation operators: sum(), min(), max(), avg(), stddev(), stdvar(), count(), count_values(), group(), bottomk(), topk(), quantile().


    • Get the Unix time in seconds at each resolution step:
    time()
    • Get the age of the last successful batch job run:
    time() - demo_batch_last_success_timestamp_seconds
    • Find batch jobs which haven't succeeded in an hour:
    time() - demo_batch_last_success_timestamp_seconds > 3600
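Several list items above lost their example queries when this digest was rendered; the usual cheat-sheet patterns look like this (metric names such as node_cpu_seconds_total and demo_api_request_duration_seconds_count are placeholders):

```promql
node_cpu_seconds_total                                 # latest sample
node_cpu_seconds_total[5m]                             # 5-minute range of samples
node_cpu_seconds_total{cpu="0",mode="idle"}            # given label values
node_cpu_seconds_total{cpu!="0",mode=~"user|system"}   # complex label matchers
rate(demo_api_request_duration_seconds_count[5m])      # per-second rate, 5m average
irate(demo_api_request_duration_seconds_count[1m])     # rate over last two samples
increase(demo_api_request_duration_seconds_count[1h])  # absolute increase, last hour
sum(node_filesystem_size_bytes)                        # sum over all series
```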
  • New: Run operation only on the elements that match a condition.

    Imagine we want to run the zfs_dataset_used_bytes - zfs_dataset_used_by_dataset_bytes operation only on the elements that match zfs_dataset_used_by_dataset_bytes > 200e3. You can do this with and:

    zfs_dataset_used_bytes - zfs_dataset_used_by_dataset_bytes and zfs_dataset_used_by_dataset_bytes > 200e3
  • New: Subtracting two metrics.

    To run binary operators between vectors you need them to match: the operation is only applied to the elements that have the same labels. Sometimes you want to operate on metrics that don't share the same labels. In those cases you can use the on operator. Imagine that we want to subtract the next vectors:

    zfs_dataset_used_bytes{type='filesystem'}

    sum by (hostname,filesystem) (zfs_dataset_used_bytes{type='snapshot'})

    That only have in common the labels hostname and filesystem.

    You can use the next expression then:

    zfs_dataset_used_bytes{type='filesystem'} - on (hostname, filesystem) sum by (hostname,filesystem) (zfs_dataset_used_bytes{type='snapshot'})

    To learn more about vector matching read this article.

  • New: Ranges only allowed for vector selectors.

    You may need to specify a subquery range such as [1w:1d].
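For example, to aggregate a rate over a week at a one-day resolution you would wrap it in a subquery (the metric name is a placeholder):

```promql
max_over_time(rate(demo_requests_total[5m])[1w:1d])
```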


  • New: Introduce LogQL.

    LogQL is Grafana Loki’s PromQL-inspired query language. Queries act as if they are a distributed grep to aggregate log sources. LogQL uses labels and operators for filtering.

    There are two types of LogQL queries:

    • Log queries: Return the contents of log lines.
    • Metric queries: Extend log queries to calculate values based on query results.
  • New: Apply a pattern to the value of a label.

    Some logs are sent in json and then one of their fields can contain other structured data. You may want to use that structured data to further filter the logs.

    {app="ingress-nginx"} | json | line_format `{{.log}}` | pattern `<_> - - <_> "<method> <_> <_>" <status> <_> <_> "<_>" <_>` | method != `GET`
    • {app="ingress-nginx"}: Show only the logs of the ingress-nginx.
    • | json: Interpret the line as json.
    • | line_format `{{.log}}` | pattern `<_> - - <_> "<method> <_> <_>" <status> <_> <_> "<_>" <_>`: interpret the log json field of the trace with the selected pattern.
    • | method != `GET`: Filter the lines using a key extracted by the pattern.
  • New: Count the unique values of a label.

    Sometimes you want to alert on the values of a log. For example if you want to make sure that you're receiving the logs from more than 20 hosts (otherwise something is wrong). Assuming that your logs attach a host label you can run

    sum(count by(host) (rate({host=~".+"} [24h])) > bool 0)

    This query will:

    • {host=~".+"}: Fetch all log lines that contain the label host.
    • count by(host) (rate({host=~".+"} [24h])): Calculate the number of entries per host in the last 24h.
    • count by(host) (rate({host=~".+"} [24h])) > bool 0: Convert to 1 all the vector elements that have more than one message.
    • sum(count by(host) (rate({host=~".+"} [24h])) > bool 0): Sum all the vector elements to get the number of hosts that have more than one message.

    The journald promtail parser is known to fail between upgrades, so it's also useful to create an alert that makes sure all your hosts are sending their traces. You can do it with: sum(count by(host) (rate({job="systemd-journal"} [24h])) > bool 0)


  • New: Stop pytest right at the start if condition not met.

    Use the pytest_configure initialization hook.

    In your global conftest.py:

    import requests
    import pytest


    def pytest_configure(config):
        try:
            requests.get('http://localhost:9200')
        except requests.exceptions.ConnectionError:
            msg = 'FATAL. Connection refused: ES does not appear to be installed as a service (localhost port 9200)'
            pytest.exit(msg)

    • Note that the single argument of pytest_configure has to be named config.
    • Using pytest.exit makes it look nicer.
  • New: Introduce pytest-xprocess.

    pytest-xprocess is a pytest plugin for managing external processes across test runs.


    pip install pytest-xprocess


    Define your process fixture in conftest.py:

    import pytest
    from xprocess import ProcessStarter


    @pytest.fixture
    def myserver(xprocess):
        class Starter(ProcessStarter):
            # startup pattern
            pattern = "[Ss]erver has started!"

            # command to start process
            args = ['command', 'arg1', 'arg2']

        # ensure process is running and return its logfile
        logfile = xprocess.ensure("myserver", Starter)

        conn = # create a connection or url/port info to the server
        yield conn

        # clean up whole process tree afterwards
        xprocess.getinfo("myserver").terminate()
    Now you can use this fixture in any test functions where myserver needs to be up and xprocess will take care of it for you.

    Matching process output with pattern:

    In order to detect that your process is ready to answer queries, pytest-xprocess allows the user to provide a string pattern by setting the class variable pattern in the Starter class. pattern will be waited for in the process logfile for a maximum time defined by timeout before timing out in case the provided pattern is not matched.

    It’s important to note that pattern is a regular expression and will be matched using python's re.search.

    Controlling Startup Wait Time with timeout:

    Some processes naturally take longer to start than others. By default, pytest-xprocess will wait for a maximum of 120 seconds for a given process to start before raising a TimeoutError. Changing this value may be useful, for example, when the user knows that a given process would never take longer than a known amount of time to start under normal circumstances, so if it goes over this known upper bound, something is wrong and the waiting process must be interrupted. The maximum wait time can be controlled through the class variable timeout.

    @pytest.fixture
    def myserver(xprocess):
        class Starter(ProcessStarter):
            # will wait for 10 seconds before timing out
            timeout = 10

    Passing command line arguments to your process with args:

    In order to start a process, pytest-xprocess must be given a command to be passed into the subprocess.Popen constructor. Any arguments passed to the process command can also be passed using args. As an example, if I usually use the following command to start a given process:

    $> myproc -name "bacon" -cores 4 <destdir>

    That would look like:

    args = ['myproc', '-name', '"bacon"', '-cores', 4, '<destdir>']

    when using args in pytest-xprocess to start the same process.

    def myserver(xprocess):
        class Starter(ProcessStarter):
            # will pass "$> myproc -name "bacon" -cores 4 <destdir>"  to the
            # subprocess.Popen constructor so the process can be started with
            # the given arguments
            args = ['myproc', '-name', '"bacon"', '-cores', 4, '<destdir>']
            # ...

Python Snippets

  • New: Subtract two paths.

    It can also be framed as how to get the relative path between two absolute paths:

    >>> from pathlib import Path
    >>> p = Path('/home/lyz/')
    >>> h = Path('/home/')
    >>> p.relative_to(h)
    PosixPath('lyz')
  • New: Copy files from a python package.

    pkgdir = sys.modules['<mypkg>'].__path__[0]
    fullpath = os.path.join(pkgdir, <myfile>)
    shutil.copy(fullpath, os.getcwd())
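A runnable check of the same pattern, using the standard library's email package as a stand-in for <mypkg> and a temporary directory instead of os.getcwd():

```python
import os
import shutil
import sys
import tempfile

import email  # any installed package works; email is just a stand-in

pkgdir = sys.modules['email'].__path__[0]
fullpath = os.path.join(pkgdir, '__init__.py')

# copy into a throwaway directory so the example doesn't litter the cwd
destdir = tempfile.mkdtemp()
shutil.copy(fullpath, destdir)
print(os.path.exists(os.path.join(destdir, '__init__.py')))  # True
```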
  • New: Sort the returned paths of glob.

    glob order is arbitrary, but you can sort them yourself.

    If you want them sorted by name:

    import glob
    sorted(glob.glob('*.png'))
    sorted by modification time:

    import os
    sorted(glob.glob('*.png'), key=os.path.getmtime)

    sorted by size:

    import os
    sorted(glob.glob('*.png'), key=os.path.getsize)
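A self-contained demonstration, with file names and modification times fabricated so that name order and mtime order differ:

```python
import glob
import os
import tempfile

tmp = tempfile.mkdtemp()
# Create files whose name order differs from their modification-time order
for i, name in enumerate(('b.png', 'a.png', 'c.png')):
    path = os.path.join(tmp, name)
    open(path, 'w').close()
    os.utime(path, (1000 + i, 1000 + i))  # set access and modification times explicitly

by_name = [os.path.basename(p) for p in sorted(glob.glob(os.path.join(tmp, '*.png')))]
by_mtime = [
    os.path.basename(p)
    for p in sorted(glob.glob(os.path.join(tmp, '*.png')), key=os.path.getmtime)
]

print(by_name)   # ['a.png', 'b.png', 'c.png']
print(by_mtime)  # ['b.png', 'a.png', 'c.png']
```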
  • New: Read file with Pathlib.

    file_ = Path('/to/some/file')
    contents = file_.read_text()
  • New: Get changed time of a file.

    import os
    os.path.getctime('/path/to/file')
  • New: Configure the logging of a program to look nice.

    def load_logger(verbose: bool = False) -> None:  # pragma no cover
        """Configure the Logging logger.

        Args:
            verbose: Set the logging level to Debug.
        """
        logging.addLevelName(logging.INFO, "\033[36mINFO\033[0m")
        logging.addLevelName(logging.ERROR, "\033[31mERROR\033[0m")
        logging.addLevelName(logging.DEBUG, "\033[32mDEBUG\033[0m")
        logging.addLevelName(logging.WARNING, "\033[33mWARNING\033[0m")

        if verbose:
            logging.basicConfig(
                stream=sys.stderr,
                level=logging.DEBUG,
                format="%(asctime)s %(levelname)s %(name)s: %(message)s",
                datefmt="%Y-%m-%d %H:%M:%S",
            )
            telebot.logger.setLevel(logging.DEBUG)  # Outputs debug messages to console.
        else:
            logging.basicConfig(
                stream=sys.stderr, level=logging.INFO, format="%(levelname)s: %(message)s"
            )
  • New: Get the modified time of a file with Pathlib.

    file_ = Path('/to/some/file')
    file_.stat().st_mtime

    You can also access:

    • Created time: with st_ctime
    • Accessed time: with st_atime

    They are timestamps, so if you want to compare them with a datetime object use the timestamp method:

    assert datetime.now().timestamp() - file_.stat().st_mtime < 60
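A runnable version of that assertion against a freshly created temporary file:

```python
import tempfile
from datetime import datetime
from pathlib import Path

# A file created just now has a modification time within the last minute
file_ = Path(tempfile.mkstemp()[1])
age_seconds = datetime.now().timestamp() - file_.stat().st_mtime
print(age_seconds < 60)  # True
```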
  • New: Show the date in the logging module traces.

    To display the date and time of an event, you would place %(asctime)s in your format string:

    import logging
    logging.basicConfig(format='%(asctime)s %(message)s')
    logging.warning('is when this event was logged.')
  • New: Remove html url characters.

    To transform a URL-encoded string into a normal string, for example replacing %20 with a space, use:

    >>> from urllib.parse import unquote
    >>> print(unquote("%CE%B1%CE%BB%20"))
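A round trip with quote, the inverse operation, for illustration:

```python
from urllib.parse import quote, unquote

encoded = quote('hello world')
print(encoded)           # hello%20world
print(unquote(encoded))  # hello world
```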



  • Correction: Correct the watch directive.

    watch is a list of directories to watch while serving the documentation. So if any file is changed in those directories, the documentation is rebuilt.


Python Telegram

  • New: Analyze the different python libraries to interact with telegram.

    There are two ways to interact with Telegram through python:

    • Client libraries
    • Bot libraries

    Client libraries:

    Client libraries use your account to interact with Telegram itself through a developer API token.

    The most popular to use is Telethon.

    Bot libraries:

    Telegram lists many libraries to interact with the bot API, the most interesting are:

    If there comes a moment when we have to create the messages ourselves, telegram-text may be an interesting library to check.



    python-telegram-bot:

    Pros:

    • Popular: 23k stars, 4.9k forks
    • Maintained: last commit 3 days ago
    • They have a developers community to get help in this telegram group
    • I like how they try to minimize third party dependencies, and how you can install the complements if you need them
    • Built on top of asyncio
    • Nice docs
    • Fully supports the Telegram bot API
    • Has many examples


    Cons:

    • Interface is a little verbose and complicated at a first look
    • Only to be run in a single thread (not a problem)


    Their documentation includes:

    • Package documentation, the technical reference for python-telegram-bot. It contains descriptions of all available classes, modules, methods and arguments as well as the changelog.
    • Wiki, home to a number of more elaborate introductions to the different features of python-telegram-bot and other useful resources that go beyond the technical documentation.
    • Examples section, containing several examples that showcase the different features of both the Bot API and python-telegram-bot.
    • Source code.




    pyTelegramBotAPI:

    Cons:

    • Uses lambdas inside the decorators, I don't know why it does it.
    • The docs are not as thorough as python-telegram-bot's.




    aiogram:

    Pros:

    • Popular: 3.8k stars, 717 forks
    • Maintained: last commit 4 days ago
    • Async support
    • They have a developers community to get help in this telegram group
    • Has type hints
    • Cleaner interface than python-telegram-bot
    • Fully supports the Telegram bot API
    • Has examples


    Cons:

    • Less popular than python-telegram-bot
    • Docs are written at a developer level, which makes the initial barrier to understanding how to use it difficult.



    Even if python-telegram-bot is the most popular and has the best docs, I prefer one of the others due to their easier interface. aiogram's documentation is kind of crap, and as it's the first time I make a bot I'd rather have somewhere good to look at.

    So I'd say to go first with pyTelegramBotAPI and if it doesn't go well, fall back to python-telegram-bot.


  • New: Autocomplete answers.

    If you want autocomplete with fuzzy finding use:

    import questionary
    from prompt_toolkit.completion import FuzzyWordCompleter

    # choices is assumed to hold the list of possible answers
    questionary.autocomplete(
        "Save to (q to cancel): ",
        choices=choices,
        completer=FuzzyWordCompleter(choices),
    ).ask()


  • New: Tree console view.

    Rich has a Tree class which can generate a tree view in the terminal. A tree view is a great way of presenting the contents of a filesystem or any other hierarchical data. Each branch of the tree can have a label which may be text or any other Rich renderable.

    The following code creates and prints a tree with a simple text label:

    from rich.tree import Tree
    from rich import print
    tree = Tree("Rich Tree")
    print(tree)

    With only a single Tree instance this will output nothing more than the text “Rich Tree”. Things get more interesting when we call add() to add more branches to the Tree. The following code adds two more branches:

    tree.add("foo")
    tree.add("bar")
    print(tree)

    The tree will now have two branches connected to the original tree with guide lines.

    When you call add() a new Tree instance is returned. You can use this instance to add more branches to, and build up a more complex tree. Let’s add a few more levels to the tree:

    baz_tree = tree.add("baz")
    baz_tree.add("[red]Red").add("[green]Green").add("[blue]Blue")
    print(tree)

  • New: Solve element isn't clickable in headless mode.

    There are many things you can try to fix this issue, the first being to configure the driver to use a maximized window. Assuming you're using the undetected chromedriver:

    import undetected_chromedriver.v2 as uc

    options = uc.ChromeOptions()
    options.add_argument('--start-maximized')
    driver = uc.Chrome(options=options)

    If that doesn't solve the issue use the next function:

    def click(driver: uc.Chrome, xpath: str, mode: Optional[str] = None) -> None:
        """Click the element marked by the XPATH.

        Args:
            driver: Object to interact with selenium.
            xpath: Identifier of the element to click.
            mode: Type of click. It needs to be one of [None, position, wait]

        The different ways to click are:

        * None: The normal click of the driver.
        * wait: Wait until the element is clickable and then click it.
        * position: Deduce the position of the element and then click it with a
            javascript script.
        """
        if mode is None:
            driver.find_element(By.XPATH, xpath).click()
        elif mode == 'wait':
            WebDriverWait(driver, 20).until(
                EC.element_to_be_clickable((By.XPATH, xpath))
            ).click()
        elif mode == 'position':
            element = driver.find_element(By.XPATH, xpath)
            driver.execute_script("arguments[0].click();", element)


  • New: Passing environmental variables to commands.

    The _env special kwarg allows you to pass a dictionary of environment variables and their corresponding values:

    import sh
    sh.google_chrome(_env={"SOCKS_SERVER": "localhost:1234"})

    _env replaces your process’s environment completely. Only the key-value pairs in _env will be used for its environment. If you want to add new environment variables for a process in addition to your existing environment, try something like this:

    import os
    import sh
    new_env = os.environ.copy()
    new_env["SOCKS_SERVER"] = "localhost:1234"

    sh.google_chrome(_env=new_env)
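For comparison, the same extend-the-environment pattern with the standard library's subprocess module:

```python
import os
import subprocess
import sys

new_env = os.environ.copy()
new_env['SOCKS_SERVER'] = 'localhost:1234'

# Run a child interpreter that prints the variable, using the extended environment
result = subprocess.run(
    [sys.executable, '-c', "import os; print(os.environ['SOCKS_SERVER'])"],
    env=new_env,
    capture_output=True,
    text=True,
)
print(result.stdout.strip())  # localhost:1234
```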
  • New: Use commands that return a SyntaxError.

    pass is a reserved python word so sh fails when calling the password store command pass.

    pass_command = sh.Command('pass')
    pass_command('show', 'new_file')


  • New: Print to stderr.

    You can print to "standard error" with a Rich Console(stderr=True)

    from rich.console import Console
    err_console = Console(stderr=True)
    err_console.print("error message")

Generic Coding Practices

How to code

  • New: Personal evolution on how I code.

    Over the years I've tried different ways of developing my code:

    • Mindless coding: write code as you need to make it work, with no tests, documentation or any quality measure.
    • TDD.
    • Try to abstract everything to minimize the duplication of code between projects.

    Each has its advantages and disadvantages. After trying them all, and given that right now I only have short spikes of energy and time to invest in coding, my plan is to:

    • Make the minimum effort to design the minimum program able to solve the problem at hand. This design will be represented in an orgmode task.
    • Write the minimum code to make it work without thinking of tests or generalization, but with the domain driven design concepts so the code remains flexible and maintainable.
    • Once it's working see if I have time to improve it:
    • Create the tests to cover the critical functionality (no more 100% coverage).
    • If I need to make a package or the program evolves into something complex I'd use this scaffold template.

    Once the spike is over I'll wait for a new spike to come either because I have time or because something breaks and I need to fix it.


  • New: Set the upstream remote by default.

    git config --global --add push.default current
    git config --global --add push.autoSetupRemote true
  • New: Remove tags.

    To delete a tag you can run:

    git tag -d {{tag_name}}

    To remove them remotely do

    git push --delete origin {{ tag_name }}


Infrastructure as Code

Ansible Snippets

  • New: Introduce Forgejo.

    Forgejo is a self-hosted lightweight software forge. Easy to install and low maintenance, it just does the job. The awful name comes from forĝejo, the Esperanto word for forge. I do kind of like the concept of a forge for the repositories though.

    Brought to you by an inclusive community under the umbrella of Codeberg e.V., a democratic non-profit organization, Forgejo can be trusted to be exclusively Free Software. It is a "soft" fork of Gitea with a focus on scaling, federation and privacy.

    In October 2022 the domains and trademark of Gitea were transferred to a for-profit company without knowledge or approval of the community. Despite writing an open letter, the takeover was later confirmed. The goal of Forgejo is to continue developing the code with a healthy democratic governance.

    On the 15th of December of 2022 the project was born with these major objectives:

    • The community is in control, and ensures we develop to address community needs.
    • We will help liberate software development from the shackles of proprietary tools.

    One of the approaches to achieve the last point is pushing for Forgejo federation, a much needed feature in the git web application ecosystem.

    On the 29th of December of 2022 they published the first stable release, and they have shipped several security releases since.

    Whichever you choose, the good thing is that as long as it remains a soft fork, migrating between the two pieces of software should be straightforward.

    Forgejo outshines Gitea in:

    • Being built up by the people for the people. The project may die but it's not likely it will follow Gitea's path.
    • They are transparent regarding the governance of the project, which is shaped through open community discussions.
    • It's a political project that fights for the people's rights, for example through federation and freely incorporating the new additions of Gitea
    • They'll eventually have a better license
    • They get all the features and fixes of Gitea plus the contributions of the developers of the community that run out of Gitea.

    Gitea on the other hand has the next advantages:

    • It's a more stable project, it's been alive for much more time and now has the backing of a company trying to make a profit out of it. Forgejo's community and structure is still evolving to a stable state though, although it looks promising!
    • Quicker releases. As Forgejo needs to review and incorporate Gitea's contributions, it takes longer to do a release.

    Being a soft fork has its disadvantages too, for example deciding where to open the issues and pull requests; they haven't yet decided their policy around this topic.

  • New: Authorize an SSH key.

    - name: Authorize the sender ssh key
      authorized_key:
        user: syncoid
        state: present
        key: "{{ syncoid_receive_ssh_key }}"
  • New: Create a user.

    The following snippet creates a user with password login disabled.

    - name: Create the syncoid user
      user:
        name: syncoid
        state: present
        password: !
        shell: /usr/sbin/nologin

    If you don't set a password any user can do su your_user. To set a random password use the next snippet:

    - name: Create the syncoid user
      user:
        name: syncoid
        state: present
        password: "{{ lookup('password', '/dev/null', length=50, encrypt='sha512_crypt') }}"
        shell: /bin/bash

    This won't pass the idempotence tests as it doesn't save the password anywhere (/dev/null) on the controller machine.

  • New: Create an ssh key.

    - name: Create .ssh directory
      become: true
      file:
        path: /root/.ssh
        state: directory
        mode: 700

    - name: Create the SSH key to directory
      become: true
      openssh_keypair:
        path: /root/.ssh/id_ed25519
        type: ed25519
      register: ssh

    - name: Show public key
      debug:
        var: ssh.public_key
  • New: Skip ansible-lint for some tasks.

    - name: Modify permissions
      command: >
        chmod -R g-w /home/user
      tags:
        - skip_ansible_lint
      sudo: yes
  • New: Start and enable a systemd service.

    - name: Start the service
      become: true
      systemd:
        name: zfs_exporter
        enabled: true
        daemon_reload: true
        state: started
  • New: Download and decompress a tar.gz.

    - name: Unarchive a file that needs to be downloaded (added in 2.0)
      unarchive:
        src: https://example.com/example.tar.gz  # placeholder url
        dest: /usr/local/bin
        remote_src: yes

    If you want to only extract a file you can use the include arg:

    - name: Download the zfs exporter
      become: true
      unarchive:
        src: {{ zfs_exporter_version }}/zfs_exporter-{{ zfs_exporter_version }}.linux-amd64.tar.gz
        dest: /usr/local/bin
        include: zfs_exporter
        remote_src: yes
        mode: 0755

    But that snippet sometimes fail, you can alternatively download it locally and copy it:

    - name: Test if zfs_exporter binary exists
      stat:
        path: /usr/local/bin/zfs_exporter
      register: zfs_exporter_binary

    - name: Install the zfs exporter
      block:
        - name: Download the zfs exporter
          delegate_to: localhost
          unarchive:
            src: {{ zfs_exporter_version }}/zfs_exporter-{{ zfs_exporter_version }}.linux-amd64.tar.gz
            dest: /tmp/
            remote_src: yes

        - name: Upload the zfs exporter to the server
          become: true
          copy:
            src: /tmp/zfs_exporter-{{ zfs_exporter_version }}.linux-amd64/zfs_exporter
            dest: /usr/local/bin
            mode: 0755
      when: not zfs_exporter_binary.stat.exists
  • New: Run command on a working directory.

    - name: Change the working directory to somedir/ and run the command as db_owner
      ansible.builtin.command: /usr/bin/ db_user db_name
      become: yes
      become_user: db_owner
      args:
        chdir: somedir/
        creates: /path/to/database
  • New: Run handlers in the middle of the tasks file.

    If you need handlers to run before the end of the play, add a task to flush them using the meta module, which executes Ansible actions:

      - name: Some tasks go here ...
      - name: Flush handlers
        meta: flush_handlers
      - name: Some other tasks ...

    The meta: flush_handlers task triggers any handlers that have been notified at that point in the play.

    Once handlers are executed, either automatically after each mentioned section or manually by the flush_handlers meta task, they can be notified and run again in later sections of the play.

  • New: Run command idempotently.

    - name: Register the runner in gitea
      become: true
      command: act_runner register --config config.yaml --no-interactive --instance {{ gitea_url }} --token {{ gitea_docker_runner_token }}
      args:
        creates: /var/lib/gitea_docker_runner/.runner
  • New: Get the correct architecture string.

    If you have an amd64 host you'll get x86_64, but sometimes you need the amd64 string. On those cases you can use the next snippet:

    - name: Download the act runner binary
      become: True
      vars:
        deb_architecture:
          aarch64: arm64
          x86_64: amd64
      get_url:
        url:{{ deb_architecture[ansible_architecture] }}
        dest: /usr/bin/act_runner
        mode: '0755'
  • New: Check the instances that are going to be affected by playbook run.

    Useful to list the instances of a dynamic inventory

    ansible-inventory -i aws_ec2.yaml --list
  • New: Check if variable is defined or empty.

    In Ansible playbooks, it is often a good practice to test if a variable exists and what is its value.

    Particularly, this helps to avoid different “VARIABLE IS NOT DEFINED” errors in Ansible playbooks.

    In this context there are several useful tests that you can apply using Jinja2 filters in Ansible.

  • New: Check if Ansible variable is defined (exists).

    - shell: echo "The variable 'foo' is defined: '{{ foo }}'"
      when: foo is defined
    - fail: msg="The variable 'bar' is not defined"
      when: bar is undefined
  • New: Check if Ansible variable is empty.

    - fail: msg="The variable 'bar' is empty"
      when: bar|length == 0
    - shell: echo "The variable 'foo' is not empty: '{{ foo }}'"
      when: foo|length > 0
  • New: Check if Ansible variable is defined and not empty.

    - shell: echo "The variable 'foo' is defined and not empty"
      when: (foo is defined) and (foo|length > 0)
    - fail: msg="The variable 'bar' is not defined or empty"
      when: (bar is not defined) or (bar|length == 0)
  • New: Download a file.

    - name: Download foo.conf
      get_url:
        url: http://example.com/path/foo.conf  # placeholder url
        dest: /etc/foo.conf
        mode: '0440'
  • New: Ansible condition that uses a regexp.

    - name: Check if an instance name or hostname matches a regex pattern
      debug:
        msg: "not a molecule instance"
      when: inventory_hostname is not match('molecule-.*')
  • New: Ansible-lint doesn't find requirements.

    It may be because you're using requirements.yaml instead of requirements.yml. Create a temporary link from one file to the other, run the command and then remove the link.

    It will work from then on even if you remove the link. ¯\(°_o)/¯

  • New: Run task only once.

    Add run_once: true on the task definition:

    - name: Do a thing on the first host in a group.
      debug:
        msg: "Yay only prints once"
      run_once: true
  • New: Ansible add a sleep.

    - name: Pause for 5 minutes to build app cache
      pause:
        minutes: 5
  • New: Ansible lint skip some rules.

    Add a .ansible-lint-ignore file with a line per rule to ignore with the syntax path/to/file rule_to_ignore.
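    For example, a hypothetical ignore file (the paths and rules below are made up) could look like this:

    ```
    tasks/main.yml yaml[line-length]
    playbooks/deploy.yml name[casing]
    ```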


  • New: Introduce chezmoi.

    Chezmoi stores the desired state of your dotfiles in the directory ~/.local/share/chezmoi. When you run chezmoi apply, chezmoi calculates the desired contents for each of your dotfiles and then makes the minimum changes required to make your dotfiles match your desired state.

    What I like:

    What I don't like:

    In the article you can also find:

  • Correction: Update the project url of helm-secrets.


  • New: Disable the regular login, use only Oauth.

    You need to add a file inside your custom directory. The file is too big to add in this digest, please access the article to get it.

  • New: Configure it with terraform.

    Gitea can be configured through terraform too. There is an official provider that doesn't work, there's a fork that does though. Sadly it doesn't yet support configuring Oauth Authentication sources. Be careful gitea_oauth2_app looks to be the right resource to do that, but instead it configures Gitea to be the Oauth provider, not a consumer.

    In the article you can find how to configure and use it to:

  • New: Create an admin user through the command line.

    gitea --config /etc/gitea/app.ini admin user create --admin --username user_name --password password --email email

    Or you can change the admin's password:

    gitea --config /etc/gitea/app.ini admin user change-password -u username -p password
  • New: Introduce Getting things done.

    First summary of David Allen's book Getting things done. It includes:

  • New: Gitea actions overview.

    We've been using Drone as our CI runner for some years now, as Gitea didn't have a native runner. On Mar 20, 2023, however, Gitea released version 1.19.0, which promoted Gitea Actions, a built-in CI system similar to GitHub Actions, to stable. With Gitea Actions, you can reuse your familiar workflows and GitHub Actions in your self-hosted Gitea instance. While it is not currently fully compatible with GitHub Actions, they intend to become as compatible as possible in future versions. The typical procedure is as follows:

    • Register a runner (at the moment, act runners are the only option). This can be done on the following scopes:
    • site-wide (by site admins)
    • organization-wide (by organization owners)
    • repository-wide (by repository owners)
    • Create workflow files under .gitea/workflows/<workflow name>.yaml or .github/workflows/<workflow name>.yaml. The syntax is the same as the GitHub workflow syntax where supported.

    Gitea Actions advantages are:

    • Uses the same pipeline syntax as GitHub Actions, so it's easier for new developers to pick up.
    • You can reuse existing GitHub Actions.
    • Migration from GitHub repositories to Gitea is easier.
    • You see the results of the workflows in the same Gitea webpage, which is much cleaner than needing to go to Drone.
    • You can define the secrets in the repository configuration.

    Drone advantages are:

    • They have the promote event. Not critical, as we can use other git events such as creating a tag.
    • They can be run as a service by default. The Gitea runners will need some work to run on instance restart.
    • They support running Kubernetes pipelines, which Gitea Actions doesn't yet.
  • New: Setup Gitea actions.

    You need a Gitea instance with version 1.19.0 or higher. Actions are disabled by default (as they are still a feature in progress), so you need to add the following to the configuration file to enable them:
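    Per Gitea's configuration cheat sheet, the snippet to add to `app.ini` is:

    ```ini
    [actions]
    ENABLED=true
    ```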


    Even if you enable it at the configuration level, you need to manually enable the actions on each repository until this issue is solved.

    So far there is only one possible runner which is based on docker and act. Currently, the only way to install act runner is by compiling it yourself, or by using one of the pre-built binaries. There is no Docker image or other type of package management yet. At the moment, act runner should be run from the command line. Of course, you can also wrap this binary in something like a system service, supervisord, or Docker container.

    Before running a runner, you should first register it to your Gitea instance using the following command:

    ./act_runner register --no-interactive --instance <instance> --token <token>

    There are two arguments required, instance and token.

    instance refers to the address of your Gitea instance. The runner and job containers (which are started by the runner to execute jobs) will connect to this address. This means that it could be different from the ROOT_URL of your Gitea instance, which is configured for web access. It is always a bad idea to use a loopback address such as 127.0.0.1 or localhost, as we will discuss later. If you are unsure which address to use, the LAN address is usually the right choice.

    token is used for authentication and identification, such as P2U1U0oB4XaRCi8azcngmPCLbRpUGapalhmddh23. It is one-time use only and cannot be used to register multiple runners. You can obtain tokens from the runners management page of your Gitea instance (in the site, organization or repository settings, depending on the scope).

    After registering, a new file named .runner will appear in the current directory. This file stores the registration information. Please do not edit it manually. If this file is missing or corrupted, you can simply remove it and register again.

    Finally, it’s time to start the runner.

    ./act_runner daemon
  • New: Use the gitea actions.

    Even if Actions is enabled for the Gitea instance, repositories still disable Actions by default. Enable it on the settings page of your repository.

    You will need to study the workflow syntax for Actions and write the workflow files you want.

    However, we can just start from a simple demo:

    name: Gitea Actions Demo
    run-name: ${{ gitea.actor }} is testing out Gitea Actions
    on: [push]
    jobs:
      Explore-Gitea-Actions:
        runs-on: ubuntu-latest
        steps:
          - run: echo "The job was automatically triggered by a ${{ gitea.event_name }} event."
          - run: echo "This job is now running on a ${{ runner.os }} server hosted by Gitea!"
          - run: echo "The name of your branch is ${{ gitea.ref }} and your repository is ${{ gitea.repository }}."
          - name: Check out repository code
            uses: actions/checkout@v3
          - run: echo "The ${{ gitea.repository }} repository has been cloned to the runner."
          - run: echo "The workflow is now ready to test your code on the runner."
          - name: List files in the repository
            run: |
              ls ${{ gitea.workspace }}
          - run: echo "This job's status is ${{ gitea.status }}."

    You can upload it as a file with the extension .yaml in the directory .gitea/workflows/ or .github/workflows of the repository, for example .gitea/workflows/demo.yaml.

    You may be aware that there are tens of thousands of marketplace actions in GitHub. However, when you write uses: actions/checkout@v3, it actually downloads the scripts from a Gitea-hosted mirror by default (not GitHub). It's impossible to mirror all of the GitHub actions there, so you may encounter failures when trying to use some actions that haven't been mirrored.

    The good news is that you can specify the URL prefix to use actions from anywhere. This is an extra syntax in Gitea Actions. For example:

    • uses: https://github.com/xxx/xxx@xxx
    • uses: https://gitea.com/xxx/xxx@xxx
    • uses: http://your_gitea.com/xxx/xxx@xxx

    Be careful, the https:// or http:// prefix is necessary!

  • New: Import organisations into terraform.

    To import organisations and teams you need to use their ID. You can see the ID of the organisations in the Administration panel. To get the Teams ID you need to use the API: go to the Swagger interface of your Gitea instance, find the endpoint that lists an organisation's teams, and enter the organisation name.
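    As a sketch of the import calls themselves (the resource addresses are hypothetical and depend on the provider fork you use):

    ```
    terraform import gitea_org.my_organisation <organisation-id>
    terraform import gitea_team.my_team <team-id>
    ```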

  • Correction: Give some tip to deal with big diffs.

    Sometimes the diff is too big and you need to work with it chunk by chunk. For each change you can either:

    • chezmoi add <target> if you want to keep the changes you've manually made to the files that match the <target>.
    • chezmoi apply <target> if you want to apply the changes that chezmoi proposes for the <target>.

    Here <target> is any directory or file listed in the diff.

  • New: Add systemd service for the actions runner.

    [Unit]
    Description=Gitea Actions Runner

    [Service]
    ExecStart=/var/gitea/gitea/act_runner/main/act_runner-main-linux-amd64 daemon

    [Install]
    WantedBy=multi-user.target
  • New: Tweak the runner image.

    The gitea runner uses the node:16-bullseye image by default, in that image the setup-python action doesn't work. You can tweak the docker image that the runner runs by editing the .runner file that is in the directory where you registered the runner (probably close to the act_runner executable).

    If you open that up, you’ll see that there is a section called labels, and it (most likely) looks like this:

    "labels": [

    You can specify any other docker image. Adding new labels doesn't work yet.

  • New: Introduce molecule.

    Molecule is a testing tool for ansible roles.

  • New: CI configuration.

    Since gitea supports github actions you can use the setup-molecule and setup-lint actions. For example:

    name: Molecule

    "on": [push]

    env:
      PY_COLORS: "1"

    jobs:
      lint:
        name: Lint
        runs-on: ubuntu-latest
        steps:
          - name: Checkout the codebase
            uses: actions/checkout@v3
          - name: Setup Lint
            uses: bec-galaxy/setup-lint@{Version}
          - name: Run Lint tests
            run: ansible-lint
      molecule:
        name: Molecule
        runs-on: ubuntu-latest
        needs: lint
        steps:
          - name: Checkout the codebase
            uses: actions/checkout@v3
          - name: Setup Molecule
            uses: bec-galaxy/setup-molecule@{Version}
          - name: Run Molecule tests
            run: molecule test

    That action installs the latest version of the packages, if you need to check a specific version of the packages you may want to create your own step or your own action.

  • New: Upgrade to v5.0.0.

    They've removed the lint command; the reasoning behind it is that there are two different testing methods which are expected to be run in very different ways. Linting should be run on the entire repository, while Molecule executions are per scenario, and one project can have even more than 100 scenarios. Running lint on each of them would not only slow things down but also increase the maintenance burden on the linter configuration and the way it is called.

    They recommend users to run ansible-lint using pre-commit, with or without tox. That gives much better control over how and when it is updated.

    You can see an example on how to do this in the CI configuration section.

  • Correction: Configure the gitea actions.

    So far there is only one possible runner which is based on docker and act. Currently, the only way to install act runner is by compiling it yourself, or by using one of the pre-built binaries. There is no Docker image or other type of package management yet. At the moment, act runner should be run from the command line. Of course, you can also wrap this binary in something like a system service, supervisord, or Docker container.

    You can create the default configuration of the runner with:

    ./act_runner generate-config > config.yaml

    There you can tweak, for example, the capacity, so that you are able to run more than one workflow in parallel.

    Before running a runner, you should first register it to your Gitea instance using the following command:

    ./act_runner register --config config.yaml --no-interactive --instance <instance> --token <token>

    Finally, it’s time to start the runner.

    ./act_runner --config config.yaml daemon

    If you want to create your own act docker, you can start with this dockerfile:

    FROM node:16-bullseye
    LABEL prune=false
    RUN mkdir /root/.aws
    COPY files/config /root/.aws/config
    COPY files/credentials /root/.aws/credentials
    RUN apt-get update && apt-get install -y \
      python3 \
      python3-pip \
      python3-venv \
      screen \
      vim \
      && python3 -m pip install --upgrade pip \
      && rm -rf /var/lib/apt/lists/*
    RUN pip install \
      molecule==5.0.1 \
      ansible==8.0.0 \
      ansible-lint \
      yamllint \
      molecule-plugins[ec2,docker,vagrant] \
      boto3 \
      botocore \
      testinfra
    RUN wget https://download.docker.com/linux/static/stable/x86_64/docker-24.0.2.tgz \
      && tar xvzf docker-24.0.2.tgz \
      && cp docker/* /usr/bin \
      && rm -r docker docker-*

    It's prepared for:

    • Working within an AWS environment
    • Run Ansible and molecule
    • Build dockers
  • New: Build a docker within a gitea action.

    Assuming you're using the custom gitea_runner docker proposed above you can build and upload a docker to a registry with this action:

    name: Publish Docker image

    "on": [push]

    jobs:
      build-and-push:
        runs-on: ubuntu-latest
        steps:
          - name: Checkout code
            uses: actions/checkout@v3
          - name: Login to Docker Registry
            uses: docker/login-action@v2
            with:
              registry: <your-registry>
              username: ${{ secrets.REGISTRY_USERNAME }}
              password: ${{ secrets.REGISTRY_PASSWORD }}
          - name: Set up QEMU
            uses: docker/setup-qemu-action@v2
          - name: Set up Docker Buildx
            uses: docker/setup-buildx-action@v2
          - name: Extract metadata (tags, labels) for Docker
            id: meta
            uses: docker/metadata-action@v4
            with:
              images: <your-registry>/<your-image>
          - name: Build and push
            uses: docker/build-push-action@v2
            with:
              context: .
              platforms: linux/amd64,linux/arm64
              push: true
              cache-from: type=registry,ref=<your-registry>/<your-image>:buildcache
              cache-to: type=registry,ref=<your-registry>/<your-image>:buildcache,mode=max
              tags: ${{ steps.meta.outputs.tags }}
              labels: ${{ steps.meta.outputs.labels }}

    It uses a pair of nice features:

    • Multi-arch builds
    • Cache to speed up the builds

    As it reacts to all events it will build and push:

    • A tag with the branch name on each push to that branch
    • A tag with the tag on tag push
  • New: Bump the version of a repository on commits on master.

    • Create a SSH key for the CI to send commits to protected branches.
    • Upload the private key to a repo or organization secret called DEPLOY_SSH_KEY.
    • Upload the public key to the repo configuration deploy keys
    • Create the bump.yaml file with the next contents:

      name: Bump version

      "on":
        push:
          branches:
            - main

      jobs:
        bump:
          if: "!startsWith(github.event.head_commit.message, 'bump:')"
          runs-on: ubuntu-latest
          name: "Bump version and create changelog"
          steps:
            - name: Check out
              uses: actions/checkout@v3
              with:
                fetch-depth: 0  # Fetch all history
            - name: Configure SSH
              env:
                SSH_AUTH_SOCK: /tmp/ssh_agent.sock
              run: |
                  mkdir -p ~/.ssh
                  echo "${{ secrets.DEPLOY_SSH_KEY }}" > ~/.ssh/deploy_key
                  chmod 600 ~/.ssh/deploy_key
                  dos2unix ~/.ssh/deploy_key
                  ssh-agent -a $SSH_AUTH_SOCK > /dev/null
                  ssh-add ~/.ssh/deploy_key
            - name: Bump the version
              run: cz bump --changelog --no-verify
            - name: Push changes
              env:
                SSH_AUTH_SOCK: /tmp/ssh_agent.sock
              run: |
                git remote add ssh <ssh-url-of-the-repo>
                git pull ssh main
                git push ssh main
                git push ssh --tags

      It assumes that you have cz (commitizen) and dos2unix installed in your runner.

  • New: Skip gitea actions job on changes of some files.

    There are some expensive CI pipelines that don't need to be run, for example, if you only changed a line in the readme. To skip a pipeline on changes of certain files you can use the paths-ignore directive:

    name: Ansible Testing

    "on":
      push:
        paths-ignore:
          - 'meta/**'
          - Makefile
          - renovate.json
          - .cz.toml
          - '.gitea/workflows/**'

    jobs:
      test:
        name: Test
        runs-on: ubuntu-latest

    The only downside is that if you set this pipeline as required in the branch protection, the merge button will look yellow instead of green when the pipeline is skipped.

  • New: Molecule doesn't find the molecule.yaml file.

    This is expected default behavior since Molecule searches for scenarios using the molecule/*/molecule.yml glob. But if you would like to change the suffix to yaml, you can do that if you set the MOLECULE_GLOB environment variable like this:

    export MOLECULE_GLOB='molecule/*/molecule.yaml'
  • Correction: Using paths-filter custom action to skip job actions.

        if: "!startsWith(github.event.head_commit.message, 'bump:')"
        name: Test
        runs-on: ubuntu-latest
          - name: Checkout the codebase
          - name: Check if we need to run the molecule tests
            id: filter
              filters: |
                  - 'defaults/**'
                  - 'tasks/**'
                  - 'handlers/**'
                  - 'tasks/**'
                  - 'templates/**'
                  - 'molecule/**'
                  - 'requirements.yaml'
                  - '.github/workflows/tests.yaml'
          - name: Run Molecule tests
            if: steps.filter.outputs.molecule == 'true'
            run: make molecule

    You can find more examples on how to use paths-filter here.

  • New: Get variables from the environment.

    You can configure your molecule.yaml file to read variables from the environment with:

    provisioner:
      name: ansible
      inventory:
        group_vars:
          all:
            my_secret: ${MY_SECRET}

    It's useful to have a task that checks if this secret exists:

    - name: Verify that the secret is set
      fail:
        msg: 'Please export my_secret: export MY_SECRET=$(pass show my_secret)'
      run_once: true
      when: my_secret == None

    In the CI you can set it as a secret in the repository.

  • New: Run jobs if other jobs failed.

    This is useful to send notifications if any of the jobs failed.

    Right now you can't run a job if other jobs fail; all you can do is add a last step on each workflow to do the notification on failure:

    - name: Send mail
      if: failure()
      uses: dawidd6/action-send-mail@v3 # the action these parameters belong to
      with:
        to: ${{ secrets.MAIL_TO }}
        from: Gitea <gitea@hostname>
        subject: ${{ gitea.repository }} ${{gitea.workflow}} ${{ job.status }}
        priority: high
        convert_markdown: true
        html_body: |
            ### Job ${{ job.status }}

            ${{ github.repository }}: [${{ github.ref }}@${{ github.sha }}](${{ github.server_url }}/${{ github.repository }}/actions)


  • New: Troubleshoot Yaml templates in go templates.

    If you are using a values.yaml.gotmpl file you won't be able to use {{ whatever }} literally, as it will be interpreted by the go templating engine. The solution is to extract that part to a plain yaml file and include it in the go template. For example:

    • values.yaml.gotmpl:

        metrics:
          enabled: true
          serviceMonitor:
            additionalLabels:
              release: prometheus-operator
        {{ readFile "prometheus_rules.yaml" }}

    • prometheus_rules.yaml:

        prometheusRule:
          enabled: true
          additionalLabels:
            release: prometheus-operator
          spec:
            - alert: VeleroBackupPartialFailures
              annotations:
                message: Velero backup {{ $labels.schedule }} has {{ $value | humanizePercentage }} partially failed backups.
              expr: increase(velero_backup_partial_failure_total{schedule!=""}[1h]) > 0
              for: 15m
              labels:
                severity: warning
  • New: Introduce dotdrop.

    The main idea of Dotdrop is to have the ability to store each dotfile only once and deploy them with a different content on different hosts/setups. To achieve this, it uses a templating engine that allows you to specify, during the dotfile installation with dotdrop, based on a selected profile, how (and with what content) each dotfile will be installed.

    What I like:

    • Popular
    • Actively maintained
    • Written in Python
    • Uses jinja2
    • Has a nice to read config file

    What I don't like:


  • New: How to install it.

    Go to the releases page, download the latest release, decompress it and add it to your $PATH.

  • New: How to store sensitive information in terraform.

    One of the most common questions we get about using Terraform to manage infrastructure as code is how to handle secrets such as passwords, API keys, and other sensitive data.

    In the article you'll find how to store your sensitive data in:

  • New: Create a list of resources based on a list of strings.

    variable "subnet_ids" {
      type = list(string)
    resource "aws_instance" "server" {
      # Create one instance for each subnet
      count = length(var.subnet_ids)
      ami           = "ami-a1b2c3d4"
      instance_type = "t2.micro"
      subnet_id     = var.subnet_ids[count.index]
      tags = {
        Name = "Server ${count.index}"

    If you want to use this generated list on another resource, extracting for example the id, you can use a splat expression:

        aws_instance.server[*].id


Infrastructure Solutions


AWS Snippets

  • Correction: Recommend to use distro repos when installing.

    It's available now in debian

  • New: Stop an EC2 instance.

    aws ec2 stop-instances --instance-ids i-xxxxxxxx
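
    The symmetric commands to start the instance again and to check its state (same placeholder instance id):

        aws ec2 start-instances --instance-ids i-xxxxxxxx
        aws ec2 describe-instance-status --instance-ids i-xxxxxxxx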
  • New: Get EC2 metadata from within the instance.

    The quickest way to fetch or retrieve EC2 instance metadata from within a running EC2 instance is to log in and run the command:

    Fetch metadata from IPv4:

    curl -s http://169.254.169.254/latest/meta-data/

    You can also download the ec2-metadata tool to get the info:

    chmod +x ec2-metadata
    ./ec2-metadata --all
  • New: Remove the lock screen in ubuntu.

    Create the `/usr/share/glib-2.0/schemas/90_ubuntu-settings.gschema.override` file with the next content:

    [org.gnome.desktop.screensaver]
    lock-enabled = false

    [org.gnome.settings-daemon.plugins.power]
    idle-dim = false

    Then reload the schemas with:

    sudo glib-compile-schemas /usr/share/glib-2.0/schemas/

Kubectl Commands

  • New: Show the remaining space of a persistent volume claim.

    Either look it in Prometheus or run in the pod that has the PVC mounted:

    kubectl -n <namespace> exec <pod-name> -- df -ah

    You may need to use kubectl get pod <pod-name> -o yaml to know what volume is mounted where.

  • New: Run a pod in a defined node.

    Get the node hostnames with kubectl get nodes, then override the node with:

    kubectl run mypod --image ubuntu:18.04 --overrides='{"apiVersion": "v1", "spec": {"nodeSelector": { "kubernetes.io/hostname": "my-node.internal" }}}' --command -- sleep 100000000000000
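
    The same override expressed as a pod manifest, which is what the --overrides flag injects (the pod name and node hostname are the same hypothetical values):

    ```yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: mypod
    spec:
      # Schedule the pod on one specific node by its hostname label
      nodeSelector:
        kubernetes.io/hostname: my-node.internal
      containers:
        - name: mypod
          image: ubuntu:18.04
          command: ["sleep", "100000000000000"]
    ```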


Continuous Integration


  • New: Module "typing" has no attribute "Annotated".

    This one happens only because Annotated is not available in python < 3.9.

    try:
        # mypy is complaining that it can't import it, but it's solved below
        from typing import Annotated  # type: ignore
    except ImportError:
        from typing_extensions import Annotated


  • New: Create the administrators.

    When you configure the Drone server you can create the initial administrative account by passing the below environment variable, which defines the account username (e.g. github handle) and admin flag set to true:

    DRONE_USER_CREATE=username:octocat,admin:true

    If you need to grant the primary administrative role to an existing user, you can provide an existing username. Drone will update the account and grant administrator role on server restart.

    You can create administrator accounts using the command line tools. Please see the command line tools documentation for installation instructions.

    Create a new administrator account:

    $ drone user add octocat --admin

    Or grant the administrator role to existing accounts:

    $ drone user update octocat --admin
  • New: Linter: untrusted repositories cannot mount host volumes.

    That's because the repository is not trusted.

    You have to set the trust as an admin of drone through the GUI or through the CLI with

    drone repo update --trusted <your/repo>

    If you're not an admin the above command returns a success but you'll see that the trust has not changed if you run

    drone repo info <your/repo>


  • New: Introduce ArgoCD.

    Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes.

    Argo CD follows the GitOps pattern of using Git repositories as the source of truth for defining the desired application state. Kubernetes manifests can be specified in several ways:

    • kustomize applications
    • helm charts
    • jsonnet files
    • Plain directory of YAML/json manifests
    • Any custom config management tool configured as a config management plugin, for example with helmfile

    Argo CD automates the deployment of the desired application states in the specified target environments. Application deployments can track updates to branches, tags, or pinned to a specific version of manifests at a Git commit. See tracking strategies for additional details about the different tracking strategies available.


  • New: Introduce shellcheck.

    Shellcheck is a linting tool that finds bugs in your shell scripts.


    apt-get install shellcheck


    SC2143: Use grep -q instead of comparing output with [ -n .. ].

    Problematic code:

    if [ "$(find . | grep 'IMG[0-9]')" ]
      echo "Images found"

    Correct code:

    if find . | grep -q 'IMG[0-9]'
      echo "Images found"


    The problematic code has to iterate the entire directory and read all matching lines into memory before making a decision.

    The correct code is cleaner and stops at the first matching line, avoiding both iterating the rest of the directory and reading data into memory.
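
    If you do need the captured output (and thus the original pattern), shellcheck's inline disable directive silences the warning instead. A self-contained sketch, using a throwaway demo directory:

    ```shell
    # Create a demo directory with one matching file
    dir=$(mktemp -d)
    touch "$dir/IMG1.jpg"

    # shellcheck disable=SC2143
    if [ "$(find "$dir" | grep 'IMG[0-9]')" ]; then
      echo "Images found"
    fi
    ```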

Automating Processes


  • New: Introduce copier.

    Copier is a library and CLI app for rendering project templates.

    • Works with local paths and Git URLs.
    • Your project can include any file and Copier can dynamically replace values in any kind of text file.
    • It generates a beautiful output and takes care of not overwriting existing files unless instructed to do so.

    This long article covers:


  • Correction: Suggest to use copier instead.

    copier looks a more maintained solution nowadays.


  • New: Introduce letsencrypt.

    Letsencrypt is a free, automated, and open certificate authority brought to you by the nonprofit Internet Security Research Group (ISRG). Basically it gives away SSL certificates, which are required to configure webservers to use HTTPS instead of HTTP for example.

    In the article you can also find:


OpenZFS storage planning

  • New: Introduce ZFS storage planning.
  • New: Analyze the Exos X18 of 16TB disk.

    | Specs                  | IronWolf                                                   | IronWolf Pro         | Exos 7E8 8TB | Exos 7E10 8TB | Exos X18 16TB |
    | ---------------------- | ---------------------------------------------------------- | -------------------- | ------------ | ------------- | ------------- |
    | Technology             | CMR                                                        | CMR                  | CMR          | SMR           | CMR           |
    | Bays                   | 1-8                                                        | 1-24                 | ?            | ?             | ?             |
    | Capacity               | 1-12TB                                                     | 2-20TB               | 8TB          | 8TB           | 16 TB         |
    | RPM                    | 5,900 RPM (1-3TB), 5,400 RPM (3-6TB), 7,200 RPM (8-12TB)   | 7200 RPM             | 7200 RPM     | 7200 RPM      | 7200 RPM      |
    | Speed                  | 180MB/s (1-12TB)                                           | 214-260MB/s (4-18TB) | 249 MB/s     | 255 MB/s      | 258 MB/s      |
    | Cache                  | 64MB (1-4TB), 256MB (3-12TB)                               | 256 MB               | 256 MB       | 256 MB        | 256 MB        |
    | Power Consumption      | 10.1 W                                                     | 10.1 W               | 12.81 W      | 11.03 W       | 9.31 W        |
    | Power Consumption Rest | 7.8 W                                                      | 7.8 W                | 7.64 W       | 7.06 W        | 5.08 W        |
    | Workload               | 180TB/yr                                                   | 300TB/yr             | 550TB/yr     | 550TB/yr      | 550TB/yr      |
    | MTBF                   | 1 million                                                  | 1 million            | 2 millions   | 2 millions    | 2.5 millions  |
    | Warranty               | 3 years                                                    | 5 years              | 5 years      | 5 years       | 5 years       |
    | Price                  | From $60 (2022)                                            | From $83 (2022)      | 249$ (2022)  | 249$ (2022)   | 249$ (2023)   |


  • New: How to create a pool and datasets.
  • New: Configure NFS.

    With ZFS you can share a specific dataset via NFS. If for whatever reason the dataset does not mount, then the export will not be available to the application, and the NFS client will be blocked.

    You still must install the necessary daemon software to make the share available. For example, if you wish to share a dataset via NFS, then you need to install the NFS server software, and it must be running. Then, all you need to do is flip the sharing NFS switch on the dataset, and it will be immediately available.
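
    A sketch of that flow on Linux (the pool/dataset name and the subnet are hypothetical):

    ```
    apt-get install nfs-kernel-server                  # the NFS daemon must be installed and running

    zfs set sharenfs="rw=@192.168.1.0/24" tank/media   # flip the NFS switch on the dataset
    zfs get sharenfs tank/media                        # verify the property
    ```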

  • New: Backup.

    Please remember that RAID is not a backup, it guards against one kind of hardware failure. There's lots of failure modes that it doesn't guard against though:

    • File corruption
    • Human error (deleting files by mistake)
    • Catastrophic damage (someone dumps water onto the server)
    • Viruses and other malware
    • Software bugs that wipe out data
    • Hardware problems that wipe out data or cause hardware damage (controller malfunctions, firmware bugs, voltage spikes, ...)

    That's why you still need to make backups.

    ZFS has the builtin feature to make snapshots of the pool. A snapshot is a first class read-only filesystem. It is a mirrored copy of the state of the filesystem at the time you took the snapshot. They are persistent across reboots, and they don't require any additional backing store; they use the same storage pool as the rest of your data.

    If you remember ZFS's awesome nature of copy-on-write filesystems, you will remember the discussion about Merkle trees. A ZFS snapshot is a copy of the Merkle tree in that state, except we make sure that the snapshot of that Merkle tree is never modified.

    Creating snapshots is near instantaneous, and they are cheap. However, once the data begins to change, the snapshot will begin storing data. If you have multiple snapshots, then multiple deltas will be tracked across all the snapshots. However, depending on your needs, snapshots can still be exceptionally cheap.
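
    The basic snapshot commands look like this (pool and dataset names hypothetical):

    ```
    zfs snapshot tank/home@2023-06-30      # create a snapshot
    zfs list -t snapshot                   # list snapshots and the space they use
    zfs rollback tank/home@2023-06-30      # roll the dataset back to that snapshot
    ```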

    The article also includes:

  • New: Introduce Sanoid.

    Sanoid is the most popular tool right now. With it you can create, automatically thin, and monitor snapshots and pool health from a single eminently human-readable TOML config file.

    The article includes:

  • New: Get compress ratio of a filesystem.

    zfs get compressratio {{ filesystem }}
  • Correction: Use the recursive flag.

    recursive is not set by default, so the dataset's children won't be backed up unless you set this option.

       use_template = daily
       recursive = yes
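
    For context, a fuller sanoid.conf sketch (the dataset name and retention numbers are made up) showing where these options live:

    ```ini
    [tank/data]
      use_template = daily
      recursive = yes

    [template_daily]
      frequently = 0
      hourly = 24
      daily = 30
      monthly = 6
      yearly = 0
      autosnap = yes
      autoprune = yes
    ```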
  • New: See how much space do your snapshots consume.

    When a snapshot is created, its space is initially shared between the snapshot and the file system, and possibly with previous snapshots. As the file system changes, space that was previously shared becomes unique to the snapshot, and thus is counted in the snapshot’s used property.

    Additionally, deleting snapshots can increase the amount of space that is unique for use by other snapshots.

    Note: The value for a snapshot’s space referenced property is the same as that for the file system when the snapshot was created.

    You can display the amount of space that is consumed by snapshots and descendant file systems by using the zfs list -o space command.

    NAME                             AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
    rpool                            10.2G  5.16G         0   4.52M              0      5.15G
    rpool/ROOT                       10.2G  3.06G         0     31K              0      3.06G
    rpool/ROOT/solaris               10.2G  3.06G     55.0M   2.78G              0       224M
    rpool/ROOT/solaris@install           -  55.0M         -       -              -          -
    rpool/ROOT/solaris/var           10.2G   224M     2.51M    221M              0          0
    rpool/ROOT/solaris/var@install       -  2.51M         -       -              -          -

    From this output, you can see the amount of space that is:

    • AVAIL: The amount of space available to the dataset and all its children, assuming that there is no other activity in the pool.
    • USED: The amount of space consumed by this dataset and all its descendants. This is the value that is checked against this dataset's quota and reservation. The space used does not include this dataset's reservation, but does take into account the reservations of any descendant datasets.

      The used space of a snapshot is the space referenced exclusively by this snapshot. If this snapshot is destroyed, the amount of used space will be freed. Space that is shared by multiple snapshots isn't accounted for in this metric.
    • USEDSNAP: Space being consumed by snapshots of each dataset.
    • USEDDS: Space being used by the dataset itself.
    • USEDREFRESERV: Space being used by a refreservation set on the dataset that would be freed if it was removed.
    • USEDCHILD: Space being used by the children of this dataset.

    Other space properties are:

    • LUSED: The amount of space that is "logically" consumed by this dataset and all its descendants. It ignores the effect of the compression and copies properties, giving a quantity closer to the amount of data that applications see. However, it does include space consumed by metadata.
    • REFER: The amount of data that is accessible by this dataset, which may or may not be shared with other datasets in the pool. When a snapshot or clone is created, it initially references the same amount of space as the file system or snapshot it was created from, since its contents are identical.
  • New: Rename or move a dataset.

    NOTE: if you want to rename the topmost dataset, look at the "Rename the topmost dataset" section instead. File systems can be renamed by using the zfs rename command. You can perform the following operations:

    • Change the name of a file system.
    • Relocate the file system within the ZFS hierarchy.
    • Change the name of a file system and relocate it within the ZFS hierarchy.

    The following example uses the rename subcommand to rename a file system from kustarz to kustarz_old:

    zfs rename tank/home/kustarz tank/home/kustarz_old

    The following example shows how to use zfs rename to relocate a file system:

    zfs rename tank/home/maybee tank/ws/maybee

    In this example, the maybee file system is relocated from tank/home to tank/ws. When you relocate a file system through rename, the new location must be within the same pool and it must have enough disk space to hold this new file system. If the new location does not have enough disk space, possibly because it has reached its quota, the rename operation fails.

    The rename operation attempts an unmount/remount sequence for the file system and any descendent file systems. The rename command fails if the operation is unable to unmount an active file system. If this problem occurs, you must forcibly unmount the file system.

    You'll lose the snapshots though, as explained below.

  • New: Rename the topmost dataset.

    If you want to rename the topmost dataset you need to rename the pool too as these two are tied.

    $: zpool status -v
      pool: tets
     state: ONLINE
     scrub: none requested
            NAME        STATE     READ WRITE CKSUM
            tets        ONLINE       0     0     0
              c0d1      ONLINE       0     0     0
              c1d0      ONLINE       0     0     0
              c1d1      ONLINE       0     0     0
    errors: No known data errors

    To fix this, first export the pool:

    $ zpool export tets

    And then import it with the correct name:

    $ zpool import tets test

    After the import completed, the pool contains the correct name:

    $ zpool status -v
      pool: test
     state: ONLINE
     scrub: none requested
            NAME        STATE     READ WRITE CKSUM
            test        ONLINE       0     0     0
              c0d1      ONLINE       0     0     0
              c1d0      ONLINE       0     0     0
              c1d1      ONLINE       0     0     0
    errors: No known data errors

    Now you may need to fix the ZFS mountpoints for each dataset:

    zfs set mountpoint="/opt/zones/[new mountpoint]" [zfs pool]/[root or other filesystem]
  • New: Rename or move snapshots.

    If the dataset has snapshots you need to rename them too. They must be renamed within the same pool and dataset from which they were created though. For example:

    zfs rename tank/home/cindys@083006 tank/home/cindys@today

    In addition, the following shortcut syntax is equivalent to the preceding syntax:

    zfs rename tank/home/cindys@083006 today

    The following snapshot rename operation is not supported because the target pool and file system name are different from the pool and file system where the snapshot was created:

    $: zfs rename tank/home/cindys@today pool/home/cindys@saturday
    cannot rename to 'pool/home/cindys@today': snapshots must be part of same

    You can recursively rename snapshots by using the zfs rename -r command. For example:

    $: zfs list
    NAME                         USED  AVAIL  REFER  MOUNTPOINT
    users                        270K  16.5G    22K  /users
    users/home                    76K  16.5G    22K  /users/home
    users/home@yesterday            0      -    22K  -
    users/home/markm              18K  16.5G    18K  /users/home/markm
    users/home/markm@yesterday      0      -    18K  -
    users/home/marks              18K  16.5G    18K  /users/home/marks
    users/home/marks@yesterday      0      -    18K  -
    users/home/neil               18K  16.5G    18K  /users/home/neil
    users/home/neil@yesterday       0      -    18K  -
    $: zfs rename -r users/home@yesterday @2daysago
    $: zfs list -r users/home
    NAME                        USED  AVAIL  REFER  MOUNTPOINT
    users/home                   76K  16.5G    22K  /users/home
    users/home@2daysago            0      -    22K  -
    users/home/markm             18K  16.5G    18K  /users/home/markm
    users/home/markm@2daysago      0      -    18K  -
    users/home/marks             18K  16.5G    18K  /users/home/marks
    users/home/marks@2daysago      0      -    18K  -
    users/home/neil              18K  16.5G    18K  /users/home/neil
    users/home/neil@2daysago       0      -    18K  -
  • New: See the differences between two backups.

    To identify the differences between two snapshots, use syntax similar to the following:

    $ zfs diff tank/home/tim@snap1 tank/home/tim@snap2
    M       /tank/home/tim/
    +       /tank/home/tim/fileB

    The zfs diff output uses the following identifiers for file or directory changes:

    • M: The file or directory has been modified, or the file or directory link has changed.
    • -: The file or directory is present in the older snapshot but not in the more recent snapshot.
    • +: The file or directory is present in the more recent snapshot but not in the older snapshot.
    • R: The file or directory has been renamed.
  • New: Create a cold backup of a series of datasets.

    If you've used the -o keyformat=raw -o keylocation=file:///etc/zfs/keys/home.key arguments to encrypt your datasets, you can't use keyformat=passphrase encryption on the cold storage device. You need to copy those keys onto the disk. One way of doing it is to:

    • Create a 100M LUKS partition protected with a passphrase where you store the keys.
    • The rest of the space is left for a partition for the zpool.
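
    The steps above can be sketched roughly as follows. This is a hypothetical layout, not a tested recipe: /dev/sdX stands for the cold storage disk, and the key file path matches the one used earlier in this section.

    ```shell
    # Partition the disk: 100M for the LUKS key store, the rest for the zpool
    sgdisk --new=1:0:+100M --new=2:0:0 /dev/sdX

    # Create and open the passphrase-protected LUKS partition
    cryptsetup luksFormat /dev/sdX1
    cryptsetup open /dev/sdX1 cold_keys

    # Put a filesystem on it and copy the raw dataset keys
    mkfs.ext4 /dev/mapper/cold_keys
    mount /dev/mapper/cold_keys /mnt/keys
    cp /etc/zfs/keys/home.key /mnt/keys/

    # Create the backup pool on the remaining space
    zpool create cold_backup /dev/sdX2
    ```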
  • New: Clear a permanent ZFS error in a healthy pool.

    Sometimes when you do a zpool status you may see that the pool is healthy but that there are "Permanent errors" that may point to files themselves or directly to memory locations.

    You can read this long discussion on what these permanent errors mean, but what solved the issue for me was to run a new scrub:

    zpool scrub my_pool

    It takes a long time to run, so be patient.

  • New: ZFS pool is in suspended mode.

    Probably because you've unplugged a device without unmounting it.

    If you want to remount the device you can follow these steps to symlink the new devfs entries to where zfs thinks the vdev is. That way you can regain access to the pool without a reboot.

    So if zpool status says the vdev is /dev/disk2s1, but the reattached drive is at disk4, then do the following:

    cd /dev
    sudo rm -f disk2s1
    sudo ln -s disk4s1 disk2s1
    sudo zpool clear -F WD_1TB
    sudo zpool export WD_1TB
    sudo rm disk2s1
    sudo zpool import WD_1TB

    If you don't care about the zpool anymore, sadly your only solution is to reboot the server. Real ugly, so be careful when you umount zpools.

  • New: Prune snapshots.

    If you want to manually prune the snapshots after you tweaked sanoid.conf you can run:

    sanoid --prune-snapshots
  • New: Send encrypted backups to a encrypted dataset.

    syncoid's default behaviour is to create the destination dataset without encryption, so the snapshots are transferred and can be read without encryption. You can check this with the zfs get encryption,keylocation,keyformat command both on source and destination.

    To prevent this from happening you have to pass the --sendoptions='w' flag to syncoid so that it tells zfs to send a raw stream. If you do so, you also need to transfer the key file to the destination server so that it can do a zfs load-key and then mount the dataset. For example:

    server-host:$ sudo zfs list -t filesystem
    NAME                    USED  AVAIL     REFER  MOUNTPOINT
    server_data             232M  38.1G      230M  /var/server_data
    server_data/log         111K  38.1G      111K  /var/server_data/log
    server_data/mail        111K  38.1G      111K  /var/server_data/mail
    server_data/nextcloud   111K  38.1G      111K  /var/server_data/nextcloud
    server_data/postgres    111K  38.1G      111K  /var/server_data/postgres
    server-host:$ sudo zfs get keylocation server_data/nextcloud
    NAME                   PROPERTY     VALUE                                    SOURCE
    server_data/nextcloud  keylocation  file:///root/zfs_dataset_nextcloud_pass  local
    server-host:$ sudo syncoid --recursive --skip-parent --sendoptions=w server_data root@
    INFO: Sending oldest full snapshot server_data/log@autosnap_2021-06-18_18:33:42_yearly (~ 49 KB) to new target filesystem:
    17.0KiB 0:00:00 [1.79MiB/s] [=================================================>                                                                                                  ] 34%
    INFO: Updating new target filesystem with incremental server_data/log@autosnap_2021-06-18_18:33:42_yearly ... syncoid_caedrium.com_2021-06-22:10:12:55 (~ 15 KB):
    41.2KiB 0:00:00 [78.4KiB/s] [===================================================================================================================================================] 270%
    INFO: Sending oldest full snapshot server_data/mail@autosnap_2021-06-18_18:33:42_yearly (~ 49 KB) to new target filesystem:
    17.0KiB 0:00:00 [ 921KiB/s] [=================================================>                                                                                                  ] 34%
    INFO: Updating new target filesystem with incremental server_data/mail@autosnap_2021-06-18_18:33:42_yearly ... syncoid_caedrium.com_2021-06-22:10:13:14 (~ 15 KB):
    41.2KiB 0:00:00 [49.4KiB/s] [===================================================================================================================================================] 270%
    INFO: Sending oldest full snapshot server_data/nextcloud@autosnap_2021-06-18_18:33:42_yearly (~ 49 KB) to new target filesystem:
    17.0KiB 0:00:00 [ 870KiB/s] [=================================================>                                                                                                  ] 34%
    INFO: Updating new target filesystem with incremental server_data/nextcloud@autosnap_2021-06-18_18:33:42_yearly ... syncoid_caedrium.com_2021-06-22:10:13:42 (~ 15 KB):
    41.2KiB 0:00:00 [50.4KiB/s] [===================================================================================================================================================] 270%
    INFO: Sending oldest full snapshot server_data/postgres@autosnap_2021-06-18_18:33:42_yearly (~ 50 KB) to new target filesystem:
    17.0KiB 0:00:00 [1.36MiB/s] [===============================================>                                                                                                    ] 33%
    INFO: Updating new target filesystem with incremental server_data/postgres@autosnap_2021-06-18_18:33:42_yearly ... syncoid_caedrium.com_2021-06-22:10:14:11 (~ 15 KB):
    41.2KiB 0:00:00 [48.9KiB/s] [===================================================================================================================================================] 270%
    server-host:$ sudo scp /root/zfs_dataset_nextcloud_pass
    backup-host:$ sudo zfs set keylocation=file:///root/zfs_dataset_nextcloud_pass  backup_pool/nextcloud
    backup-host:$ sudo zfs load-key backup_pool/nextcloud
    backup-host:$ sudo zfs mount backup_pool/nextcloud

    If you also want to keep the encryptionroot you need to let zfs take care of the recursion instead of syncoid. In this case you can't use syncoid options such as --exclude. From the manpage of zfs:

    -R, --replicate
       Generate a replication stream package, which will replicate the specified file system, and all descendent file systems, up to the named snapshot.  When received, all properties, snap‐
       shots, descendent file systems, and clones are preserved.
       If the -i or -I flags are used in conjunction with the -R flag, an incremental replication stream is generated.  The current values of properties, and current snapshot and file system
       names are set when the stream is received.  If the -F flag is specified when this stream is received, snapshots and file systems that do not exist on the sending side are destroyed.
       If the -R flag is used to send encrypted datasets, then -w must also be specified.

    In this case this should work:

    /sbin/syncoid --recursive --force-delete --sendoptions="Rw" zpool/backups zfs-recv@
  • New: Repair a DEGRADED pool.

    First let’s offline the device we are going to replace:

    zpool offline tank0 ata-WDC_WD2003FZEX-00SRLA0_WD-xxxxxxxxxxxx

    Now let us have a look at the pool status.

    zpool status
    NAME                                            STATE     READ WRITE CKSUM
    tank0                                           DEGRADED     0     0     0
      raidz2-1                                      DEGRADED     0     0     0
        ata-TOSHIBA_HDWN180_xxxxxxxxxxxx            ONLINE       0     0     0
        ata-TOSHIBA_HDWN180_xxxxxxxxxxxx            ONLINE       0     0     0
        ata-TOSHIBA_HDWN180_xxxxxxxxxxxx            ONLINE       0     0     0
        ata-WDC_WD80EFZX-68UW8N0_xxxxxxxx           ONLINE       0     0     0
        ata-TOSHIBA_HDWG180_xxxxxxxxxxxx            ONLINE       0     0     0
        ata-TOSHIBA_HDWG180_xxxxxxxxxxxx            ONLINE       0     0     0
        ata-WDC_WD2003FZEX-00SRLA0_WD-xxxxxxxxxxxx  OFFLINE      0     0     0
        ata-ST4000VX007-2DT166_xxxxxxxx             ONLINE       0     0     0

    Sweet, the device is offline (last time it didn't show as offline for me, but the offline command returned a status code of 0).

    Time to shut the server down and physically replace the disk.

    shutdown -h now

    When you start again the server, it’s time to instruct ZFS to replace the removed device with the disk we just installed.

    zpool replace tank0 \
        ata-WDC_WD2003FZEX-00SRLA0_WD-xxxxxxxxxxxx \
        ata-TOSHIBA_HDWG180_xxxxxxxxxxxx

    zpool status tank0
    pool: main
    state: DEGRADED
    status: One or more devices is currently being resilvered.  The pool will
            continue to function, possibly in a degraded state.
    action: Wait for the resilver to complete.
      scan: resilver in progress since Fri Sep 22 12:40:28 2023
            4.00T scanned at 6.85G/s, 222G issued at 380M/s, 24.3T total
            54.7G resilvered, 0.89% done, 18:28:03 to go
    NAME                                              STATE     READ WRITE CKSUM
    tank0                                             DEGRADED     0     0     0
      raidz2-1                                        DEGRADED     0     0     0
        ata-TOSHIBA_HDWN180_xxxxxxxxxxxx              ONLINE       0     0     0
        ata-TOSHIBA_HDWN180_xxxxxxxxxxxx              ONLINE       0     0     0
        ata-TOSHIBA_HDWN180_xxxxxxxxxxxx              ONLINE       0     0     0
        ata-WDC_WD80EFZX-68UW8N0_xxxxxxxx             ONLINE       0     0     0
        ata-TOSHIBA_HDWG180_xxxxxxxxxxxx              ONLINE       0     0     0
        ata-TOSHIBA_HDWG180_xxxxxxxxxxxx              ONLINE       0     0     0
        replacing-6                                   DEGRADED     0     0     0
          ata-WDC_WD2003FZEX-00SRLA0_WD-xxxxxxxxxxxx  OFFLINE      0     0     0
          ata-TOSHIBA_HDWG180_xxxxxxxxxxxx            ONLINE       0     0     0  (resilvering)
        ata-ST4000VX007-2DT166_xxxxxxxx               ONLINE       0     0     0

    The disk is replaced and getting resilvered, which may take a long time (18 hours for an 8TB disk in my case).

    Once the resilvering is done, this is what the pool looks like:

    zpool list
    NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
    tank0  43.5T  33.0T  10.5T     14.5T     7%    75%  1.00x  ONLINE  -

    If you want to read other blogs that have covered the same topic, check out [1].

  • New: Stop a ZFS scrub.

    zpool scrub -s my_pool
  • New: Mount a dataset that is encrypted.

    If your dataset is encrypted using a key file you need to:

    • Mount the device that has your keys
    • Import the pool without loading the key because you want to override the keylocation attribute with zfs load-key. Without the -l option, any encrypted datasets won't be mounted, which is what you want.
    • Load the key(s) for the dataset(s)
    • Mount the dataset(s).
    zpool import rpool    # without the `-l` option!
    zfs load-key -L file:///path/to/keyfile rpool
    zfs mount rpool
  • New: Umount a pool.

    zpool export pool-name
  • Correction: Improve the Repair a DEGRADED pool instructions.

    First you need to make sure that it is in fact a problem of the disk. Check dmesg to see if there are any traces of reading errors or SATA cable errors.

    A friend suggested marking the disk as healthy and doing a resilver on the same disk. If the error shows up again in the next days, then replace the disk. A safer approach is to resilver on a new disk, analyze the old disk once it's out of the pool, and if you feel it's safe, keep it as a cold spare.
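
    A few commands that can help with that triage. This is a sketch: smartctl comes from the smartmontools package, and the device names are placeholders:

    ```shell
    # Look for ATA/SATA or I/O errors in the kernel log
    dmesg | grep -iE 'ata|i/o error'

    # Inspect the SMART health of the suspicious disk
    smartctl -a /dev/sdX

    # If you decide the disk is healthy, clear the errors so it resilvers again
    zpool clear tank0 ata-WDC_WD2003FZEX-00SRLA0_WD-xxxxxxxxxxxx
    ```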

  • New: Remove all snapshots of a dataset.

    zfs list -t snapshot -o name path/to/dataset | tail -n+2 | tac | xargs -n 1 zfs destroy -r
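
    If you want to review what would be removed first, zfs destroy supports a dry run. A sketch (path/to/dataset is a placeholder; -H drops the header so tail isn't needed):

    ```shell
    # -n: dry run, -v: print the snapshots that would be destroyed
    zfs list -H -t snapshot -o name path/to/dataset | xargs -n 1 zfs destroy -rnv
    ```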

ZFS Prometheus exporter

  • New: Introduce the ZFS exporter.

    You can use a zfs exporter to create alerts on your ZFS pools, filesystems, snapshots and volumes.

    It's not easy to match the exporter metrics with the output of zfs list -o space. Here is a correlation table:

    • USED: zfs_dataset_used_bytes{type="filesystem"}
    • AVAIL: zfs_dataset_available_bytes{type="filesystem"}
    • LUSED: zfs_dataset_logical_used_bytes{type="filesystem"}
    • USEDDS: zfs_dataset_used_by_dataset_bytes{type="filesystem"}
    • USEDSNAP: Currently there is no published metric to get this data. You can either use zfs_dataset_used_bytes - zfs_dataset_used_by_dataset_bytes which will show wrong data if the dataset has children or try to do sum by (hostname,filesystem) (zfs_dataset_used_bytes{type='snapshot'}) which returns smaller sizes than expected.

    It also covers the installation as well as some nice alerts.

  • Correction: Improve alerts.

  • Correction: Update the alerts to the more curated version.
  • New: Useful inhibits.

    Sometimes you may want to inhibit some of these rules for some of your datasets. These subsections should be added to the alertmanager.yml file under the inhibit_rules field.

    Ignore snapshots on some datasets: Sometimes you don't want to do snapshots on a dataset

    - target_matchers:
        - alertname = ZfsDatasetWithNoSnapshotsError
        - hostname = my_server_1
        - filesystem = tmp

    Ignore snapshots growth: Sometimes you don't mind if the size of the data saved in the filesystem doesn't change much between snapshots, especially in the most frequent backups, because you prefer to keep the backup cadence. It's interesting to have the alert though, so that you can get notified of the datasets that don't change much and tweak your backup policy accordingly (even if zfs snapshots are almost free).

    - target_matchers:
        - alertname =~ "ZfsSnapshotType(Frequently|Hourly)SizeError"
        - filesystem =~ "(media/(docs|music))"



  • New: Introduce loki.

    Loki is a set of components that can be composed into a fully featured logging stack.

    Unlike other logging systems, Loki is built around the idea of only indexing metadata about your logs: labels (just like Prometheus labels). Log data itself is then compressed and stored in chunks in object stores such as Amazon Simple Storage Service (S3) or Google Cloud Storage (GCS), or even locally on the filesystem.

    A small index and highly compressed chunks simplifies the operation and significantly lowers the cost of Loki.

  • New: How to install loki.

    There are many ways to install Loki; we're going to do it using docker-compose, taking their example as a starting point and complementing our already existing grafana docker-compose.

    It makes use of the environment variables to configure Loki, that's why we have the -config.expand-env=true flag in the command line launch.
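
    A minimal sketch of what the Loki service could look like in that docker-compose (the image tag, port and paths are assumptions, adapt them to your setup):

    ```yaml
    services:
      loki:
        image: grafana/loki:latest
        container_name: loki
        restart: unless-stopped
        # -config.expand-env=true lets the config file use environment variables
        command: -config.file=/etc/loki/local-config.yaml -config.expand-env=true
        ports:
          - "3100:3100"
        volumes:
          - ./loki:/etc/loki
        env_file:
          - .env
    ```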

    In the grafana datasources directory add loki.yaml:

    apiVersion: 1

    datasources:
      - name: Loki
        type: loki
        access: proxy
        orgId: 1
        url: http://loki:3100
        basicAuth: false
        isDefault: true
        version: 1
        editable: false

    Storage configuration:

    Unlike other logging systems, Grafana Loki is built around the idea of only indexing metadata about your logs: labels (just like Prometheus labels). Log data itself is then compressed and stored in chunks in object stores such as S3 or GCS, or even locally on the filesystem. A small index and highly compressed chunks simplifies the operation and significantly lowers the cost of Loki.

    Loki 2.0 brings an index mechanism named ‘boltdb-shipper’ and is what we now call Single Store. This type only requires one store, the object store, for both the index and chunks.

    Loki 2.8 adds TSDB as a new mode for the Single Store and is now the recommended way to persist data in Loki, as it improves query performance, reduces TCO and has feature parity with "boltdb-shipper".


  • New: Introduce grafana.

    Grafana is a web application to create dashboards.

    Installation: We're going to install it with docker-compose and connect it to Authentik.

    Create the Authentik connection:

    Assuming that you have the terraform authentik provider configured, use the next terraform code:

    variable "grafana_name" {
      type        = string
      description = "The name shown in the Grafana application."
      default     = "Grafana"
    }

    variable "grafana_redirect_uri" {
      type        = string
      description = "The redirect url configured on Grafana."
    }

    variable "grafana_icon" {
      type        = string
      description = "The icon shown in the Grafana application"
      default     = "/application-icons/grafana.svg"
    }

    resource "authentik_application" "grafana" {
      name              = var.grafana_name
      slug              = "grafana"
      protocol_provider = authentik_provider_oauth2.grafana.id
      meta_icon         = var.grafana_icon
      lifecycle {
        ignore_changes = [
          # The terraform provider is continuously changing the attribute even though it's set
          meta_icon,
        ]
      }
    }

    resource "authentik_provider_oauth2" "grafana" {
      name               = var.grafana_name
      client_id          = "grafana"
      authorization_flow = data.authentik_flow.default-authorization-flow.id
      property_mappings = [
        # Add the ids of your openid, email and profile scope mappings here
      ]
      redirect_uris = [
        var.grafana_redirect_uri,
      ]
      signing_key           = data.authentik_certificate_key_pair.default.id
      access_token_validity = "minutes=120"
    }

    data "authentik_certificate_key_pair" "default" {
      name = "authentik Self-signed Certificate"
    }

    data "authentik_flow" "default-authorization-flow" {
      slug = "default-provider-authorization-implicit-consent"
    }

    output "grafana_oauth_id" {
      value = authentik_provider_oauth2.grafana.client_id
    }

    output "grafana_oauth_secret" {
      value     = authentik_provider_oauth2.grafana.client_secret
      sensitive = true
    }
  • Correction: Install grafana.

    version: "3.8"

    services:
      grafana:
        image: grafana/grafana-oss:${GRAFANA_VERSION:-latest}
        container_name: grafana
        restart: unless-stopped
        volumes:
          - data:/var/lib/grafana
        networks:
          - grafana
          - monitorization
          - swag
        env_file:
          - .env
        depends_on:
          - db

      db:
        image: postgres:${DATABASE_VERSION:-15}
        restart: unless-stopped
        container_name: grafana-db
        environment:
          - POSTGRES_DB=${GF_DATABASE_NAME:-grafana}
          - POSTGRES_USER=${GF_DATABASE_USER:-grafana}
          - POSTGRES_PASSWORD=${GF_DATABASE_PASSWORD:?database password required}
        networks:
          - grafana
        volumes:
          - db-data:/var/lib/postgresql/data
        env_file:
          - .env

    networks:
      grafana:
        name: grafana
      monitorization:
        name: monitorization
      swag:
        name: swag

    volumes:
      data:
        driver: local
        driver_opts:
          type: none
          o: bind
          device: /data/grafana/app
      db-data:
        driver: local
        driver_opts:
          type: none
          o: bind
          device: /data/grafana/database

    Where the monitorization network is where prometheus and the rest of the stack listens, and swag the network to the gateway proxy.

    It uses the .env file to store the required configuration, to connect grafana with authentik you need to add the next variables:

    GF_AUTH_GENERIC_OAUTH_CLIENT_ID="<Client ID from above>"
    GF_AUTH_GENERIC_OAUTH_CLIENT_SECRET="<Client Secret from above>"
    GF_AUTH_GENERIC_OAUTH_SCOPES="openid profile email"
    GF_AUTH_SIGNOUT_REDIRECT_URL="<Slug of the application from above>/end-session/"
    GF_AUTH_GENERIC_OAUTH_ROLE_ATTRIBUTE_PATH="contains(groups[*], 'Grafana Admins') && 'Admin' || contains(groups[*], 'Grafana Editors') && 'Editor' || 'Viewer'"

    In the configuration above you can see an example of a role mapping. Upon login, this configuration looks at the groups of which the current user is a member. If any of the specified group names are found, the user will be granted the resulting role in Grafana.

    In the example shown above, one of the specified group names is "Grafana Admins". If the user is a member of this group, they will be granted the "Admin" role in Grafana. If the user is not a member of the "Grafana Admins" group, it moves on to see if the user is a member of the "Grafana Editors" group. If they are, they are granted the "Editor" role. Finally, if the user is not found to be a member of either of these groups, it fails back to granting the "Viewer" role.

    Also make sure in your configuration that root_url is set correctly, otherwise your redirect url might get processed incorrectly. For example, if your grafana instance is running with the default configuration and is accessible behind a reverse proxy, the redirect url is built from root_url. If you get a user does not belong to org error when trying to log into grafana for the first time via OAuth, check if you have an organization with the ID of 1; if not, you have to add the following to your grafana config:

    auto_assign_org = true
    auto_assign_org_id = <id-of-your-default-organization>

    Once you've made sure that the oauth works, go to /admin/users and remove the admin user.

  • New: Configure grafana.

    Grafana has default and custom configuration files. You can customize your Grafana instance by modifying the custom configuration file or by using environment variables. To see the list of settings for a Grafana instance, refer to View server settings.

    To override an option use GF_<SectionName>_<KeyName>. Where the section name is the text within the brackets. Everything should be uppercase, . and - should be replaced by _. For example, if you have these configuration settings:

    # default section
    instance_name = ${HOSTNAME}

    [security]
    admin_user = admin

    [auth.google]
    client_secret = 0ldS3cretKey

    [plugin.grafana-image-renderer]
    rendering_ignore_https_errors = true

    [feature_toggles]
    enable = newNavigation

    You can override variables on Linux machines with:

    export GF_DEFAULT_INSTANCE_NAME=my-instance
    export GF_SECURITY_ADMIN_USER=owner
    export GF_AUTH_GOOGLE_CLIENT_SECRET=newS3cretKey
    export GF_FEATURE_TOGGLES_ENABLE=newNavigation

    And in the docker compose you can edit the .env file. Mine looks similar to:

    GF_AUTH_GENERIC_OAUTH_CLIENT_ID="<Client ID from above>"
    GF_AUTH_GENERIC_OAUTH_CLIENT_SECRET="<Client Secret from above>"
    GF_AUTH_GENERIC_OAUTH_SCOPES="openid profile email"
    GF_AUTH_SIGNOUT_REDIRECT_URL="<Slug of the application from above>/end-session/"
    GF_AUTH_GENERIC_OAUTH_ROLE_ATTRIBUTE_PATH="contains(groups[*], 'Grafana Admins') && 'Admin' || contains(groups[*], 'Grafana Editors') && 'Editor' || 'Viewer'"
  • New: Configure datasources.

    You can manage data sources in Grafana by adding YAML configuration files in the provisioning/datasources directory. Each config file can contain a list of datasources to add or update during startup. If the data source already exists, Grafana reconfigures it to match the provisioned configuration file.

    The configuration file can also list data sources to automatically delete, called deleteDatasources. Grafana deletes the data sources listed in deleteDatasources before adding or updating those in the datasources list.

    For example to configure a Prometheus datasource use:

    apiVersion: 1

    datasources:
      - name: Prometheus
        type: prometheus
        access: proxy
        # Access mode - proxy (server in the UI) or direct (browser in the UI).
        url: http://prometheus:9090
        jsonData:
          httpMethod: POST
          manageAlerts: true
          prometheusType: Prometheus
          prometheusVersion: 2.44.0
          cacheLevel: 'High'
          disableRecordingRules: false
          incrementalQueryOverlapWindow: 10m
          exemplarTraceIdDestinations: []
  • New: Configure dashboards.

    You can manage dashboards in Grafana by adding one or more YAML config files in the provisioning/dashboards directory. Each config file can contain a list of dashboards providers that load dashboards into Grafana from the local filesystem.

    Create one file called dashboards.yaml with the next contents:

    apiVersion: 1

    providers:
      - name: default # A uniquely identifiable name for the provider
        type: file
        options:
          path: /etc/grafana/provisioning/dashboards/definitions

    Then inside the config directory of your docker compose create the directory provisioning/dashboards/definitions and add the json of the dashboards themselves. You can download them from the dashboard pages.

  • New: Configure the plugins.

    To install plugins in the Docker container, complete the following steps:

    • Pass the plugins you want to be installed to Docker with the GF_INSTALL_PLUGINS environment variable as a comma-separated list.
    • This sends each plugin name to grafana-cli plugins install ${plugin} and installs them when Grafana starts.

    For example:

    docker run -d -p 3000:3000 --name=grafana \
      -e "GF_INSTALL_PLUGINS=grafana-clock-panel, grafana-simple-json-datasource" \
      grafana/grafana-oss

    To specify the version of a plugin, add the version number to the GF_INSTALL_PLUGINS environment variable. For example: GF_INSTALL_PLUGINS=grafana-clock-panel 1.0.1.

    To install a plugin from a custom URL, use the following convention to specify the URL: <url to plugin zip>;<plugin install folder name>. For example: GF_INSTALL_PLUGINS=;custom-plugin.

  • Correction: Improve installation method.

    Add more configuration values such as:

    GF_LOG_MODE="console file"
  • Correction: Warning when configuring datasources.

    Be careful to set the timeInterval variable to the value of how often you scrape the data from the node exporter to avoid this issue.
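
    For example, if prometheus scrapes the node exporter every 15s, the provisioned datasource could look like this (a sketch; the timeInterval key goes under jsonData):

    ```yaml
    datasources:
      - name: Prometheus
        type: prometheus
        url: http://prometheus:9090
        jsonData:
          # Match this to your prometheus scrape_interval
          timeInterval: "15s"
    ```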


  • New: Alertmanager routes.

    A route block defines a node in a routing tree and its children. Its optional configuration parameters are inherited from its parent node if not set.

    Every alert enters the routing tree at the configured top-level route, which must match all alerts (i.e. not have any configured matchers). It then traverses the child nodes. If continue is set to false, it stops after the first matching child. If continue is true on a matching node, the alert will continue matching against subsequent siblings. If an alert does not match any children of a node (no matching child nodes, or none exist), the alert is handled based on the configuration parameters of the current node.

    A basic configuration would be:

    route:
      group_by: [job, alertname, severity]
      group_wait: 30s
      group_interval: 5m
      repeat_interval: 12h
      receiver: 'email'
      routes:
        - match:
            alertname: Watchdog
          receiver: 'null'
  • New: Add Wazuh SIEM.


Blackbox Exporter

  • New: Check TCP with TLS.

    If you want to test, for example, that an LDAP server is serving the correct certificate on port 636 you can use:

    modules:
      tcp_ssl_connect:
        prober: tcp
        timeout: 10s
        tcp:
          tls: true

    And then define the target, for example with the prometheus-blackbox-exporter helm chart syntax:

    targets:
      - name: Ldap
        url: my-ldap-server:636
        module: tcp_ssl_connect

Node Exporter

  • Correction: Improve how to install it.



  • New: Introduce Authentik.

    Authentik is an open-source Identity Provider focused on flexibility and versatility.

    What I like:

    • It's maintained and popular
    • It has a clean interface
    • They have their own terraform provider Oo!

    What I don't like:

    • It's heavily focused on GUI interaction, though you can export the configuration to YAML files to be applied without the GUI interaction.
    • The documentation is oriented to developers and not users. It's a little difficult to get a grasp on how to do things in the platform without following blog posts.

    In the article you can also find:

  • Correction: Configure the invitation flow with terraform.

  • New: Hide an application from a user.

    Application access can be configured using (Policy) Bindings. Click on an application in the applications list, and select the Policy / Group / User Bindings tab. There you can bind users/groups/policies to grant them access. When nothing is bound, everyone has access. You can use this to grant access to one or multiple users/groups, or dynamically give access using policies.

    With terraform you can use authentik_policy_binding, for example:

    resource "authentik_policy_binding" "admin" {
      target = authentik_application.gitea.uuid
      group  = authentik_group.admin.id
      order  = 0
    }
  • New: Configure password recovery.

    Password recovery is not set by default, in the article you can find the terraform resources needed for it to work.

  • New: Protect applications that don't have authentication.

    Some applications don't have authentication, for example prometheus. You can use Authentik in front of such applications to add the authentication and authorization layer.

    Authentik can be used as a (very) simple reverse proxy by using its Provider feature with the regular "Proxy" setting. This lets you wrap authentication around a sub-domain / app where it normally wouldn't have authentication (or not the type of auth that you would specifically want) and then have Authentik handle the proxy forwarding and Auth.

    In this mode, there is no domain level nor 'integrated' authentication into your desired app; Authentik becomes both your reverse proxy and auth for this one particular app or (sub) domain. This mode does not forward authentication nor let you log in to any app. It just acts as an authentication wrapper.

    It's best to use a normal reverse proxy in front of Authentik. This adds a second layer of routing to deal with, but Authentik is not NGINX or a reverse proxy system, so it does not have that many configuration options.

    We'll use the following fake domains in this example:

    • Authentik domain:
    • App domain:
    • Nginx:
    • Authentik's docker container name: auth_server

    The steps are:

    • Configure the proxy provider:
    # ---------------
    # -- Variables --
    # ---------------

    variable "prometheus_url" {
      type        = string
      description = "The url to access the service."
    }

    # ----------
    # -- Data --
    # ----------

    data "authentik_flow" "default-authorization-flow" {
      slug = "default-provider-authorization-implicit-consent"
    }

    # --------------------
    # --    Provider    --
    # --------------------

    resource "authentik_provider_proxy" "prometheus" {
      name                         = "Prometheus"
      internal_host                = "http://prometheus:9090"
      external_host                = var.prometheus_url
      authorization_flow           = data.authentik_flow.default-authorization-flow.id
      internal_host_ssl_validation = false
    }
  • Correction: Finish the installation of prometheus.

  • New: Disregard monitoring.

    I've skimmed through the prometheus metrics exposed at :9300/metrics in the core and they aren't that useful :(

  • New: Troubleshoot I can't log in to authentik.

    In case you can't log in anymore, perhaps due to an incorrectly configured stage or a failed flow import, you can create a recovery key.

    To create the key, run the following command:

    docker exec -it authentik bash
    ak create_recovery_key 1 akadmin

    This will output a link that can be used to instantly gain access to authentik as the user specified above. The link is valid for the number of years specified above; in this case, 1 year.

Operating Systems


Linux Snippets

  • New: Use a pass password in a Makefile.

    TOKEN ?= $(shell bash -c '/usr/bin/pass show path/to/token')

    plan: # hypothetical target name
    	@AUTHENTIK_TOKEN=$(TOKEN) terraform plan
  • New: Install a new font.

    Install a font manually by downloading the appropriate .ttf or .otf files and placing them into /usr/local/share/fonts (system-wide), ~/.local/share/fonts (user-specific) or ~/.fonts (user-specific). These files should have the permission 644 (-rw-r--r--), otherwise they may not be usable.
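    For example, to install a hypothetical MyFont.ttf for the current user only (the font file name is an assumption):

    ```shell
    # Create the user fonts directory if it doesn't exist
    mkdir -p ~/.local/share/fonts

    # Copy the font and give it the 644 permissions mentioned above
    cp MyFont.ttf ~/.local/share/fonts/
    chmod 644 ~/.local/share/fonts/MyFont.ttf

    # Rebuild the font cache so running applications pick it up
    fc-cache -f ~/.local/share/fonts
    ```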

  • New: Get VPN password from pass.

    To be able to retrieve the user and password from pass you need to run the openvpn command with the next flags:

    sudo bash -c "openvpn --config config.ovpn  --auth-user-pass <(echo -e 'user_name\n$(pass show vpn)')"

    Assuming that vpn is an entry of your pass password store.

  • New: Measure the performance, IOPS of a disk.

    To measure disk IOPS performance in Linux, you can use the fio tool. Install it with

    apt-get install fio

    Then you need to go to the directory where your disk is mounted. The test is done by performing read/write operations in this directory.

    To do a random read/write operation test, an 8 GB file will be created. Then fio will read/write a 4KB block (a standard block size) with a 75%/25% split between read and write operations and measure the performance.

    fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=fiotest --filename=testfio --bs=4k --iodepth=64 --size=8G --readwrite=randrw --rwmixread=75
  • New: What is /var/log/tallylog.

    /var/log/tallylog is the file where the Linux PAM module (used for the authentication of the machine) keeps track of the failed ssh logins in order to temporarily block users.
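    If a user got locked out by too many failures you can inspect and reset their counter with the pam_tally2 command (this assumes the pam_tally2 module is the one in use; the user name is a placeholder):

    ```shell
    pam_tally2 --user=myuser          # show the failure count for the user
    pam_tally2 --user=myuser --reset  # reset the counter, unlocking the user
    ```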

  • New: Manage users.

    • Change main group of user
    usermod -g {{ group_name }} {{ user_name }}
    • Add user to group
    usermod -a -G {{ group_name }} {{ user_name }}
    • Remove user from group.
    usermod -G {{ remaining_group_names }} {{ user_name }}

    You have to execute groups {{ user }} to get the list and pass the remaining ones to the above command.

    • Change uid and gid of the user
    usermod -u {{ newuid }} {{ login }}
    groupmod -g {{ newgid }} {{ group }}
    find / -user {{ olduid }} -exec chown -h {{ newuid }} {} \;
    find / -group {{ oldgid }} -exec chgrp -h {{ newgid }} {} \;
    usermod -g {{ newgid }} {{ login }}
  • New: Manage ssh keys.

    • Generate ed25519 key
    ssh-keygen -t ed25519 -f {{ path_to_keyfile }}
    • Generate RSA key
    ssh-keygen -t rsa -b 4096 -o -a 100 -f {{ path_to_keyfile }}
    • Generate different comment
    ssh-keygen -t ed25519 -f {{ path_to_keyfile }} -C {{ email }}
    • Generate key headless, batch
    ssh-keygen -t ed25519 -f {{ path_to_keyfile }} -q -N ""
    • Generate public key from private key
    ssh-keygen -y -f {{ path_to_keyfile }} > {{ path_to_public_key_file }}
    • Get fingerprint of key
      ssh-keygen -lf {{ path_to_key }}
  • New: Measure the network performance between two machines.

    Install iperf3 with apt-get install iperf3 on both server and client.

    On the server system run:

    server#: iperf3 -i 10 -s


    • -i: the interval to provide periodic bandwidth updates
    • -s: listen as a server

    On the client system:

    client#: iperf3 -i 10 -w 1M -t 60 -c [server hostname or ip address]


    • -i: the interval to provide periodic bandwidth updates
    • -w: the socket buffer size (which affects the TCP Window). The buffer size is also set on the server by this client command.
    • -t: the time to run the test in seconds
    • -c: connect to a listening server at…

    Sometimes it's interesting to test both directions as they may return different outcomes (the client can pass -R to run the test in reverse).

  • New: Force umount nfs mounted directory.

    umount -l path/to/mounted/dir
  • New: Configure fstab to mount nfs.

    NFS stands for ‘Network File System’. This mechanism allows Unix machines to share files and directories over the network. Using this feature, a Linux machine can mount a remote directory (residing in a NFS server machine) just like a local directory and can access files from it.

    An NFS share can be mounted on a machine by adding a line to the /etc/fstab file.

    The default syntax for fstab entry of NFS mounts is as follows.

    Server:/path/to/export /local_mountpoint nfs <options> 0 0


    • Server: The hostname or IP address of the NFS server where the exported directory resides.
    • /path/to/export: The shared directory (exported folder) path.
    • /local_mountpoint: Existing directory in the host where you want to mount the NFS share.

    You can specify a number of options that you want to set on the NFS mount:

    • soft/hard: When the mount option hard is set, if the NFS server crashes or becomes unresponsive, the NFS requests will be retried indefinitely. You can set the mount option intr, so that the process can be interrupted. When the NFS server comes back online, the process can be continued from where it was while the server became unresponsive.

    When the option soft is set, an error is reported to the process when the NFS server is unresponsive after waiting for a period of time (defined by the timeo option). In certain cases the soft option can cause data corruption and loss of data, so it is recommended to use the hard and intr options.

    • noexec: Prevents execution of binaries on mounted file systems. This is useful if the system is mounting a non-Linux file system via NFS containing incompatible binaries.
    • nosuid: Disables set-user-identifier or set-group-identifier bits. This prevents remote users from gaining higher privileges by running a setuid program.
    • tcp: Specifies the NFS mount to use the TCP protocol.
    • udp: Specifies the NFS mount to use the UDP protocol.
    • nofail: Prevent issues when rebooting the host. The downside is that if you have services that depend on the volume to be mounted they won't behave as expected.
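    Putting the options together, a possible /etc/fstab entry would look like this (the server address and paths are made up):

    ```
    # <server:/export>          <mountpoint>  <type>  <options>             <dump> <pass>
    192.168.1.100:/export/media /mnt/media    nfs     hard,intr,tcp,nofail  0      0
    ```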
  • New: Fix limit on the number of inotify watches.

    Programs that sync files such as dropbox, git, etc. use inotify to notice changes to the file system. The limit can be seen with:

    cat /proc/sys/fs/inotify/max_user_watches

    For me, it shows 65536. When this limit is not enough to monitor all the files inside a directory, the programs throw an error.

    If you want to increase the amount of inotify watchers, run the following in a terminal:

    echo fs.inotify.max_user_watches=100000 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p

    Where 100000 is the desired number of inotify watches.

  • New: Get class of a window.

    Use xprop and click the window.

  • New: Set the brightness of the screen.

    Get the current brightness level with cat /sys/class/backlight/intel_backlight/brightness. Imagine it's 1550; if you want to lower the brightness use:

    echo 500 | sudo tee /sys/class/backlight/intel_backlight/brightness
  • New: SSH tunnel.

    ssh -D 9090 -N -f user@host

    • -D 9090: open a SOCKS proxy on the local port 9090
    • -N: don't execute a remote command
    • -f: go to the background
  • New: Fix the SSH client kex_exchange_identification: read: Connection reset by peer error.

    Restart the ssh service.

  • New: Automatic reboot after power failure.

    That's not something you can control in your operating system. That's what the BIOS is for. In most BIOS setups there'll be an option like After power loss with possible values like Power off and Reboot.

    You can also edit /etc/default/grub and add:

    GRUB_RECORDFAIL_TIMEOUT=5
    Then run:

    sudo update-grub

    This will make your machine display the boot options for 5 seconds before it boots the default option (instead of waiting forever for you to choose one).

  • New: Add sshuttle information link.

    If you need a more powerful ssh tunnel you can try sshuttle

  • New: Reset failed systemd services.

    Use systemctl to remove the failed status. To reset all units with failed status:

    systemctl reset-failed

    or just your specific unit:

    systemctl reset-failed openvpn-server@intranert.service
  • New: Get the current git branch.

    git branch --show-current
  • New: Install latest version of package from backports.

    Add the backports repository:

    vi /etc/apt/sources.list.d/bullseye-backports.list
    deb http://deb.debian.org/debian bullseye-backports main contrib
    deb-src http://deb.debian.org/debian bullseye-backports main contrib

    Configure the package to be pulled from backports

    vi /etc/apt/preferences.d/90_zfs
    Package: src:zfs-linux
    Pin: release n=bullseye-backports
    Pin-Priority: 990
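    With the pin in place you can pull the package from backports (zfsutils-linux is built from the zfs-linux source package pinned above):

    ```shell
    apt-get update
    apt-get install -t bullseye-backports zfsutils-linux
    ```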
  • New: Rename multiple files matching a pattern.

    There is rename that looks nice, but you need to install it. Using only find you can do:

    find . -name '*yml' -exec bash -c 'echo mv $0 ${0/yml/yaml}' {} \;

    If it shows what you expect, remove the echo.

  • New: Force ssh to use password authentication.

    ssh -o PreferredAuthentications=password -o PubkeyAuthentication=no

  • New: Do a tail -f with grep.

    tail -f file | grep --line-buffered my_pattern
  • New: Check if a program exists in the user's PATH.

    command -v <the_command>

    Example use:

    if ! command -v <the_command> &> /dev/null; then
        echo "<the_command> could not be found"
        exit 1
    fi
  • New: Add interesting tools to explore.

    • qbittools: a feature rich CLI for the management of torrents in qBittorrent.
    • qbit_manage: tool will help manage tedious tasks in qBittorrent and automate them.
  • New: Limit the resources a docker is using.

    You can either use limits in the docker service itself, see 1 and 2.

    And/or you can limit it for each container, see 1 and 2.
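    For the per-container case, docker run accepts resource flags directly; a sketch (the image and values are arbitrary):

    ```shell
    # --memory: hard RAM limit; setting --memory-swap to the same value
    # disables swap usage; --cpus: use at most one and a half CPUs
    docker run -d --memory=512m --memory-swap=512m --cpus=1.5 nginx
    ```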

  • New: Wipe a disk.

    Overwrite it many times with badblocks.

    badblocks -wsv -b 4096 /dev/sde | tee disk_wipe_log.txt
  • New: Impose load on a system to stress it.

    sudo apt-get install stress
    stress --cpu 2

    That will fill up the usage of 2 cpus. To run 1 vm stressor using 1GB of virtual memory for 60s, enter:

    stress --vm 1 --vm-bytes 1G --vm-keep -t 60s

    You can also stress io with --io 4, for example to spawn 4 workers.

  • New: Get the latest tag of a git repository.

    git describe --tags --abbrev=0
  • New: Configure gpg-agent cache ttl.

    The user configuration (in ~/.gnupg/gpg-agent.conf) can only define the default and maximum caching duration; it can't be disabled.

    The default-cache-ttl option sets the timeout (in seconds) after the last GnuPG activity (so it resets if you use it), the max-cache-ttl option set the timespan (in seconds) it caches after entering your password. The default value is 600 seconds (10 minutes) for default-cache-ttl and 7200 seconds (2 hours) for max-cache-ttl.

    default-cache-ttl 21600
    max-cache-ttl 21600

    For this change to take effect, you need to end the session by restarting gpg-agent.

    gpgconf --kill gpg-agent
    gpg-agent --daemon --use-standard-socket
  • New: Get return code of failing find exec.

    When you run find . -exec ls {} \;, even if the command run in the exec returns a status code different than 0, you'll get an overall status code of 0, which makes it difficult to catch errors in bash scripts.

    You can instead use xargs, for example:

    find /tmp/ -iname '*.sh' -print0 | xargs -0 shellcheck

    This will run shellcheck on each of the files found by the find command, and the overall exit code will be non-zero if any of the invocations fail.

  • New: Accept new ssh keys by default.

    While common wisdom is not to disable host key checking, there is a built-in option in SSH itself to do this. It is relatively unknown, since it's new (added in Openssh 6.5).

    This is done with -o StrictHostKeyChecking=accept-new. Or if you want to use it for all hosts you can add the next lines to your ~/.ssh/config:

    Host *
      StrictHostKeyChecking accept-new

    WARNING: use this only if you absolutely trust the IP/hostname you are going to SSH to:

    ssh -o StrictHostKeyChecking=accept-new

    Note, StrictHostKeyChecking=no will add the public key to ~/.ssh/known_hosts even if the key was changed. accept-new is only for new hosts. From the man page:

    If this flag is set to “accept-new” then ssh will automatically add new host keys to the user known hosts files, but will not permit connections to hosts with changed host keys. If this flag is set to “no” or “off”, ssh will automatically add new host keys to the user known hosts files and allow connections to hosts with changed hostkeys to proceed, subject to some restrictions. If this flag is set to ask (the default), new host keys will be added to the user known host files only after the user has confirmed that is what they really want to do, and ssh will refuse to connect to hosts whose host key has changed. The host keys of known hosts will be verified automatically in all cases.

  • New: Do not add trailing / to ls.

    Probably, your ls is aliased or defined as a function in your config files.

    Use the full path to ls like:

    /bin/ls /var/lib/mysql/
  • New: Convert png to svg.

    Inkscape has got an awesome auto-tracing tool.

    • Install Inkscape using sudo apt-get install inkscape
    • Import your image
    • Select your image
    • From the menu bar, select Path > Trace Bitmap Item
    • Adjust the tracing parameters as needed
    • Save as svg

    Check their tracing tutorial for more information.

    Once you are comfortable with the tracing options, you can automate it by using the CLI of Inkscape.

  • New: Redirect stdout and stderr of a cron job to a file.

    */1 * * * * /home/ranveer/ >> /home/ranveer/vimbackup.log 2>&1
  • New: Error when unmounting a device Target is busy.

    • Check the processes that are using the mountpoint with lsof /path/to/mountpoint
    • Kill those processes
    • Try the umount again

    If that fails, you can use umount -l.
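    The steps above can also be done with fuser, which both lists and kills the processes holding the mountpoint (the path is an example):

    ```shell
    fuser -m /mnt/data     # list the processes using the mount
    fuser -km /mnt/data    # kill them all at once (careful, this sends SIGKILL)
    umount /mnt/data
    ```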


  • Correction: Remove unneeded dependencies when installing.
  • Correction: Deprecate it in favour of qbittorrent.

    Use qbittorrent instead.



  • New: Introduce gitsigns.

    Gitsigns is a neovim plugin to create git decorations similar to the vim plugin gitgutter but written purely in Lua.


    Add to your plugins.lua file:

      use {'lewis6991/gitsigns.nvim'}

    Install it with :PackerInstall.

    Configure it in your init.lua with:

    -- Configure gitsigns
    require('gitsigns').setup{
      on_attach = function(bufnr)
        local gs = package.loaded.gitsigns

        local function map(mode, l, r, opts)
          opts = opts or {}
          opts.buffer = bufnr
          vim.keymap.set(mode, l, r, opts)
        end

        -- Navigation
        map('n', ']c', function()
          if vim.wo.diff then return ']c' end
          vim.schedule(function() gs.next_hunk() end)
          return '<Ignore>'
        end, {expr=true})

        map('n', '[c', function()
          if vim.wo.diff then return '[c' end
          vim.schedule(function() gs.prev_hunk() end)
          return '<Ignore>'
        end, {expr=true})

        -- Actions
        map('n', '<leader>gs', gs.stage_hunk)
        map('n', '<leader>gr', gs.reset_hunk)
        map('v', '<leader>gs', function() gs.stage_hunk {vim.fn.line('.'), vim.fn.line('v')} end)
        map('v', '<leader>gr', function() gs.reset_hunk {vim.fn.line('.'), vim.fn.line('v')} end)
        map('n', '<leader>gS', gs.stage_buffer)
        map('n', '<leader>gu', gs.undo_stage_hunk)
        map('n', '<leader>gR', gs.reset_buffer)
        map('n', '<leader>gp', gs.preview_hunk)
        map('n', '<leader>gb', function() gs.blame_line{full=true} end)
        map('n', '<leader>gB', gs.toggle_current_line_blame)
        map('n', '<leader>gd', gs.diffthis)
        map('n', '<leader>gD', function() gs.diffthis('~') end)
        map('n', '<leader>ge', gs.toggle_deleted)

        -- Text object
        map({'o', 'x'}, 'ih', ':<C-U>Gitsigns select_hunk<CR>')
      end
    }


    Some interesting bindings:

    • ]c: Go to next diff chunk
    • [c: Go to previous diff chunk
    • <leader>gs: Stage chunk, it works both in normal and visual mode
    • <leader>gr: Restore chunk from index, it works both in normal and visual mode
    • <leader>gp: Preview diff, you can use it with ]c and [c to see all the chunk diffs
    • <leader>gb: Show the git blame of the line as a shadowed comment


  • New: Introduce DiffView.

    Diffview is a single tabpage interface for easily cycling through diffs for all modified files for any git rev.


    If you're using it with NeoGit and Packer use:

      use {
        "TimUntersberger/neogit",
        requires = {
          "nvim-lua/plenary.nvim",
          "sindrets/diffview.nvim",
        },
      }

    Calling :DiffviewOpen with no args opens a new Diffview that compares against the current index. You can also provide any valid git rev to view only changes for that rev.


    • :DiffviewOpen
    • :DiffviewOpen HEAD~2
    • :DiffviewOpen HEAD~4..HEAD~2
    • :DiffviewOpen d4a7b0d
    • :DiffviewOpen d4a7b0d^!
    • :DiffviewOpen d4a7b0d..519b30e
    • :DiffviewOpen origin/main...HEAD

    You can also provide additional paths to narrow down what files are shown :DiffviewOpen HEAD~2 -- lua/diffview plugin.

    Additional commands for convenience:

    • :DiffviewClose: Close the current diffview. You can also use :tabclose.
    • :DiffviewToggleFiles: Toggle the file panel.
    • :DiffviewFocusFiles: Bring focus to the file panel.
    • :DiffviewRefresh: Update stats and entries in the file list of the current Diffview.

    With a Diffview open and the default key bindings, you can:

    • Cycle through changed files with <tab> and <s-tab>
    • You can stage changes with -
    • Restore a file with X
    • Refresh the diffs with R
    • Go to the file panel with <leader>e
  • New: Use the same binding to open and close the diffview windows.

    vim.keymap.set('n', 'dv', function()
      if next(require('diffview.lib').views) == nil then
        vim.cmd('DiffviewOpen')
      else
        vim.cmd('DiffviewClose')
      end
    end)

  • New: Introduce tridactyl.

    Tridactyl is a Vim-like interface for Firefox, inspired by Vimperator/Pentadactyl.

    In the article you'll also find:

  • New: Guide on how to start using it.

    You’ll want to set a few basic options before you start using beets. The configuration is stored in a text file. You can show its location by running beet config -p, though it may not exist yet. Run beet config -e to edit the configuration in your favorite text editor. The file will start out empty, but here’s a good place to start:

    directory: ~/music
    library: ~/data/musiclibrary.db

    The default configuration assumes you want to start a new organized music folder (that directory above) and that you’ll copy cleaned-up music into that empty folder using beets’ import command. But you can configure beets to behave many other ways:

    • Start with a new empty directory, but move new music in instead of copying it (saving disk space). Put this in your config file:

          import:
              move: yes
    • Keep your current directory structure; importing should never move or copy files but instead just correct the tags on music. Put the line copy: no under the import: heading in your config file to disable any copying or renaming. Make sure to point directory at the place where your music is currently stored.

    • Keep your current directory structure and do not correct files’ tags: leave files completely unmodified on your disk. (Corrected tags will still be stored in beets’ database, and you can use them to do renaming or tag changes later.) Put this in your config file:

          import:
              copy: no
              write: no

      to disable renaming and tag-writing.

  • New: Importing your library.

    The next step is to import your music files into the beets library database. Because this can involve modifying files and moving them around, data loss is always a possibility, so now would be a good time to make sure you have a recent backup of all your music. We’ll wait.

    There are two good ways to bring your existing library into beets. You can either: (a) quickly bring all your files with all their current metadata into beets’ database, or (b) use beets’ highly-refined autotagger to find canonical metadata for every album you import. Option (a) is really fast, but option (b) makes sure all your songs’ tags are exactly right from the get-go. The point about speed bears repeating: using the autotagger on a large library can take a very long time, and it’s an interactive process. So set aside a good chunk of time if you’re going to go that route.

    If you’ve got time and want to tag all your music right once and for all, do this:

    beet import /path/to/my/music

    (Note that by default, this command will copy music into the directory you specified above. If you want to use your current directory structure, set the import.copy config option.) To take the fast, un-autotagged path, just say:

    beet import -A /my/huge/mp3/library

    Note that you just need to add -A for “don’t autotag”.


google chrome

  • Correction: Update the installation steps.

    • Import the GPG key with the following command.

      wget -qO- https://dl.google.com/linux/linux_signing_key.pub | gpg --dearmor | sudo tee /usr/share/keyrings/google-chrome.gpg > /dev/null

    • Once the GPG import is complete, you will need to import the Google Chrome repository.

      echo 'deb [arch=amd64 signed-by=/usr/share/keyrings/google-chrome.gpg] http://dl.google.com/linux/chrome/deb/ stable main' | sudo tee /etc/apt/sources.list.d/google-chrome.list

    • Install the program:

      sudo apt-get update
      sudo apt-get install google-chrome-stable


Hard drive health

  • New: Check the disk health with smartctl.

    Start with a long self test with smartctl. Assuming the disk to test is /dev/sdd:

    smartctl -t long /dev/sdd

    The command will respond with an estimate of how long it thinks the test will take to complete.

    To check progress use:

    smartctl -A /dev/sdd | grep remaining
    smartctl -c /dev/sdd | grep remaining

    Don't check too often because it can abort the test with some drives. If you receive an empty output, examine the reported status with:

    smartctl -l selftest /dev/sdd

    If errors are shown, check the dmesg as there are usually useful traces of the error.
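    Once the self test has finished you can also ask the drive for its overall health assessment:

    ```shell
    smartctl -H /dev/sdd
    ```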

  • New: Check the health of a disk with badblocks.

    The badblocks command will write and read the disk with different patterns, thus overwriting the whole disk, so you will lose all the data in the disk.

    This test is good for rotational disks as there is no disk degradation on massive writes, do not use it on SSD though.

    WARNING: be sure that you specify the correct disk!!

    badblocks -wsv -b 4096 /dev/sde | tee disk_analysis_log.txt

    If errors are shown, it means that all the spare sectors of the disk are already in use, so you must not use this disk anymore. Again, check dmesg for traces of disk errors.


  • New: Move the focus to a container.

    Get the container identifier with xprop and then:

    i3-msg '[title="khime"]' focus
    i3-msg '[class="Firefox"]' focus
  • New: Interact with Python.

    Install the i3ipc library:

    pip install i3ipc

    Create the connection object:

    from i3ipc import Connection, Event
    i3 = Connection()

    Interact with i3:

    # Print the focused window and its workspace
    focused = i3.get_tree().find_focused()
    print('Focused window %s is on workspace %s' %
          (focused.name, focused.workspace().name))

    # Query the state of the outputs
    outputs = i3.get_outputs()
    print('Active outputs:')
    for output in filter(lambda o: o.active, outputs):
        print(output.name)

    # Send a command to be executed synchronously
    i3.command('focus left')

    # Take all fullscreen windows out of fullscreen
    for container in i3.get_tree().find_fullscreen():
        container.command('fullscreen')

    # Print the names of all the containers in the tree
    root = i3.get_tree()
    for con in root:
        print(con.name)

    def on_workspace_focus(self, e):
        # The first parameter is the connection to the ipc and the second is an object
        # with the data of the event sent from i3.
        if e.current:
            print('Windows on this workspace:')
            for w in e.current.leaves():
                print(w.name)

    # Dynamically rename the workspace after the focused window's class
    def on_window_focus(i3, e):
        focused = i3.get_tree().find_focused()
        ws_name = "%s:%s" % (focused.workspace().num, focused.window_class)
        i3.command('rename workspace to "%s"' % ws_name)

    # Subscribe to the events and start the main loop
    i3.on(Event.WORKSPACE_FOCUS, on_workspace_focus)
    i3.on(Event.WINDOW_FOCUS, on_window_focus)
    i3.main()


  • New: Fix Corrupt: SQLitePCL.pretty.SQLiteException: database disk image is malformed.

    If your server log file shows SQLite errors like the following example your jellyfin.db file needs attention.


    Typical causes of this are sudden and abrupt terminations of the Jellyfin server process, such as a power loss, operating system crash, force killing the server process, etc.

    To solve it there are many steps:

  • New: Restore watched history.

    Jellyfin stores the watched information in one of the .db files, there are two ways to restore it:

    The user data is stored in the UserDatas table of the library.db database file. The media data is stored in the TypedBaseItems table of the same database.

    Comparing the contents of the tables of the broken database (lost watched content) and a backup database, I've seen that the media content is the same after a full library rescan, so the issue was fixed by injecting the missing user data from the backup into the working database through the sqlite operation of importing a table from another database.
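    The import can be sketched with the sqlite3 CLI by attaching the backup database and copying the table over (the paths are assumptions; stop Jellyfin first and work on copies of the files):

    ```shell
    sqlite3 /var/lib/jellyfin/data/library.db <<'SQL'
    ATTACH DATABASE '/backup/library.db' AS backup;
    -- Inject the missing user data from the backup
    INSERT OR REPLACE INTO UserDatas SELECT * FROM backup.UserDatas;
    DETACH DATABASE backup;
    SQL
    ```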

  • New: Fix ReadOnly: SQLitePCL.pretty.SQLiteException: attempt to write a readonly database.

    Some of the database files of Jellyfin are not writable by the jellyfin user; check if you changed the ownership of the files, for example in the process of restoring a database file from a backup.

  • New: Deceptive site ahead.

    It seems that Google is marking the domains that host Jellyfin as deceptive. If it happens to you, your users won't be able to access your instance with Firefox, Chrome or the Android app. Nice uh? It's kind of scary how google is able to control who can access what on the internet without you signing up for it.

    If you search the problem online they suggest that you log in with your google account into the Search Console and see the reasons behind it. Many people did this and reported in the issue that they didn't get any useful information through this process. It's a privacy violation though, as now google is able to tie your identity (as your google account is linked to your phone number) with your Jellyfin domain. Completely disgusting.

    To solve this issue you need to file a case with google and wait for them to unban you. It's like asking them for permission so that they let your users access your system. The disgust levels keep on growing. Don't waste your time being creative in the Comments of the request either, it looks like they don't even read them.

    The problem is that until the people from Jellyfin finds a solution, after following this ugly process, you may be flagged again any time in the future (ranging from days to months).

    A mitigation of the problem is to have an alternative domain that your users can use. You may be lucky and Google won't block both domains at the same time.

    For more information follow the Jellyfin issue or the Jellyfin reddit thread.

  • New: Missing features.

    • Hide movie or tv show from my gallery: Tracked by these feature requests 1 and 2
  • New: Introduce Jellyfin Desktop.

    • Download the latest deb package from the releases page.
    • Install the dependencies.
    • Run dpkg -i on the downloaded package.

    If you're on a TV you may want to enable the TV mode so that the remote keys work as expected. The play/pause/next/prev won't work until this issue is solved, but it's not that bad to use the "Ok" and then navigate with the arrow keys.

  • New: Introduce Jellycon.

    JellyCon is a lightweight Kodi add-on that lets you browse and play media files directly from your Jellyfin server within the Kodi interface. It can be thought of as a thin frontend for a Jellyfin server.

    It's not very pleasant to use though.

  • New: Forgot Password. Please try again within your home network to initiate the password reset process.

    If you're an external jellyfin user you can't reset your password unless you are part of the LAN. This is done because the reset password process is simple and insecure.

    If you don't care about that and still think that the internet is a happy and safe place here and here are some instructions on how to bypass the security measure.

    For more information also read 1 and 2.


  • New: How to add fonts to kitty.

    • Add your fonts to the ~/.local/share/fonts directory
    • Check they are available when you run kitty +list-fonts
    • Add them to your config:

    font_family      Operator Mono Book
    bold_font        Operator Mono Medium
    italic_font      Operator Mono Book Italic
    bold_italic_font Operator Mono Medium Italic

  • New: Troubleshoot the Screen not working on server with sudo issue.

    Make sure you're using the ssh alias below

    alias ssh="kitty +kitten ssh"

    And then copy the ~/.terminfo into /root

    sudo cp -r ~/.terminfo /root/


  • New: Introduce Kodi.

    Kodi is an entertainment center software. It basically converts your device into a smart TV.



  • New: How to install matrix.

    sudo apt install -y wget apt-transport-https
    sudo wget -O /usr/share/keyrings/element-io-archive-keyring.gpg
    echo "deb [signed-by=/usr/share/keyrings/element-io-archive-keyring.gpg] default main" | sudo tee /etc/apt/sources.list.d/element-io.list
    sudo apt update
    sudo apt install element-desktop


  • New: Introduce MediaTracker.

    MediaTracker is a self-hosted media tracker for movies, tv shows, video games, books and audiobooks.


    With docker compose:

    version: "3"
    services:
      mediatracker:
        container_name: mediatracker
        ports:
          - 7481:7481
        volumes:
          - /home/YOUR_HOME_DIRECTORY/.config/mediatracker/data:/storage
          - assetsVolume:/assets
        environment:
          SERVER_LANG: en
          TMDB_LANG: en
          AUDIBLE_LANG: us
          TZ: Europe/London
        image: bonukai/mediatracker:latest
    volumes:
      assetsVolume: null

    If you attach more than one docker network the container becomes unreachable :S.

    Install the jellyfin plugin:

    They created a Jellyfin plugin so that all scrobbles are sent automatically to MediaTracker.

    • Add new Repository in Jellyfin (Dashboard -> Plugins -> Repositories -> +) from url
    • Install MediaTracker plugin from Catalogue (Dashboard -> Plugins -> Catalogue)

    Some tips on usage:

    • Add the shows you want to watch to the watchlist so that it's easier to find them
    • When you're ending an episode, click on the episode number on the watchlist element and then rate the episode itself.

    • You can create public lists to share with the rest of the users. The way to share them is a bit archaic so far though: it's only through the list link, as other users won't be able to see the list in the interface.

  • Correction: Update ryot comparison with mediatracker.

    Ryot has a better web design, and it also has a Jellyfin scrobbler, although it's not yet stable. There are other UI tweaks that are preventing me from migrating to Ryot, such as the easier media rating and the percentage over five stars rating system.


  • New: Install retroarch instructions.

    To add the stable branch to your system type:

    sudo add-apt-repository ppa:libretro/stable
    sudo apt-get update
    sudo apt-get install retroarch

    Go to Main Menu/Online Updater and then update everything you can:

    • Update Core Info Files
    • Update Assets
    • Update controller Profiles
    • Update Databases
    • Update Overlays
    • Update GLSL Shaders


  • New: Introduce Rocketchat integrations.

    Rocket.Chat supports webhooks to integrate tools and services you like into the platform. Webhooks are simple event notifications via HTTP POST. This way, any webhook application can post a message to a Rocket.Chat instance and much more.

    With scripts, you can point any webhook to Rocket.Chat and process the requests to print customized messages, define the username and avatar of the user of the messages and change the channel for sending messages, or you can cancel the request to prevent undesired messages.

    Available integrations:

    • Incoming Webhook: Let an external service send a request to Rocket.Chat to be processed.
    • Outgoing Webhook: Let Rocket.Chat trigger and optionally send a request to an external service and process the response.

    By default, a webhook is designed to post messages only. The message is part of a JSON structure.

    Incoming webhook script:

    To create a new incoming webhook:

    • Navigate to Administration > Workspace > Integrations.
    • Click +New at the top right corner.
    • Switch to the Incoming tab.
    • Turn on the Enabled toggle.
    • Name: Enter a name for your webhook. The name is optional; however, providing a name to manage your integrations easily is advisable.
    • Post to Channel: Select the channel (or user) where you prefer to receive the alerts. It is possible to override messages.
    • Post as: Choose the username that this integration posts as. The user must already exist.
    • Alias: Optionally enter a nickname that appears before the username in messages.
    • Avatar URL: Enter a link to an image as the avatar URL if you have one. The avatar URL overrides the default avatar.
    • Emoji: Enter an emoji optionally to use the emoji as the avatar. Check the emoji cheat sheet
    • Turn on the Script Enabled toggle.
    • Paste your script inside the Script field (check below for a sample script)
    • Save the integration.
    • Use the generated Webhook URL to post messages to Rocket.Chat.

    The Rocket.Chat integration script should be written in ES2015 / ECMAScript 6. The script requires a global class named Script, which is instantiated only once during the first execution and kept in memory. This class contains a method called process_incoming_request, which is called by your server each time it receives a new request. The process_incoming_request method takes an object as a parameter with the request property and returns an object with a content property containing a valid Rocket.Chat message, or an object with an error property, which is returned as the response to the request in JSON format with a Code 400 status.

    A valid Rocket.Chat message must contain a text field that serves as the body of the message. If you redirect the message to a channel other than the one indicated by the webhook token, you can specify a channel field that accepts room id or, if prefixed with "#" or "@", channel name or user, respectively.

    You can use the console methods to log information to help debug your script. More information about the console can be found here. To view the logs, navigate to Administration > Workspace > View Logs.

    /* exported Script */
    /* globals console, _, s */
    /** Global Helpers
     * console - A normal console instance
     * _       - An underscore instance
     * s       - An underscore string instance
     */
    class Script {
      /**
       * @params {object} request
       */
      process_incoming_request({ request }) {
        // request.url.hash
        // request.url.query
        // request.url.pathname
        // request.url.path
        // request.url_raw
        // request.url_params
        // request.headers
        // request.user._id
        // request.user.username
        // request.content_raw
        // request.content
        // console is a global helper to improve debug
        return {
          content: {
            text: request.content.text,
            icon_emoji: request.content.icon_emoji,
            // "attachments": [{
            //   "color": "#FF0000",
            //   "author_name": "Rocket.Cat",
            //   "author_link": "",
            //   "author_icon": "",
            //   "title": "Rocket.Chat",
            //   "title_link": "",
            //   "text": "Rocket.Chat, the best open source chat",
            //   "fields": [{
            //     "title": "Priority",
            //     "value": "High",
            //     "short": false
            //   }],
            //   "image_url": "",
            //   "thumb_url": ""
            // }]
          }
        };
        // return {
        //   error: {
        //     success: false,
        //     message: 'Error example'
        //   }
        // };
      }
    }

    To test if your integration works, use curl to make a POST request to the generated webhook URL.

    curl -X POST \
      -H 'Content-Type: application/json' \
      --data '{
          "icon_emoji": ":smirk:",
          "text": "Example message"
      }' \
      <YOUR_WEBHOOK_URL>

    If you want to send the message to another channel or user use the channel argument with @user or #channel. Keep in mind that the user of the integration needs to be part of those channels if they are private.

    curl -X POST \
      -H 'Content-Type: application/json' \
      --data '{
          "icon_emoji": ":smirk:",
          "channel": "#notifications",
          "text": "Example message"
      }' \
      <YOUR_WEBHOOK_URL>

    If you want to do more complex things uncomment the part of the attachments.


  • New: Introduce sed snippets.
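
A couple of basic sed snippets for reference, run here against a throwaway sample.txt (file name and contents are just examples):

```shell
# Create a small sample file to work on.
printf 'foo\nbar\nfoo baz\n' > sample.txt
sed -i 's/foo/qux/g' sample.txt   # replace every occurrence of foo with qux
sed -i '/bar/d' sample.txt        # delete all lines containing bar
cat sample.txt                    # qux / qux baz
```

Note that -i edits the file in place; GNU sed accepts it bare, while BSD sed needs an explicit backup suffix (for example -i '').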



  • Correction: Use SHA256 for the verification.

    Now SHA1 is not allowed.
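For reference, a sketch of the SHA256 verification flow with sha256sum; the file names here are placeholders for the real release artifact and its published checksum file:

```shell
# Stand-in for a downloaded release and its published checksum list.
echo "release contents" > release.tar.gz
sha256sum release.tar.gz > SHA256SUMS
# Verify the download against the checksum file; prints "release.tar.gz: OK".
sha256sum -c SHA256SUMS
```

In the real flow you download SHA256SUMS from the project instead of generating it yourself.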

  • New: Suggest more debugging steps when connecting to Google.

    The code has changed and the fix is now different


  • New: Configure nvim with lua.

    Nvim moved away from vimscript and now needs to be configured in lua. You can access the config file in ~/.config/nvim/init.lua. It's not created by default so you need to do it yourself.
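Creating the file is just:

```shell
# init.lua isn't created by default; make the config directory and the file.
mkdir -p ~/.config/nvim
touch ~/.config/nvim/init.lua
```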

    In the article it explains how to do the basic configuration with lua:

    And some troubleshooting:

  • Correction: Update the leader key section.

    There are different opinions on what key to use as the <leader> key. The <space> is the most comfortable as it's always close to your thumbs, and it works well with both hands. Nevertheless, you can only use it in normal mode, because in insert mode <space><whatever> will be triggered as you write. An alternative is to use ;, which is also comfortable (if you use the English key layout) and can be used in insert mode.

    If you want to define more than one leader key you can either:

    • Change the mapleader many times in your file: As the value of mapleader is used at the moment the mapping is defined, you can indeed change that while plugins are loading. For that, you have to explicitly :runtime the plugins in your ~/.vimrc (and count on the canonical include guard to prevent redefinition later):

    let mapleader = ','
    runtime! plugin/NERD_commenter.vim
    runtime! ...
    let mapleader = '\'
    runtime! plugin/mark.vim

    • Use the keys directly instead of using <leader>

    " editing mappings
    nnoremap ,a <something>
    nnoremap ,k <something else>
    nnoremap ,d <and something else>
    " window management mappings
    nnoremap gw <something>
    nnoremap gb <something else>

    Defining mapleader and/or using <leader> may be useful if you often change your mind on what key to use as leader, but it won't be of any use if your mappings are stable.

  • New: Configure Telescope to follow symbolic links.

    By default symbolic links are not followed either for files or directories, to enable it use

      require('telescope').setup {
        pickers = {
          find_files = {
            follow = true
          }
        }
      }
  • New: Run a command when opening vim.

    nvim -c ':DiffViewOpen'
  • New: Update treesitter language definitions.

    To do so you need to run:

    :TSInstall <language>

    To update the parsers run:

    :TSUpdate

  • New: Telescope changes working directory when opening a file.

    In my case it was due to a snippet I have to remember the folds:

      augroup remember_folds
        autocmd BufWinLeave * silent! mkview
        autocmd BufWinEnter * silent! loadview
      augroup END

    It looks like it had saved a view with the other working directory, so when a file was loaded the cwd changed. To solve it I created a new mkview in the correct directory.

  • New: Concealment.

    Some plugins allow the concealment of some text; for example in orgmode you will only see the description of a link and not its target, making it more pleasant to read. To enable it set in your config:

    -- Conceal links
    -- Use visual mode to navigate through the hidden text
    vim.opt.conceallevel = 2
    vim.opt.concealcursor = 'nc'


    • conceallevel: Determine how text with the "conceal" syntax attribute is shown:

    • 0: Text is shown normally

    • 1: Each block of concealed text is replaced with one character. If the syntax item does not have a custom replacement character defined the character defined in 'listchars' is used (default is a space). It is highlighted with the "Conceal" highlight group.
    • 2: Concealed text is completely hidden unless it has a custom replacement character defined.
    • 3: Concealed text is completely hidden.

    • concealcursor: Sets the modes in which text in the cursor line can also be concealed. When the current mode is listed then concealing happens just like in other lines.

    • n: Normal mode
    • v: Visual mode
    • i: Insert mode
    • c: Command line editing, for 'incsearch'

    A useful value is nc. So long as you are moving around text is concealed, but when starting to insert text or selecting a Visual area the concealed text is displayed, so that you can see what you are doing.

  • New: Email inside nvim.

    The best looking one is himalaya.


  • New: Introduce yq.

    yq is a portable command-line YAML, JSON, XML, CSV, TOML and properties processor. It uses jq like syntax but works with yaml files as well as json, xml, properties, csv and tsv. It doesn't yet support everything jq does - but it does support the most common operations and functions, and more is being added continuously.

  • New: Find and update items in an array.

    We have an array and we want to update the elements with a particular name.

    Given a sample.yaml file of:

    - name: Foo
      numBuckets: 0
    - name: Bar
      numBuckets: 0

    Then yq '(.[] | select(.name == "Foo") | .numBuckets) |= . + 1' sample.yaml will output:

    - name: Foo
      numBuckets: 1
    - name: Bar
      numBuckets: 0
  • New: Iterate over the elements of a query with a bash loop.

    readarray dependencies < <(yq e -o=j -I=0 '.roles[]' requirements.yaml)
    for dependency in "${dependencies[@]}"; do
        source="$(echo "$dependency" | yq e '.src' -)"
    done



  • New: Split the screen.

    Go into app switcher, tap on the app icon above the active app and then select "Split top".


  • New: Introduce Libretube.

    Libretube is an alternative frontend for YouTube, for Android.

    YouTube has an extremely invasive privacy policy which relies on using user data in unethical ways. They store a lot of your personal data - ranging from ideas, music taste, content, political opinions, and much more than you think.

    This project is aimed at improving the users' privacy by being independent from Google and bypassing their data collection.

    Therefore, the app is using the Piped API, which uses proxies to circumvent Google's data collection and includes some other additional features.

    Differences to NewPipe:

    With NewPipe, the extraction is done locally on your phone, and all the requests sent towards YouTube/Google are done directly from the network you're connected to, which doesn't use a middleman server in between. Therefore, Google can still access information such as the user's IP address. Aside from that, subscriptions can only be stored locally.

    LibreTube takes this one step further and proxies all requests via Piped (which uses the NewPipeExtractor). This prevents Google servers from accessing your IP address or any other personal data. Apart from that, Piped allows syncing your subscriptions between LibreTube and Piped, which can be used on desktop too.

    If the NewPipeExtractor breaks, it only requires an update of Piped and not LibreTube itself. Therefore, fixes usually arrive faster than in NewPipe.

    While LibreTube only supports YouTube, NewPipe also allows the use of other platforms like SoundCloud, PeerTube and Bandcamp. Both are great clients for watching YouTube videos. It depends on the individual's use case which one fits their needs better.

    Other software that uses Piped:

    • Yattee - an alternative frontend for YouTube, for IOS.
    • Hyperpipe - an alternative privacy respecting frontend for YouTube Music.
    • Musicale - an alternative to YouTube Music, with style.
    • ytify - a complementary minimal audio streaming frontend for YouTube.
    • PsTube - Watch and download videos without ads on Android, Linux, Windows, iOS, and Mac OSX.
    • Piped-Material - A fork of Piped, focusing on better performance and a more usable design.
    • ReacTube - Privacy friendly & distraction free Youtube front-end using Piped API.


  • New: Introduce Happycow.

    Happycow is a web application and android app to search vegan restaurants nearby.

    The android app requires google services to work :(.


  • New: Introduce Orgzly.

    Orgzly is an android application to interact with orgmode files.

  • New: Avoid the conflicts in the files edited in two places.

    If you use syncthing you may be seeing conflicts in your files. This happens specially if you use the Orgzly widget to add tasks, because it doesn't synchronize the files to the directory when using the widget. If a file changes a lot on one device, for example on the mobile, it's interesting to have a specific file that's edited mainly on that device; when you want to edit it elsewhere, sync as specified below and then proceed with the editing. Once it's done, manually sync the changes in Orgzly again. The rest of the files synced to the mobile are for read-only reference, so they rarely change.

    If you want to sync reducing the chance of conflicts then:

    • Open Orgzly and press Synchronize
    • Open Syncthing.

    If that's not enough check these automated solutions:

    Other interesting solutions:

    • org-orgzly: Script to parse a chosen org file or files, check if an entry meets required parameters, and if it does, write the entry in a new file located inside the folder you desire to sync with orgzly.
    • Git synchronization: I find it more cumbersome than syncthing but maybe it's interesting for you.
  • New: Add new orgzly fork.

    Alternative fork maintained by the community


  • New: Introduce seedvault.

    Seedvault is an open-source encrypted backup app for inclusion in Android-based operating systems.

    While every smartphone user wants to be prepared with comprehensive data backups in case their phone is lost or stolen, not every Android user wants to entrust their sensitive data to Google's cloud-based storage. By storing data outside Google's reach, and by using client-side encryption to protect all backed-up data, Seedvault offers users maximum data privacy with minimal hassle.

    Seedvault allows Android users to store their phone data without relying on Google's proprietary cloud storage. Users can decide where their phone's backup will be stored, with options ranging from a USB flash drive to a remote self-hosted cloud storage alternative such as NextCloud. Seedvault also offers an Auto-Restore feature: instead of permanently losing all data for an app when it is uninstalled, Seedvault's Auto-Restore will restore all backed-up data for the app upon reinstallation.

    Seedvault protects users' private data by encrypting it on the device with a key known only to the user. Each Seedvault account is protected by client-side encryption (AES/GCM/NoPadding). This encryption is unlockable only with a 12-word randomly-generated key.

    With Seedvault, backups run automatically in the background of the phone's operating system, ensuring that no data will be left behind if the device is lost or stolen. The Seedvault application requires no technical knowledge to operate, and does not require a rooted device.

    In the article you'll also find:

    • How to install it
    • How to store the backup remotely
    • How to restore a backup


  • New: Add installation steps.

    These instructions only work for 64 bit Debian-based Linux distributions such as Ubuntu, Mint etc.

    • Install our official public software signing key
    wget -O- | gpg --dearmor > signal-desktop-keyring.gpg
    cat signal-desktop-keyring.gpg | sudo tee -a /usr/share/keyrings/signal-desktop-keyring.gpg > /dev/null
    • Add our repository to your list of repositories
    echo 'deb [arch=amd64 signed-by=/usr/share/keyrings/signal-desktop-keyring.gpg] xenial main' |\
      sudo tee -a /etc/apt/sources.list.d/signal-xenial.list
    • Update your package database and install signal
    sudo apt update && sudo apt install signal-desktop




Lindy Hop



  • New: Introduce Vial.

    Vial is an open-source cross-platform (Windows, Linux and Mac) GUI and a QMK fork for configuring your keyboard in real time.

    Even though you can use a web version, you can install it locally through an AppImage:

    • Download the latest version
    • Give it execution permissions
    • Add the file somewhere in your $PATH
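The steps above can be sketched like this, with a placeholder file standing in for the real download (the actual AppImage file name will differ):

```shell
# Placeholder file standing in for the downloaded Vial AppImage.
touch Vial-x86_64.AppImage
chmod +x Vial-x86_64.AppImage      # give it execution permissions
mkdir -p ~/.local/bin              # a directory that's usually in $PATH
mv Vial-x86_64.AppImage ~/.local/bin/vial
```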

    On Linux you need to configure a udev rule.

    For a universal access rule for any device with Vial firmware, run this in your shell while logged in as your user (this will only work with sudo installed):

    export USER_GID=`id -g`; sudo --preserve-env=USER_GID sh -c 'echo "KERNEL==\"hidraw*\", SUBSYSTEM==\"hidraw\", ATTRS{serial}==\"*vial:f64c2b3c*\", MODE=\"0660\", GROUP=\"$USER_GID\", TAG+=\"uaccess\", TAG+=\"udev-acl\"" > /etc/udev/rules.d/99-vial.rules && udevadm control --reload && udevadm trigger'

    This command will automatically create a udev rule and reload the udev system.

Video Gaming

Age of Empires


  • New: Fertilizing with manure.

    Manure is one of the best organic fertilizers for plants. It's made by the accumulation of excrements of bats, sea birds and seals and it usually doesn't contain additives or synthetic chemical components.

    This fertilizer is rich in nitrogen, phosphorus and potassium, which are key minerals for the growth of plants. These components help the regeneration of the soil, enrich it in terms of nutrients and also act as a fungicide, preventing pests.

    Manure is a fertilizer of slow absorption, which means that it's released to the plants in an efficient, controlled and slow pace. That way the plants take the nutrients when they need them.

    The best moment to use it is in spring, and depending on the type of plant you should apply it every month and a half to three months. Its use in winter is not recommended, as it may burn the plant's roots.

    Manure can be obtained in dust or liquid form. The first is perfect to scatter directly over the earth, while the second is better used on plant pots. You don't need to use much; in fact, just a couple of spoonfuls per pot is enough. Apply it around the base of the plant, avoiding contact with leaves, stem or exposed roots, as it may burn them. After you apply it remember to water often; keep in mind that it's like a heavy greasy sandwich for the plants, and they need water to digest it.

    For my indoor plants I'm going to apply a small dose (one spoon per plant) at the start of Autumn (first days of September), and two spoons at the start of spring (first days of March).


  • New: Create a list of most used emojis.

    (╯°□°)╯ ┻━┻
    \\ ٩( ᐛ )و //
    ᕕ( ᐛ )ᕗ
    ( ˘ ³˘)♥
  • New: Add new emojis.





  • Correction: Update introduction.

    The method was described by David Allen in a book with the same name. It's clear that the book is the cornerstone of David's business. He is selling his method on every word, sometimes to the point of tiresome. It also repeats the same ideas in different parts of the book; I guess that's good in terms of sticking an idea in people's minds, but if you're already convinced and are trying to sum up the book it's like, hey, I already have 90% of the valuable contents of this chapter in my summary. The context of the writer is also obvious: the book was written a while ago, and it shows who he's writing for. It talks quite often about assistants, bosses of high firm companies he's helped, preferring low-tech physical solutions over digital ones, a lot of references to parenting... If you're able to ignore all the above, it's actually a very good book. The guy has been polishing the method for more than 30 years, and has pretty nice ideas that can change how you manage your life.

    My idea of this summary is to try to extract the useful ideas removing all those old-fashioned capitalist values from it.

  • New: Guides on processing your inbox.

    Remember to follow the next rules while processing the items:

    • Process the top item first: that way you treat each element equally, so the "least" important ones are not left dangling forever in your inbox, thus thwarting its purpose.
    • Process one item at a time.
    • Never put anything back into “in.”

    For each element you need to ask yourself: "What's the next action?".

  • New: How to clarify your inbox items.

    If you can do something about the element, you need to think which is the next physical, visible activity that would be required to move the situation towards closure. It's tricky, something like "set meeting" won't do because it's not descriptive of physical behaviour. There is still stuff to decide how, when, with whom, if you don't do it now you won't empty your head and the uncertainty will create a psychological gap that will make you procrastinate, so define the next action now. "Decide what to do about X" doesn't work either, you may need to gather more information on the topic, but deciding doesn't take time.

    Once you have the next action, if it can be done in two minutes or less, do it when you first pick the item up. Even if it is not a high-priority one, do it now if you’re ever going to do it at all. The rationale for the two-minute rule is that it’s more or less the point where it starts taking longer to store and track an item than to deal with it the first time it’s in your hands. Two minutes is just a guideline. If you have a long open window of time in which to process your in-tray, you can extend the cutoff for each item to five or ten minutes. If you’ve got to get to the bottom of all your input rapidly, then you may want to shorten the time to one minute, or even thirty seconds, so you can get through everything a little faster.

    There’s nothing you really need to track about your two-minute actions. Just do them. If, however, you take an action and don’t finish the project with that one action, you’ll need to clarify what’s next on it, and manage that according to the same criteria.

    If the next action is going to take longer than two minutes, ask yourself, “Am I the best person to be doing it?” If not, hand it off to the appropriate person, in order of priority:

    • Send an e-mail.
    • Write a note or an over-note on paper and route it to that person.
    • Send an instant message.
    • Add it as an agenda item on a list for your next real-time conversation with that person.
    • Talk with her directly, either face-to-face or by phone.

    When you hand it off to someone else, and if you care at all whether something happens as a result, you’ll need to track it. Depending on how active you need to be it can go to your Waiting list or to your tickler.

  • Correction: Deprecate pydo.

    I'm happy with orgmode so far, so I'm not going to continue its development.

  • New: UPGRADE FAILED: another operation (install/upgrade/rollback) is in progress.

    This error can happen for a few reasons, but it most commonly occurs when there is an interruption during the upgrade/install process.

    To fix this you may need to first roll back to another version, and then reinstall or run helm upgrade again.

    Try below command to list the available charts:

    helm ls --namespace <namespace>

    You may notice that running that command doesn't show any columns with information. If that's the case, try to check the history of the previous deployment:

    helm history <release> --namespace <namespace>

    This provides information indicating that the original installation was never completed successfully and is stuck in a pending state, something like STATUS: pending-upgrade.

    To escape from this state, use the rollback command:

    helm rollback <release> <revision> --namespace <namespace>

    revision is optional, but you should try to provide it.

    You may then try to issue your original command again to upgrade or reinstall.

  • New: Ansible retry a failed job.

    - command: /usr/bin/false
      retries: 3
      delay: 3
      register: result
      until: result.rc == 0
  • New: Introduce Bookwyrm.

    Bookwyrm is a social network for tracking your reading, talking about books, writing reviews, and discovering what to read next. Federation allows BookWyrm users to join small, trusted communities that can connect with one another, and with other ActivityPub services like Mastodon and Pleroma.

  • New: Introduce Elastic security.

    Elastic security is a program to protect, investigate, and respond to complex threats by unifying the capabilities of SIEM, endpoint security, and cloud security.

  • New: Introduce RSS.

    Note: This post is a shameless direct copy of Nicky's beautiful post, read it there as it has beautiful illustrations.

    What is RSS (Really Simple Syndication)?

    Imagine an open version of Twitter or Facebook News Feed, with no psy-op ads, owned by no oligopoly, manipulated by no algorithm, and all under your full control.

    Imagine a version of the newsletter where you don't have to worry about them selling your email to scammers, labyrinth-like unsubscribe pages, or stuffing your inbox with ever more crap.

    Now imagine this existed and was extremely popular 15 years ago. Then we got suckered by the shiny walled gardens.

    Well, it's time to make like a tree and go back to the future, baby!

    How does RSS work?

    Unlike newsletters, where you give each publisher your email (and they may abuse that trust), RSS works on a "don't call me, I'll call you" policy.

    An RSS feed is a text file on a website. It's just a bunch of posts – no tracking or "personalization" – like a printed newspaper:

    Then, whatever RSS reader app you use – you can use any app made by anyone – it'll call the websites for the feeds you specifically opted into, no more or less. The websites can't force it in the other direction.

    Your app then shows you your posts in good ol' reverse chronological order. (Some apps let you add extra filters, but unlike social media algorithms, you control 'em.) Apps also make the posts prettier than raw text:

    Really Simple, indeed!
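
    Since a feed really is just a text file, a reader doesn't need anything fancy: it fetches the file and parses out the posts. A minimal sketch with Python's standard library, using a tiny hypothetical feed (the content is made up for illustration; a real reader would fetch it over HTTP):

    ```python
    import xml.etree.ElementTree as ET

    # A tiny hypothetical RSS feed, the kind of text file a site
    # would serve at something like /feed.xml.
    FEED = """<?xml version="1.0"?>
    <rss version="2.0">
      <channel>
        <title>Example Blog</title>
        <item><title>Newest post</title></item>
        <item><title>Older post</title></item>
      </channel>
    </rss>"""

    # Parse the feed and pull out the post titles, newest first,
    # exactly as the site ordered them.
    root = ET.fromstring(FEED)
    titles = [item.findtext("title") for item in root.iter("item")]
    print(titles)  # ['Newest post', 'Older post']
    ```

    A real reader app just repeats this for every feed you subscribed to and merges the results in reverse chronological order.
    
    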

    Cool, how do I get started?

    First, you need a reader app. The minimalist Inoreader is one option, Feedly is the most popular, and some folks use The Old Reader. See this list of readers.

    To add a feed to your app, just paste a link to the blog/site, and your app will automatically find the feed! RSS also lets you follow creators on YouTube, Substack, Medium, and more.

    Tips for using RSS wisely

    • Beware the hoarder instinct. No algorithm can save you from hoarding feeds "just in case", then being overwhelmed. The only cure is to ruthlessly Marie Kondo that crap – if a feed doesn't consistently enrich your life, cut it.
    • Some feeds only give you the excerpt of a post, with a link to see the full post at their site. Don't follow those: they break you out of the RSS reading experience, and trick you into losing time on their site. (This is a harsh rule: I used to follow Quanta Magazine's feed, but they switched from full-text to excerpts, so I unsubscribed.)
    • Don't follow feeds that update more than once a day. Go for daily digests, or better yet, weekly digests.

    If RSS Was So Great, Why'd It Die In The First Place?

    Well, Google killed Google Reader in 2013, the #1 RSS reader at the time. This was to make way for Google Plus, which failed. The sacrificial lamb was for nothing.

    But Google only did what nearly everyone – including yours truly – did in 2013: leave the open, decentralized Web 1.0 for the shiny new Web 2.0 platforms. Why? Well, it was more fun & convenient.

    But now in 2021, for most of us, social media is very not fun and not convenient. That's why I went back to the future with RSS, and wrote this post encouraging you to do the same!

    (Ok, RSS had two more problems: 1) Getting overwhelmed with feeds. As said above, the only cure is to trim ruthlessly. 2) RSS lets you serve text/link/image ads, but not the creepy user-tracking ads. In 2013 that was the "best" way to make money on the web, but these days ad revenue is dying, and subscriptions like Patreon/Substack are thriving.)

    And that's all, folks! Now you know how to escape the attention-draining, empathy-killing, critical-thought-suffocating siren song of the algorithms. And get your inbox less cluttered with newsletters.

    Here's to a renaissance for a kinder, better web. <3

  • New: Introduce the analysis of life process.

    It's interesting to do analysis at representative moments of the year. It gives it an emotional weight. You can for example use the solstices or my personal version of the solstices:

    • Spring analysis (1st of March): For me spring is the real start of the year: it's when life explodes after the stillness of winter. The sun starts to set late enough that you have light in the afternoons, the warmer climate invites you to be outside more, and nature is blooming with new leaves and flowers. It is then a moment to build new projects and set the current year on track.
    • Summer analysis (1st of June): I hate heat, so summer is a moment of retreat. Everyone temporarily stops their lives: we go on holidays and all social projects slow their pace. Even the news has less interesting things to report. It's so hot outside that some of us seek the cool refuge of home or of remote holiday places. Days are long and people love to hang out till late, so you usually wake up later and have less time to actually do stuff. Even in the moments when you are alone, the heat drains your energy to be productive. It is then a moment to relax and gather strength for the next trimester. It's also perfect for developing the easy and chill personal projects that have been forgotten in a drawer. Lower your expectations and just flow with what your body asks of you.
    • Autumn analysis (1st of September): September is another key moment for many people. We've had it hardcoded into our lives since childhood, as it marked the start of school. People feel energized after the summer holidays and are eager to get back to their lives and to the projects they had put on hold. You're already six months into the year, so it's a good moment to review your year plan and decide how you want to invest your energy reserves.
    • Winter analysis (1st of December): December is the cue that the year is coming to an end. The days grow shorter and colder, basically inviting you to enjoy a cup of tea under a blanket. It is then a good time to retreat into your cave, do an introspective analysis of the whole year, and prepare the ground for the coming one.

    We see then that the year is divided into two cycles of an expansion trimester followed by a retreat one. We can use this information to plan our tasks accordingly: in the expansion trimesters we can invest more energy in planning, and in the retreat ones we can do more thorough reviews.

  • New: How to create a prometheus exporter with python.

    prometheus-client is the official Python client for Prometheus.


    ```bash
    pip install prometheus-client
    ```

    Here is a simple script:

    ```python
    from prometheus_client import start_http_server, Summary
    import random
    import time

    # Create a metric to track time spent and requests made.
    REQUEST_TIME = Summary('request_processing_seconds', 'Time spent processing request')

    # Decorate the function with the metric.
    @REQUEST_TIME.time()
    def process_request(t):
        """A dummy function that takes some time."""
        time.sleep(t)

    if __name__ == '__main__':
        # Start up the server to expose the metrics.
        start_http_server(8000)
        # Generate some requests.
        while True:
            process_request(random.random())
    ```
    Then you can visit http://localhost:8000/ to view the metrics.

    From that one easy-to-use decorator you get:

    • request_processing_seconds_count: Number of times this function was called.
    • request_processing_seconds_sum: Total amount of time spent in this function.

    Prometheus's rate function allows calculation of both requests per second, and latency over time from this data.
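
    For example, assuming Prometheus is scraping this exporter, queries along these lines (a sketch, not a tested dashboard) would derive both figures:

    ```promql
    # Requests handled per second, averaged over the last 5 minutes:
    rate(request_processing_seconds_count[5m])

    # Average latency per request over the same window:
    rate(request_processing_seconds_sum[5m]) / rate(request_processing_seconds_count[5m])
    ```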

    In addition if you're on Linux the process metrics expose CPU, memory and other information about the process for free.