


  • New: Add the donation information.


  • New: Add two more solutions to the voice recognition project.

    For offline voice recognition, vosk-api can be used. Or voiceliner once it supports offline voice recognition.

  • Correction: Deprecate faker-optional project.

    Wrapper over other Faker providers to return their value or None. Useful to create data of type Optional[Any].

    Not needed anymore as I use pydantic factories now.

  • New: Create the Life Warnings seed.

    I've always tackled the pursuit of peace of mind by improving my task management, for example trying to predict when I have to do something in order to avoid a nuisance. Maybe it's more interesting to monitor and make visible the warnings that are affecting you.

  • Reorganization: Merge Self host a routing web application seed with host my own map seedling.

  • New: Create an ordered list of digital gardens.

    Created Best-of Digital Gardens, a best-of-lists compilation of awesome lists of digital gardens.

  • New: Beancount forecast.

    I'd like to see a forecast of the evolution of my accounts given an amount of time. Maybe by doing seasonality analysis and forecast in time series as stated here and here.

    It would also be interesting to see, for a given account, the evolution of its subaccounts.

  • New: Version Update Manager.

    Keeping software updated is not easy because:

    • There are many technologies involved: package managers (apt, yum, pip, yarn, npm, ...), programming languages (python, java, ruby, ...), operating systems (Debian, Ubuntu, ...), deployment technologies (OS install, Docker, Kubernetes, Ansible, Helm), template software (cruft).
    • Each software's maintainers use a different versioning system.
    • Even a small version increase may break everything.
    • Sometimes only the latest version is supported.
    • It's not easy to check if the update went well.
    • You not only need the desired package to be updated, but also its dependencies.

    I'd like to find a solution that:

    • Gives an overall insight into the update status of a system.
    • Automates the update process.
    • Supports both a single-system installation and the aggregation of multiple systems.



Antifascist Actions



  • New: How to reduce online racism.

    Add the article How to reduce online racism by Mark Holden, a long essay with interesting tips and a lot of useful visualizations. I haven't checked the sources but it looks legit. (Thanks for the recommendation, Laurie :)).


  • New: Introduce the concept and guidelines of mentorship.

    Mentoring is a process for the informal transmission of knowledge, social capital, and the psychosocial support perceived by the recipient as relevant to work, career, or professional development; mentoring entails informal communication, usually face-to-face and during a sustained period of time, between a person who is perceived to have greater relevant knowledge, wisdom, or experience (the mentor) and a person who is perceived to have less (the apprentice).


Life Management

Task Management


  • New: Introduce OpenProject.

    OpenProject is an open source project management software.

    The benefits over other similar software are:

    The things I don't like are:

    • Data can be exported as XML or CSV but it doesn't export everything. You have access to the database though, so if you'd like a better extraction of the data, in theory you can do a selective dump of whatever you need.
    • It doesn't yet have tag support. You can meanwhile add the strings you would use as tags in the description, and then filter by text in description.
    • There is no demo instance where you can try it. It's easy though to launch a Proof of Concept environment yourself if you already know docker-compose.
    • Even though the Community (free) version has many features, the next ones aren't included:
      • Status boards: you can't have Kanban boards that show the state of the issues as columns. You can make it yourself through a Basic board with the columns as the names of the states, but when you transition an issue between states, you need to move the issue and change the property yourself. I've thought of creating a script that works with the API to do this automatically, maybe through the webhooks of the openproject, but it would make more sense to spend time on pydo.
      • Version boards: Useful to transition issues between sprints when you didn't finish them in time. Probably this is easily solved through bulk editing the issues.
      • Custom actions look super cool, but as they give additional value compared with the competitors, I understand it's a paid feature.
      • Display relations in the work package list: It would be useful to quickly see which tasks are blocked, by whom and why. Nothing critical though.
      • Multiselect custom fields: You can only do single-valued fields. I can't understand why this is a paid feature.
      • 2FA authentication is only an Enterprise feature.
      • OpenID and SAML are Enterprise features.

    Also included:

  • New: Web based task manager.

    Life happened and the development of pydo has fallen behind in my priority list. I've also reached a point where the simplest task manager is no longer suitable for my workflow because:

    • I lose a lot of time in the reviews.
    • I lose a lot of time when doing the different plannings (year, trimester, month, week, day).
    • I find it hard to organize and refine the backlog.

    As pydo is not ready yet and I need a solution that works today better than the simplest task manager, I've done an analysis of the state of the art of self-hosted applications. Of all of them, the two most promising were Taiga and OpenProject.

    Finally I chose OpenProject.

  • New: Deal with big number of tasks.

    As the number of tasks increases, the views of your work packages start becoming more cluttered. As you can't fold the hierarchy trees, it's difficult to efficiently manage your backlog.

    I've tried setting up a work package type that is only used for the subtasks so that they are filtered out of the view, but then you don't know if they are parent tasks unless you use the details window. It's inconvenient, but less cumbersome than having to collapse the tasks every time. You'll also need to reserve the selected subtask type (in my case Task) for the subtasks.

  • New: Sorting work package views.

    They are sorted alphabetically, so the only way to sort them is by prepending a number. You can use 0. Today instead of Today. It's good to leave big increments between numbers, so the next report could be 10. Backlog. That way, if you later realize you want another report between Today and Backlog, you can use 5. New Report and not rename all the reports.

  • New: Pasting text into the descriptions.

    When I paste the content of the clipboard into the description, all new lines (\n) are removed. The workaround is to paste it inside a code snippet.

Task Management Workflows

  • Correction: Update the task workflow of the month, and week plannings.
  • Correction: Update the workflows.

    To suggest using a script to follow them.

Life review

  • New: How to review your life.

    Sometimes it's good to stop, get into your cave and do an introspection on how your life is going.

    I like to do this exercise the last week of the year. Although I'd like to do it at least twice a year.

    This article is the checklist I follow to do my life review; it may seem like a lot to you, or maybe very simple. You can take it as a base, or get some ideas from it and then create your own that fits your needs.

    The process then has many phases:

News Management

  • New: Introduce news management.

    The information world of today is overwhelming. It can reach a point where you just want to disconnect to avoid the continuous bombardment, but that leads to losing connection with what's happening in the world. Without knowing what's going on, it's impossible to act to shape it better.

    I've started analyzing how to filter the content.

  • New: News management workflow explained.

Calendar Management

  • New: Introduce Calendar Management.

    Since my taskwarrior instance broke, I've used a physical calendar to manage the tasks that have a specific date. I can't wait for the first version of pydo to be finished.

    The next factors made me search for a temporal solution:

    • It's taking longer than expected.
    • I've started using a nextcloud calendar with some friends.
    • I frequently use Google calendar at work.
    • I'm sick of having to log in to Nextcloud and Google to get the day's appointments.

    To fulfill my needs the solution needs to:

    • Import calendar events from different sources, basically through the CalDAV protocol.
    • Have a usable terminal user interface.
    • Optionally have a command line interface or Python library so it's easy to make scripts.
    • Optionally be based on Python so it's easy to contribute.
    • Support having a personal calendar mixed with the shared ones.
    • Show all calendars in the same interface.

    Looking at the available programs I found khal, which looks like it may be up to the task.

    Go through the installation steps and configure the instance to have a local calendar.

    If you want to sync your calendar events through CalDAV, you need to set up vdirsyncer.

Food Management

  • New: Introduce my food management workflow.

    As humans, diet is an important factor in our health, and we need to eat around three times a day. As such, each week we need to invest time into managing how to get food in front of us. Tasks like thinking about what you want to eat, buying the ingredients and cooking them take up a non-negligible amount of time. Also keep in mind that eating is one of the great pleasures in our lives, so doing it poorly is a waste. The last part of the equation is that to eat well you either need time or money.

    This article explores my thoughts and findings on how to optimize the use of time, money and mental load in food management, while keeping the desired level of quality to enjoy each meal, being healthy and following the principles of ecology and sustainability. I'm no expert at all on any of these topics; I'm learning and making up my mind while writing these lines.

Grocy Management

  • New: Introduce my grocy management workflow.

    Buying stuff is an unpleasant activity that drains your energy and time. It's the main perpetrator of the broken capitalist system, but sadly we have to yield to survive.

    This article explores my thoughts and findings on how to optimize the use of time, money and mental load in grocy management, to have enough stuff stored to live while following the principles of ecology and sustainability. I'm no expert at all on any of these topics; I'm learning and making up my mind while writing these lines.

    grocy is a web-based self-hosted groceries & household management solution for your home.

    It is really easy to deploy if you know how to use Docker. The hard part comes when you do the initial load, as you have to add all the:

    • User attributes.
    • Product locations.
    • Product groups.
    • Quantity conversions.
    • Products.



  • New: How your brain generates sleep.

    Brainwave activity during REM sleep looks similar to the one you have when you're awake. The brainwaves cycle (going up and down) at a fast frequency of thirty or forty times per second in an unreliable pattern. This behaviour is explained by the fact that different parts of your waking brain are processing different pieces of information at different moments in time and in different ways.


Learning to code

  • New: Introduce guidelines to learn how to code.

    Learning to code is a never-ending, rewarding, frustrating, enlightening task. In this article you can see what is (in my personal opinion) the generic roadmap of a developer. As each of us is different, a generic roadmap probably won't suit your needs perfectly; if you are new to coding, I suggest you find a mentor so you can both tweak it to your case.

  • New: Suggest a workflow to learn Git.

    Git is a software for tracking changes in any set of files, usually used for coordinating work among programmers collaboratively developing source code during software development. Its goals include speed, data integrity, and support for distributed, non-linear workflows (thousands of parallel branches running on different systems).

    Git is a tough nut to crack; no matter how experienced you are, you'll frequently get surprised. Sadly it's one of the main tools for developing your code, so you must master it as soon as possible.

    I've listed some resources here on how to start. From that article I think it's also interesting that you read about:

Frontend developer

  • New: Introduce guidelines to learn how to become a frontend developer.

    This section is the particularization of the Development learning article for a frontend developer, in particular a Vue developer.

    A Front-End Developer is someone who creates websites and web applications. Their main responsibility is to create what the user sees.

    The basic languages for Front-End Development are HTML, CSS, and JavaScript. Nowadays writing interfaces with only the basic languages makes no sense, as there are other languages and frameworks that make better and quicker solutions. One of them is Vue, which is the one I learnt, so the whole document will be focused on this path; nevertheless, there are other popular ones like Bootstrap, React, jQuery or Angular.

    The difference between Front-End and Back-End is that Front-End refers to how a web page looks, while Back-End refers to how it works.


  • New: Sum up the VueJS tutorial.
  • New: Introduce Cypress.

    Cypress is a next generation front end testing tool built for the modern web.

  • New: Introduce Vite.

    Vite is a build tool that aims to provide a faster and leaner development experience for modern web projects. It consists of two major parts:

    • A dev server that provides rich feature enhancements over native ES modules, for example extremely fast Hot Module Replacement (HMR).

    • A build command that bundles your code with Rollup, pre-configured to output highly optimized static assets for production.

    Vite is opinionated and comes with sensible defaults out of the box, but is also highly extensible via its Plugin API and JavaScript API with full typing support.

  • New: Introduce Vitest.

    Vitest is a blazing fast unit-test framework powered by Vite.

  • New: Display time ago from timestamp.

    Use vue2-timeago

    Install with:

    npm install vue2-timeago@next
  • New: Introduce Vuetify.

    Vuetify is a Vue UI Library with beautifully handcrafted Material Components.

  • New: Sum up all Cypress documentation.

    In particular how to:

    • Install it
    • Write tests
    • Setup the tests
    • Do component testing
    • Do visual regression testing
  • New: Truncate text given a height.

    By default CSS is able to truncate text with the size of the screen, but only on one line. If you want to fill up a portion of the screen (specified in number of lines or with the height CSS parameter) and then truncate all the text that overflows, you need to use vue-clamp.

  • New: Environment variables.

    If you're using Vue 3 and Vite you can use the environment variables by defining them in .env files.

    You can specify environment variables by placing the following files in your project root:

    • .env: Loaded in all cases.
    • .env.local: Loaded in all cases, ignored by git.
    • .env.[mode]: Only loaded in specified mode.
    • .env.[mode].local: Only loaded in specified mode, ignored by git.

    An env file simply contains key=value pairs of environment variables. By default, only variables that start with VITE_ will be exposed:

    DB_PASSWORD=foobar
    VITE_SOME_KEY=123


    Only VITE_SOME_KEY will be exposed as import.meta.env.VITE_SOME_KEY to your client source code, but DB_PASSWORD will not. So for example in a component you can use:

    export default {
      props: {},
      data: () => ({
        // Hypothetical property that reads the exposed variable
        apiKey: import.meta.env.VITE_SOME_KEY,
      }),
    }
  • New: Deploy with docker.

    And fight to make the environment variables of the docker work. The problem is that these environment variables are set at build time and can't be changed at runtime by default, so you can't offer a generic frontend Docker image and particularize it for the different cases. I've literally cried for hours trying to find a solution for this until José Silva came to my rescue. The trick is to use a docker entrypoint to inject the values we want.

  • New: Testing.

    I tried doing component tests with Jest, Vitest and Cypress and found no way of making them work; they all fail one way or another.

    E2E tests worked with Cypress though, so that's going to be my course of action till this is solved.

  • New: Add Cypress commands.

    For the functions you use a lot, you can write commands in /cypress/support/commands.ts.

    Cypress.Commands.add('getById', (selector, ...args) => {
      return cy.get(`[data-cy=${selector}]`, ...args)
    })

    Cypress.Commands.add('getByIdLike', (selector, ...args) => {
      return cy.get(`[data-cy*=${selector}]`, ...args)
    })

    Cypress.Commands.add('findById', {prevSubject: true}, (subject, selector, ...args) => {
      return subject.find(`[data-cy=${selector}]`, ...args)
    })

    So you can now do:

    cy.getById('selector')

  • New: Add more ways to select elements.

    • Select by position in list

    Inside our list, we can select elements based on their position, using the .first(), .last() or .eq() selectors.

      cy.get('.list li')
        .first(); // select "red"

      cy.get('.list li')
        .last(); // select "violet"

      cy.get('.list li')
        .eq(2); // select "yellow"

    You can also use .next() and .prev() to navigate through the elements.

    • Select elements by filtering

    Once you select multiple elements, you can filter within these based on another selector.

      cy.get('.list li')
        .filter('.primary') // select all elements with the class .primary

    To do the exact opposite, you can use .not() command.

      cy.get('.list li')
        .not('.primary') // select all elements without the class .primary
  • New: Finding elements.

    You can specify your selector by first selecting an element you want to search within, and then look down the DOM structure to find a specific element you are looking for.

      cy.get('.list')
        .find('.violet') // finds an element with class .violet inside .list element

    Instead of looking down the DOM structure and finding an element within another element, we can look up. In this example, we first select our list item, and then try to find an element with a .list class.

      cy.get('.violet')
        .parents('.list') // finds an element with class .list that is above our .violet element
  • New: Assert on the content of an attribute.

      cy.get('a')
        .invoke('attr', 'href')
        .should('eq', '')
  • New: Use the content of a fixture set in a hook in a test.

    If you store and access the fixture data using this test context object, make sure to use function () { ... } callbacks both for the hook and the test. Otherwise the test engine will NOT have this pointing at the test context.

    describe('User page', () => {
      beforeEach(function () {
        // "this" points at the test context object
        cy.fixture('user').then((user) => {
          // "this" is still the test context object
          this.user = user
        })
      })

      // the test callback is in "function () { ... }" form
      it('has user', function () {
        // this.user exists
        expect(this.user).to.exist
      })
    })
  • New: Run only failing tests.

    Cypress doesn't allow rerunning only the failed tests, but you can use it.only on the test you want to run.

  • New: Make HTTP requests with Vue.

    Compare Fetch API and Axios when doing http requests to external services.

    Explain how to do them with both methods and arrive at the conclusion that if you're working on multiple requests, you'll find that Fetch requires you to write more code than Axios, even when taking into consideration the setup needed for it. Therefore, for simple requests Fetch API and Axios are quite the same; however, for more complex requests Axios is better, as it allows you to configure multiple requests in one place.

    If you're making a simple request use the Fetch API, for the other cases use axios because:

    Axios provides an easy-to-use API in a compact package for most of your HTTP communication needs. However, if you prefer to stick with native APIs, nothing stops you from implementing Axios features.

    For more information read:

  • New: Simulate errors.

    context('Errors', () => {
      const errorMsg = 'Oops! Try again later'

      it('simulates a server error', () => {
        cy.intercept('GET', '/api/inbox', { statusCode: 500 })
      })

      it('simulates a network failure', () => {
        cy.intercept('GET', '/api/inbox', { forceNetworkError: true })
      })
    })
  • New: Handling errors doing requests to other endpoints.

    To catch errors when doing requests you could use:

    try {
        let res = await axios.get('/my-api-route');

        // Work with the response...
    } catch (error) {
        if (error.response) {
            // The client was given an error response (5xx, 4xx)
        } else if (error.request) {
            // The client never received a response, and the request was never left
        } else {
            // Anything else
            console.log('Error', error.message);
        }
    }

    The differences in the error object indicate where the request encountered the issue.

    • error.response: If your error object has a response property, it means that your server returned a 4xx/5xx error. This will assist you in choosing what sort of message to return to users.

    • error.request: This error is caused by a network error, a hanging backend that does not respond instantly to each request, unauthorized or cross-domain requests, or lastly by the backend API returning an error.

      This occurs when the browser was able to initiate a request but did not receive a valid answer for any reason.

    • Other errors: It's possible that the error object does not have either a response or request object attached to it. In this case it is implied that there was an issue in setting up the request, which eventually triggered an error.

      For example, this could be the case if you omit the URL parameter from the .get() call, and thus no request was ever made.

  • New: Use Flexbox with Vuetify.

    Control the layout of flex containers with alignment, justification and more with responsive flexbox utilities.


    "I suggest you use this page only as a reference, if it's the first time
    you see this content, it's better to see it at the
    [source]( as you can see Flex in
    action at the same time you read, which makes it much more easy to

    Explain how to use:

  • New: Illustrations.

    You can get nice illustrations for your web on Drawkit; for example, I like to use the Classic kit.

  • Correction: Correct the way to test for an attribute of an html element.

     cy.get('a')
       .should('have.attr', 'href', '')
       .and('have.attr', 'target', '_blank') // Test it's meant to be opened in another tab
  • New: Sending different responses.

    To return different responses from a single GET /todos intercept, you can place all prepared responses into an array, and then use Array.prototype.shift to return and remove the first item.

    it('returns list with more items on page reload', () => {
      const replies = [{ fixture: 'articles.json' }, { statusCode: 404 }]
      cy.intercept('GET', '/api/inbox', req => req.reply(replies.shift()))
    })
  • New: Get assets url.

    If you're using Vite, you can save the assets such as images or audios in the src/assets directory, and you can get the url with:

    getImage() {
      return new URL(`../assets/pictures/${this.active_id}.jpg`, import.meta.url).href
    }

    This way it will give you the correct url whether you're in the development environment or in production.

  • New: Play audio files.

    You can get the file and save it into a data element with:

    getAudio() {
      // Assuming the Audio object is stored in a data property called "audio" = new Audio(new URL(`../assets/audio/${this.active_id}.mp3`, import.meta.url).href)
    }

    You can start playing it with and stop it with

  • New: Vue Router.

    Creating a Single-page Application with Vue + Vue Router feels natural: all we need to do is map our components to the routes and let Vue Router know where to render them.

  • New: Deploy static site on github pages.

  • New: Themes.

    Vuetify comes with two themes pre-installed, light and dark. To set the default theme of your application, use the defaultTheme option.

    File: src/plugins/vuetify.js

    import { createVuetify } from 'vuetify'

    export default createVuetify({
      theme: {
        defaultTheme: 'dark'
      }
    })

    Adding new themes is as easy as defining a new property in the theme.themes object. A theme is a collection of colors and options that change the overall look and feel of your application. One of these options designates the theme as being either a light or dark variation. This makes it possible for Vuetify to implement Material Design concepts such as elevated surfaces having a lighter overlay color the higher up they are.

    File: src/plugins/vuetify.js

    import { createVuetify, ThemeDefinition } from 'vuetify'

    export default createVuetify({
      theme: {
        defaultTheme: 'myCustomLightTheme',
        themes: {
          myCustomLightTheme: {
            dark: false,
            colors: {
              background: '#FFFFFF',
              surface: '#FFFFFF',
              primary: '#510560',
              'primary-darken-1': '#3700B3',
              secondary: '#03DAC6',
              'secondary-darken-1': '#018786',
              error: '#B00020',
              info: '#2196F3',
              success: '#4CAF50',
              warning: '#FB8C00',
            },
          },
        },
      },
    })

    To dynamically change the theme during runtime:

        <v-btn @click="toggleTheme">toggle theme</v-btn>

    import { useTheme } from 'vuetify'

    export default {
      setup () {
        const theme = useTheme()

        return {
          toggleTheme: () => = ? 'light' : 'dark'
        }
      }
    }

    Most components support the theme prop. When used, a new context is created for that specific component and all of its children. In the following example, the v-btn uses the dark theme applied by its parent v-card.

        <v-card theme="dark">
          <!-- button uses dark theme -->
          <v-btn>Click me</v-btn>
        </v-card>
  • New: Add more elements.

  • New: Apply a style to a component given a condition.

    If you use :class you can write JavaScript code in the value, for example:

    <!-- Assuming the snippet belongs to an input element -->
    <input
      class="user-retrieve-language p-2"
      :class="{'font-weight-bold': selected === language.key}"
      v-for="language in languages"
      :checked="selected === language.key"
    />
  • New: Debug Jest tests.

    If you're not developing in Visual Studio Code, running a debugger in the middle of the tests is not easy, so to debug one you can use console.log() statements; when you run them with yarn test:unit you'll see the traces.


  • New: Add the di library to explore.

    di: a modern dependency injection system, modeled around the simplicity of FastAPI's dependency injection.

  • New: Add humanize library.

    humanize: This modest package contains various common humanization utilities, like turning a number into a fuzzy human-readable duration ("3 minutes ago") or into a human-readable size or throughput.
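
    For instance (a minimal sketch; it assumes the humanize package is installed):

    ```python
    import humanize

    # Turn raw numbers into human-readable strings
    print(humanize.naturalsize(1_000_000))  # "1.0 MB"
    print(humanize.intcomma(1234567))       # "1,234,567"
    ```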

  • New: Add huey.

    huey is a little task queue for Python.

  • New: Generators.

    Generator functions are a special kind of function that return a lazy iterator. These are objects that you can loop over like a list. However, unlike lists, lazy iterators do not store their contents in memory.

    An example would be an infinite sequence generator:

    def infinite_sequence():
        num = 0
        while True:
            yield num
            num += 1

    You can use it as a list:

    >>> for i in infinite_sequence():
    ...     print(i, end=" ")
    0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29
    30 31 32 33 34 35 36 37 38 39 40 41 42

    Instead of using a for loop, you can also call next() on the generator object directly. This is especially useful for testing a generator in the console:

    >>> gen = infinite_sequence()
    >>> next(gen)
    0
    >>> next(gen)
    1
    >>> next(gen)
    2
    >>> next(gen)
    3
  • New: Do the remainder or modulus of a number.

    expr 5 % 3
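
    Alternatively, bash arithmetic expansion gives the same result without calling an external program:

    ```shell
    # Compute the remainder with shell arithmetic
    echo $((5 % 3))  # prints 2
    ```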
  • New: Update a json file with jq.

    Save the next snippet to a file, for example jqr, and add it to your $PATH.

    jq "$query" $file > "$temp_file"
    cmp -s "$file" "$temp_file"
    if [[ $? -eq 0 ]] ; then
      /bin/rm "$temp_file"
      /bin/mv "$temp_file" "$file"

    Imagine you have the next json file:

      "property": true,
      "other_property": "value"

    Then you can run:

    jqr '.property = false' status.json

    And then you'll have:

      "property": false,
      "other_property": "value"
  • New: Interesting sources.

    Musa 550 looks like a nice way to learn how to process geolocation data.


  • New: Add asyncer as interesting library.
  • New: Introduce PDM.

    PDM is a modern Python package manager with PEP 582 support. It installs and manages packages in a similar way to npm and doesn't need to create a virtualenv at all!

  • New: Note that pdm update doesn't upgrade the constraints in pyproject.toml.

  • New: Add tutorial on how to use asyncio.

    Roguelynn tutorial

  • New: Version overriding now supports constraints.

    Before, you had to pin specific versions, which is not maintainable; now you can use constraints:

    asgiref = ">=3.2.10"
  • New: Show outdated packages.

    pdm update --dry-run --unconstrained
  • New: Round a number.

  • New: Remove focus from element.

  • New: Concatenate two arrays.

    const arr1 = ["Cecilie", "Lone"];
    const arr2 = ["Emil", "Tobias", "Linus"];
    const children = arr1.concat(arr2);

    To join more arrays you can use:

    const arr1 = ["Cecilie", "Lone"];
    const arr2 = ["Emil", "Tobias", "Linus"];
    const arr3 = ["Robin"];
    const children = arr1.concat(arr2,arr3);
  • New: Check if a variable is not undefined.

    if (typeof lastname !== "undefined") {
      alert("Hi. Variable is defined.");
    }

  • New: Run function in background.

    To achieve that you need to use the JavaScript method called setInterval(). It's a simple function that repeats the same task over and over again. Here's an example:

    function myFunction() {
        setInterval(function(){ alert("Hello world"); }, 3000);
    }

    If you add a call to this method to any button and click on it, it will print Hello world every 3 seconds (3000 milliseconds) until you close the page.

    In Vue you could do something like:

    export default {
      data: () => ({
        inbox_retry: undefined
      }),
      methods: {
        retryGetInbox() {
          this.inbox_retry = setInterval(() => {
            if (this.showError) {
              console.log('Retrying the fetch of the inbox')
              // Add your code here.
            } else {
              clearInterval(this.inbox_retry)
            }
          }, 30000)
        }
      }
    }

    You can call this.retryGetInbox() whenever you want to start running the function periodically. Once this.showError is false, we stop running the function with clearInterval(this.inbox_retry).

  • New: Set variable if it's undefined.

    var x = (x === undefined) ? your_default_value : x;
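
    A more modern alternative is the logical nullish assignment operator (ES2021), which assigns only when the variable is null or undefined:

    ```javascript
    let x = undefined;
    x ??= "your_default_value"; // assigns because x is undefined
    console.log(x); // prints "your_default_value"
    ```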
  • New: Supporting pre-releases.

    To help package maintainers, you can allow pre-releases to be valid candidates; that way you'll get the issues sooner. It will mean more time maintaining broken CIs if you update your packages daily (as you should!), but it's the least you can do to help your downstream library maintainers.

    By default, pdm's dependency resolver will ignore prereleases unless there are no stable versions for the given version range of a dependency. This behavior can be changed by setting allow_prereleases to true in [tool.pdm] table:

    [tool.pdm]
    allow_prereleases = true
  • New: Solve circular dependencies.

    Sometimes pdm is not able to locate the best package combination, or it does too many loops, so to help it you can update your version constraints so that it has the minimum number of candidates.

    To solve circular dependencies we first need to locate the conflicting packages; pdm doesn't make it easy to detect them. Locate all the outdated packages by running pdm show on each package (until this issue is solved), and run pdm update {package} --unconstrained for each of them. If you're already on the latest version, update your pyproject.toml to match the latest state.

    Once you have everything at the latest compatible version, you can try to upgrade the rest of the packages one by one to the latest with --unconstrained.

    In the process of doing these steps you'll see some conflicts in the dependencies that can be manually solved by preventing those versions from being installed, or maybe by changing the requires-python constraint.

  • New: Suggest to use Asyncer.

    Asyncer looks very useful

  • Correction: Solve circular dependencies by manual constraining.

    It also helps to run pdm update with the -v flag; that way you see which candidates are rejected, and you can add the constraint you want. For example, I was seeing the following conflict:

    pdm.termui: Conflicts detected:
      pyflakes>=3.0.0 (from <Candidate autoflake 2.0.0 from>)
      pyflakes<2.5.0,>=2.4.0 (from <Candidate flake8 4.0.1 from unknown>)

    So I added a new dependency to pin it:

    dependencies = [
        # Until flakeheaven supports flake8 5.x
        "flake8<5.0.0",
    ]

    If none of the above works, you can override them:

    "importlib-metadata" = ">=3.10"
  • New: Suggest to use pydeps.

    If you get lost in understanding your dependencies, you can try using pydeps to get your head around it.


  • New: Introduce BeautifulSoup and how to use it.

    BeautifulSoup is a Python library for pulling data out of HTML and XML files. It works with your favorite parser to provide idiomatic ways of navigating, searching, and modifying the parse tree.

  • New: Modifying the tree.

    PageElement.replace_with() removes a tag or string from the tree, and replaces it with the tag or string of your choice:

    markup = '<a href="">I linked to <i></i></a>'
    soup = BeautifulSoup(markup)
    a_tag = soup.a
    new_tag = soup.new_tag("b")
    new_tag.string = ""
    a_tag.i.replace_with(new_tag)

    Sometimes it doesn't work. If it doesn't, use:

  • New: Introduce python gnupg.

    python-gnupg is a Python library to interact with gpg, taking care of the internal details and allowing its users to generate and manage keys, encrypt and decrypt data, and sign and verify messages.


    pip install python-gnupg


    import gnupg

    gpg = gnupg.GPG(gnupghome="/path/to/home/directory")

    public_keys = gpg.list_keys()
    private_keys = gpg.list_keys(True)

  • Correction: Use decrypt_file instead of decrypt for files.


    Note: You can't pass Path arguments to decrypt_file.

Configure Docker to host the application

  • New: Usage of ellipsis on Tuple type hints.

    The ellipsis is used to specify an arbitrary-length homogeneous tuple, for example Tuple[int, ...].
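
    A minimal sketch of this hint (the total function name is illustrative):

```python
from typing import Tuple

def total(values: Tuple[int, ...]) -> int:
    """Accept a homogeneous tuple of ints of any length."""
    return sum(values)

print(total((1, 2, 3)))  # 6
print(total(()))  # 0
```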

  • New: List the files of a bucket.

  • New: Suggest to use Sequence over List.

    Because using List could lead to some unexpected errors when combined with type inference. For example:

    class A: ...
    class B(A): ...
    lst = [A(), A()]  # Inferred type is List[A]
    new_lst = [B(), B()]  # inferred type is List[B]
    lst = new_lst  # mypy will complain about this, because List is invariant

    Possible strategies in such situations are:

    • Use an explicit type annotation:

      new_lst: List[A] = [B(), B()]
      lst = new_lst  # OK
    • Make a copy of the right hand side:

      lst = list(new_lst) # Also OK
    • Use immutable collections as annotations whenever possible:

      def f_bad(x: List[A]) -> A:
          return x[0]
      f_bad(new_lst) # Fails
      def f_good(x: Sequence[A]) -> A:
          return x[0]
      f_good(new_lst) # OK
  • New: Overloading the methods.

    Sometimes the types of several variables are related, such as “if x is type A, y is type B, else y is type C”. Basic type hints cannot describe such relationships, making type checking cumbersome or inaccurate. We can instead use @typing.overload to represent type relationships properly.

    from __future__ import annotations

    from collections.abc import Sequence
    from typing import overload

    @overload
    def double(input_: int) -> int:
        ...

    @overload
    def double(input_: Sequence[int]) -> list[int]:
        ...

    def double(input_: int | Sequence[int]) -> int | list[int]:
        if isinstance(input_, Sequence):
            return [i * 2 for i in input_]
        return input_ * 2

    This looks a bit weird at first glance—we are defining double three times! Let’s take it apart.

    The first two @overload definitions exist only for their type hints. Each definition represents an allowed combination of types. These definitions never run, so their bodies could contain anything, but it’s idiomatic to use Python’s ... (ellipsis) literal.

    The third definition is the actual implementation. In this case, we need to provide type hints that union all the possible types for each variable. Without such hints, Mypy will skip type checking the function body.

    When Mypy checks the file, it collects the @overload definitions as type hints. It then uses the first non-@overload definition as the implementation. All @overload definitions must come before the implementation, and multiple implementations are not allowed.

    When Python imports the file, the @overload definitions create temporary double functions, but each is overridden by the next definition. After importing, only the implementation exists. As a protection against accidentally missing implementations, attempting to call an @overload definition will raise a NotImplementedError.

    @overload can represent arbitrarily complex scenarios. For a couple more examples, see the function overloading section of the Mypy docs.
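
    Putting the pieces together, here is a self-contained sketch of the overloaded double described above; the runtime behaviour comes entirely from the third (implementation) definition:

```python
from __future__ import annotations

from collections.abc import Sequence
from typing import overload

@overload
def double(input_: int) -> int:
    ...

@overload
def double(input_: Sequence[int]) -> list[int]:
    ...

def double(input_: int | Sequence[int]) -> int | list[int]:
    # The implementation must handle every combination the overloads allow
    if isinstance(input_, Sequence):
        return [i * 2 for i in input_]
    return input_ * 2

print(double(2))       # 4
print(double([1, 2]))  # [2, 4]
```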

  • Correction: Debug the Start request repeated too quickly error.

    Use journalctl -eu docker to debug

  • Correction: Update TypeVars nomenclature.

    The old TypeVar nomenclature is not supported by pylint; use UserT instead.

  • New: Add common ec2 functions.

  • New: Using typing.cast.

    Sometimes the type hints of your program don't work as you expect. If you've given up on fixing the issue you can # type: ignore it, but if you know what type you want to enforce, you can use typing.cast() explicitly (or implicitly from Any with type hints). With casting we can force the type checker to treat a variable as a given type.

    The main cases to reach for cast() are when the type hints for a module are either missing, incomplete, or incorrect. This may be the case for third party packages, or occasionally for things in the standard library.

    Take this example:

    import datetime as dt
    from typing import cast
    from third_party import get_data
    data = get_data()
    last_import_time = cast(dt.datetime, data["last_import_time"])

    Imagine get_data() has a return type of dict[str, Any], rather than using stricter per-key types with a TypedDict. From reading the documentation or source we might find that the last_import_time key always contains a datetime object. Therefore, when we access it, we can wrap it in a cast(), to tell our type checker the real type rather than continuing with Any.

    When we encounter missing, incomplete, or incorrect type hints, we can contribute back a fix. This may be in the package itself, its related stubs package, or separate stubs in Python’s typeshed. But until such a fix is released, we will need to use cast() to make our code pass type checking.

  • New: Update dockers with Renovate.

    Renovate is a program that does automated dependency updates. Multi-platform and multi-language.

  • New: Connect multiple docker compose files.

    You can connect services defined across multiple docker-compose.yml files.

    In order to do this you’ll need to:

    • Create an external network with docker network create <network name>
    • In each of your docker-compose.yml configure the default network to use your externally created network with the networks top-level key.
    • You can use either the service name or container name to connect between containers.
  • New: Attach a docker to many networks.

    You can't do it through the docker run command, as there you can only specify one network. However, you can attach a running container to an additional network with:

    docker network connect network-name container-name


  • New: File System Isolation.

    For basic command line tools with file system operations, the CliRunner.isolated_filesystem() method is useful for setting the current working directory to a new, empty folder.

    from click.testing import CliRunner
    from cat import cat
    def test_cat():
        runner = CliRunner()
        with runner.isolated_filesystem():
            with open("hello.txt", "w") as f:
                f.write("Hello World!")
            result = runner.invoke(cat, ["hello.txt"])
            assert result.exit_code == 0
            assert result.output == "Hello World!\n"

    Pass temp_dir to control where the temporary directory is created. The directory will not be removed by Click in this case. This is useful to integrate with a framework like Pytest that manages temporary files.

    def test_keep_dir(tmp_path):
        runner = CliRunner()
        with runner.isolated_filesystem(temp_dir=tmp_path) as td:
            ...


  • Correction: Correct the type hints of the factory.

    Use Any

    class PersonFactory(ModelFactory[Any]):
  • New: Track issue when using with.

  • New: Creating your custom factories.

    If your model has an attribute that is not supported by pydantic-factories and it depends on third party libraries, you can create your custom extension subclassing the ModelFactory, and overriding the get_mock_value method to add your logic.

    from typing import Any

    from pydantic_factories import ModelFactory

    class CustomFactory(ModelFactory[Any]):
        """Tweak the ModelFactory to add our custom mocks."""

        @classmethod
        def get_mock_value(cls, field_type: Any) -> Any:
            """Add our custom mock value."""
            if str(field_type) == "my_super_rare_datetime_field":
                return cls._get_faker().date_time_between()
            return super().get_mock_value(field_type)

    Where cls._get_faker() is a faker instance that you can use to build your returned value.

  • New: Solve W1514 pylint error.

    with open('file.txt', 'r', encoding='utf-8'):
  • Correction: Deprecate in favour of Streamlit.

    Streamlit is a much more easy, beautiful and clean library for the same purpose.

  • New: Running process in background.

    By default, each running command blocks until completion. If you have a long-running command, you can put it in the background with the _bg=True special kwarg:

    print("...3 seconds later")
    p = sleep(3, _bg=True)
    print("prints immediately!")
    print("...and 3 seconds later")

    You’ll notice that you need to call RunningCommand.wait() in order to exit after your command exits.

    Commands launched in the background ignore SIGHUP, meaning that when their controlling process exits (the session leader, if there is a controlling terminal), they will not be signalled by the kernel. However, because sh launches its processes in their own sessions by default (they are their own session leaders), ignoring SIGHUP will normally have no impact. The only time ignoring SIGHUP matters is if you use _new_session=False: in that case the controlling process will probably be the shell from which you launched python, and exiting that shell would normally send a SIGHUP to all child processes.

    If you want to terminate the process use p.kill().

  • New: Output callbacks.

    In combination with _bg=True, sh can use callbacks to process output incrementally by passing a callable function to _out and/or _err. This callable will be called for each line (or chunk) of data that your command outputs:

    from sh import tail
    def process_output(line):
        print(line)
    p = tail("-f", "/var/log/some_log_file.log", _out=process_output, _bg=True)

    To “quit” your callback, simply return True. This tells the command not to call your callback anymore. This does not kill the process though; see Interactive callbacks for how to kill a process from a callback.

    The line or chunk received by the callback can either be of type str or bytes. If the output could be decoded using the provided encoding, a str will be passed to the callback, otherwise it would be raw bytes.


  • New: Tweak Poetry evaluation and add PDM.

    Check PDM's review, it's awesome!

  • New: Sum up the W3 HTML tutorial.

  • New: HTML beautifier.

    If you encounter html code that is not well indented you can use html beautify.


  • New: Sum up the W3 CSS tutorial.
  • New: CSS Flexbox layout.

    The Flexbox Layout aims at providing a more efficient way to lay out, align and distribute space among items in a container, even when their size is unknown and/or dynamic.


  • New: Resolve the 307 error.

    Probably you've introduced a trailing / in the endpoint, so instead of asking for /my/endpoint you tried to do /my/endpoint/.

  • New: Resolve the 422 error.

    You're probably passing the wrong arguments to the POST request, to solve it see the text attribute of the result. For example:

    result = requests.post(
        endpoint_url,  # endpoint_url is a placeholder for your API route
        json={"body": body},
    )
    print(result.text)

    The error is telling us that the required url parameter is missing.

  • New: Resolve the 409 error.

    Probably an exception was raised in the backend, use pdb to follow the trace and catch where it happened.

  • New: Use ternary conditional operator.

    It's defined by a condition followed by a question mark ?, then an expression to execute if the condition is truthy followed by a colon :, and finally the expression to execute if the condition is falsy.

    condition ? exprIfTrue : exprIfFalse

    function getFee(isMember) {
      return (isMember ? '$2.00' : '$10.00');
    }

    console.log(getFee(true));
    // expected output: "$2.00"

    console.log(getFee(false));
    // expected output: "$10.00"

    console.log(getFee(null));
    // expected output: "$10.00"
  • New: Filter the contents of an array.

    The filter() method creates a new array filled with elements that pass a test provided by a function.

    The filter() method does not execute the function for empty elements.

    The filter() method does not change the original array.

    For example:

    const ages = [32, 33, 16, 40];
    const result = ages.filter(checkAdult);
    function checkAdult(age) {
      return age >= 18;
    }
  • New: Interacting with HTML.

  • New: Add endpoints only on testing environment.

    Sometimes you want to have some API endpoints to populate the database for end to end testing the frontend. If your app config has the environment attribute, you could try to do:

    app = FastAPI()

    def get_config() -> Config:
        """Configure the program settings."""
        # no cover: the dependencies are injected in the tests"Loading the config")
        return Config()  # pragma: no cover

    if get_config().environment == "testing":

        @app.get("/seed", status_code=201)
        def seed_data(
            repo: Repository = Depends(get_repo),
            empty: bool = True,
            num_articles: int = 3,
            num_sources: int = 2,
        ) -> None:
            """Add seed data for the end to end tests.

            Args:
                repo: Repository to store the data.
            """
            # seed() stands in for the service function that populates the repository
            seed(
                repo=repo, empty=empty, num_articles=num_articles, num_sources=num_sources
            )

    But the injection of the dependencies is only done inside the functions, so get_config().environment will always be the default value. I ended up doing that check inside the endpoint, which is not ideal.

    @app.get("/seed", status_code=201)
    def seed_data(
        config: Config = Depends(get_config),
        repo: Repository = Depends(get_repo),
        empty: bool = True,
        num_articles: int = 3,
        num_sources: int = 2,
    ) -> None:
        """Add seed data for the end to end tests.

        Args:
            repo: Repository to store the data.
        """
        if config.environment != "testing":
            raise HTTPException(status_code=404)
  • New: Nullish coalescing operator.

    It's similar to the Logical OR operator (||), except instead of relying on truthy/falsy values, it relies on "nullish" values (there are only 2 nullish values, null and undefined).

    This means it's safer to use when you treat falsy values like 0 as valid.

    Similar to Logical OR, it functions as a control-flow operator; it evaluates to the first not-nullish value.

    It was introduced in Chrome 80 / Firefox 72 / Safari 13.1. It has no IE support.

    console.log(4 ?? 5);
    // 4, since neither value is nullish
    console.log(null ?? 10);
    // 10, since 'null' is nullish
    console.log(undefined ?? 0);
    // 0, since 'undefined' is nullish
    // Here's a case where it differs from
    // Logical OR (||):
    console.log(0 ?? 5); // 0
    console.log(0 || 5); // 5


  • New: Added memray profiling tool.

    memray looks very promising.

  • New: Introduce Qwik.

    Qwik is a new kind of web framework that can deliver instantly-loading web applications of any size or complexity. Your sites and apps can boot with about 1kb of JS (regardless of application complexity), and achieve consistent performance at scale.

    You can see a good overview in the Qwik presentation.


  • New: Introduce JWT.

    JWT (JSON Web Token) is a proposed Internet standard for creating data with optional signature and/or optional encryption whose payload holds JSON that asserts some number of claims. The tokens are signed either using a private secret or a public/private key.


  • New: Introduce pytest-httpserver.

    pytest-httpserver is a python package which allows you to start a real HTTP server for your tests. The server can be configured programmatically to define how it responds to requests.

  • New: Add issue when using updated_parser.

    Deprecation warning when using updated_parsed

  • Correction: Update the tmpdir_factory type hints.

    You should now use TempPathFactory instead of TempdirFactory

  • Correction: Use pytest-freezegun globally.

    Most of the tests work with frozen time, so it's better to freeze it by default and unfreeze it on the ones that actually need time to move.

    To do that, set a globally used fixture in your tests/:

    from typing import Generator

    import freezegun
    import pytest
    from freezegun.api import FrozenDateTimeFactory

    @pytest.fixture(autouse=True)
    def frozen_time() -> Generator['FrozenDateTimeFactory', None, None]:
        """Freeze all tests time"""
        with freezegun.freeze_time() as freeze:
            yield freeze
  • New: Ignore a warning of a specific package.

    In the pyproject.toml

    filterwarnings = [
      # Until is merged
      # (somepackage is a placeholder for the offending module)
      "ignore::DeprecationWarning:somepackage.*",
    ]
  • New: Run tests in a random order.

    pytest-random-order is a pytest plugin that randomises the order of tests. This can be useful to detect a test that passes just because it happens to run after an unrelated test that leaves the system in a favourable state.

    To use it add the --random-order flag to your pytest run.

    It can't yet be used with pytest-xdist though :(.

  • New: Enforce serial execution of related tests.

    Implement a serial fixture with a session-scoped file lock fixture using the filelock package. You can add this to your

    import contextlib
    import os

    import filelock
    import pytest

    @pytest.fixture(scope='session')
    def lock(tmp_path_factory):
        base_temp = tmp_path_factory.getbasetemp()
        lock_file = base_temp.parent / 'serial.lock'
        yield filelock.FileLock(lock_file=str(lock_file))
        with contextlib.suppress(OSError):
            os.remove(path=lock_file)

    @pytest.fixture()
    def serial(lock):
        with lock.acquire(poll_intervall=0.1):
            yield

    Then inject the serial fixture in any test that requires serial execution. All tests that use the serial fixture are executed serially while any tests that do not use the fixture are executed in parallel.

  • New: Using fixtures at class level.

    Sometimes test functions do not directly need access to a fixture object. For example, tests may need to operate with an empty directory as the current working directory but otherwise do not care for the concrete directory.

    @pytest.mark.usefixtures("cleandir")
    class TestDirectoryInit:
        ...

    Due to the usefixtures marker, the cleandir fixture will be required for the execution of each test method, just as if you specified a cleandir function argument to each of them.

    You can specify multiple fixtures like this:

    @pytest.mark.usefixtures("cleandir", "anotherfixture")
  • Correction: Improve the snippet to run some tests in serial instead of parallel.

  • New: Parse a feed from a string.

    >>> import feedparser
    >>> rawdata = """<rss version="2.0">
    <title>Sample Feed</title>
    >>> d = feedparser.parse(rawdata)
    >>> d['feed']['title']
    u'Sample Feed'
  • New: Change log level of a dependency.

    caplog.set_level(logging.WARNING, logger="urllib3")
  • New: Show logging messages on the test run.

    Add to your pyproject.toml:

    [tool.pytest.ini_options]
    log_cli = true
    log_cli_level = 10

    Or run it in the command itself pytest -o log_cli=true --log-cli-level=10

    Remember you can change the log level of the different components in case it's too verbose.

  • New: The tmp_path fixture.

    You can use the tmp_path fixture which will provide a temporary directory unique to the test invocation, created in the base temporary directory.

    tmp_path is a pathlib.Path object. Here is an example test usage:

    CONTENT = "content"

    def test_create_file(tmp_path):
        d = tmp_path / "sub"
        d.mkdir()
        p = d / "hello.txt"
        p.write_text(CONTENT)
        assert p.read_text() == CONTENT
        assert len(list(tmp_path.iterdir())) == 1
        assert 0
  • Correction: Deprecate the tmpdir fixture.

    Warning: Don't use tmpdir use tmp_path instead because tmpdir uses py which is unmaintained and has unpatched vulnerabilities.

  • Correction: Remove warning that pytest-random-order can't be used with pytest-xdist.

    The issue was fixed


  • New: Introduce gettext.

    Gettext is the de facto universal solution for internationalization (I18N) and localization (L10N), offering a set of tools that provides a framework to help other packages produce multi-lingual messages. It gives an opinionated way of how programs should be written to support translated message strings and a directory and file naming organisation for the messages that need to be translated.

  • New: Introduce Python Internationalization.

    To make your code accessible to more people, you may want to support more than one language. It's not as easy as it looks: it's not enough to translate the text, the program must also look and feel local. The answer is internationalization.

    Internationalization (numeronymed as i18n) can be defined as the design process that ensures a program can be adapted to various languages and regions without requiring engineering changes to the source code.

    Common internationalization tasks include:

    • Facilitating compliance with Unicode.
    • Minimizing the use of concatenated strings.
    • Accommodating support for double-byte languages (e.g. Japanese) and right-to-left languages (for example, Hebrew).
    • Avoiding hard-coded text.
    • Designing for independence from cultural conventions (e.g., date and time displays), limiting language, and character sets.

    Localization (l10n) refers to the adaptation of your program, once internationalized, to the local language and cultural habits. In theory it looks simple to implement. In practice though, it takes time and effort to provide the best Internationalization and Localization experience for your global audience.

    In Python, there is a specific bundled module for that and it’s called gettext, which consists of a public API and a set of tools that help extract and generate message catalogs from the source code.

Python Snippets

  • New: How to raise a warning.

    Warning messages are typically issued in situations where it is useful to alert the user of some condition in a program, where that condition (normally) doesn’t warrant raising an exception and terminating the program. For example, one might want to issue a warning when a program uses an obsolete module.

    import warnings
    def f():
        warnings.warn('Message', DeprecationWarning)

    To test the function with pytest you can use pytest.warns:

    import warnings
    import pytest
    def test_warning():
        with pytest.warns(UserWarning, match='my warning'):
            warnings.warn("my warning", UserWarning)
  • New: Parse XML file with beautifulsoup.

    You need both beautifulsoup4 and lxml:

    bs = BeautifulSoup(requests.get(url).text, "lxml")
  • New: Get a traceback from an exception.

    import traceback

    # 'e' is an exception caught in an except block
    traceback_str = ''.join(traceback.format_tb(e.__traceback__))
  • New: Add the Warning categories.

    | Class                     | Description                                                                     |
    | ------------------------- | ------------------------------------------------------------------------------- |
    | Warning                   | This is the base class of all warning category classes.                         |
    | UserWarning               | The default category for warn().                                                |
    | DeprecationWarning        | Warn other developers about deprecated features.                                |
    | FutureWarning             | Warn other end users of applications about deprecated features.                 |
    | SyntaxWarning             | Warn about dubious syntactic features.                                          |
    | RuntimeWarning            | Warn about dubious runtime features.                                            |
    | PendingDeprecationWarning | Warn about features that will be deprecated in the future (ignored by default). |
    | ImportWarning             | Warn triggered during the process of importing a module (ignored by default).   |
    | UnicodeWarning            | Warn related to Unicode.                                                        |
    | BytesWarning              | Warn related to bytes and bytearray.                                            |
    | ResourceWarning           | Warn related to resource usage (ignored by default).                            |
  • New: How to Find Duplicates in a List in Python.

    numbers = [1, 2, 3, 2, 5, 3, 3, 5, 6, 3, 4, 5, 7]
    duplicates = [number for number in numbers if numbers.count(number) > 1]
    unique_duplicates = list(set(duplicates))

    If you want to count the number of occurrences of each duplicate, you can use:

    from collections import Counter
    numbers = [1, 2, 3, 2, 5, 3, 3, 5, 6, 3, 4, 5, 7]
    counts = dict(Counter(numbers))
    duplicates = {key:value for key, value in counts.items() if value > 1}

    To remove the duplicates use a combination of list and set:

    unique = list(set(numbers))
  • New: How to decompress a gz file.

    import gzip
    import shutil
    with gzip.open('file.txt.gz', 'rb') as f_in:
        with open('file.txt', 'wb') as f_out:
            shutil.copyfileobj(f_in, f_out)
  • New: How to compress/decompress a tar file.

    import tarfile

    def compress(tar_file, members):
        """Adds files (`members`) to a tar_file and compress it."""
        tar = tarfile.open(tar_file, mode="w:gz")
        for member in members:
            tar.add(member)
        tar.close()

    def decompress(tar_file, path, members=None):
        """Extracts `tar_file` and puts the `members` to `path`.

        If members is None, all members on `tar_file` will be extracted.
        """
        tar = tarfile.open(tar_file, mode="r:gz")
        if members is None:
            members = tar.getmembers()
        for member in members:
            tar.extract(member, path=path)
        tar.close()
  • New: Get the attribute of an attribute when sorting.

    To sort the list in place:

    ut.sort(key=lambda x: x.count, reverse=True)

    To return a new list, use the sorted() built-in function:

    newlist = sorted(ut, key=lambda x: x.body.id_, reverse=True)
  • New: How to extend a dictionary.
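
    A minimal sketch using the built-in dict.update() and the ** unpacking syntax (the dictionaries are illustrative):

```python
defaults = {"host": "localhost", "port": 80}
overrides = {"port": 443}

# update() merges the second dict into the first, in place
config = dict(defaults)
config.update(overrides)
print(config)  # {'host': 'localhost', 'port': 443}

# ** unpacking builds a new merged dict; later keys win
merged = {**defaults, **overrides}
print(merged)  # {'host': 'localhost', 'port': 443}
```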

  • New: How to close a subprocess process.
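
    A minimal sketch using the standard library's subprocess module (assumes a POSIX system where the sleep command exists):

```python
import subprocess

p = subprocess.Popen(["sleep", "30"])
p.terminate()          # ask the process to stop (SIGTERM)
returncode = p.wait()  # reap it so it doesn't linger as a zombie
```

If the process ignores SIGTERM you can escalate to p.kill(), which sends SIGKILL.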

  • New: Define a property of a class.

    If you're using Python 3.9 or above you can directly use the decorators:

    class G:
        @classmethod
        @property
        def __doc__(cls):
            return f'A doc for {cls.__name__!r}'

    If you're not, the solutions are not that good.

  • New: Fix SIM113 Use enumerate.

    Use enumerate to get a running number over an iterable.

    Problematic code:

    idx = 0
    for el in iterable:
        ...
        idx += 1

    Correct code:

    for idx, el in enumerate(iterable):
        ...
  • New: Parse an RFC2822 date.

    Interesting to test the accepted format of RSS dates.

    >>> from email.utils import parsedate_to_datetime
    >>> datestr = 'Sun, 09 Mar 1997 13:45:00 -0500'
    >>> parsedate_to_datetime(datestr)
    datetime.datetime(1997, 3, 9, 13, 45, tzinfo=datetime.timezone(datetime.timedelta(-1, 68400)))
  • New: Convert a datetime to RFC2822.

    Interesting as it's the accepted format of RSS dates.

    >>> import datetime
    >>> from email import utils
    >>> nowdt =
    >>> utils.format_datetime(nowdt)
    'Tue, 10 Feb 2020 10:06:53 -0000'
  • New: Encode url.

    import typing
    import urllib.parse

    from pydantic import AnyHttpUrl

    def _normalize_url(url: str) -> AnyHttpUrl:
        """Encode url to make it compatible with AnyHttpUrl."""
        return typing.cast(
            AnyHttpUrl,
            urllib.parse.quote(url, ":/"),
        )

    The :/ is needed when you try to parse urls that have the protocol, otherwise https://www. gets transformed into https%3A//www..

  • New: Initialize a dataclass with kwargs.

    If you care about accessing attributes by name, or if you can't distinguish between known and unknown arguments during initialisation, then your last resort without rewriting __init__ (which pretty much defeats the purpose of using dataclasses in the first place) is writing a @classmethod:

    from dataclasses import dataclass
    from inspect import signature

    @dataclass
    class Container:
        user_id: int
        body: str

        @classmethod
        def from_kwargs(cls, **kwargs):
            # fetch the constructor's signature
            cls_fields = {field for field in signature(cls).parameters}

            # split the kwargs into native ones and new ones
            native_args, new_args = {}, {}
            for key, value in kwargs.items():
                if key in cls_fields:
                    native_args[key] = value
                else:
                    new_args[key] = value

            # use the native ones to create the class ...
            ret = cls(**native_args)

            # ... and add the new ones by hand
            for new_key, new_value in new_args.items():
                setattr(ret, new_key, new_value)
            return ret


    params = {'user_id': 1, 'body': 'foo', 'bar': 'baz', 'amount': 10}
    Container(**params)  # still doesn't work, raises a TypeError
    c = Container.from_kwargs(**params)
    print(  # prints: 'baz'
  • New: Replace a substring of a string.

    txt = "I like bananas"
    x = txt.replace("bananas", "apples")
  • New: Create random number.

    import random

    random.randint(0, 100)  # Random integer between 0 and 100, both included
  • New: Check if local port is available or in use.

    Create a temporary socket and try to bind it to the port: if the bind succeeds the port is available, and if it raises an OSError the port is already in use. Close the socket once the check is done.

    import socket
    from contextlib import suppress

    def port_in_use(port):
        """Test if a local port is used."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        with suppress(OSError):
            sock.bind(("", port))
            # The bind succeeded, so nothing is listening on the port
            sock.close()
            return False
        # bind() raised OSError: the port is already in use
        return True
  • New: Fix R1728: Consider using a generator.

    Removing [] inside calls that can use containers or generators should be considered for performance reasons since a generator will have an upfront cost to pay. The performance will be better if you are working with long lists or sets.

    Problematic code:

    list([0 for y in list(range(10))])  # [consider-using-generator]
    tuple([0 for y in list(range(10))])  # [consider-using-generator]
    sum([y**2 for y in list(range(10))])  # [consider-using-generator]
    max([y**2 for y in list(range(10))])  # [consider-using-generator]
    min([y**2 for y in list(range(10))])  # [consider-using-generator]

    Correct code:

    list(0 for y in list(range(10)))
    tuple(0 for y in list(range(10)))
    sum(y**2 for y in list(range(10)))
    max(y**2 for y in list(range(10)))
    min(y**2 for y in list(range(10)))
  • New: Fix W1510: Using without explicitly set check is not recommended.

    The run call in the example will succeed whether the command is successful or not. This is a problem because we silently ignore errors.

    import subprocess
    def example():
        proc ="ls")
        return proc.stdout

    When we pass check=True, the behavior changes towards raising an exception when the return code of the command is non-zero.
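
    For instance, a sketch of the fixed version (the `ls` command is just an example):

    ```python
    import subprocess

    def example():
        # check=True raises CalledProcessError on a non-zero exit status
        # instead of silently ignoring it.
        proc ="ls", check=True, capture_output=True, text=True)
        return proc.stdout

    print(example())
    ```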

  • New: Convert bytes to string.
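
    A minimal sketch, always naming the encoding explicitly:

    ```python
    # bytes -> str with decode, str -> bytes with encode.
    data = b"caf\xc3\xa9"
    text = data.decode("utf-8")
    print(text)  # café
    assert text.encode("utf-8") == data
    ```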

  • New: Use pipes with subprocess.

    To use pipes with subprocess you'd need the flag shell=True, which is a bad idea. Instead you should use two processes and link them together in Python:

    ps = subprocess.Popen(('ps', '-A'), stdout=subprocess.PIPE)
    output = subprocess.check_output(('grep', 'process_name'), stdin=ps.stdout)
  • New: Pass input to the stdin of a subprocess.

    import subprocess
    p =['myapp'], input='data_to_write', text=True)
  • New: Copy and paste from clipboard.

    You can use many libraries to do it, but if you don't want to add any other dependencies you can use subprocess run.

    To copy to the clipboard selection, assuming you've got xclip installed, you could do:
        ['xclip', '-selection', 'clipboard', '-i'],
        input='text to be copied',
        text=True,
        check=True,
    )

    To paste it:

    output =
        ['xclip', '-o', '-selection', 'clipboard'],
        capture_output=True,
        text=True,
    ).stdout
    Good luck testing that in the CI xD

  • New: Get an instance of an Enum by value.

    If you want to initialize a pydantic model with an Enum but all you have is the value of the Enum, you need to create a method to get the correct Enum. Otherwise mypy will complain that the type of the assignment is str and not Enum.

    So if the model is the next one:

    class ServiceStatus(BaseModel):
        """Model the docker status of a service."""
        name: str
        environment: Environment

    You can't do ServiceStatus(name='test', environment='production'). You need to add the get_by_value classmethod to the Enum class:

    class Environment(str, Enum):
        """Set the possible environments."""

        STAGING = "staging"
        PRODUCTION = "production"

        @classmethod
        def get_by_value(cls, value: str) -> "Environment":
            """Return the Enum member that matches a value."""
            return [member for member in cls if member.value == value][0]

    Now you can do:

    ServiceStatus(name='test', environment=Environment.get_by_value('production'))

  • New: Print datetime with a defined format.

    now =
    now.strftime('We are the %d, %b %Y')

    Where the datetime format is a string built from these directives.

  • New: Print string with asciiart.

    pip install pyfiglet
    from pyfiglet import figlet_format
    print(figlet_format('09 : 30'))

    If you want to change the default width of 80 characters use:

    from pyfiglet import Figlet

    f = Figlet(font="standard", width=100)
    print(f.renderText("09 : 30"))
  • New: Print specific time format.

'%Y-%m-%dT%H:%M:%S')

    | Code | Meaning | Example |
    | ---- | ------- | ------- |
    | %a | Weekday as locale's abbreviated name. | Mon |
    | %A | Weekday as locale's full name. | Monday |
    | %w | Weekday as a decimal number, where 0 is Sunday and 6 is Saturday. | 1 |
    | %d | Day of the month as a zero-padded decimal number. | 30 |
    | %-d | Day of the month as a decimal number. (Platform specific) | 30 |
    | %b | Month as locale's abbreviated name. | Sep |
    | %B | Month as locale's full name. | September |
    | %m | Month as a zero-padded decimal number. | 09 |
    | %-m | Month as a decimal number. (Platform specific) | 9 |
    | %y | Year without century as a zero-padded decimal number. | 13 |
    | %Y | Year with century as a decimal number. | 2013 |
    | %H | Hour (24-hour clock) as a zero-padded decimal number. | 07 |
    | %-H | Hour (24-hour clock) as a decimal number. (Platform specific) | 7 |
    | %I | Hour (12-hour clock) as a zero-padded decimal number. | 07 |
    | %-I | Hour (12-hour clock) as a decimal number. (Platform specific) | 7 |
    | %p | Locale's equivalent of either AM or PM. | AM |
    | %M | Minute as a zero-padded decimal number. | 06 |
    | %-M | Minute as a decimal number. (Platform specific) | 6 |
    | %S | Second as a zero-padded decimal number. | 05 |
    | %-S | Second as a decimal number. (Platform specific) | 5 |
    | %f | Microsecond as a decimal number, zero-padded on the left. | 000000 |
    | %z | UTC offset in the form +HHMM or -HHMM (empty string if the object is naive). | |
    | %Z | Time zone name (empty string if the object is naive). | |
    | %j | Day of the year as a zero-padded decimal number. | 273 |
    | %-j | Day of the year as a decimal number. (Platform specific) | 273 |
    | %U | Week number of the year (Sunday as the first day of the week) as a zero padded decimal number. All days in a new year preceding the first Sunday are considered to be in week 0. | 39 |
    | %W | Week number of the year (Monday as the first day of the week) as a decimal number. All days in a new year preceding the first Monday are considered to be in week 0. | |
    | %c | Locale's appropriate date and time representation. | Mon Sep 30 07:06:05 2013 |
    | %x | Locale's appropriate date representation. | 09/30/13 |
    | %X | Locale's appropriate time representation. | 07:06:05 |
    | %% | A literal '%' character. | % |
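
    For example, using some of these directives (the %A and %B output depends on the locale):

    ```python
    from datetime import datetime

    moment = datetime(2013, 9, 30, 7, 6, 5)
    print(moment.strftime("%Y-%m-%dT%H:%M:%S"))  # 2013-09-30T07:06:05
    print(moment.strftime("%A, %d %B"))  # e.g. 'Monday, 30 September' in an English locale
    ```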

  • Correction: Deprecate tmpdir in favour of tmp_path.

  • New: Pad integer with zeros.

    >>> length = 1
    >>> print(f'length = {length:03}')
    length = 001
  • New: Pathlib make parent directories if they don't exist.

    pathlib.Path("/tmp/sub1/sub2").mkdir(parents=True, exist_ok=True)

    From the docs:

    • If parents is true, any missing parents of this path are created as needed; they are created with the default permissions without taking mode into account (mimicking the POSIX mkdir -p command).

    • If parents is false (the default), a missing parent raises FileNotFoundError.

    • If exist_ok is false (the default), FileExistsError is raised if the target directory already exists.

    • If exist_ok is true, FileExistsError exceptions will be ignored (same behavior as the POSIX mkdir -p command), but only if the last path component is not an existing non-directory file.
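
    The bullets above can be sketched as:

    ```python
    import tempfile
    from pathlib import Path

    base = Path(tempfile.mkdtemp())

    # parents defaults to False: a missing parent raises FileNotFoundError.
    try:
        (base / "sub1" / "sub2").mkdir()
    except FileNotFoundError:
        print("missing parent")

    # parents=True creates the chain, exist_ok=True makes the call idempotent.
    (base / "sub1" / "sub2").mkdir(parents=True, exist_ok=True)
    (base / "sub1" / "sub2").mkdir(parents=True, exist_ok=True)  # no error
    ```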

  • New: Pathlib touch a file.

    Create a file at the given path:

    pathlib.Path("/tmp/file.txt").touch()
    If the file already exists, the function succeeds if exist_ok is true (and its modification time is updated to the current time), otherwise FileExistsError is raised.

    If the parent directory doesn't exist you need to create it first.

    global_conf_path = xdg_home / "autoimport" / "config.toml"
    global_conf_path.parent.mkdir(parents=True, exist_ok=True)
    global_conf_path.touch(exist_ok=True)
  • New: Pad a string with spaces.

    >>> name = 'John'
    >>> name.ljust(15)
    'John           '
  • New: Get hostname of the machine.

    Any of the next three options:

    import os
    import platform
    import socket

    os.uname()[1]
    platform.node()
    socket.gethostname()
  • New: Get common elements of two lists.

    >>> a = ['a', 'b']
    >>> b = ['c', 'd', 'b']
    >>> set(a) & set(b)
    {'b'}
  • New: Recursively find files.

    from pathlib import Path
    for path in Path("src").rglob("*.c"):
        print(
  • New: Print an exception using the logging module.

    Logging an exception can be done with the module-level function logging.exception() like so:

    import logging

    try:
        1 / 0
    except BaseException:
        logging.exception("An exception was thrown!")

    ERROR:root:An exception was thrown!
    Traceback (most recent call last):
      File ".../Desktop/", line 4, in <module>
        1 / 0
    ZeroDivisionError: division by zero


    • The function logging.exception() should only be called from an exception handler.

    • The logging module should not be used inside a logging handler to avoid a RecursionError.

    It's also possible to log the exception with another log level but still show the exception details by using the keyword argument exc_info=True, like so:

    logging.critical("An exception was thrown!", exc_info=True)
    logging.error("An exception was thrown!", exc_info=True)
    logging.warning("An exception was thrown!", exc_info=True)"An exception was thrown!", exc_info=True)
    logging.debug("An exception was thrown!", exc_info=True)
    logging.log(level, "An exception was thrown!", exc_info=True)
  • New: Print an exception with the traceback module.

    The traceback module provides methods for formatting and printing exceptions and their tracebacks, e.g. this would print exception like the default handler does:

    import traceback

    try:
        1 / 0
    except Exception:
        traceback.print_exc()

    Traceback (most recent call last):
      File "C:\scripts\", line 4, in <module>
        1 / 0
    ZeroDivisionError: division by zero


  • New: Introduce ICS.

    ics is a pythonic iCalendar library. Its goals are to read and write ics data in a developer-friendly way.


  • New: Introduce Maison.

    Maison is a Python library to read configuration settings from configuration files using pydantic behind the scenes.

    It's useful to parse TOML config files.


  • New: Use mypy pydantic's plugin.

    If you use mypy I highly recommend you to activate the pydantic plugin by adding to your pyproject.toml:

    [tool.mypy]
    plugins = [

    [tool.pydantic-mypy]
    init_forbid_extra = true
    init_typed = true
    warn_required_dynamic_aliases = true
    warn_untyped_fields = true
  • New: Ignore a field when representing an object.

    Use repr=False. This is useful for properties that don't return a value quickly, for example if you save an sh background process.

    class Temp(BaseModel):
        foo: typing.Any
        boo: typing.Any = Field(..., repr=False)


  • New: Delete snapshot repository.

    curl -XDELETE {{ url }}/_snapshot/{{ backup_path }}
  • New: Searching documents.

    We use HTTP requests to talk to ElasticSearch. An HTTP request is made up of several components, such as the URL to make the request to, HTTP verbs (GET, POST, etc.) and headers. In order to succinctly and consistently describe HTTP requests, the ElasticSearch documentation uses cURL command line syntax. This is also the standard practice to describe requests made to ElasticSearch within the user community.

    An example HTTP request using cURL syntax looks like this:

    curl -XPOST "https://localhost:9200/_search" -d' { "query": { "match_all": {} } }'
  • New: Get documents that match a string.

    curl \
        -H 'Content-Type: application/json' \
        -XPOST "https://localhost:9200/_search" \
        -d' { "query": { "query_string": {"query": "test company"} }}'
  • New: Introduce python elasticsearch library.

    Python elasticsearch is the official low-level client for Elasticsearch. Its goal is to provide common ground for all Elasticsearch-related code in Python; because of this it tries to be opinion-free and very extendable.


    pip install elasticsearch


    from elasticsearch import Elasticsearch
    client = Elasticsearch("http://localhost:9200")
    resp ="test-index", query={"match_all": {}})
    documents = resp.body["hits"]["hits"]
    doc = {"partial_document": "value"}
    resp = client.update(index=INDEX, id=id_, doc=doc)

Python Mysql


  • New: Introduce pythonping.

    pythonping is a simple way to ping in Python. With it, you can send ICMP probes to remote devices like you would do from the terminal.

    Warning: Since using pythonping requires root permissions or granting cap_net_raw capability to the python interpreter, try to measure the latency to a server by other means such as using requests.

Python VLC

  • New: Introduce python's vlc library.

    Python VLC is a library to control vlc from python.

    There is no usable online documentation; you'll have to go through the help(<component>) inside the Python console.


  • New: Exit when using control + c.

    If you want the question to exit when it receives a KeyboardInterrupt event, use unsafe_ask instead of ask.


  • New: Live display text.

    import time

    from import Live

    with Live("Test") as live:
        for row in range(12):
            live.update(f"Test {row}")
            time.sleep(0.4)

    If you don't want the text to have the default colors, you can embed it all in a Text object.


  • New: Click on element.

    Once you've opened the page you want to interact with using driver.get(), you need the XPath of the element to click on. You can get it with your browser inspector: select the element, right click on its code and choose "Copy XPath".

    Once that is done you should have something like this when you paste it down.


    Similarly it is the same process for the input fields for username, password, and login button.

    We can go ahead and do that on the current page. We can store these xpaths as strings in our code to make it readable.

    We should have three xpaths from this page and one from the initial login.

    first_login = '//*[@id="react-root"]/section/main/article/div[2]/div[2]/p/a'
    username_input = '//*[@id="react-root"]/section/main/div/article/div/div[1]/div/form/div[2]/div/label/input'
    password_input = '//*[@id="react-root"]/section/main/div/article/div/div[1]/div/form/div[3]/div/label/input'
    login_submit = '//*[@id="react-root"]/section/main/div/article/div/div[1]/div/form/div[4]/button/div'

    Now that we have the xpaths defined we can now tell Selenium webdriver to click and send some keys over for the input fields.

    from import By
    driver.find_element(By.XPATH, first_login).click()
    driver.find_element(By.XPATH, username_input).send_keys("username")
    driver.find_element(By.XPATH, password_input).send_keys("password")
    driver.find_element(By.XPATH, login_submit).click()
  • New: Bypass Selenium detectors.

    Sometimes web servers react differently if they notice that you're using selenium. Browsers can be detected through different ways and some commonly used mechanisms are as follows:

    • Implementing captcha / recaptcha to detect the automatic bots.
    • Non-human behaviour (browsing too fast, not scrolling to the visible elements, ...)
    • Using an IP that's flagged as suspicious (VPN, VPS, Tor...)
    • Detecting the term HeadlessChrome within headless Chrome UserAgent
    • Using Bot Management service from Distil Networks, Akamai, Datadome.

    They do it through different mechanisms:

    If you've already been detected, you might get blocked for a plethora of other reasons even after using these methods. So you may have to try accessing the site that was detecting you using a VPN, different user-agent, etc.

  • New: Basic Selenium commands.

    Open a URL:

    driver.get("")

    Get page source:

    driver.page_source

    Get current url:

    driver.current_url

  • New: Disable loading of images.

    You can pass options to the initialization of the chromedriver to tweak how the browser behaves. To get a list of the actual prefs you can go to chrome://prefs-internals, where you can find the key you need to tweak.

    options = ChromeOptions()
    options.add_experimental_option(
        "prefs",
        {
            "profile.default_content_setting_values.images": 2,
        },
    )


  • New: Get a list of the tables.

    sql_query = """SELECT name FROM sqlite_master
      WHERE type='table';"""
    cursor = sqliteConnection.cursor()
    cursor.execute(sql_query)
    print(cursor.fetchall())


  • New: Avoid exception logging when killing a background process.

    In order to catch this exception execute your process with _bg_exc=False and call p.wait() if you want to handle the exception. Otherwise don't call p.wait().

    p = sh.sleep(100, _bg=True, _bg_exc=False)
    try:
        p.kill()
        p.wait()
    except sh.SignalException_SIGKILL as err:
        print(err)


  • New: Introduce Typer.

    Typer is a library for building CLI applications that users will love using and developers will love creating. Based on Python 3.6+ type hints.

    The key features are:

    • Intuitive to write: Great editor support. Completion everywhere. Less time debugging. Designed to be easy to use and learn. Less time reading docs.
    • Easy to use: It's easy to use for the final users. Automatic help, and automatic completion for all shells.
    • Short: Minimize code duplication. Multiple features from each parameter declaration. Fewer bugs.
    • Start simple: The simplest example adds only 2 lines of code to your app: 1 import, 1 function call.
    • Grow large: Grow in complexity as much as you want, create arbitrarily complex trees of commands and groups of subcommands, with options and arguments.
  • New: Get the command line application directory.

    You can get the application directory where you can, for example, save configuration files with typer.get_app_dir():

    from pathlib import Path
    import typer
    APP_NAME = "my-super-cli-app"
    def main() -> None:
        """Define the main command line interface."""
        app_dir = typer.get_app_dir(APP_NAME)
        config_path: Path = Path(app_dir) / "config.json"
        if not config_path.is_file():
            print("Config file doesn't exist yet")
    if __name__ == "__main__":

    It will give you a directory for storing configurations appropriate for your CLI program for the current user in each operating system.

  • New: Exiting with an error code.

    typer.Exit() takes an optional code parameter. By default, code is 0, meaning there was no error.

    You can pass a code with a number other than 0 to tell the terminal that there was an error in the execution of the program:

    import typer
    def main(username: str):
        if username == "root":
            print("The root user is reserved")
            raise typer.Exit(code=1)
        print(f"New user created: {username}")
    if __name__ == "__main__":
  • New: Create a --version command.

    You could use a callback to implement a --version CLI option.

    It would show the version of your CLI program and then it would terminate it. Even before any other CLI parameter is processed.

    from typing import Optional
    import typer
    __version__ = "0.1.0"
    def version_callback(value: bool) -> None:
        """Print the version of the program."""
        if value:
            print(f"Awesome CLI Version: {__version__}")
            raise typer.Exit()
    def main(
        version: Optional[bool] = typer.Option(
            None, "--version", callback=version_callback, is_eager=True
        ),
    ) -> None:
        """Run the program."""

    if __name__ == "__main__":
  • New: Testing.

    Testing is similar to click testing, but you import the CliRunner directly from typer:

    from typer.testing import CliRunner

    runner = CliRunner()

Generic Coding Practices

Use warnings to evolve your code

  • New: Using warnings to evolve your package.

    Regardless of the versioning system you're using, once you reach your first stable version, the commitment to your end users must be that you give them time to adapt to the changes in your program. So whenever you want to introduce a breaking change release it under a new interface, and in parallel, start emitting DeprecationWarning or UserWarning messages whenever someone invokes the old one. Maintain this state for a defined period (for example six months), and communicate explicitly in the warning message the timeline for when users have to migrate.

    This gives everyone time to move to the new interface without breaking their system, and then the library may remove the change and get rid of the old design chains forever. As an added benefit, only people using the old interface will ever see the warning, as opposed to affecting everyone (as seen with the semantic versioning major version bump).
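
    A minimal sketch of this pattern (the function names are hypothetical):

    ```python
    import warnings

    def fetch_v2(url: str) -> str:
        """The new interface."""
        return f"fetched {url}"

    def fetch(url: str) -> str:
        """The old interface, kept working during the migration period."""
        warnings.warn(
            "fetch() is deprecated and will be removed in six months, "
            "use fetch_v2() instead.",
            DeprecationWarning,
            stacklevel=2,
        )
        return fetch_v2(url)
    ```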

  • New: Change signature of method if you can.

    You can take the chance of the deprecation to change the signature of the function, so that if the user is using the old argument, it uses the old behaviour and gets the warning, and if it's using the new argument, it uses the new. The advantage of changing the signature is that you don't need to do another deprecation for the temporal argument flag.
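
    A sketch of the idea, with hypothetical argument names where `secure` replaces the deprecated `use_tls`:

    ```python
    import warnings
    from typing import Optional

    def connect(host: str, use_tls: Optional[bool] = None, secure: Optional[bool] = None) -> str:
        if use_tls is not None:
            warnings.warn(
                "the use_tls argument is deprecated, use secure instead",
                DeprecationWarning,
                stacklevel=2,
            )
            secure = use_tls  # the old argument keeps the old behaviour
        if secure is None:
            secure = True  # the new default
        return f"{host} secure={secure}"
    ```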

  • New: Use environmental variables to evolve your packages.

    A cleaner way to handle the package evolution is with environment variables, that way you don't need to change the signature of the function twice. I've learned this from boto where they informed their users this way:

    • If you wish to test the new feature we have created a new environment variable BOTO_DISABLE_COMMONNAME. Setting this to true will suppress the warning and use the new functionality.
    • If you are concerned about this change causing disruptions, you can pin your version of botocore to <1.28.0 until you are ready to migrate.
    • If you are only concerned about silencing the warning in your logs, use warnings.filterwarnings when instantiating a new service client.

      import warnings
      warnings.filterwarnings('ignore', category=FutureWarning, module='botocore.client')

Abstract Syntax Trees

  • New: Introduce abstract syntax trees.

    An abstract syntax tree (AST) is a tree representation of the abstract syntactic structure of text (often source code) written in a formal language. Each node of the tree denotes a construct occurring in the text.

    The syntax is "abstract" in the sense that it does not represent every detail appearing in the real syntax, but rather just the structural or content-related details. For instance, grouping parentheses are implicit in the tree structure, so these do not have to be represented as separate nodes. Likewise, a syntactic construct like an if-condition-then statement may be denoted by means of a single node with three branches.

    This distinguishes abstract syntax trees from concrete syntax trees, traditionally designated parse trees. Parse trees are typically built by a parser during the source code translation and compiling process. Once built, additional information is added to the AST by means of subsequent processing, e.g., contextual analysis.

    Abstract syntax trees are also used in program analysis and program transformation systems.

    pyparsing looks to be a good candidate to construct an AST
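
    Python's standard library ast module shows this abstraction in practice: the grouping parentheses below leave no node of their own, only the tree structure:

    ```python
    import ast

    tree = ast.parse("(x + 1) * 2")
    node = tree.body[0].value
    print(type(node).__name__)       # BinOp
    print(type(node.op).__name__)    # Mult
    print(type(node.left).__name__)  # BinOp, the grouped x + 1
    ```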

Frontend Development


  • New: Introduce git and how to learn it.

    Git is a software for tracking changes in any set of files, usually used for coordinating work among programmers collaboratively developing source code during software development. Its goals include speed, data integrity, and support for distributed, non-linear workflows (thousands of parallel branches running on different systems).

  • Improvement: Master to main branch change's controversy.

    The change is not free of controversy, for example in the PDM project some people are not sure that it's needed for many reasons. Let's see each of them:

    As we're not part of the deciding organisms of the collectives doing the changes, all we can use are their statements and discussions to guess the reasons behind their support of the change. Although some of them use the argument that other communities support the change to emphasize its need, all of them mention that the main reason is that the term is offensive to some people.

    • I don't see an issue using the term master: If you relate to this statement it may be because you're not part of the communities that suffer the oppression tied to the term, and that makes you blind to the issue. It's a lesson I learned on my own skin throughout the years. There are thousands of situations, gestures, double meaning words and sentences that went unnoticed by me until I started discussing them with the people that suffer them (women, racialized people, LGTBQI+, ...). Throughout my experience I've seen that the more privileged you are, the blinder you become. You can read more on privileged blindness here, here or here (I've only skimmed through the articles, and they are the first ones I found, there are probably better references).

      I'm not saying that privileged people can't be aware of the issues or even raise them. We can do so, and the more we read, discuss and train ourselves, the better we'll detect them. All I'm saying is that a non privileged person will always detect more because they suffer them daily.

      I understand that for you there is no issue using the word master; there wasn't one for me either until I saw these projects doing the change. Again, I was blind to the issue as I'm not suffering it. That's because the change is not meant for us, as we're not triggered by it. The change is targeted at the people that do perceive master as an offensive term. What we can do is empathize with them and follow this tiny tiny tiny gesture. It's the least we can do.

      Think of a term that triggers you, such as heil hitler, and imagine that those words were used to name the main branch of your code, so that every day you sat in front of your computer you saw them. You'd probably be reminded of the historic events, concepts and feelings tied to that term each time you saw it, and since they're quite negative, it could slowly wear you down. Therefore it's legitimate not to want to be exposed to those negative effects.

    • I don't see who will benefit from this change: Probably the people that belong to communities that are and have been under constant oppression for a very long time; in this case, especially the racialized ones who have suffered slavery.

      Sadly you probably won't see many of the affected people speak in these discussions. First because there are not that many: the IT world is dominated by middle aged, economically comfortable, white, cis, hetero males. Small changes like this are meant to foster diversity in the community by making its members more comfortable. Secondly because when they see these debates they move on, as they are fed up with teaching privileged people about their privileges. They not only have to suffer the oppression, we also put the burden on their shoulders to teach us.

    As an ending thought, if you find yourself especially troubled by the change, with a feeling of discomfort and strong reactions: in my experience these signs are characteristic of privileged people who feel that their privileges are being threatened; I've felt them myself countless times. When I feel them I usually do one of two things: fight them as hard as I can, or embrace them, analyze them, and go to their root. Depending on how much energy I have I go with the easy or the hard one. I'm not saying it's your case, but it could be.

  • Correction: Update the git flow to match my current one.

  • New: Revert a commit.

    git revert commit_id
  • New: Get interesting stats of the repo.

    Number of commits of the last year per user:

    git shortlog -sne --since="31 Dec 2020" --before="31 Dec 2021"

    You can also use git-fame to extract a more detailed report:

    $: git-fame --since 1.year --cost hour --loc ins -w -M -C
    | Author          |   hrs |   loc |   coms |   fils |  distribution   |
    | Lyz             |    10 | 28933 |    112 |    238 | 64.1/33.3/75.8  |
    | GitHub Action   |     2 | 16194 |    220 |     73 | 35.9/65.5/23.2  |
    | Alexander Gil   |     2 |     9 |      1 |      1 | 0.0/ 0.3/ 0.3   |
    | n0rt3y5ur       |     2 |     1 |      1 |      1 | 0.0/ 0.3/ 0.3   |
    | Guilherme Danno |     2 |     1 |      1 |      1 | 0.0/ 0.3/ 0.3   |
    | lyz-code        |     2 |     0 |      1 |      0 | 0.0/ 0.3/ 0.0   |

    You can use pipx install git-fame to install it.

Park programming

  • New: Introduce the park programming concept.

    Park programming is, as you may guess, programming in parks. It includes guidelines on:

    • How to park program
    • Finding the best spots
    • Finding the best times
  • New: Introduce sponsorship analysis.

    The moment may arrive in your life when someone wants to sponsor you. There are many sponsoring platforms you can use; each has its advantages and disadvantages.

    • Liberapay.
    • Ko-fi.
    • Buy me a coffee.
    • Github Sponsor.
    |                     | [Liberapay][3]                      | [Ko-fi][4]     | [Buy Me a Coffee][6]    | [Github Sponsor][7] |
    | ------------------- | ----------------------------------- | -------------- | ----------------------- | ------------------- |
    | Non-profit          | [Yes][1]                            | No             | No                      | No! (Microsoft!)    |
    | Monthly fee         | No                                  | No             | No                      | No                  |
    | Donation Commission | 0%                                  | 0%             | 5%                      | Not clear           |
    | Paid plan           | No                                  | [Yes][5]       | No                      | No                  |
    | Payment Processors  | Stripe, Paypal                      | Stripe, Paypal | Stripe, Standard Payout | Stripe              |
    | One time donations  | [Possible but not user friendly][2] | Yes            | Yes                     | Yes                 |
    | Membership          | Yes                                 | Yes            | Yes                     | Yes                 |
    | Shop/Sales          | No                                  | Yes            | No                      | No                  |
    | Based in            | France                              | ?              | United States           | United States?      |
    | Pay delay           |                                     | Instant        | Instant                 | Instant             |
    | User friendliness   | OK                                  | Good           | Good                    | Good                |

    Liberapay is the only non-profit recurrent donations platform. It's been the most recommended platform from the people I know from the open-source, activist environment.

    Ko-fi would be my next choice, as they don't do commissions on the donations and they support more features (that I don't need right now) than Liberapay.


  • New: Introduce Ombi.

    Ombi is a self-hosted web application that automatically gives your shared Jellyfin users the ability to request content by themselves! Ombi can be linked to multiple TV Show and Movie DVR tools to create a seamless end-to-end experience for your users.

    If Ombi is not for you, you may try Overseerr.

Infrastructure as Code


  • New: Introduce gitea.

    Gitea is a community managed lightweight code hosting solution written in Go. It's the best self hosted Github alternative in my opinion.


  • Correction: Tweak the update process to make it more resilient.

    • Check that all the helm deployments are well deployed with helm list -A | grep -v deployed
    • Change the context to the production cluster before running the production steps
  • New: Fix Cannot patch X field is immutable error.

    You may think that deleting the resource (usually a deployment or daemonset) will fix it, but helmfile apply will end without any error, the resource won't be recreated, and if you do a helm list the deployment will be marked as failed.

    The solution we've found is disabling the resource in the chart's values so that it's uninstalled, and then enabling it again so that it gets reinstalled.

    This can be a problem with the resources that have persistence. To patch it, edit the volume resource with kubectl edit pv -n namespace volume_pvc, change the persistentVolumeReclaimPolicy to Retain, apply the changes to uninstall, and when reinstalling configure the chart to use that volume (easier said than done).

Ansible Snippets

  • New: Stop running docker containers.

    - name: Get running containers
      docker_host_info:
        containers: yes
      register: docker_info

    - name: Stop running containers
      docker_container:
        name: "{{ item }}"
        state: stopped
      loop: "{{ docker_info.containers | map(attribute='Id') | list }}"
  • New: Moving a file remotely.

  • New: Speed up the stat module.

    The stat module calculates the checksum and the md5 of the file in order to get the required data. If you just want to check if the file exists use:

    - name: Verify swapfile status
        path: "{{ common_swapfile_location }}"
        get_checksum: no
        get_md5: no
        get_mime: no
        get_attributes: no
      register: swap_status
      changed_when: not swap_status.stat.exists
  • New: Get the hosts of a dynamic ansible inventory.

    ansible-inventory -i environments/production --graph

    You can also use the --list flag to get more info of the hosts.

Infrastructure Solutions


  • New: Find if external IP belongs to you.

    You can list the network interfaces that match the IP you're searching for:

    aws ec2 describe-network-interfaces --filters Name=association.public-ip,Values="{{ your_ip_address }}"
  • New: Introduce krew.

    Krew is a tool that makes it easy to use kubectl plugins. Krew helps you discover plugins, install and manage them on your machine. It is similar to tools like apt, dnf or brew.

AWS Savings plan

  • New: Create a configmap from a file.

    kubectl create configmap {{ configmap_name }} --from-file {{ path/to/file }}
  • New: Restart pods without taking the service down.

    kubectl rollout restart deployment/{{ deployment_name }}
  • New: Introduce Ksniff.

    Ksniff is a Kubectl plugin to ease sniffing on kubernetes pods using tcpdump and wireshark.

  • New: Introduce AWS Savings plan.

    Savings Plans offer a flexible pricing model that provides savings on AWS usage. You can save up to 72 percent on your AWS compute workloads.

    !!! note "Please don't make Jeff Bezos even richer, try to pay AWS as little money as you can."


  • New: Introduce mizu.

    Mizu is an API Traffic Viewer for Kubernetes, think TCPDump and Chrome Dev Tools combined.


  • New: Port forward / Tunnel to an internal service.

    If you have a service running in kubernetes and you want to directly access it instead of going through the usual path, you can use kubectl port-forward.

    kubectl port-forward allows using a resource name, such as a pod name, service, replica set or deployment, to select the matching resource to port forward to. For example, the next commands are equivalent:

    kubectl port-forward mongo-75f59d57f4-4nd6q 28015:27017
    kubectl port-forward deployment/mongo 28015:27017
    kubectl port-forward replicaset/mongo-75f59d57f4 28015:27017
    kubectl port-forward service/mongo 28015:27017

    The output is similar to this:

    Forwarding from 127.0.0.1:28015 -> 27017
    Forwarding from [::1]:28015 -> 27017

    If you don't need a specific local port, you can let kubectl choose and allocate the local port and thus relieve you from having to manage local port conflicts, with the slightly simpler syntax:

    $: kubectl port-forward deployment/mongo :27017
    Forwarding from 127.0.0.1:63753 -> 27017
    Forwarding from [::1]:63753 -> 27017
  • New: Run a command against a specific context.

    If you have multiple contexts and you want to run commands against a context that you have access to but that is not your active one, you can use the --context global option for all kubectl commands:

    kubectl get pods --context <context_B>

    To get a list of available contexts use kubectl config get-contexts


  • New: Pod limit per node.

    AWS EKS supports native VPC networking with the Amazon VPC Container Network Interface (CNI) plugin for Kubernetes. Using this plugin allows Kubernetes Pods to have the same IP address inside the pod as they do on the VPC network.

    This is a great feature but it introduces a limitation in the number of Pods per EC2 Node instance. Whenever you deploy a Pod on an EKS worker Node, EKS allocates a new IP address from the VPC subnet and attaches it to the instance.

    The formula for defining the maximum number of pods per instance is as follows:

    N * (M-1) + 2


    • N is the number of Elastic Network Interfaces (ENI) of the instance type.
    • M is the number of IP addresses of a single ENI.

    So, for t3.small, this calculation is 3 * (4-1) + 2 = 11. For a list of all the instance types and their limits see this document
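
    The formula above can be sketched as a small helper (the t3.small figures are taken from the text; treat this as illustrative, not as an official AWS tool):

    ```python
    def max_pods(enis: int, ips_per_eni: int) -> int:
        """Maximum pods per EKS node with the VPC CNI plugin.

        One IP of each ENI is reserved as the instance's primary address,
        and 2 is added for the pods that use host networking.
        """
        return enis * (ips_per_eni - 1) + 2

    # t3.small: 3 ENIs, 4 IPs per ENI
    print(max_pods(3, 4))  # → 11
    ```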



  • New: How to extract information from AWS WAF.

    AWS WAF is a web application firewall that helps protect your web applications or APIs against common web exploits and bots that may affect availability, compromise security, or consume excessive resources. AWS WAF gives you control over how traffic reaches your applications by enabling you to create security rules that control bot traffic and block common attack patterns, such as SQL injection or cross-site scripting. You can also customize rules that filter out specific traffic patterns.

    In the article there are many queries you can run on its data and a workflow to understand your traffic.

Continuous Integration

  • New: Introduce MDFormat.

    MDFormat is an opinionated Markdown formatter that can be used to enforce a consistent style in Markdown files. Mdformat is a Unix-style command-line tool as well as a Python library.

    The features/opinions of the formatter include:

    • Consistent indentation and whitespace across the board
    • Always use ATX style headings
    • Move all link references to the bottom of the document (sorted by label)
    • Reformat indented code blocks as fenced code blocks
    • Use 1. as the ordered list marker if possible, also for non-initial list items.

    It's based on the markdown-it-py Markdown parser, which is a Python implementation of markdown-it.

  • New: Issues.


  • New: Deprecate flakehell in favour of flakeheaven.

    It's a fork maintained by the community, instead of an absent code dictator.

  • New: Introduce Flakeheaven.

    Flakeheaven is a Flake8 wrapper to make it cool. It's the community-maintained fork of flakehell.

  • New: Introduce Drone.

    Drone is a modern Continuous Integration platform that empowers busy teams to automate their build, test and release workflows using a powerful, cloud native pipeline engine.

    Check how to install it here

Automating Processes


  • New: Unable to interpret changes between current project and cookiecutter template as unicode.

    Typically a result of hidden binary files in the project folder. Maybe you have a hook that initializes the .git directory; don't do that.

  • New: Use skip to avoid problems with .git.

    Since 2.10.0 you can add a skip category inside the .cruft.json, so that it doesn't check that directory:

      {
        "template": "xxx",
        "commit": "xxx",
        "checkout": null,
        "context": {
          "cookiecutter": {
          }
        },
        "directory": null,
        "skip": [
          ".git"
        ]
      }


  • New: Introduce Renovate.

    Renovate is a program that does automated dependency updates. Multi-platform and multi-language.

    Why use Renovate?

    • Get pull requests to update your dependencies and lock files.
    • Reduce noise by scheduling when Renovate creates PRs.
    • Renovate finds relevant package files automatically, including in monorepos.
    • You can customize the bot's behavior with configuration files.
    • Share your configuration with ESLint-like config presets.
    • Get replacement PRs to migrate from a deprecated dependency to the community suggested replacement (npm packages only).
    • Open source.
    • Popular (more than 9.7k stars and 1.3k forks)
    • Integrates beautifully with the main Git web applications (Gitea, Gitlab, Github).
    • It supports the most important languages: Python, Docker, Kubernetes, Terraform, Ansible, Node, ...


  • New: Introduce storage.

    I have a server at home to host some services for my closest ones. The server is an Intel NUC which is super awesome in terms of electric consumption, CPU and RAM versus cost. The downside is that it has no hard drive to store the services data. It does have some USB ports to connect external hard drives though. As the data kept growing I started buying bigger drives. While it was affordable I purchased two so as to have one to store the backup of the data. The problem came when it became unaffordable for me. Then I made the questionable decision to assume that a single 16TB drive holding my data would be enough. Obviously the inevitable happened. The hard drive died and those 10TB of data that were not stored in any backup were lost.

    Luckily enough, it was not unique data like personal photos. The data could be regenerated by manual processes at the cost of precious time (I'm still suffering this :(). But every cloud has a silver lining, this failure gave me the energy and motivation to improve my home architecture. To prevent this from happening again, the solution needs to be:

    • Robust: If disks die I will have time to replace them before data is lost.
    • Flexible: It needs to expand as the data grows.
    • Not very expensive.
    • Easy to maintain.

    There are two types of solutions to store data:

    • On one host: All disks are attached to a server and the storage capacity is shared to other devices by the local network.
    • Distributed: The disks are attached to many servers and they work together to provide the storage through the local network.

    A NAS server represents the first solution, while systems like Ceph or GlusterFS over Odroid HC4 fall into the second.

    Both are robust and flexible but I'm more inclined towards building a NAS because it can hold the amount of data that I need, it's easier to maintain and the underlying technology has been more battle proven throughout the years.


  • New: Introduce NAS.

    Network-attached storage, or NAS, is a computer data storage server connected to a computer network, providing data access to many other devices. Basically, a computer where you can attach many hard drives.

    I've done an analysis to choose what solution I'm going to build in terms of:

    More will come in the next few days.

  • New: Analyze RAM to buy.

    Most ZFS resources suggest using ECC RAM. The provider gives me two options:

    • Kingston Server Premier DDR4 3200MHz 16GB CL22
    • Kingston Server Premier DDR4 2666MHz 16GB CL19

    I'll go with two modules of 3200MHz CL22 because they have a lower absolute latency.
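
    The latency claim can be double-checked: the absolute CAS latency in nanoseconds is the CAS cycles divided by the real clock (half the MT/s rating). A quick sketch:

    ```python
    def latency_ns(transfers_per_s: int, cas_cycles: int) -> float:
        """Absolute CAS latency in ns: cycles / real clock (MT/s / 2)."""
        return cas_cycles / (transfers_per_s / 2) * 1000

    print(round(latency_ns(3200, 22), 2))  # → 13.75
    print(round(latency_ns(2666, 19), 2))  # → 14.25
    ```

    So the 3200MHz CL22 module is slightly faster despite the higher CL number.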

  • New: Analyze motherboard to buy.

    After reading these reviews (1, 2) I've come to the decision to purchase the ASRock X570M Pro4 because it supports:

    • 8 x SATA3 disks
    • 2 x M.2 disks
    • 4 x DDR4 RAM slots with speeds up to 4200+ and ECC support
    • 1 x AMD AM4 Socket Ryzen™ 2000, 3000, 4000 G-Series, 5000 and 5000 G-Series Desktop Processors
    • Supports NVMe SSD as boot disks
    • Micro ATX Form Factor.

    And it gives me room enough to grow:

    • It supports PCI 4.0 for the M.2 slots, which is said to be capable of twice the speed of the previous 3rd generation. The chosen M.2 drives are 3rd generation, so if I need more speed I can replace them.
    • I'm only going to use 2 slots of RAM, giving me 32GB, but I could easily add 32GB more.
  • New: Analyze CPU to buy.

    After doing some basic research I'm between:

    Property Ryzen 7 5800x Ryzen 5 5600x Ryzen 7 5700x Ryzen 5 5600G
    Cores 8 6 8 6
    Threads 16 12 16 12
    Clock 3.8 3.7 3.4 3.9
    Socket AM4 AM4 AM4 AM4
    PCI 4.0 4.0 4.0 3.0
    Thermal Not included Wraith Stealth Not included Wraith Stealth
    Default TDP 105W 65W 65W 65W
    System Mem spec >= 3200 MHz >= 3200 MHz >= 3200 MHz >= 3200 MHz
    Mem type DDR4 DDR4 DDR4 DDR4
    Price 315 232 279 179

    The data was extracted from AMD's official page.

    They all support the chosen RAM and the motherboard.

    I'm ruling out Ryzen 7 5800x because it's too expensive both on monetary and power consumption terms. Also ruling out Ryzen 5 5600G because it has comparatively bad properties.

    Between Ryzen 5 5600x and Ryzen 7 5700x, after checking these comparisons (1, 2) it looks like:

    • Single core performance is similar.
    • 7 wins when all cores are involved.
    • 7 is more power efficient.
    • 7 is better rated.
    • 7 is newer (1.5 years).
    • 7 has around 3.52 GB/s (7%) higher theoretical RAM memory bandwidth
    • They have the same cache
    • 7 has 5 degrees less of max temperature
    • They both support ECC
    • 5 has a greater market share
    • 5 is 47$ cheaper

    I think that for 47$ it's worth the increase in cores and theoretical RAM memory bandwidth.

  • New: Analyze CPU coolers to buy.

    It looks like the Ryzen CPUs don't require a cooler to work well. Usually it adds another 250W to the consumption. I don't plan to overclock it and I've heard that ZFS doesn't use too much CPU, so I'll start without one and monitor the temperature.

    If I were to take one, I'd go with air cooling with something like the Dark Rock 4 but I've also read that Noctua are a good provider.

  • New: Analyze server cases to buy.

    I'm ruling out the next ones:

    • Fractal Design R6: More expensive than the Node 804 and it doesn't have hot swappable disks.
    • Silverstone Technology SST-CS381: Even though it's gorgeous it's too expensive.
    • Silverstone DS380: It only supports Mini-ITX which I don't have.

    The remaining are:

    Model Fractal Node 804 Silverstone CS380
    Form factor Micro - ATX Mid tower
    Motherboard Micro ATX Micro ATX
    Drive bays 8 x 3.5", 2 x 2.5" 8 x 3.5", 2 x 5.25"
    Hot-swap No Yes
    Expansion Slots 5 7
    CPU cooler height 160mm 146 mm
    PSU compatibility ATX ATX
    Fans Front: 4, Top: 4, Rear 3 Side: 2, Rear: 1
    Price 115 184
    Size 34 x 31 x 39 cm 35 x 28 x 21 cm

    I like the Fractal Node 804 better and it's cheaper.

  • New: Choose the Power Supply Unit and CPU cooler.

    After doing some basic research I've chosen the Dark Rock 4 but just because the Enermax ETS-T50 AXE Silent Edition doesn't fit my case :(.

    Using PCPartPicker I've seen that with 4 disks it consumes approximately 264W, when I have the 8 disks, it will consume up to 344W, if I want to increase the ram then it will reach 373W. So in theory I can go with a 400W power supply unit.
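
    Those estimates decompose roughly as follows (the per-disk and RAM figures are back-derived from the numbers above, so they're assumptions, not PCPartPicker output):

    ```python
    base_with_4_disks = 264   # W, PCPartPicker estimate
    per_extra_disk = 20       # W, (344 - 264) / 4 extra disks
    extra_ram = 29            # W, 373 - 344

    full_build = base_with_4_disks + 4 * per_extra_disk + extra_ram
    print(full_build)  # → 373
    ```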

    You need to make sure that it has enough wires to connect all the disks, although that usually is not a problem as there are adapters.

    After an analysis on the different power supply units, I've decided to go with Be Quiet! Straight Power 11 450W Gold


  • New: Learning.

    I've found that learning about ZFS was an interesting, intense and time consuming task. If you want a quick overview check this video. If you prefer to read, head to the awesome Aaron Toponce articles and read all of them sequentially, each is a jewel. The docs on the other hand are not that pleasant to read. For further information check JRS articles.

  • New: Storage planning.

    There are many variables that affect the number and type of disks; you first need an idea of what kind of data you want to store and what use you're going to give to that data.

  • New: Choosing the disks to hold data.

    Analysis on how to choose the disks taking into account:

    The conclusions are that I'm more interested in the 5400 RPM drives, but of all the NAS disks available to purchase only the 8TB WD Red uses that speed, and it relies on SMR technology, so it isn't an option.

    The disk prices offered by my cheapest provider are:

    Disk Size Price
    Seagate IronWolf 8TB 225$
    Seagate IronWolf Pro 8TB 254$
    WD Red Plus 8TB 265$
    Seagate Exos 7E8 8TB 277$
    WD Red Pro 8TB 278$

    WD Red Plus runs at 5,640 RPM, which is different from the rest, so it's ruled out. Between the IronWolf and IronWolf Pro, they offer 180MB/s and 214MB/s respectively. The Seagate Exos 7E8 provides much better performance than the WD Red Pro, so I'm afraid that WD is out of the question.

    There are three possibilities in order to have two different brands. Imagining we want 4 disks:

    Combination Total Price
    IronWolf + IronWolf Pro 958$
    IronWolf + Exos 7E8 1004$ (+46$ +4.8%)
    IronWolf Pro + Exos 7E8 1062$ (+104$ +10.9%)
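
    The totals can be double-checked (two disks of each model, prices from the table above):

    ```python
    prices = {"IronWolf": 225, "IronWolf Pro": 254, "Exos 7E8": 277}

    def combo_price(brand_a: str, brand_b: str, disks_per_brand: int = 2) -> int:
        """Total price of a 4-disk pool mixing two brands."""
        return disks_per_brand * (prices[brand_a] + prices[brand_b])

    print(combo_price("IronWolf", "IronWolf Pro"))  # → 958
    print(combo_price("IronWolf", "Exos 7E8"))  # → 1004
    print(combo_price("IronWolf Pro", "Exos 7E8"))  # → 1062
    ```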

    In terms of:

    • Consumption: both IronWolfs are equal; the Exos uses 2.7W more in normal use and 0.2W less at rest.
    • Warranty: IronWolf has only 3 years, the others 5.
    • Speed: Ironwolf has 210MB/s, much less than the Pro (255MB/s) and Exos (249MB/s), which are more similar.
    • Durability: The Exos disks are much more robust (higher workload rating, MTBF and warranty).

    I'd say that for 104$ it makes sense to go with the IronWolf Pro + Exos 7E8 combination.

  • New: Choosing the disks for the cache.

    Using a SLOG greatly improves the write speed; equally, using an SSD disk for the L2ARC cache improves the read speeds and the health of the rotational disks.

    The best M.2 NVMe SSDs for NAS caching are the ones that have enough capacity to actually make a difference to overall system performance. You also want a good endurance rating for better reliability and a longer lifespan, and you should look for a drive with a suitable NAND technology if possible.

    I've made an analysis based on:

    In conclusion, I’d recommend the Western Digital Red SN700, which has a good 1 DWPD endurance rating, is available in sizes up to 4TB, and uses SLC NAND technology, which is great for enhancing reliability through heavy caching workloads. A close second place goes to the Seagate IronWolf 525, which has similar specifications to the SN700 but utilizes TLC.

    Disk Size Speed Endurance Warranty Tech Price
    WD Red SN700 500 GB 3430MB/s 1 DWPD 5 years SLC 73$
    SG IronWolf 525 500 GB 5000MB/s 0.8 DWPD 5 years TLC ?
    WD Red SN700 1 TB 3430MB/s 1 DWPD 5 years SLC 127$
    SG IronWolf 525 1 TB 5000MB/s 0.8 DWPD 5 years TLC ?
  • New: Choosing the cold spare disks.

    It's good to think about how long you're willing to leave your RAIDs degraded once a drive has failed.

    In my case, for the data I want to restore the raid as soon as I can, therefore I'll buy another rotational disk. For the SSDs I have more confidence that they won't break so I don't feel like having a spare one.






  • New: Introduce CPU, attributes and how to buy it.

    A central processing unit or CPU, also known as the brain of the server, is the electronic circuitry that executes instructions comprising a computer program. The CPU performs basic arithmetic, logic, controlling, and input/output (I/O) operations specified by the instructions in the program.

  • New: Add the market analysis.

  • New: Analyze the cpu coolers.
  • New: Analyze the use of cpu thermal paste.

    Thermal paste is designed to minimize microscopic air gaps and irregularities between the surface of the cooler and the CPU's IHS (integrated heat spreader), the piece of metal which is built into the top of the processor.

    Good thermal paste can have a profound impact on your performance, because it will allow your processor to transfer more of its waste heat to your cooler, keeping your processor running cool.

    Most pastes are comprised of ceramic or metallic materials suspended within a proprietary binder which allows for easy application and spread as well as simple cleanup.

    These thermal pastes can be electrically conductive or non-conductive, depending on their specific formula. Electrically conductive thermal pastes can carry current between two points, meaning that if the paste squeezes out onto other components, it can cause damage to motherboards and CPUs when you switch on the power. A single drop out of place can lead to a dead PC, so extra care is imperative.

    Liquid metal compounds are almost always electrically conductive, so while these compounds provide better performance than their paste counterparts, they require more focus and attention during application. They are very hard to remove if you get some in the wrong place, which would fry your system.

    In contrast, traditional thermal paste compounds are relatively simple for every experience level. Most, but not all, traditional pastes are electrically non-conductive.

    Most CPU coolers come with their own thermal paste, so check yours before buying another one.

  • Correction: Add GPU advice on shopping tips.

    • Check that the CPU has an integrated GPU if you don't want to use an external graphics card. Otherwise the BIOS won't start.
  • New: Installation tips for CPU.

    When installing an AM4 CPU in the motherboard, rotate the CPU so that the small arrow on one of the corners of the chip matches the arrow on the corner of the motherboard socket.


  • New: Introduce RAM, its properties and how to buy it.

    RAM is a form of computer memory that can be read and changed in any order, typically used to store working data and machine code.

Power Supply Unit

  • New: Introduce Power Supply Unit.

    Power supply unit is the component of the computer that sources power from the primary source (the power coming from your wall outlet) and delivers it to its motherboard and all its components. Contrary to the common understanding, the PSU does not supply power to the computer; it instead converts the AC (Alternating Current) power from the source to the DC (Direct Current) power that the computer needs.

    There are two types of PSU: Linear and Switch-mode. Linear power supplies have a built-in transformer that steps down the voltage from the main to a usable one for the individual parts of the computer. The transformer makes the Linear PSU bulky, heavy, and expensive. Modern computers have switched to the switch-mode power supply, using switches instead of a transformer for voltage regulation. They’re also more practical and economical to use because they’re smaller, lighter, and cheaper than linear power supplies.

    The PSU needs to deliver at least the amount of power that the components require; if they demand more than it can deliver, it simply won't work.

    Another puzzling question for most consumers is, “Does a PSU supply constant wattage to the computer?” The answer is a flat no. The wattage you see on the PSU's casing or labels only indicates the maximum power it can theoretically supply to the system. For example, in theory a 500W PSU can supply a maximum of 500W to the computer. In reality, the PSU will draw a small portion of the power for itself and distribute power to each of the PC components according to its need. The voltage the components need varies from 3.3V to 12V. If the total power the components need adds up to 250W, it would only use 250W of the 500W, giving you an overhead for additional components or future upgrades.

    Additionally, the amount of power the PSU supplies varies between peak periods and idle times. When the components are pushed to their limits, say when a video editor maximizes the GPU for graphics-intensive tasks, they require more power than when the computer is used for simple tasks like web browsing. The amount of power drawn from the PSU depends on two things: the amount of power each component requires and the tasks that each component performs.

    I've also added the next sections:

Pedal PC

  • New: Introduce Pedal PC.

    The Pedal PC idea gathers crazy projects that try to use the energy of your pedaling while you work on your PC. The most interesting is PedalPC, but it's still crazy.

    Pedal-Power is another similar project, although it looks unmaintained.

Operating Systems


Linux Snippets

  • New: Clean up system space on debian based hosts.
  • New: Install one package from Debian unstable.
  • New: Monitor outgoing traffic.
  • Correction: Clean snap data.

    If you're using snap you can clean space by:

    • Reduce the number of versions kept of a package with snap set system refresh.retain=2
    • Remove the old versions with

      #!/bin/bash
      # Removes old revisions of snaps
      set -eu
      LANG=en_US.UTF-8 snap list --all | awk '/disabled/{print $1, $3}' |
          while read snapname revision; do
              snap remove "$snapname" --revision="$revision"
          done
  • Correction: Clean journalctl data.

    • Check how much space it's using: journalctl --disk-usage
    • Rotate the logs: journalctl --rotate

    Then you have three ways to reduce the data:

    1. Clear journal log older than X days: journalctl --vacuum-time=2d
    2. Restrict logs to a certain size: journalctl --vacuum-size=100M
    3. Restrict the number of log files: journalctl --vacuum-files=5.

    The operations above will affect the logs you have right now, but it won't solve the problem in the future. To let journalctl know the space you want to use open the /etc/systemd/journald.conf file and set the SystemMaxUse to the amount you want (for example 1000M for a gigabyte). Once edited restart the service with sudo systemctl restart systemd-journald.
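
    For example, a minimal /etc/systemd/journald.conf sketch capping the journal at roughly one gigabyte:

    ```ini
    [Journal]
    SystemMaxUse=1000M
    ```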

  • New: Set up docker logs rotation.

    By default, the stdout and stderr of the container are written in a JSON file located in /var/lib/docker/containers/[container-id]/[container-id]-json.log. If you leave it unattended, it can take up a large amount of disk space.

    If this JSON log file takes up a significant amount of the disk, we can purge it using the next command.

    truncate -s 0 <logfile>

    We could set up a cronjob to purge these JSON log files regularly. But for the long term, it would be better to set up log rotation. This can be done by adding the following values in /etc/docker/daemon.json.

      {
        "log-driver": "json-file",
        "log-opts": {
          "max-size": "10m",
          "max-file": "10"
        }
      }
  • New: Clean old kernels.

    The full command is

    dpkg -l linux-* | awk '/^ii/{ print $2}' | grep -v -e `uname -r | cut -f1,2 -d"-"` | grep -e [0-9] | grep -E "(image|headers)" | xargs sudo apt-get -y purge

    To test which packages it will remove, use:

    dpkg -l linux-* | awk '/^ii/{ print $2}' | grep -v -e `uname -r | cut -f1,2 -d"-"` | grep -e [0-9] | grep -E "(image|headers)" | xargs sudo apt-get --dry-run remove

    Remember that your running kernel can be obtained by uname -r.

  • Correction: Clean old kernels warning.

    I don't recommend using this step, rely on apt-get autoremove, it's safer.

  • New: Create Basic Auth header.

    $ echo -n user:password | base64

    Without the -n, echo appends a trailing newline that gets encoded into the header.
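
    The same header can be built in Python, which sidesteps the trailing-newline issue entirely:

    ```python
    import base64

    def basic_auth_header(user: str, password: str) -> str:
        """Return the value of an HTTP Basic Auth Authorization header."""
        token = base64.b64encode(f"{user}:{password}".encode()).decode()
        return f"Basic {token}"

    print(basic_auth_header("user", "password"))  # → Basic dXNlcjpwYXNzd29yZA==
    ```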

  • New: Check vulnerabilities in Node.js applications.

    With yarn audit you'll see the vulnerabilities, with yarn outdated you can see the packages that you need to update.

  • New: Check vulnerabilities in rails dependencies.

    gem install bundler-audit
    cd project_with_gem_lock
    bundle-audit check --update
  • New: Trim silences of sound files.

    To trim all silence longer than 2 seconds down to only 2 seconds long.

    sox in.wav out6.wav silence -l 1 0.1 1% -1 2.0 1%

    Note that SoX does nothing to bits of silence shorter than 2 seconds.

    If you encounter the sox FAIL formats: no handler for file extension 'mp3' error you'll need to install the libsox-fmt-all package.

  • New: Adjust the replay gain of many sound files.

    sudo apt-get install python-rgain
    replaygain -f *.mp3
  • New: Create QR code.

    qrencode -o qrcode.png 'Hello World!'
  • New: Git checkout to main with master as a fallback.

    I usually use the alias gcm to change to the main branch of the repository; given the change from master to main, I now have some repos that use one and some the other, but I still want gcm to go to the correct one. The solution is to use:

    alias gcm='git checkout "$(git symbolic-ref refs/remotes/origin/HEAD | cut -d'/' -f4)"'
  • New: Scan a physical page in Linux.

    Install xsane and run it.

  • New: Get the output of docker ps as a json.

    To get the complete json for reference.

    docker ps -a --format "{{json .}}" | jq -s

    To get only the required columns in the output as a compact array:

    docker ps -a --format "{{json .}}" | jq -r -c '[.ID, .State, .Names, .Image]'

    To get also the image's ID you can use:

    docker inspect --format='{{json .}}' $(docker ps -aq) | jq -r -c '[.Id, .Name, .Config.Image, .Image]'
  • New: Df and du showing different results.

    Sometimes on a Linux machine you will notice that the df command (display free disk space) and the du command (display disk usage statistics) report different output. Usually, df will report a bigger disk usage than du.

    The du command estimates file space usage, and the df command shows file system disk space usage.

    There are many reasons why this could be happening:

  • New: Clean up docker data.

    To remove unused docker data you can run docker system prune -a. This will remove:

    • All stopped containers
    • All networks not used by at least one container
    • All images without at least one container associated to them
    • All build cache

    Sometimes that's not enough, and your /var/lib/docker directory still weighs more than it should. In those cases:

    • Stop the docker service.
    • Remove or move the data to another directory
    • Start the docker service.

    In order not to lose your persisted data, you need to configure your containers to mount the data from a directory that's not within /var/lib/docker.

  • New: Download TS streams.

    Some sites serve stream content as small .ts files that you can't download directly. Instead, open the developer tools, reload the page and search for a request with the .m3u8 extension; that gives you the playlist of all the .ts chunks. Once you have that url you can use yt-dlp to download it.



  • New: Debug rtorrent docker problems.
  • New: Introduce Aleph.

    Aleph is a tool for indexing large amounts of both documents (PDF, Word, HTML) and structured (CSV, XLS, SQL) data for easy browsing and search. It is built with investigative reporting as a primary use case. Aleph allows cross-referencing mentions of well-known entities (such as people and companies) against watchlists, e.g. from prior research or public datasets.

  • New: Problems accessing redis locally.

    If you're connected to the VPN, turn it off.

  • New: PDB behaves weird.

    Sometimes you have two traces at the same time, so each time you run a PDB command it jumps from one pdb trace to the other. Quite confusing. Try to c (continue) the one you don't want so that you're left with the one you want. Or put the pdb trace in a conditional that only matches one of the threads.
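
    A sketch of the conditional-trace idea (the thread name is a hypothetical example; adapt the condition to whatever distinguishes your threads):

    ```python
    import pdb
    import threading

    def maybe_trace(only_thread: str = "worker-1") -> bool:
        """Drop into pdb only when running in the thread we care about."""
        if threading.current_thread().name == only_thread:
            pdb.set_trace()  # only this thread hits the debugger
            return True
        return False
    ```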


  • New: Introduce Anki.

    Anki is a program which makes remembering things easy. Because it's a lot more efficient than traditional study methods, you can either greatly decrease your time spent studying, or greatly increase the amount you learn.

    Anyone who needs to remember things in their daily life can benefit from Anki. Since it is content-agnostic and supports images, audio, videos and scientific markup (via LaTeX), the possibilities are endless.

  • New: Interacting with python.

    Although there are some python libraries:

    I think the best way is to use AnkiConnect

    The installation process is similar to other Anki plugins and can be accomplished in three steps:

    • Open the Install Add-on dialog by selecting Tools | Add-ons | Get Add-ons... in Anki.
    • Input 2055492159 into the text box labeled Code and press the OK button to proceed.
    • Restart Anki when prompted to do so in order to complete the installation of Anki-Connect.

    Anki must be kept running in the background in order for other applications to be able to use Anki-Connect. You can verify that Anki-Connect is running at any time by accessing localhost:8765 in your browser. If the server is running, you will see the message Anki-Connect displayed in your browser window.

  • New: Use anki connect with python.
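
    A minimal sketch using only the standard library (the action names follow the Anki-Connect API; the deck name is a hypothetical example):

    ```python
    import json
    from urllib import request

    def anki_request(action: str, **params) -> dict:
        """Build the JSON payload Anki-Connect expects."""
        return {"action": action, "version": 6, "params": params}

    def invoke(action: str, **params):
        """POST the payload to the local Anki-Connect server."""
        payload = json.dumps(anki_request(action, **params)).encode()
        response = json.load(
            request.urlopen(request.Request("http://localhost:8765", payload))
        )
        if response["error"] is not None:
            raise RuntimeError(response["error"])
        return response["result"]

    # Requires Anki running with the Anki-Connect add-on:
    # invoke("createDeck", deck="test-deck")
    # print(invoke("deckNames"))
    ```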


  • New: Introduce ferdium.

    Ferdium is a desktop application to have all your services in one place. It's similar to Rambox, Franz or Ferdi only that it's maintained by the community and respects your privacy.


  • New: Introduce finnix.

    Finnix is a live Linux distribution specialized in the recovery, maintenance and testing of systems.

Github cli

  • New: Trigger a workflow run.

    To manually trigger a workflow you need to first configure it to allow workflow_dispatch events.
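
    A minimal sketch of that trigger in the workflow file:

    ```yaml
    on:
      workflow_dispatch:
    ```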


    Then you can trigger the workflow with gh workflow run {{ workflow_name }}, where you can get the workflow_name with gh workflow list


  • New: Introduce goaccess.

    goaccess is a fast terminal-based log analyzer.

    Its core idea is to quickly analyze and view web server statistics in real time without needing to use your browser (great if you want to do a quick analysis of your access log via SSH, or if you simply love working in the terminal).

    While the terminal output is the default output, it has the capability to generate a complete, self-contained real-time HTML report (great for analytics, monitoring and data visualization), as well as a JSON, and CSV report.


  • New: Introduce i3wm.

    i3 is a tiling window manager.

  • New: Layout saving.

    Layout saving/restoring allows you to load a JSON layout file so that you can have a base layout to start working with after powering on your computer.

    First of all arrange the windows in the workspace, then you can save the layout of either a single workspace or an entire output:

    i3-save-tree --workspace "1: terminal" > ~/.i3/workspace-1.json

    You need to open the created file and uncomment the lines that match the desired windows under the swallows keys, transforming the next snippet:

        "swallows": [
            {
                // "class": "^URxvt$",
                // "instance": "^irssi$"
            }
        ]

    into:

        "swallows": [
            {
                "class": "^URxvt$",
                "instance": "^irssi$"
            }
        ]

    Once it's ready, close all the windows of the workspace you want to restore (moving them away is not enough!).

    Then on a terminal you can restore the layout with:

    i3-msg 'workspace "1: terminal"; append_layout ~/.i3/workspace-1.json'

    It's important that you don't use a relative path: even if you're in ~/.i3/ you have to use i3-msg append_layout ~/.i3/workspace-1.json.

    This command will create some fake windows (called placeholders) with the layout you had before; i3 will then wait for you to create the windows that match the selection criteria. Once they appear, it will put them in their respective placeholders.

    If you wish to create the layouts at startup you can add the next snippet to your i3 config.

    exec --no-startup-id "i3-msg 'workspace \"1: terminal\"; append_layout ~/.i3/workspace-1.json'"


  • New: Convert VOB to mkv.

    • Unify your VOBs

      cat *.VOB > output.vob

    • Identify the streams

      ffmpeg -analyzeduration 100M -probesize 100M -i output.vob

      Select the streams that you are interested in, imagine that is 1, 3, 4, 5 and 6.

    • Encoding

      ffmpeg \
        -analyzeduration 100M -probesize 100M \
        -i output.vob \
        -map 0:1 -map 0:3 -map 0:4 -map 0:5 -map 0:6 \
        -metadata:s:a:0 language=ita -metadata:s:a:0 title="Italian stereo" \
        -metadata:s:a:1 language=eng -metadata:s:a:1 title="English stereo" \
        -metadata:s:s:0 language=ita -metadata:s:s:0 title="Italian" \
        -metadata:s:s:1 language=eng -metadata:s:s:1 title="English" \
        -codec:v libx264 -crf 21 \
        -codec:a libmp3lame -qscale:a 2 \
        -codec:s copy \
        output.mkv
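    The -map and -metadata flags above follow a mechanical pattern, so the argument list can also be generated. A Python sketch (the function and its parameters are illustrative, not part of any tool) that assembles the same arguments without running anything:

```python
"""Build the ffmpeg argument list for a VOB to mkv conversion."""
from typing import List, Tuple


def build_ffmpeg_args(
    input_file: str,
    output_file: str,
    streams: List[int],
    audio: List[Tuple[str, str]],
    subtitles: List[Tuple[str, str]],
) -> List[str]:
    """Assemble the ffmpeg arguments; running them is left to the caller."""
    args = ["ffmpeg", "-analyzeduration", "100M", "-probesize", "100M",
            "-i", input_file]
    # Select the streams identified in the probing step.
    for index in streams:
        args += ["-map", f"0:{index}"]
    # Label the audio and subtitle streams in output order.
    for position, (language, title) in enumerate(audio):
        args += [f"-metadata:s:a:{position}", f"language={language}",
                 f"-metadata:s:a:{position}", f"title={title}"]
    for position, (language, title) in enumerate(subtitles):
        args += [f"-metadata:s:s:{position}", f"language={language}",
                 f"-metadata:s:s:{position}", f"title={title}"]
    args += [
        "-codec:v", "libx264", "-crf", "21",
        "-codec:a", "libmp3lame", "-qscale:a", "2",
        "-codec:s", "copy",
        output_file,
    ]
    return args


args = build_ffmpeg_args(
    "output.vob",
    "output.mkv",
    streams=[1, 3, 4, 5, 6],
    audio=[("ita", "Italian stereo"), ("eng", "English stereo")],
    subtitles=[("ita", "Italian"), ("eng", "English")],
)
```

    Passing `args` to `subprocess.run` would reproduce the command above.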


  • New: Introduce khal.

    khal is a standards based Python CLI (console) calendar program, able to synchronize with CalDAV servers through vdirsyncer.

    Features:

    • Can read and write events/icalendars to vdir, so vdirsyncer can be used to synchronize calendars with a variety of other programs, for example CalDAV servers.
    • Fast and easy way to add new events.
    • ikhal (interactive khal) lets you browse and edit calendars and events.


    Limitations:

    • Only rudimentary support for creating and editing recursion rules.
    • You cannot edit the timezones of events.

  • New: Edit the events in a more pleasant way.

    The ikhal event editor is not comfortable for me. I usually only change the title or the start date and in the default interface you need to press many keystrokes to make it happen.

    A quick workaround is to pass a custom script in the EDITOR environment variable. Assuming you have questionary and ics installed, you can save the next snippet into an edit_event file in your PATH:

    """Edit an ics calendar event."""
    import sys
    import questionary
    from ics import Calendar
    file = sys.argv[1]
    with open(file, "r") as fd:
        calendar = Calendar(
    event = list(calendar.timeline)[0] = questionary.text("Title: ",
    start = questionary.text(
        "Start: ",
    event.begin = event.begin.replace(
        hour=int(start.split(":")[0]), minute=int(start.split(":")[1])
    with open(file, "w") as fd:

    Now if you open ikhal as EDITOR=edit_event ikhal, whenever you edit an event you'll get a nicer interface. To make it permanent, add to your .zshrc or .bashrc:

    alias ikhal='EDITOR=edit_event ikhal'

    The default keybinding for editing is not very comfortable either; add the next snippet to your config:

    external_edit = e
    export = meta e



  • New: Introduce pipx.

    Pipx is a command line tool to install and run Python applications in isolated environments.

    Very useful to avoid polluting your user or system Python environments.

    Install it with:

    pip install pipx


  • New: Introduce profanity.

    profanity is a console based XMPP client written in C using ncurses and libstrophe, inspired by Irssi.


  • New: Introduce LibreElec.

    LibreElec is a lightweight distribution built to run Kodi.


  • New: Compare the different torrent clients.

    BitTorrent is a communication protocol for peer-to-peer file sharing (P2P), which enables users to distribute data and electronic files over the Internet in a decentralized manner.

    Each of us seeks something different in a torrent client, thus there is a wide set of software; you just need to find the one that's best for you. In my case I'm searching for a client that:

    • Scales well for many torrents

    • Is robust

    • Is maintained

    • Is popular

    • Is supported by the private trackers: Some torrent clients are banned by trackers because they don't report correctly to the tracker when canceling/finishing a torrent session. If you use them, a few MB may not be counted towards the stats near the end, and torrents may still be listed in your profile for some time after you have closed the client. Each tracker has its own list of allowed clients, so make sure to check it.

    Also, clients in alpha or beta versions should be avoided.

    • Can be easily monitored

    • Has a Python library or an API to interact with

    • Has clear and enough logs

    • Has RSS support

    • Has a pleasant UI

    • Supports categories

    • Can unpack content once it's downloaded

    • No ads

    • Easy to use behind a VPN with IP leakage protection.

    • Easy to deploy

    I don't need other features such as:

    • Preview content
    • Search in the torrent client

    The winner has been qBittorrent.
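    Since having a Python library to interact with was one of the requirements, here's a minimal monitoring sketch. It assumes the third-party qbittorrent-api package and a reachable Web UI (host, port and credentials below are placeholders), and the filtering helper is just an illustration:

```python
"""Filter qBittorrent torrents reported by its Web API."""
from typing import Dict, List


def stalled(torrents: List[Dict]) -> List[str]:
    """Return the names of torrents whose state starts with 'stalled'."""
    return [
        torrent["name"]
        for torrent in torrents
        if torrent["state"].startswith("stalled")
    ]


# With a running qBittorrent instance you would fetch the torrent list
# (install the client library with `pip install qbittorrent-api`):
#
#   import qbittorrentapi
#   client = qbittorrentapi.Client(
#       host="localhost", port=8080, username="admin", password="adminadmin"
#   )
#   print(stalled(client.torrents_info()))
```

    The same pattern works for any other per-torrent check you want to monitor.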


  • New: Introduce vdirsyncer.

    vdirsyncer is a Python command-line tool for synchronizing calendars and addressbooks between a variety of servers and the local filesystem. The most popular use case is to synchronize a server with a local folder and use a set of other programs such as khal to change the local events and contacts. Vdirsyncer can then synchronize those changes back to the server.

    However, vdirsyncer is not limited to synchronizing between clients and servers. It can also be used to synchronize calendars and/or addressbooks between two servers directly.

    It aims to be for calendars and contacts what OfflineIMAP is for emails.
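    A minimal configuration sketch (the pair/storage names, paths, URL and credentials are all placeholders) that pairs a CalDAV server with a local directory khal can read:

```ini
[general]
status_path = "~/.vdirsyncer/status/"

[pair my_calendar]
a = "my_calendar_local"
b = "my_calendar_remote"
collections = ["from a", "from b"]

[storage my_calendar_local]
type = "filesystem"
path = "~/.calendars/"
fileext = ".ics"

[storage my_calendar_remote]
type = "caldav"
url = "https://caldav.example.com/"
username = "user"
password = "change-me"
```

    After writing the config, run vdirsyncer discover once and then vdirsyncer sync to synchronize.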



  • New: Introduce VSCodium.

    VSCodium are binary releases of VS Code without MS branding/telemetry/licensing.


  • New: Introduce wallabag.

    Wallabag is a self-hosted read-it-later application: it saves a web page by keeping content only. Elements like navigation or ads are deleted.


  • New: Introduce Wireshark, its installation and basic usage.

    Wireshark is the world’s foremost and widely-used network protocol analyzer. It lets you see what’s happening on your network at a microscopic level and is the de facto (and often de jure) standard across many commercial and non-profit enterprises, government agencies, and educational institutions.


Android Tips

  • New: Extend the life of your battery.

    Research has shown that keeping your battery charged between 0% and 80% can make your battery's lifespan last 2x longer than when you use a full battery cycle from 0-100%.

    As a non-root user you can install Accubattery (not in F-droid :( ) to get an alarm when the battery reaches 80% so that you can manually unplug it. Instead of leaving the phone charging overnight, where it sits at 100% for hours until you unplug it, charge it throughout the day.


  • New: Introduce GrapheneOS.

    GrapheneOS is a private and secure mobile operating system with great functionality and usability. It starts from the strong baseline of the Android Open Source Project (AOSP) and takes great care to avoid increasing attack surface or hurting the strong security model. GrapheneOS makes substantial improvements to both privacy and security through many carefully designed features built to function against real adversaries. The project cares a lot about usability and app compatibility so those are taken into account for all of our features.

  • New: Installation.

    I was not able to follow the web instructions so I had to follow the cli ones.

    Whenever I ran a fastboot command it got stuck at < waiting for devices >, so I added the next rule to the udev configuration at /etc/udev/rules.d/51-android.rules:

    SUBSYSTEM=="usb", ATTR{idVendor}=="18d1", ATTR{idProduct}=="4ee7", MODE="0600", OWNER="myuser"

    The idProduct and idVendor were deduced from lsusb. Then after a restart everything worked fine.



Forking this garden

  • Correction: Update forking instructions.

    I recommend against forking the repository via Github. If you do that, you'll have all the history of my repository, which will make your repository heavier than it should be (as I have a lot of images), and it will make it hard for me to make pull requests to your digital garden.

    Furthermore, you'll always see a message in your repo similar to This branch is 909 commits ahead, 1030 commits behind lyz-code:master. like you can see in this fork. Also if you don't want to keep all the content I've made so far and want to start from scratch then the only thing that is useful for you is the skeleton I've made, and I don't need any attribution or credit for that :P.

    If on the other hand you do want to keep all my content, then wouldn't it be better to just make contributions to this repository instead?

    Therefore the best way to give credit and attribution is by building your garden (the more we are writing the merrier :) ), and then if you want to spread the word that my garden exists within your content then that would be awesome.

    If you end up building your own, remember to add yourself to the digital garden's list.


Cooking software

  • New: Analysis of existing recipe manager software.

    List the expected features from the recipe manager and add links to the software found after an analysis of the state of the art. It's still a work in progress.

  • New: Finish the state of the art analysis.

    Review Cooklang, KookBook, RecipeSage, Mealie and Chowdown

Aerial Silk

  • New: Introduce Aerial Silk, some warmups and some figures.

    Aerial Silk is a type of performance in which one or more artists perform aerial acrobatics while hanging from a fabric. The fabric may be hung as two pieces, or a single piece, folded to make a loop, classified as hammock silks. Performers climb the suspended fabric without the use of safety lines and rely only on their training and skill to ensure safety. They use the fabric to wrap, suspend, drop, swing, and spiral their bodies into and out of various positions. Aerial silks may be used to fly through the air, striking poses and figures while flying. Some performers use dried or spray rosin on their hands and feet to increase the friction and grip on the fabric.




  • New: Introduce Redox.

    Redox is an awesome Do It Yourself mechanical keyboard

  • New: Installation instructions.

    First flash:

    • Download the hex from the via website.
    • Try to flash it many times, resetting the Pro Micros:

      sudo avrdude -b 57600 -p m32u4 -P /dev/ttyACM0 -c avr109 -D -U flash:w:redox_rev1_base_via.hex

    Once the write has finished, install via:

    • Reboot the computer.
    • Launch it with via-nativia.

Video Gaming

King Arthur Gold

  • New: Introduce King Arthur Gold.

    King Arthur Gold, also known as KAG, is a free Medieval Build n'Kill Multiplayer Game with Destructible Environments.

    Construct freeform forts as a medieval Builder, fight in sword duels as a Knight or snipe with your bow as an Archer. KAG blends the cooperative aspects of Lost Vikings, mashes them with the full destructibility of Worms and the visual style and action of Metal Slug, brought to you by the creators of Soldat.

  • New: Builder guides.

    Turtlebutt and Bunnie's guide is awesome.

Age of Empires

  • New: Introduce the Age of Empires videogame.

Board Gaming


  • New: Player modifiers extension.

    At the start of the game players can decide their suit, they will get a bonus on the played cards of their suit, and a penalization on the opposite suit. The opposite suits are:

    • ♠ opposite of ♥
    • ♣ opposite of ♦

    The bonus depends on the level of the enemy being:

    • J: +1 or -1
    • Q: +2 or -2
    • K: +3 or -3
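    The rules above can be sketched in a few lines of Python (the function and constants are made up for illustration, they are not part of any published ruleset):

```python
"""Compute the player modifier for the suit-based card game extension."""

# Opposite suits as defined by the extension: ♠ vs ♥ and ♣ vs ♦.
OPPOSITES = {"♠": "♥", "♥": "♠", "♣": "♦", "♦": "♣"}
# Bonus magnitude depends on the level of the enemy.
BONUS = {"J": 1, "Q": 2, "K": 3}


def modifier(player_suit: str, card_suit: str, enemy_level: str) -> int:
    """Return the bonus (positive) or penalization (negative) for a card."""
    if card_suit == player_suit:
        return BONUS[enemy_level]
    if card_suit == OPPOSITES[player_suit]:
        return -BONUS[enemy_level]
    return 0
```

    Cards of the two neutral suits get no modifier at all.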


  • New: Introduce the sudoku game.

    Sudoku is a logic-based, combinatorial number-placement puzzle. In classic Sudoku, the objective is to fill a 9 × 9 grid with digits so that each column, each row, and each of the nine 3 × 3 subgrids that compose the grid (also called "boxes", "blocks", or "regions") contain all of the digits from 1 to 9. The puzzle setter provides a partially completed grid, which for a well-posed puzzle has a single solution.
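    The three constraints translate directly into code. A short sketch (illustrative, not from any library) that checks whether a completed 9 × 9 grid is a valid solution:

```python
"""Check whether a completed Sudoku grid satisfies the three constraints."""
from typing import List


def is_valid_solution(grid: List[List[int]]) -> bool:
    """Every row, column and 3x3 box must contain the digits 1 to 9."""
    digits = set(range(1, 10))
    rows = grid
    columns = [[grid[row][col] for row in range(9)] for col in range(9)]
    boxes = [
        [grid[3 * box_row + r][3 * box_col + c]
         for r in range(3) for c in range(3)]
        for box_row in range(3)
        for box_col in range(3)
    ]
    # A unit is valid exactly when its nine cells are the nine digits.
    return all(set(unit) == digits for unit in rows + columns + boxes)
```

    A solver would use the same check restricted to the cells filled so far.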

Book Binding

  • New: Introduce book binding.

    Book binding is the process of physically assembling a book of codex format from an ordered stack of paper sheets that are folded together into sections called signatures or sometimes left as a stack of individual sheets. Several signatures are then bound together along one edge with a thick needle and sturdy thread.


Data Analysis

Recommender Systems


  • Reorganization: Reorder the blue book navigation panel.
  • New: Sum up all the VueJS documentation.
  • New: Troubleshoot Failed to resolve component: X.

    If you've already imported the component with import X from './X.vue' you may have forgotten to add the component to the components property of the module:

    export default {
      name: 'Inbox',
      components: {
        X,
      },
    }
  • Reorganization: Reorder the programming languages under a Languages section.

  • New: Bear with me or Bare with me.

    "Bear with me" is the correct form.

  • Correction: Correct argument to use pipes in terminals.

    You don't use check=True but shell=True. Thanks pawamoy!

  • Correction: Update http versions to HTTP/2.0.

    It seems that the correct protocol is HTTP/2.0 now:

  • New: Introduce the Alder tree.

    Alders are trees comprising the genus Alnus in the birch family Betulaceae (like the birch). The genus parts are "al" which means "close by" and "lan" which means "side of the river", so they are trees that grow close to rivers or creeks.

  • Correction: Give more details of the beech tree.

     The leaves of beech trees are elliptic, a little pointy at the end, flat, and with a short petiole. They are big and wide leaves ranging from 4-9 cm long. Very abundant, they have a light green colour with a darker tone and glossy on the upper side.

    The fruit is a small, sharply three-angled nut 10-15 mm long, borne singly or in pairs in soft-spined husks 1.5-2.5 cm long, known as cupules. The husk can have a variety of spine- to scale-like appendages, the character of which is, in addition to leaf shape, one of the primary ways beeches are differentiated. The nuts are edible, though bitter (though not nearly as bitter as acorns) with a high tannin content, and are called beechnuts or beechmast.

    They are big trees, easily reaching between 30 and 45 meters. A beech looks very different if it's isolated or part of a forest: in the first case the branches grow from the middle of the trunk and are horizontal, while in the second the branches go up and start above the middle of the trunk. The principal root is very powerful, with very strong secondary roots, developing lateral superficial roots.

    The trunk is straight with a grayish-cinder bark, smooth until it's old, usually covered by moss and lichen. Smaller branches are zigzagging with reddish-brown pointy buds.

    The canopy is big, dense, rounded and semi-spheric, giving a lot of shadow.

    It grows slowly in the first years, being most active between the tenth and twelfth years, reaching its maximum height when it's twenty-five, although it lives around three hundred years.

  • Correction: Give more details of the birch tree.

    The simple leaves are rhomboidal, between 3 and 6 cm, singly or doubly serrate except at the base, feather-veined, petiolate and stipulate. Although they are alternate, many leaves grow from each side of the branch, making some think that they are not alternate. They often appear in pairs, but these pairs are really borne on spur-like, two-leaved, lateral branchlets.

    The canopy is rounded and irregular, giving little shade.

    The fruit is a small samara, although the wings may be obscure in some species. They differ from the alders in that the female catkins are not woody and disintegrate at maturity, falling apart to release the seeds, unlike the woody, cone-like female alder catkins.

    The bark of all birches is characteristically smooth and white, although in older ones the lower part is usually cracked and takes blackish brown colours. It's marked with long, horizontal lenticels, and often separates into thin, papery plates, especially upon the paper birch.

  • New: How to tell apart the different trees.

    Alder vs Beech:

    Property                        Beech                       Alder
    Leaf border                     flat                        sparsely toothed
    Leaf form                       elliptic                    rounded
    Same colour both sides          no (darker and glossy up)   yes
    Sticky leaves                   no                          yes
    Size                            30-45m                      10-12m (in Europe)
    Knots on the roots with fungi   no                          yes
    Where they grow                 everywhere                  close to rivers or creeks

    Alder vs Birch:

    Property          Birch                      Alder
    Leaf border       heavy toothed              sparsely toothed
    Leaf form         rhomboidal                 rounded
    Sticky leaves     no                         yes
    Where they grow   very close to each other   close to rivers or creeks

    Beech vs Birch:

    Property                 Beech                       Birch
    Leaf border              flat                        heavy toothed
    Leaf form                elliptic                    rhomboidal
    Size                     30-45m                      10-15m (in Europe)
    Same colour both sides   no (darker and glossy up)   yes
    Where they grow          everywhere                  very close to each other