
January of 2024

Activism

  • New: Introduction to activism.

    Activism consists of efforts to promote, impede, direct or intervene in social, political, economic or environmental reform with the desire to make changes in society toward a perceived greater good.

Life Management

Life Management

Life review

  • New: Thoughts on the reviews themselves.

    • Keep It Simple: It's important for the process to be light enough that you actually want to do it, so you see it as a help instead of a burden. It's always better to do a small and quick review than nothing at all. At the start of the review, assess how much energy you have and decide which steps of the review you want to do.

    • Review approaches: In the past I used the life logging tools to analyze the past in order to understand what I achieved and take it as a base to learn from my mistakes. It was useful when I needed the endorphin boost of seeing all the progress done. Once I accepted that progress speed and understood that we always do the best we can given how we are, I started to feel that the review process was too cumbersome and that it was holding me in the past.

    Nowadays I try not to look back but forward and analyze the present: how I feel, how the environment around me is, and how I can tweak both to fulfill my life goals. This approach leads to less reviewing of achievements and logs and more introspection, thinking and imagining. Although it may be slower at correcting mistakes of the past, it will surely make you live closer to the utopia.

    The reviews below then follow that second approach.

    • Personal alive reviews: Reviews have to reflect ourselves, and we change continuously, so take for granted that your review is going to change.

    I've gone from full-blown reviews, locking myself up for a week, to not doing reviews for months.

    This article represents the guidelines I follow to do my life review. It may seem like a lot to you or it may be very simple. Please take it as a base or as a source of ideas, and then create your own that fits your needs.

  • New: Update the Month review process.

  • New: When to do the trimester reviews.

    As with month reviews, it's interesting to do the analysis at representative moments. It gives it an emotional weight. You can for example use the solstices, or my personal version of the solstices:

    • Spring analysis (1st of March): For me spring is the real start of the year; it's when life explodes after the stillness of winter. The sun starts to set late enough that you have light in the afternoons, the climate gets warmer, inviting you to be outside more, and nature is blooming with new leaves and flowers. It is then a moment to build new projects and set the current year on track.

    • Summer analysis (1st of June): I hate heat, so summer is a moment of retreat. Everyone temporarily stops their lives, we go on holidays and all social projects slow their pace. Even the news has less interesting things to report. It's so hot outside that some of us seek the cold refuge of home or remote holiday places. Days are long and people love to hang out till late, so usually you wake up later, thus having less time to actually do stuff. Even in the moments when you are alone, the heat drains your energy to be productive. It is then a moment to relax and gather forces for the next trimester. It's also perfect for developing the easy and chill personal projects that have been forgotten in a drawer. Lower your expectations and just flow with what your body asks of you.

    • Autumn analysis (1st of September): September is another key moment for many people. We have it hardcoded in our lives since we were children, as it was the start of school. People feel energized after the summer holidays and are eager to get back to their lives and paused projects. You're already six months into the year, so it's a good moment to review your year plan and decide how you want to invest your energy reserves.

    • Winter analysis (1st of December): December is the cue that the year is coming to an end. The days grow shorter and colder; they basically invite you to enjoy a cup of tea under a blanket. It is then a good time to retreat into your cave, do an introspective analysis of the whole year and prepare the ground for the coming year. Some of the goals of this season are:

    • Think through everything you need to guarantee a good, solid and powerful spring start.
    • Do the year review to adjust your principles.

    The year is then divided into two pairs of an expansion trimester and a retreat one. We can use this information to adjust our life plan accordingly: in the expansion trimesters we can invest more energy in planning, and in the retreat ones we can do more thorough reviews.

  • New: The principle documents.

    Principle documents for me are orgmode documents where I think about the principle itself. Each acts both as a way of understanding the principle and evolving my ideas around it, and as a place to build the roadmap that materializes the principle's path.

    Without ever having created one, I feel that it makes sense to make the reflection part public in the blue book, while I keep the private part to myself. This may also change between principles.

  • New: The life path document.

    The life path document is an orgmode document where I think about what I want to do with my life and how. It's the highest level of abstraction of the life management system.

    The structure so far is as follows:

    * Life path
    ** {year}
    *** Principles of {season} {year}
        {Notes on the season}
        - Principle 1
        - Principle 2
        ...
    
    **** Objectives of {month} {year}
         - [-] Objective 1
           - [X] SubObjective 1
           - [ ] SubObjective 2
         - [ ] Objective 2
         - [ ] ...
    

    Where the principles are usually links to principle documents and the objectives are links to tasks.

  • New: Trimester prepare.

    The trimester review requires an analysis that doesn't fit in a day's session. It requires slow thinking over some time. So I'm creating a task 10 days before the actual review to start thinking about the next trimester, whether it's ideas, plans, desires, objectives or principles.

    It's useful for that document to be available wherever you go, so that in any spare moment you can pull it up and continue the train of thought.

    Doing the reflection without seeing your life path prevents you from being tainted by it, thus representing the real you of right now.

    On the day to actually do the review, follow the steps of the Month review prepare adjusting them to the trimester case.

Task Management

Org Mode

  • New: Start working on a task dates.

    SCHEDULED defines when you plan to start working on that task.

    The headline is listed under the given date. In addition, a reminder that the scheduled date has passed is present in the compilation for today, until the entry is marked as done or disabled.

    *** TODO Call Trillian for a date on New Years Eve.
        SCHEDULED: <2004-12-25 Sat>
    

    Although it's not a good idea (as it promotes kicking the can down the road), if you want to delay the display of this task in the agenda, use SCHEDULED: <2004-12-25 Sat -2d>: the task is still scheduled on the 25th but will appear two days later. In case the task contains a repeater, the delay is considered to affect all occurrences; if you want the delay to only affect the first scheduled occurrence of the task, use --2d instead.
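
    For example, a sketch of such a delayed task, reusing the one above:

    *** TODO Call Trillian for a date on New Years Eve.
        SCHEDULED: <2004-12-25 Sat -2d>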

    Scheduling an item in Org mode should not be understood in the same way as scheduling a meeting. Setting a date for a meeting is just a simple appointment; you should mark such an entry with a plain timestamp to get it shown on the date it applies. This is a frequent misunderstanding among Org users. In Org mode, scheduling means setting a date when you want to start working on an action item.

    You can set it with <leader>s (Default: <leader>ois)

  • New: Deadlines.

    DEADLINE is like an appointment in the sense that it defines when the task is supposed to be finished. On the deadline date, the task is listed in the agenda. The difference from appointments is that you also see the task in your agenda if it is overdue, and you can set a warning about the approaching deadline, starting org_deadline_warning_days before the due date (14 by default). It's useful to set a DEADLINE for those tasks where you don't want to miss the due date.

    An example:

    * TODO Do this
    DEADLINE: <2023-02-24 Fri>
    

    You can set it with <leader>d (Default: <leader>oid).

    If you need a different warning period for a special task, you can specify it. For example, to set a warning period of 5 days: DEADLINE: <2004-02-29 Sun -5d>.
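
    As a minimal sketch (the task name is illustrative):

    * TODO Submit the report
      DEADLINE: <2004-02-29 Sun -5d>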

    If you're like me, you may want to remove the warning feature of DEADLINES to be able to keep your agenda clean. Most of the time you're able to finish the task within the day, and for those you can't, you can specify a SCHEDULED date. To do so, set the default number of days to 0.

    require('orgmode').setup({
      org_deadline_warning_days = 0,
    })
    

    Using too many tasks with a DEADLINE will clutter your agenda. Use it only for the actions you need a reminder for; for the rest, try using appointment dates instead. The problem with appointments is that once the date is over you don't get a reminder in the agenda that it's overdue; if you need that, use DEADLINE instead.

  • New: How to deal with overdue SCHEDULED and DEADLINE tasks.

  • New: Introduce org-rw.

    org-rw is a Python library to process your orgmode files.

    Installation:

    pip install org-rw
    

    Load an orgmode file:

    from org_rw import load
    
    with open('your_file.org', 'r') as f:
        doc = load(f)
    

Habit management

  • New: Introduce habit management.

    A habit is a routine of behavior that is repeated regularly and tends to occur subconsciously.

    A 2002 daily experience study found that approximately 43% of daily behaviors are performed out of habit. New behaviours can become automatic through the process of habit formation. Old habits are hard to break and new habits are hard to form because the behavioural patterns that humans repeat become imprinted in neural pathways, but it is possible to form new habits through repetition.

    When behaviors are repeated in a consistent context, there is an incremental increase in the link between the context and the action. This increases the automaticity of the behavior in that context. Features of an automatic behavior are all or some of: efficiency, lack of awareness, unintentionality, and uncontrollability.

    Mastering habit formation can be a powerful tool to change yourself. Usually small changes yield massive outcomes in the long run. The downside is that it's not for impatient people, as it often appears to make no difference until you cross a critical threshold that unlocks a new level of performance.

  • New: Why are habits interesting.

    Whenever you face a problem repeatedly, your brain begins to automate the process of solving it. Habits are a series of automatic resolutions that solve the problems and stresses you face regularly.

    As habits are created, the level of activity in the brain decreases. You learn to lock in on the cues that predict success and tune out everything else. When a similar situation arises in the future, you know exactly what to look for. There is no longer a need to analyze every angle of a situation. Your brain skips the process of trial and error and creates a mental rule: if this, then that.

    Habit formation is incredibly useful because the conscious mind is the bottleneck of the brain. It can only pay attention to one problem at a time. Habits reduce the cognitive load and free up mental capacity, as they can be carried out by your nonconscious mind while you allocate your attention to other tasks.

  • New: Identity focused changes.

    Changing our habits is challenging because we try to change the wrong thing in the wrong way.

    There are three levels at which change can occur:

    • Outcomes: Changing your results. Goals fall under this category: publishing a book, running daily.
    • Process: Changing your habits and systems: decluttering your desk for a better workflow, developing a meditation practice.
    • Identity: Changing your beliefs, assumptions and biases: your world view, your self-image, your judgments.

    Many people begin the process of changing their habits by focusing on what they want to achieve. This leads to outcome-based habits. The alternative is to build identity-based habits. With this approach, we start by focusing on who we wish to become.

    The first path of change is doomed because maintaining behaviours that are incongruent with the self is expensive and will not last, even if they make rational sense. Thus it's hard to change your habits if you never change the underlying beliefs that led to your past behaviour. On the other hand, it's easy to find motivation once a habit has changed your identity, as you may be proud of it and will be willing to maintain all the habits and systems associated with it. For example: the goal is not to read a book, but to become a reader.

    Focusing on outcomes may also bring the next problems:

    • Focusing on the results may lead you to temporary solutions. If you focus on the source of the issue at hand, you may solve it with less effort and reach a more stable solution.
    • Goals create an "either-or" conflict: either you achieve your goal and are successful or you fail and you are disappointed. Thus you only get a positive reward if you fulfill a goal. If you instead focus on the process rather than the result, you will be satisfied anytime your system is running.
    • When your hard work is focused on a goal, you may feel depleted once you meet it, and that could make you lose the conditions that made you meet the goal in the first place.

    Research has shown that once a person believes in a particular aspect of their identity, they are more likely to act in alignment with that belief. This of course is a double-edged sword. Identity change can be a powerful force for self-improvement. When working against you, identity change can be a curse.

  • New: Changing your identity.

    Whatever your identity is right now, you only believe it because you have proof of it. The more evidence you have for a belief, the more strongly you will believe it.

    Your habits and systems are how you embody your identity. When you make your bed each day, you embody the identity of an organized person. The more you repeat a behaviour, the more you reinforce the identity associated with that behaviour. To the point that your self-image begins to change. The effect of one-off experiences tends to fade away while the effect of habits gets reinforced with time, which means your habits contribute most of the evidence that shapes your identity.

    Every action you take is a vote for the type of person you wish to become. This is one reason why meaningful change does not require radical change. Small habits can make a meaningful difference by providing evidence of a new identity.

    Once you start the ball rolling things become easier as building habits is a feedback loop. Your habits shape your identity, and your identity shapes your habits.

    The most practical way to change your identity is to decide the type of person you want to be and then prove it to yourself with small wins.

    Another advantage of focusing on the type of person you want to be is that maybe the outcome you wanted to focus on is not the wisest small step towards that identity change. Thinking about the identity you want to embrace can make you think outside the box.

  • New: Decide the type of person you want to be.

    One way to decide the person you want to be is to answer big questions like: what do you want to stand for? What are your principles and values? Who do you wish to become?

    As we're more result-oriented, another way is to work backwards from the results to the person you want to be. Ask yourself: who is the type of person that could get the outcome I want?

  • New: How to change a habit.

    The process of building a habit from a behaviour can be divided into four stages:

    • Reward is the end goal.
    • Cue is the trigger in your brain that initiates a behaviour. It contains the information that predicts a reward.
    • Cravings are the motivational force fueled by the desire of the reward. Without motivation we have no reason to act.
    • Response is the thought or action you perform to obtain the reward. The response depends on the amount of motivation you have, how much friction is associated with the behaviour and your ability to actually do it.

    If a behaviour is insufficient in any of the four stages, it will not become a habit. Eliminate the cue and your habit will never start. Reduce the craving and you won't have enough motivation to act. Make the behaviour difficult and you won't be able to do it. And if the reward fails to satisfy your desire, then you'll have no reason to do it again in the future.

    We chase rewards because they:

    • Deliver contentment.
    • Satisfy your craving.
    • Teach us which actions are worth remembering in the future.

    If a reward is met then it becomes associated with the cue, thus closing the habit feedback loop.

    If we keep these stages in mind then:

    • To build good habits we need to:

      • Cue: Make it obvious
      • Craving: Make it attractive
      • Response: Make it easy
      • Reward: Make it satisfying
    • To break bad habits we need to:

      • Cue: Make it invisible
      • Craving: Make it unattractive
      • Response: Make it difficult
      • Reward: Make it unsatisfying

Knowledge Management

Anki

Grocy Management

  • New: Doing the inventory review.

    I haven't found a way to make the grocy inventory match reality, because it's hard for me to register when I consume a product, even more so when other people also use them. Therefore I use grocy only to know what to buy without thinking about it. For that use case the inventory only needs to match reality right before doing the groceries. I usually do a big shopping of non-perishable goods at the supermarket once every two or three months, and a weekly shopping for the rest.

    Tracking the goods that are bought each week makes no sense, as those are things that are clearly seen and vary a lot depending on the season. Once I've automated the registration of incoming and consumed products it will make sense, but so far it would mean investing more time than the benefit it brings.

    This doesn't apply to the big shopping, as that one is done infrequently, so it needs better planning.

    To do the inventory review I use a tablet and the Android app.

    • Open the stock overview and iterate through the locations to:
      • Make sure that the number of products matches reality
        • Iterate over the list of products checking the quantity
        • Look at the location to see if there are missing products in the inventory
      • Adjust the product properties (default location, minimum amount)
    • Check the resulting shopping list and adjust the minimum values.
    • Check the list of missing products to adjust the minimum values. I have a notepad on the fridge where I write down the things I miss.

Coding

Languages

Bash snippets

Configure Docker to host the application

  • New: Inspect contents of Lua table in Neovim.

    When using Lua inside of Neovim you may need to view the contents of Lua tables, which are a first-class data structure in the Lua world. Tables in Lua can represent ordinary arrays, lists, symbol tables, sets, records, graphs, trees, etc.

    If you try to just print a table directly, you will get the reference address for that table instead of the content, which is not very useful for most debugging purposes:

    :lua print(vim.api.nvim_get_mode())
    " table: 0x7f5b93e5ff88
    

    To solve this, Neovim provides the vim.inspect function as part of its API. It serializes the content of any Lua object into a human readable string.

    For example you can get information about the current mode like so:

    :lua print(vim.inspect(vim.api.nvim_get_mode()))
    " {  blocking = false,  mode = "n"}
    
  • New: Send logs to journald.

    The journald logging driver sends container logs to the systemd journal. Log entries can be retrieved using the journalctl command, through use of the journal API, or using the docker logs command.

    In addition to the text of the log message itself, the journald log driver stores the following metadata in the journal with each message:

    | Field | Description |
    | --- | --- |
    | CONTAINER_ID | The container ID truncated to 12 characters. |
    | CONTAINER_ID_FULL | The full 64-character container ID. |
    | CONTAINER_NAME | The container name at the time it was started. If you use docker rename to rename a container, the new name isn't reflected in the journal entries. |
    | CONTAINER_TAG, SYSLOG_IDENTIFIER | The container tag (log tag option documentation). |
    | CONTAINER_PARTIAL_MESSAGE | A field that flags log integrity. Improve logging of long log lines. |

    To use the journald driver as the default logging driver, set the log-driver and log-opts keys to appropriate values in the daemon.json file, which is located in /etc/docker/.

    {
      "log-driver": "journald"
    }
    

    Restart Docker for the changes to take effect.
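
    To try it out, a minimal sketch (the container name web and the nginx image are just examples; both commands are standard docker/journalctl usage):

    docker run -d --log-driver=journald --name web nginx
    journalctl CONTAINER_NAME=web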

  • New: Send the logs to loki.

    There are many ways to send logs to loki:

    • Using the json driver and sending them to loki with promtail with the docker driver
    • Using the docker plugin: Grafana Loki officially supports a Docker plugin that will read logs from Docker containers and ship them to Loki.

    I would not recommend using this path because there is a known issue that deadlocks the docker daemon :S. The driver keeps all logs in memory and will drop log entries if Loki is not reachable and max_retries has been exceeded. To avoid dropping log entries, setting max_retries to zero allows unlimited retries; the driver will then keep trying forever until Loki is reachable again. Trying forever may have undesired consequences, because the Docker daemon will wait for the Loki driver to process all logs of a container until the container is removed. Thus, the Docker daemon might wait forever if the container is stuck.

    The wait time can be lowered by setting loki-retries=2, loki-max-backoff=800ms, loki-timeout=1s and keep-file=true. This way the daemon will be locked only for a short time and the logs will be persisted locally when the Loki client is unable to reconnect.

    To avoid this issue, use the Promtail Docker service discovery.

    • Using the journald driver and sending them to loki with promtail's journald scraping. This has worked for me, but the labels extracted are not that great.

  • New: Solve syslog getting filled up with docker network recreation.

    If you find yourself with your syslog getting filled up by lines similar to:

     Jan 15 13:19:19 home kernel: [174716.097109] eth2: renamed from veth0adb07e
     Jan 15 13:19:20 home kernel: [174716.145281] IPv6: ADDRCONF(NETDEV_CHANGE): vethcd477bc: link becomes ready
     Jan 15 13:19:20 home kernel: [174716.145337] br-1ccd0f48be7c: port 5(vethcd477bc) entered blocking state
     Jan 15 13:19:20 home kernel: [174716.145338] br-1ccd0f48be7c: port 5(vethcd477bc) entered forwarding state
     Jan 15 13:19:20 home kernel: [174717.081132] br-fbe765bc7d0a: port 2(veth31cdd6f) entered disabled state
     Jan 15 13:19:20 home kernel: [174717.081176] vethc4da041: renamed from eth0
     Jan 15 13:19:21 home kernel: [174717.214911] br-fbe765bc7d0a: port 2(veth31cdd6f) entered disabled state
     Jan 15 13:19:21 home kernel: [174717.215917] device veth31cdd6f left promiscuous mode
     Jan 15 13:19:21 home kernel: [174717.215919] br-fbe765bc7d0a: port 2(veth31cdd6f) entered disabled state
    

    It probably means that some docker container is getting recreated continuously. Those traces are normal logs of docker creating its networks, but since they are emitted each time the container starts, if it's restarting continuously then you have a problem.
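
    A quick way to spot the culprit is to check the containers' status (the grep pattern is illustrative):

    docker ps -a --format '{{.Names}}\t{{.Status}}' | grep -i restarting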

Boto3

  • New: Get running instances.

    import boto3
    
    ec2 = boto3.client('ec2')
    
    running_instances = [
        instance
        for page in ec2.get_paginator('describe_instances').paginate()
        for reservation in page['Reservations']
        for instance in reservation['Instances']
        if instance['State']['Name'] == 'running'
    ]
    

SQLite

Python Snippets

Inotify

  • New: Introduce python_inotify.

    inotify is a python library that acts as a bridge to the inotify Linux kernel subsystem, which allows you to register one or more directories for watching and to simply block and wait for notification events. This is obviously far more efficient than polling one or more directories to determine if anything has changed.

    Installation:

    pip install inotify
    

    Basic example using a loop:

    import inotify.adapters
    
    def _main():
        i = inotify.adapters.Inotify()
    
        i.add_watch('/tmp')
    
        with open('/tmp/test_file', 'w'):
            pass
    
        for event in i.event_gen(yield_nones=False):
            (_, type_names, path, filename) = event
    
            print("PATH=[{}] FILENAME=[{}] EVENT_TYPES={}".format(
                  path, filename, type_names))
    
    if __name__ == '__main__':
        _main()
    

    Output:

    PATH=[/tmp] FILENAME=[test_file] EVENT_TYPES=['IN_MODIFY']
    PATH=[/tmp] FILENAME=[test_file] EVENT_TYPES=['IN_OPEN']
    PATH=[/tmp] FILENAME=[test_file] EVENT_TYPES=['IN_CLOSE_WRITE']
    

    Basic example without a loop:

    import inotify.adapters
    
    def _main():
        i = inotify.adapters.Inotify()
    
        i.add_watch('/tmp')
    
        with open('/tmp/test_file', 'w'):
            pass
    
        events = i.event_gen(yield_nones=False, timeout_s=1)
        events = list(events)
    
        print(events)
    
    if __name__ == '__main__':
        _main()
    

    The wait happens in the list(events) line.

Pydantic

  • New: Nicely show validation errors.

    A nice way of showing it is to capture the error and print it yourself:

    import logging

    from pydantic import ValidationError

    log = logging.getLogger(__name__)

    try:
        # Model is your pydantic model and state the data to validate
        model = Model(
            state=state,
        )
    except ValidationError as error:
        log.error(f'Error building model with state {state}')
        raise error
    
  • New: Load a pydantic model from json.

    You can use the model_validate_json method that will validate and return an object with the loaded data.

    from datetime import date
    
    from pydantic import BaseModel, ConfigDict, ValidationError
    
    class Event(BaseModel):
        model_config = ConfigDict(strict=True)
    
        when: date
        where: tuple[int, int]
    
    json_data = '{"when": "1987-01-28", "where": [51, -1]}'
    print(Event.model_validate_json(json_data))
    
    try:
        Event.model_validate({'when': '1987-01-28', 'where': [51, -1]})
    
    except ValidationError as e:
        print(e)
        """
        2 validation errors for Event
        when
          Input should be a valid date [type=date_type, input_value='1987-01-28', input_type=str]
        where
          Input should be a valid tuple [type=tuple_type, input_value=[51, -1], input_type=list]
        """
    
  • New: Create part of the attributes in the initialization stage.

    import sqlite3
    from pathlib import Path

    from pydantic import BaseModel, ConfigDict

    class Sqlite(BaseModel):
        model_config = ConfigDict(arbitrary_types_allowed=True)

        path: Path
        db: sqlite3.Cursor

        def __init__(self, **kwargs):
            # Build the cursor from the path before pydantic validates the fields
            conn = sqlite3.connect(kwargs['path'])
            kwargs['db'] = conn.cursor()
            super().__init__(**kwargs)
    
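    A minimal usage sketch (the file path is illustrative):

    sqlite = Sqlite(path=Path('/tmp/test.db'))
    sqlite.db.execute('CREATE TABLE IF NOT EXISTS test (id INTEGER)')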

Git

DevOps

  • New: Set default quality of request per user.

    Sometimes one specific user continuously asks for a better quality of the content. If you go into the user configuration (as admin) you can set the default quality profiles for that user.

Infrastructure as Code

Ansible Snippets

Gitea

  • Correction: Update disable regular login with oauth.

    The last signin_inner.tmpl failed with the latest version. I've uploaded the new working one.

Infrastructure Solutions

Kubernetes

  • New: Introduce IceKube.

    IceKube is a tool for finding complex attack paths in Kubernetes clusters. It's like Bloodhound for Kubernetes. It uses Neo4j to store and analyze Kubernetes resource relationships, and from them it identifies attack paths and security misconfigurations.

AWS Savings plan

  • New: Understanding how reserved instances are applied.

    A Reserved Instance that is purchased for a Region is called a regional Reserved Instance, and provides Availability Zone and instance size flexibility.

    • The Reserved Instance discount applies to instance usage in any Availability Zone in that Region.
    • The Reserved Instance discount applies to instance usage within the instance family, regardless of size—this is known as instance size flexibility.

    With instance size flexibility, the Reserved Instance discount applies to instance usage for instances that have the same family, generation, and attribute. The Reserved Instance is applied from the smallest to the largest instance size within the instance family based on the normalization factor.

    The discount applies either fully or partially to running instances of the same instance family, depending on the instance size of the reservation, in any Availability Zone in the Region. The only attributes that must be matched are the instance family, tenancy, and platform.

    The following table lists the different sizes within an instance family, and the corresponding normalization factor. This scale is used to apply the discounted rate of Reserved Instances to the normalized usage of the instance family.

    | Instance size | Normalization factor |
    | --- | --- |
    | nano | 0.25 |
    | micro | 0.5 |
    | small | 1 |
    | medium | 2 |
    | large | 4 |
    | xlarge | 8 |
    | 2xlarge | 16 |
    | 3xlarge | 24 |
    | 4xlarge | 32 |
    | 6xlarge | 48 |
    | 8xlarge | 64 |
    | 9xlarge | 72 |
    | 10xlarge | 80 |
    | 12xlarge | 96 |
    | 16xlarge | 128 |
    | 18xlarge | 144 |
    | 24xlarge | 192 |
    | 32xlarge | 256 |
    | 48xlarge | 384 |
    | 56xlarge | 448 |
    | 112xlarge | 896 |

    For example, a t2.medium instance has a normalization factor of 2. If you purchase a t2.medium default tenancy Amazon Linux/Unix Reserved Instance in the US East (N. Virginia) and you have two running t2.small instances in your account in that Region, the billing benefit is applied in full to both instances.

    Or, if you have one t2.large instance running in your account in the US East (N. Virginia) Region, the billing benefit is applied to 50% of the usage of the instance.
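
    As a toy Python sketch of the normalization arithmetic (it only knows the sizes from the examples above; it's illustrative, not an AWS API):

    NORMALIZATION = {'small': 1, 'medium': 2, 'large': 4}

    def coverage(reserved_size: str, running_sizes: list[str]) -> float:
        """Fraction of the running instances' normalized usage covered by the reservation."""
        reserved_units = NORMALIZATION[reserved_size]
        used_units = sum(NORMALIZATION[size] for size in running_sizes)
        return min(1.0, reserved_units / used_units)

    print(coverage('medium', ['small', 'small']))  # 1.0: both t2.small fully covered
    print(coverage('medium', ['large']))  # 0.5: half of the t2.large usage is covered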

    Limitations:

    • Supported: Instance size flexibility is only supported for Regional Reserved Instances.
    • Not supported: Instance size flexibility is not supported for the following Reserved Instances:
      • Reserved Instances that are purchased for a specific Availability Zone (zonal Reserved Instances)
      • Reserved Instances for G4ad, G4dn, G5, G5g, and Inf1 instances
      • Reserved Instances for Windows Server, Windows Server with SQL Standard, Windows Server with SQL Server Enterprise, Windows Server with SQL Server Web, RHEL, and SUSE Linux Enterprise Server
      • Reserved Instances with dedicated tenancy
  • New: EC2 Instance savings plan versus reserved instances.

    I've been comparing EC2 Reserved Instances with the EC2 instance family savings plans and decided to go with the second because:

    • They both have almost the same rates. Reserved instances round the price at the third decimal and savings plans at the fourth, but this difference is negligible.
    • Savings plans are easier to calculate, as you just need to multiply the number of instances you want by the current rate and add them all up.
    • Easier to understand: To reserve instances you need to take into account the instance flexibility and the normalization factors which makes it difficult both to make the plans and also to audit how well you're using it.
    • Easier to audit: In addition to the above point, you have nice dashboards to see the coverage and utilization over time of your ec2 instance savings plans, which are at the same place as the other savings plans.
  • New: Important notes when doing a savings plan.

    • Always use the reservation rates instead of the on-demand rates!
    • Analyze your coverage reports. You don't want to have many points of 100% coverage, as it means that you're using fewer resources than you've reserved. On the other hand, it's fine to sometimes use fewer resources than reserved if that means greater overall savings. It's a tight balance.
    • The Savings plan reservation is accounted at the hour level, not at the month or year level. That means that if you reserve 1/hour of an instance type and you use for example 2/hour half the day and 0/hour the other half, you'll have 100% coverage of your plan plus 1/hour of on-demand infrastructure cost during the first half of the day, and 0% coverage during the second half. This means that you should only reserve the amount of resources you plan to be using 100% of the time throughout your savings plan. Again, you may want to overcommit a little bit, reducing the utilization percentage of the plan but getting better overall savings, as shown in the sketch below.
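
    A toy Python sketch of that hour-level accounting (numbers from the example above):

    # Reserved: 1 unit/hour. Usage: 2 units/hour for 12h, then 0 units/hour for 12h.
    reserved = 1
    usage = [2] * 12 + [0] * 12

    covered = sum(min(reserved, used) for used in usage)  # usage-hours billed at the plan rate: 12
    on_demand = sum(max(0, used - reserved) for used in usage)  # usage-hours billed on-demand: 12
    idle = sum(max(0, reserved - used) for used in usage)  # reserved hours left unused: 12
    print(covered, on_demand, idle)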

Storage

OpenZFS

  • New: Solve the pool or dataset is busy error.

    If you get a pool or dataset is busy error, run the following command to see which processes are still using the pool:

    lsof 2>/dev/null | grep dataset-name
    

ZFS Prometheus exporter

  • Correction: Tweak the zfs_exporter target not available error.

    Remember to set the scrape_timeout to at least 60s, as the exporter is sometimes slow to answer, especially on low hardware resources.

     - job_name: zfs_exporter
       metrics_path: /metrics
       scrape_timeout: 60s
       static_configs:
       - targets: [192.168.3.236:9134]
       metric_relabel_configs:
       ...
    

Monitoring

Promtail

  • New: Introduce Promtail.

    Promtail is an agent which ships the contents of local logs to a Loki instance.

    It is usually deployed to every machine that runs applications which need to be monitored.

    It primarily:

    • Discovers targets
    • Attaches labels to log streams
    • Pushes them to the Loki instance.
  • New: Scrape journald logs.

    On systems with systemd, Promtail also supports reading from the journal. Unlike file scraping which is defined in the static_configs stanza, journal scraping is defined in a journal stanza:

    scrape_configs:
      - job_name: journal
        journal:
          json: false
          max_age: 12h
          path: /var/log/journal
          labels:
            job: systemd-journal
        relabel_configs:
          - source_labels: ['__journal__systemd_unit']
            target_label: unit
          - source_labels: ['__journal__hostname']
            target_label: hostname
          - source_labels: ['__journal_syslog_identifier']
            target_label: syslog_identifier
          - source_labels: ['__journal_transport']
            target_label: transport
          - source_labels: ['__journal_priority_keyword']
            target_label: keyword
    

    All fields defined in the journal section are optional, and are just provided here for reference.

    • max_age ensures that no older entry than the time specified will be sent to Loki; this circumvents entry too old errors.
    • path tells Promtail where to read journal entries from.
    • labels map defines a constant list of labels to add to every journal entry that Promtail reads.
    • matches field adds journal filters. If multiple filters are specified matching different fields, the log entries are filtered by both, if two filters apply to the same field, then they are automatically matched as alternatives.
    • When the json field is set to true, messages from the journal will be passed through the pipeline as JSON, keeping all of the original fields from the journal entry. This is useful when you don’t want to index some fields but you still want to know what values they contained.
    • When Promtail reads from the journal, it brings in all fields prefixed with __journal_ as internal labels. Like in the example above, the _SYSTEMD_UNIT field from the journal was transformed into a label called unit through relabel_configs. Keep in mind that labels prefixed with __ will be dropped, so relabeling is required to keep these labels. Look at the systemd man pages for a list of fields exposed by the journal.

    By default, Promtail reads from the journal by looking in the /var/log/journal and /run/log/journal paths. If running Promtail inside of a Docker container, the path appropriate to your distribution should be bind mounted inside of Promtail along with binding /etc/machine-id. Bind mounting /etc/machine-id to the path of the same name is required for the journal reader to know which specific journal to read from.

    docker run \
      -v /var/log/journal/:/var/log/journal/ \
      -v /run/log/journal/:/run/log/journal/ \
      -v /etc/machine-id:/etc/machine-id \
      grafana/promtail:latest \
      -config.file=/path/to/config/file.yaml
    
  • New: Scrape docker logs.

    Docker service discovery allows retrieving targets from a Docker daemon. It will only watch containers of the Docker daemon referenced with the host parameter. Docker service discovery should run on each node in a distributed setup. The containers must run with either the json-file or journald logging driver.

    Note that the discovery will not pick up finished containers. That means Promtail will not scrape the remaining logs from finished containers after a restart.

    scrape_configs:
      - job_name: docker
        docker_sd_configs:
          - host: unix:///var/run/docker.sock
            refresh_interval: 5s
        relabel_configs:
          - source_labels: ['__meta_docker_container_id']
            target_label: docker_id
          - source_labels: ['__meta_docker_container_name']
            target_label: docker_name
    
    The available meta labels are:

    • __meta_docker_container_id: the ID of the container
    • __meta_docker_container_name: the name of the container
    • __meta_docker_container_network_mode: the network mode of the container
    • __meta_docker_container_label_<labelname>: each label of the container
    • __meta_docker_container_log_stream: the log stream type stdout or stderr
    • __meta_docker_network_id: the ID of the network
    • __meta_docker_network_name: the name of the network
    • __meta_docker_network_ingress: whether the network is ingress
    • __meta_docker_network_internal: whether the network is internal
    • __meta_docker_network_label_<labelname>: each label of the network
    • __meta_docker_network_scope: the scope of the network
    • __meta_docker_network_ip: the IP of the container in this network
    • __meta_docker_port_private: the port on the container
    • __meta_docker_port_public: the external port if a port-mapping exists
    • __meta_docker_port_public_ip: the public IP if a port-mapping exists

    These labels can be used during relabeling. For instance, the following configuration scrapes the container named flog and removes the leading slash (/) from the container name.

    scrape_configs:
      - job_name: flog_scrape
        docker_sd_configs:
          - host: unix:///var/run/docker.sock
            refresh_interval: 5s
            filters:
              - name: name
                values: [flog]
        relabel_configs:
          - source_labels: ['__meta_docker_container_name']
            regex: '/(.*)'
            target_label: 'container'
    

Hardware

GPU

  • New: Introduce GPU.

    GPU or Graphic Processing Unit is a specialized electronic circuit initially designed to accelerate computer graphics and image processing (either on a video card or embedded on motherboards, mobile phones, personal computers, workstations, and game consoles).

    For years I've wanted to buy a graphics card, but I've been stuck on the problem that I don't have a desktop. I have a Lenovo X280 laptop for work and personal use with an integrated card that has so far let me play old games such as King Arthur Gold or Age of Empires II, but it struggles with "newer" games such as It Takes Two. Last year I also bought a NAS with awesome hardware, so it makes no sense to buy a desktop just for playing.

    Now that I host Jellyfin on the NAS, and now that machine learning is hyped with a lot of interesting solutions that can be self-hosted (whisper, chatgpt-like solutions...), it starts to make sense to add a GPU to the server. What made me take the step is that you can also self-host a gaming server to stream games to any device! It makes so much sense to have all the big guns inside the NAS and stream the content to the less powerful devices.

    That way if you host services, you make the most use of the hardware.

Operating Systems

Linux

Tabs vs Buffers

  • New: Makefile use bash instead of sh.

    The program used as the shell is taken from the variable SHELL. If this variable is not set in your makefile, the program /bin/sh is used as the shell.

    So put SHELL := /bin/bash at the top of your makefile, and you should be good to go.
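
    A minimal sketch (the [[ ]] test is bash-only syntax, so this target fails under plain /bin/sh; note the recipe line starts with a tab):

    SHELL := /bin/bash

    check:
    	@[[ -d /tmp ]] && echo "running under bash"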

  • New: Recover the message of a commit if the command failed.

    git commit can fail, for example when gpg.commitsign = true and gpg fails, or when a pre-commit hook rejects the commit. Retrying the command opens a blank editor and the message seems to be lost.

    The message is saved though in .git/COMMIT_EDITMSG, so you can:

    git commit -m "$(cat .git/COMMIT_EDITMSG)"
    

    Or in general (suitable for an alias for example):

    git commit -m "$(cat "$(git rev-parse --git-dir)/COMMIT_EDITMSG")"
    
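    For instance, a sketch of an alias built on it (the alias name recommit is just an example):

    git config --global alias.recommit '!git commit -m "$(cat "$(git rev-parse --git-dir)/COMMIT_EDITMSG")"'
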
  • New: Switch to the previous opened buffer.

    Often the buffer you want to edit is the buffer you have just left. Vim provides a couple of convenient commands to switch back to the previous buffer: <C-^> (or <C-6>) and :b#. All of them are inconvenient, so I use the following mapping:

    nnoremap <Leader><Tab> :b#<CR>
    

beancount

  • New: Comments.

    Any text on a line after the character ; is ignored, like this:

    ; I paid and left the taxi, forgot to take change, it was cold.
    2015-01-01 * "Taxi home from concert in Brooklyn"
      Assets:Cash      -20 USD  ; inline comment
      Expenses:Taxi
    

Dino

  • New: Disable automatic OMEMO key acceptance.

    Dino automatically accepts new OMEMO keys from your own other devices and your chat partners by default. This default behaviour means that the admin of the XMPP server could inject their own public OMEMO keys without user verification, which would enable the owner of the associated private OMEMO keys to decrypt your OMEMO-secured conversations without being easily noticed.

    To prevent this, two actions are required; the second consists of several steps and must be taken for each new chat partner.

    • First, the automatic acceptance of new keys from your own other devices must be deactivated. Configure this in the account settings of your own accounts.
    • Second, the automatic acceptance of new keys from your chat partners must be deactivated. Configure this in the contact details of every chat partner. Be aware that in the case of group chats, the entire communication can be decrypted unnoticed if even one partner does not actively deactivate automatic acceptance of new OMEMO keys.

    Always confirm new keys from your chat partner before accepting them manually.

  • New: Dino does not use encryption by default.

    You have to initially enable encryption in the conversation window by clicking the lock-symbol and choose OMEMO. Future messages and file transfers to this contact will be encrypted with OMEMO automatically.

    • Every chat partner has to enable encryption separately.
    • If only one of two chat partners has activated OMEMO, only that part of the communication will be encrypted. The same applies to file transfers.
    • If you get the message "This contact does not support OMEMO", make sure that your chat partner has accepted the request to add them to your contact list, and that you accepted vice versa.
  • New: Install in Tails.

    If you want more details, follow this article at the same time as you read this one. That one is more outdated but more detailed.

    • Boot a clean Tails
    • Create and configure the Persistent Storage
    • Restart Tails and open the Persistent Storage

    • Configure the persistence of the directory:

      echo -e '/home/amnesia/.local/share/dino source=dino' | sudo tee -a /live/persistence/TailsData_unlocked/persistence.conf > /dev/null
      

    • Restart Tails

    • Install the application:

      sudo apt-get update
      sudo apt-get install dino-im
      

    • Configure the dino-im alias to use torsocks

      echo 'alias dino="torsocks dino-im &> /dev/null &"' | sudo tee -a /live/persistence/TailsData_unlocked/dotfiles/.bashrc > /dev/null
      echo 'alias dino="torsocks dino-im &> /dev/null &"' >> ~/.bashrc
      

Jellyfin

  • New: Python library.

    This is the API client from Jellyfin Kodi extracted as a python package so that other users may use the API without maintaining a fork of the API client. Please note that this API client is not complete. You may have to add API calls to perform certain tasks.

    It doesn't (yet) support async.

journald

  • New: Introduce journald.

    journald is a system service that collects and stores logging data. It creates and maintains structured, indexed journals based on logging information that is received from a variety of sources:

    • Kernel log messages, via kmsg
    • Simple system log messages, via the libc syslog call
    • Structured system log messages via the native Journal API.
    • Standard output and standard error of service units.
    • Audit records, originating from the kernel audit subsystem.

    The daemon will implicitly collect numerous metadata fields for each log message in a secure and unfakeable way.

    Journald provides a good out-of-the-box logging experience for systemd. The trade-off is that journald is a bit of a monolith, having everything from log storage and rotation to log transport and search. Some would argue that syslog is more UNIX-y: more lenient, easier to integrate with other tools, which was the main criticism of journald to begin with. When the change was made, not everyone agreed with the migration from syslog or the general approach systemd took with journald. But by now systemd is adopted by most Linux distributions, and it includes journald as well. journald happily coexists with syslog daemons, as:

    • Some syslog daemons can both read from and write to the journal
    • journald exposes the syslog API

    It provides lots of features, most importantly:

    • Indexing. journald uses a binary storage for logs, where data is indexed. Lookups are much faster than with plain text files.
    • Structured logging. Though it’s possible with syslog, too, it’s enforced here. Combined with indexing, it means you can easily filter specific logs (e.g. with a set priority, in a set timeframe).
    • Access control. By default, storage files are split by user, with different permissions to each. As a regular user, you won’t see everything root sees, but you’ll see your own logs.
    • Automatic log rotation. You can configure journald to keep logs only up to a space limit, or based on free space.

Kodi

  • New: Start working on a migration script to mediatracker.
  • New: Extract kodi data from the database.

    At ~/.kodi/userdata/Database/MyVideos116.db you can extract the data from the following tables:

    • In the movie_view table there is:
      • idMovie: kodi id for the movie
      • c00: Movie title
      • userrating
      • uniqueid_value: The id of the external web service
      • uniqueid_type: The web it extracts the id from
      • lastPlayed: The reproduction date
    • In the tvshow_view table there is:
      • idShow: kodi id of a show
      • c00: title
      • userrating
      • lastPlayed: The reproduction date
      • uniqueid_value: The id of the external web service
      • uniqueid_type: The web it extracts the id from
    • In the season_view there is no interesting data, as the userrating is null on all rows.
    • In the episode_view table there is:
      • idEpisode: kodi id for the episode
      • idShow: kodi id of a show
      • idSeason: kodi id of a season
      • c00: title
      • userrating
      • lastPlayed: The reproduction date
      • uniqueid_value: The id of the external web service
      • uniqueid_type: The web it extracts the id from. I've seen mainly tvdb and sonarr
    • Don't use the rating table, as it only stores the ratings from external webs such as themoviedb. A query sketch follows below.
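
    For example, a minimal sqlite3 sketch to dump the movie data (column names as described above):

    sqlite3 ~/.kodi/userdata/Database/MyVideos116.db \
      "SELECT idMovie, c00, userrating, uniqueid_value, uniqueid_type, lastPlayed FROM movie_view;"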

Matrix Highlight

  • New: Introduce matrix_highlight.

    Matrix Highlight is a decentralized and federated way of annotating the web based on Matrix.

    Think of it as an open source alternative to hypothesis.

    It's similar to Populus but for the web.

    I want to try it and investigate further, especially to see if you can:

    • Easily extract the annotations
    • Activate it by default everywhere

Mediatracker

  • New: How to use the mediatracker API.

    I haven't found a way to see the api docs from my own instance. Luckily you can browse it at the official instance.

    You can create an application token on your user configuration. Then you can use it with something similar to:

    curl -H 'Content-Type: application/json' https://mediatracker.your-domain.org/api/logs\?token\=your-token | jq
    
  • New: Introduce python library.

    There is a python library, although it doesn't (yet) have any documentation and the functionality so far only covers getting information, not pushing changes.

  • New: Get list of tv shows.

    With /api/items?mediaType=tv you can get a list of all tv shows with the following interesting fields:

    • id: mediatracker id
    • tmdbId:
    • tvdbId:
    • imdbId:
    • title:
    • lastTimeUpdated: epoch time
    • lastSeenAt: epoch time
    • seen: bool
    • onWatchlist: bool
    • firstUnwatchedEpisode:
      • id: mediatracker episode id
      • episodeNumber:
      • seasonNumber:
      • tvShowId:
      • seasonId:
    • lastAiredEpisode: same schema as before

    Then you can use the api/details/{mediaItemId} endpoint to get all the information about all the episodes of each tv show.
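
    For example, reusing the token from above ({mediaItemId} is a placeholder for an id returned by the listing endpoint):

    curl -H 'Content-Type: application/json' \
      'https://mediatracker.your-domain.org/api/details/{mediaItemId}?token=your-token' | jq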

Moonlight

  • New: Introduce moonlight.

    Moonlight is an open source client implementation of NVIDIA GameStream that allows you to stream your collection of games and apps from your GameStream-compatible PC to another device on your network or the Internet. You can play your favorite games on your PC, phone, tablet, or TV with Moonlight.

Rocketchat

  • New: How to use Rocketchat's API.

    The API docs are a bit weird; you need to go to endpoints and find the one you need. Your best bet though is to open the browser network console, see which requests the web client makes, and then find them in the docs.

Syncthing

  • New: Change the path of a folder.

    • Shut down Syncthing
    • Edit the config file (~/.config/syncthing/config.xml)
    • Search and replace the path (see the sketch below)
    • Start Syncthing again
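
    For the search and replace, a minimal sed sketch (paths are illustrative; back up the file first):

    sed -i 's|/old/path|/new/path|g' ~/.config/syncthing/config.xml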

Android

GrapheneOS

Arts

Cleaning

Cleaning tips

  • New: Cleaning car headlights.

    If you need to clean the car headlights you can use a mixture of one squeezed lemon and two spoonfuls of baking soda.

Music

Sister Rosetta Tharpe

  • New: Introduce Sister Rosetta Tharpe.

    Sister Rosetta Tharpe was a visionary. Born in 1915, she started shredding the guitar in ways that did not exist at that time. Yes, she founded Rock and Roll. It's lovely to see a gospel singer with an electric guitar.

    In this video you'll be able to understand how awesome she was.

Languages

Castellano

  • New: El agua or la agua?

    The noun agua is feminine, but it has the peculiarity of beginning with a stressed /a/ (the stressed vowel of a word is the one that carries the stress: [água]). For reasons of historical phonetics, this kind of word takes the article form el in the singular instead of the normal feminine form la. This rule only applies when the article immediately precedes the noun, which is why we say el agua, el área, el hacha; but if another word comes between the article and the noun, the rule no longer applies, which is why we say la misma agua, la extensa área, la afilada hacha. Since these words are feminine, adjectives must always agree with them in the feminine: el agua clara, el área extensa, el hacha afilada (and not el agua claro, el área extenso, el hacha afilado).

    For its part, the indefinite una generally takes the form un when it immediately precedes a feminine noun beginning with stressed /a/: un área, un hacha, un águila (although it's not incorrect, just infrequent, to use the full form una: una área, una hacha, una águila). Likewise, the indefinites alguna and ninguna may take the apocopated forms in these cases (algún alma, ningún alma) or keep the full forms (alguna alma, ninguna alma).

    As they are feminine nouns, with the demonstratives este, ese, aquel or with any other determinative adjective such as todo, mucho, poco, otro, etc., the corresponding feminine forms must be used: esta hacha, aquella misma arma, toda el agua, mucha hambre, etc. (and not este hacha, aquel mismo arma, todo el agua, mucho hambre, etc.)

Galego

  • New: Add some galego vocabulary.
  • New: Introduce galego.

    Galego is an Indo-European language that belongs to the Romance branch. It is the language of Galiza, where it is spoken by some 2.4 million Galicians. Besides Galiza, the language is also spoken in the territories bordering that community, although without official status, as well as by the Galician diaspora that emigrated to other parts of the Spanish state, Latin America, the United States, Switzerland and other European countries.

  • New: Te and che. Tricks to tell them apart.

    In galego there are two forms for the unstressed second person singular pronoun: te and che.

    The pronoun te has the function of direct object (DO) and the pronoun che of indirect object (IO).

    When is the pronoun te used?

    The pronoun te is used when it has the DO function, typical of transitive verbs, since it refers to the being or object the verbal action falls upon.

    If we turn the sentence into the passive voice, the DO becomes the subject. For example:

    Vinte na cafetería / Ti fuches visto por min na cafetería.

    When is the pronoun che used?

    The pronoun che is used when it has the IO function, since it indicates the recipient of the action expressed by the verb. For example:

    Díxenche a verdade.

    Compreiche unhas lambonadas.

    Trick to tell them apart

    A very quick trick to distinguish the pronouns te and che is to substitute those second person pronouns with the third person ones.

    If we can change the pronoun to o/lo/no or a/la/na, it means the pronoun is a DO, so we have to use te.

    Saudeite onte pola rúa / Saudeino onte pola rúa.

    Chameite por teléfono / Chameina por teléfono.

    If we can substitute it with lle, it is an IO pronoun and we must use che.

    Lévoche mañá os apuntamentos / Lévolle mañá os apuntamentos.

    Collinche as entradas do concerto / Collinlle as entradas do concerto.

  • New: Use of asemade.

    Asemade can be used as an adverb when it means 'at the same time' or 'simultaneously'. Although it's normally used in the formal register, don't use it in everyday speech.

    • Non se pode comer e falar asemade.
    • Non podes facer os deberes e ver a televisión asemade, pois non te concentras.

    It can also be used as a conjunction with the meaning of 'as soon as'.

    • Foi o primeiro que vimos asemade entramos.
    • Recoñecino asemade o vin.

    It is incorrect to use asemade as a synonym of tamén, ademais or igualmente.

  • New: References and grammar books.


Science

Data Analysis

Parsers

  • New: Learning about parsers.

    Parsers are a whole world. I kind of feel a bit lost right now and I'm searching for good books on the topic. So far I've found:

    • Crafting Interpreters by Robert Nystrom

    Pros:
    - Pleasant to read
    - Doesn't use external tools, you implement it from scratch
    - Multiple formats: EPUB, PDF, web
    - You can read it for free
    - Cute drawings <3

    Cons:
    - Code snippets are in Java and C
    - Doesn't use external tools, you implement it from scratch
    - It's long

    • Compilers: Principles, Techniques, and Tools by Alfred V. Aho, Monica S. Lam, Ravi Sethi and Jeffrey D. Ullman

    Pros:
    - EPUB

    Cons:
    - Code snippets are in C++

    • Parsing Techniques: A Practical Guide by Dick Grune and Ceriel J.H. Jacobs

    Pros:
    - Gives an overview of many grammars and parsers

    Cons:
    - Only in PDF
    - It's long
    - Too focused on the theory, despite the name xD

Other

  • New: Inhibit rules between times.

    To prevent some alerts from being sent during certain hours, you can use the time_intervals alertmanager configuration.

    This can be useful for example if your backup system triggers some alerts that you don't need to act on.

    route:
      receiver: 'email'
      group_by: [job, alertname, severity]
      group_wait: 5m
      group_interval: 5m
      repeat_interval: 12h
      routes:
        - receiver: 'email'
          matchers:
            - alertname =~ "HostCpuHighIowait|HostContextSwitching|HostUnusualDiskWriteRate"
            - hostname = backup_server
          mute_time_intervals:
            - night

    time_intervals:
      - name: night
        time_intervals:
          - times:
              - start_time: 02:00
                end_time: 07:00