
47th Week of 2024

Life Management

Time management

vdirsyncer

Gancio

  • New: Wordpress plugin.

    This plugin allows you to embed a list of events or a single event from your Gancio website using a shortcode. It also allows you to connect a Gancio instance to your WordPress website to automatically push events published on WordPress: for this to work an event manager plugin is required; Event Organiser and The Events Calendar are supported. Adding support for another plugin is an easy task, and there's a guide available in the repo that shows you how to do it.

    The source code of the plugin is in the wp-plugin directory of the official repo.

Life chores management

himalaya

  • New: Introduce himalaya.

    himalaya is a Rust CLI to manage emails.

    Features:

    • Multi-accounting
    • Interactive configuration via wizard (requires wizard feature)
    • Mailbox, envelope, message and flag management
    • Message composition based on $EDITOR
    • IMAP backend (requires imap feature)
    • Maildir backend (requires maildir feature)
    • Notmuch backend (requires notmuch feature)
    • SMTP backend (requires smtp feature)
    • Sendmail backend (requires sendmail feature)
    • Global system keyring for managing secrets (requires keyring feature)
    • OAuth 2.0 authorization (requires oauth2 feature)
    • JSON output via --output json
    • PGP encryption:
      • via shell commands (requires pgp-commands feature)
      • via GPG bindings (requires pgp-gpg feature)
      • via native implementation (requires pgp-native feature)

    Cons:

    • Documentation is nonexistent; you have to dive into the --help output to understand stuff.

    Installation

    The v1.0.0 release is currently being tested on the master branch, and is the preferred version to use. Previous versions (including GitHub beta releases and versions published in repositories) are not recommended.

    Himalaya CLI v1.0.0 can be installed with a pre-built binary. Find the latest pre-release GitHub workflow and look for the Artifacts section. You should find a pre-built binary matching your OS.

    Himalaya CLI v1.0.0 can also be installed with cargo:

    $ cargo install --git https://github.com/pimalaya/himalaya.git --force himalaya
    
    Configuration

    Just run himalaya and the wizard will help you configure your default account.

    You can also manually edit your own configuration, from scratch:

    • Copy the content of the documented ./config.sample.toml
    • Paste it in a new file ~/.config/himalaya/config.toml
    • Edit, then comment or uncomment the options you want

    If using mbsync

    My generic configuration for an mbsync account is:

    [accounts.account_name]
    
    email = "lyz@example.org"
    display-name = "lyz"
    envelope.list.table.unseen-char = "u"
    envelope.list.table.replied-char = "r"
    backend.type = "maildir"
    backend.root-dir = "/home/lyz/.local/share/mail/lyz-example"
    backend.maildirpp = false
    message.send.backend.type = "smtp"
    message.send.backend.host = "example.org"
    message.send.backend.port = 587
    message.send.backend.encryption = "start-tls"
    message.send.backend.login = "lyz"
    message.send.backend.auth.type = "password"
    message.send.backend.auth.command = "pass show mail/lyz.example"
    

    Once you've set it up, you need to fix the INBOX directory (see the troubleshooting section below).

    Then you can check if it works by running himalaya envelopes list -a lyz-example

    Vim plugin installation

    Using lazy:

    return {
      {
        "pimalaya/himalaya-vim",
      },
    }
    

    You can then run :Himalaya account_name and it will open himalaya in your editor.

    Configure the account bindings

    To avoid typing :Himalaya account_name each time you want to check the email you can set some bindings:

    return {
      {
        "pimalaya/himalaya-vim",
        keys = {
          { "<leader>ma", "<cmd>Himalaya account_name<cr>", desc = "Open account_name@example.org" },
          { "<leader>ml", "<cmd>Himalaya lyz<cr>", desc = "Open lyz@example.org" },
        },
      },
    }
    

    Setting the description is useful to see the configured accounts with which-key by typing <leader>m and waiting.

    Configure extra bindings

    The default plugin doesn't yet have all the bindings I'd like, so I've added the following ones:

    • In the list of emails view:
      • dd in normal mode or d in visual mode: delete emails
      • q: exit the program
    • In the email view:
      • d: delete the email
      • q: return to the list of emails view

    If you want them too, set the following config:

    return {
      {
        "pimalaya/himalaya-vim",
        config = function()
          vim.api.nvim_create_augroup("HimalayaCustomBindings", { clear = true })
          vim.api.nvim_create_autocmd("FileType", {
            group = "HimalayaCustomBindings",
            pattern = "himalaya-email-listing",
            callback = function()
              -- Bindings to delete emails
              vim.api.nvim_buf_set_keymap(0, "n", "dd", "<plug>(himalaya-email-delete)", { noremap = true, silent = true })
              vim.api.nvim_buf_set_keymap(0, "x", "d", "<plug>(himalaya-email-delete)", { noremap = true, silent = true })
              -- Bind `q` to close the window
              vim.api.nvim_buf_set_keymap(0, "n", "q", ":bd<CR>", { noremap = true, silent = true })
            end,
          })
    
          vim.api.nvim_create_augroup("HimalayaEmailCustomBindings", { clear = true })
          vim.api.nvim_create_autocmd("FileType", {
            group = "HimalayaEmailCustomBindings",
            pattern = "mail",
            callback = function()
              -- Bind `q` to close the window
              vim.api.nvim_buf_set_keymap(0, "n", "q", ":q<CR>", { noremap = true, silent = true })
              -- Bind `d` to delete the email and close the window
              vim.api.nvim_buf_set_keymap(
                0,
                "n",
                "d",
                "<plug>(himalaya-email-delete):q<CR>",
                { noremap = true, silent = true }
              )
            end,
          })
        end,
      },
    }
    

    Configure email fetching from within vim

    Fetching emails from within vim is not yet supported, so I'm manually refreshing by account:

    return {
      {
        "pimalaya/himalaya-vim",
        keys = {
          -- Email refreshing bindings
          { "<leader>rj", ':lua FetchEmails("lyz")<CR>', desc = "Fetch lyz@example.org" },
        },
        config = function()
          function FetchEmails(account)
            vim.notify("Fetching emails for " .. account .. ", please wait...", vim.log.levels.INFO)
            vim.cmd("redraw")
            vim.fn.jobstart("mbsync " .. account, {
              on_exit = function(_, exit_code, _)
                if exit_code == 0 then
                  vim.notify("Emails for " .. account .. " fetched successfully!", vim.log.levels.INFO)
                else
                  vim.notify("Failed to fetch emails for " .. account .. ". Check the logs.", vim.log.levels.ERROR)
                end
              end,
            })
          end
        end,
      },
    }
    

    You still need to run :Himalaya account_name again, as the plugin does not reload when there are new emails.

    Show notifications when emails arrive

    You can set up mirador to get those notifications.

    Not there yet

    Troubleshooting

    Cannot find maildir matching name INBOX

    mbsync uses Inbox instead of the default INBOX so himalaya doesn't find it. In theory you can use folder.alias.inbox = "Inbox" but it didn't work for me, so I finally ended up creating a symbolic link from INBOX to Inbox.
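
    A sketch of that workaround, assuming the mail lives under the backend.root-dir configured above:

    cd /home/lyz/.local/share/mail/lyz-example
    ln -s Inbox INBOX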

    Cannot find maildir matching name Trash

    That's because the Trash directory does not follow the Maildir structure. I had to create the cur, tmp and new directories.
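
    For example, assuming the same mail root as above:

    mkdir -p /home/lyz/.local/share/mail/lyz-example/Trash/{cur,tmp,new}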

    References - Source - Vim plugin source

  • New: Introduce mailbox.

    mailbox is a Python library to work with Maildir and mbox local mailboxes.

    It's part of the Python standard library, so you don't need to install anything.

    Usage

    The docs are not very pleasant to read, so I got most of the usage knowledge from these sources:

    One thing to keep in mind is that an account can have many mailboxes (INBOX, Sent, ...); there is no "root mailbox" that contains all of the others.

    Initialise a mailbox

    import mailbox

    mbox = mailbox.Maildir('path/to/your/mailbox')
    

    Where the path/to/your/mailbox is the directory that contains the cur, new, and tmp directories.

    Working with mailboxes

    It's not very clear how to work with them. The Maildir mailbox exposes the emails through iterators ([m for m in mbox]) and acts kind of like a dictionary: you can get the keys of the emails with [k for k in mbox.iterkeys()], and then use mbox[key] to get an email. However, you cannot modify those emails (flags, subdir, ...) directly in the mbox object (for example mbox[key].set_flags('P') doesn't work). You need to mail = mbox.pop(key), do the changes in the mail object, and then mbox.add(mail) it again, with the downside that after you add it the key has changed! The new key is the return value of the add method.

    If the program gets interrupted between the pop and the add you'll lose the email. The safest way to work with it is then (see the sketch below):

    • Get the email with mail = mbox.get(key)
    • Do all the processing you need to do with the email
    • mbox.pop(key) and key = mbox.add(mail)
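
    A minimal sketch of that workflow (marking every message as seen; the mailbox path is hypothetical):

    import mailbox

    mbox = mailbox.Maildir("path/to/your/mailbox")

    for key in list(mbox.iterkeys()):
        mail = mbox.get(key)  # Work on a copy of the message
        mail.add_flag("S")  # Flag the message as seen
        mbox.pop(key)  # Remove the old copy...
        key = mbox.add(mail)  # ...and re-add it, keeping track of the new key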

    In theory mbox has an update method that does this, but I don't understand it and it doesn't work as expected :S.

    Moving emails around

    You can't just move the files between directories like you would with regular file operations, as each directory contains its own identifiers.

    Moving a message between the maildir directories

    The MaildirMessage object has a set_subdir method that sets whether the message lives in new or cur; the change takes effect when the message is re-added to the mailbox.
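
    A minimal sketch, combining it with the pop/add dance from above:

    mail = mbox.get(key)
    mail.set_subdir("cur")  # Valid values are "new" and "cur"
    mbox.pop(key)
    key = mbox.add(mail)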

    Creating folders

    Even though you can create folders with mailbox, it creates them in a way that mbsync doesn't understand. It's easier to manually create the cur, tmp, and new directories. I'm using the following function:

    import logging
    from pathlib import Path

    log = logging.getLogger(__name__)

    def initialize_mailbox(mailbox_dir: Path, mailbox: str) -> None:
        if not (mailbox_dir / "cur").exists():
            for dir in ["cur", "tmp", "new"]:
                (mailbox_dir / dir).mkdir(parents=True)
            log.info(f"Initialized mailbox: {mailbox}")
        else:
            log.debug(f"{mailbox} already exists")
    

    References

  • New: Introduce maildir.

    The Maildir e-mail format is a common way of storing email messages on a file system, rather than in a database. Each message is assigned a file with a unique name, and each mail folder is a file system directory containing these files.

    A Maildir directory (often named Maildir) usually has three subdirectories named tmp, new, and cur.

    • The tmp subdirectory temporarily stores e-mail messages that are in the process of being delivered. This subdirectory may also store other kinds of temporary files.
    • The new subdirectory stores messages that have been delivered, but have not yet been seen by any mail application.
    • The cur subdirectory stores messages that have already been seen by mail applications.
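
    The resulting layout looks like this:

    Maildir/
    ├── tmp/
    ├── new/
    └── cur/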

    References

  • New: My emails are not being deleted on the source IMAP server.

    That's the default behavior of mbsync; if you want it to actually delete the emails on the source you need to add:

    Expunge Both
    
    under your channel section (next to Sync All and Create Both), as sketched below.
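
    For reference, a hypothetical channel section with the option added (the store and channel names are made up):

    Channel lyz-example
    Far :lyz-example-remote:
    Near :lyz-example-local:
    Patterns *
    Sync All
    Create Both
    Expunge Both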

  • New: Mbsync error: UID is beyond highest assigned UID.

    If during the sync you receive the following errors:

    mbsync error: UID is 3 beyond highest assigned UID 1
    

    Go to the place where mbsync is storing the emails and find the file that is giving the error: you need to look for the files that contain U=3, imagine it's something like 1568901502.26338_1.hostname,U=3:2,S. Strip off everything from the ,U= onwards from that filename, resync, and it should be fine, e.g.

    mv '1568901502.26338_1.hostname,U=3:2,S' '1568901502.26338_1.hostname'
    
  • New: Introduce mirador.

    DEPRECATED: as of 2024-11-15 the tool has many errors (1, 2), few stars (4) and few commits (8). Use watchdog instead and build your own solution.

    mirador is a CLI to watch mailbox changes, made by the maintainer of himalaya.

    Features:

    • Watches and executes actions on mailbox changes
    • Interactive configuration via wizard (requires wizard feature)
    • Supported events: on message added.
    • Supported actions: send system notification, execute shell command.
    • Supports IMAP mailboxes (requires imap feature)
    • Supports Maildir folders (requires maildir feature)
    • Supports global system keyring to manage secrets (requires keyring feature)
    • Supports OAuth 2.0 (requires oauth2 feature)

    Mirador CLI is written in Rust, and relies on cargo features to enable or disable functionalities. Default features can be found in the features section of the Cargo.toml.

    Installation

    The v1.0.0 release is currently being tested on the master branch, and is the preferred version to use. Previous versions (including GitHub beta releases and versions published in repositories) are not recommended.

    Cargo (git)

    Mirador CLI v1.0.0 can be installed with cargo:

    $ cargo install --frozen --force --git https://github.com/pimalaya/mirador.git
    

    Pre-built binary

    Mirador CLI v1.0.0 can also be installed with a pre-built binary. Find the latest pre-release GitHub workflow and look for the Artifacts section. You should find a pre-built binary matching your OS.

    Configuration

    Just run mirador and the wizard will help you configure your default account.

    You can also manually edit your own configuration, from scratch:

    • Copy the content of the documented ./config.sample.toml
    • Paste it in a new file ~/.config/mirador/config.toml
    • Edit, then comment or uncomment the options you want

    • Source

alot

  • Correction: Deprecate in favour of himalaya.

    DEPRECATED: Use himalaya instead.

  • New: Automatically sync emails.

    I have many email accounts, and I want to fetch them with different frequencies, in the background, and be notified if anything goes wrong.

    For that purpose I've created a python script, a systemd service and some loki rules to monitor it.

    Script to sync emails and calendars with different frequencies

    The script iterates over the accounts configured in accounts_config and runs mbsync for email accounts and vdirsyncer for calendar accounts, based on their cron expressions. It logs the output in logfmt format so that it's easily handled by Loki.

    To run it you'll first need to create a virtualenv; I use mkvirtualenv account_syncer, which creates a virtualenv in ~/.local/share/virtualenvs/account_syncer.

    Then install the dependencies:

    pip install aiocron
    

    Then place this script somewhere, for example at ~/.local/bin/account_syncer.py:

    import asyncio
    import logging
    from datetime import datetime
    import asyncio.subprocess
    import aiocron
    
    accounts_config = {
        "emails": [
          {
              "account_name": "lyz",
              "cron_expressions": ["*/15 9-23 * * *"],
          },
          {
              "account_name": "work",
              "cron_expressions": ["*/60 8-17 * * 1-5"],  # Monday-Friday
          },
          {
              "account_name": "monitorization",
              "cron_expressions": ["*/5 * * * *"],
          },
        ],
        "calendars": [
          {
              "account_name": "lyz",
              "cron_expressions": ["*/15 9-23 * * *"],
          },
          {
              "account_name": "work",
              "cron_expressions": ["*/60 8-17 * * 1-5"],  # Monday-Friday
          },
        ],
    }
    
    class LogfmtFormatter(logging.Formatter):
        """Custom formatter to output logs in logfmt style."""
    
        def format(self, record: logging.LogRecord) -> str:
            log_message = (
                f"level={record.levelname.lower()} "
                f"logger={record.name} "
                f'msg="{record.getMessage()}"'
            )
            return log_message
    
    def setup_logging(logging_name: str) -> logging.Logger:
        """Configure logging to use logfmt format.
        Args:
            logging_name (str): The logger's name and identifier in the systemd journal.
        Returns:
            Logger: The configured logger.
        """
        console_handler = logging.StreamHandler()
        logfmt_formatter = LogfmtFormatter()
        console_handler.setFormatter(logfmt_formatter)
        logger = logging.getLogger(logging_name)
        logger.setLevel(logging.INFO)
        logger.addHandler(console_handler)
        return logger
    
    log = setup_logging("account_syncer")
    
    async def run_mbsync(account_name: str) -> None:
        """Run mbsync command asynchronously for email accounts.
    
        Args:
            account_name (str): The name of the email account to sync.
        """
        command = f"mbsync {account_name}"
        log.info(f"Syncing emails for {account_name}...")
        process = await asyncio.create_subprocess_shell(
            command, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE
        )
        stdout, stderr = await process.communicate()
        if stdout:
            log.info(f"Output for {account_name}: {stdout.decode()}")
        if stderr:
            log.error(f"Error for {account_name}: {stderr.decode()}")
    
    async def run_vdirsyncer(account_name: str) -> None:
        """Run vdirsyncer command asynchronously for calendar accounts.
    
        Args:
            account_name (str): The name of the calendar account to sync.
        """
        command = f"vdirsyncer sync {account_name}"
        log.info(f"Syncing calendar for {account_name}...")
        process = await asyncio.create_subprocess_shell(
            command, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE
        )
        _, stderr = await process.communicate()
        if stderr:
            command_log = stderr.decode().strip()
            if "error" in command_log or "critical" in command_log:
                log.error(f"Output for {account_name}: {command_log}")
            elif len(command_log.splitlines()) > 1:
                log.info(f"Output for {account_name}: {command_log}")
    
    def should_i_sync_today(cron_expr: str) -> bool:
        """Check if the current time matches the cron expression hour and day-of-week constraints."""
        _, hour, _, _, day_of_week = cron_expr.split()
        now = datetime.now()
        if "-" in hour:
            start_hour, end_hour = (int(part) for part in hour.split("-"))
            if not start_hour <= now.hour <= end_hour:
                return False
        if day_of_week != "*":
            # Cron numbers days Sunday=0 .. Saturday=6; Python's isoweekday is Monday=1 .. Sunday=7
            today = now.isoweekday() % 7
            if "-" in day_of_week:
                start_day, end_day = (int(part) for part in day_of_week.split("-"))
                if not start_day <= today <= end_day:
                    return False
            elif str(today) not in day_of_week.split(","):
                return False
        return True
    
    async def main():
        log.info("Starting account syncer for emails and calendars")
        accounts_to_sync = {"emails": [], "calendars": []}
    
        # Schedule email accounts
        for account in accounts_config["emails"]:
            account_name = account["account_name"]
            for cron_expression in account["cron_expressions"]:
                if (
                    should_i_sync_today(cron_expression)
                    and account_name not in accounts_to_sync["emails"]
                ):
                    accounts_to_sync["emails"].append(account_name)
                aiocron.crontab(cron_expression, func=run_mbsync, args=[account_name])
                log.info(
                    f"Scheduled mbsync for {account_name} with cron expression: {cron_expression}"
                )
    
        # Schedule calendar accounts
        for account in accounts_config["calendars"]:
            account_name = account["account_name"]
            for cron_expression in account["cron_expressions"]:
                if (
                    should_i_sync_today(cron_expression)
                    and account_name not in accounts_to_sync["calendars"]
                ):
                    accounts_to_sync["calendars"].append(account_name)
                aiocron.crontab(cron_expression, func=run_vdirsyncer, args=[account_name])
                log.info(
                    f"Scheduled vdirsyncer for {account_name} with cron expression: {cron_expression}"
                )
    
        log.info("Running an initial fetch on today's accounts")
        for account_name in accounts_to_sync["emails"]:
            await run_mbsync(account_name)
        for account_name in accounts_to_sync["calendars"]:
            await run_vdirsyncer(account_name)
    
        log.info("Finished loading accounts")
        while True:
            await asyncio.sleep(60)
    
    if __name__ == "__main__":
        asyncio.run(main())
    

    Where:

    • accounts_config: Holds your account configuration. Each account must contain an account_name, which should be the name of the mbsync or vdirsyncer profile, and cron_expressions, a list of valid cron expressions defining when you want the account to be synced.

    Create the systemd service

    We're using a non-root systemd service. You can follow these instructions to configure this service:

    [Unit]
    Description=Account Sync Service for emails and calendars
    After=graphical-session.target
    
    [Service]
    Type=simple
    ExecStart=/home/lyz/.local/share/virtualenvs/account_syncer/bin/python /home/lyz/.local/bin/account_syncer.py
    WorkingDirectory=/home/lyz/.local/bin
    Restart=on-failure
    StandardOutput=journal
    StandardError=journal
    SyslogIdentifier=account_syncer
    Environment="PATH=/home/lyz/.local/share/virtualenvs/account_syncer/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
    Environment="DISPLAY=:0"
    Environment="DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus"
    
    [Install]
    WantedBy=graphical-session.target
    

    Remember to tweak the service to match your current case and paths.

    As we'll probably need to enter our pass password, we need the service to start once we've logged into the graphical interface.

    Monitor the automation

    It's always nice to know if the system is working as expected without adding mental load. To do that I'm creating the following Loki rules:

    groups:
      - name: account_sync
        rules:
          - alert: AccountSyncIsNotRunningWarning
            expr: |
              (sum by(hostname) (count_over_time({job="systemd-journal", syslog_identifier="account_syncer"}[15m])) or sum by(hostname) (count_over_time({hostname="my_computer"} [15m])) * 0 ) == 0
            for: 0m
            labels:
              severity: warning
            annotations:
              summary: "The account sync script is not running {{ $labels.hostname}}"
          - alert: AccountSyncIsNotRunningError
            expr: |
              (sum by(hostname) (count_over_time({job="systemd-journal", syslog_identifier="account_syncer"}[3h])) or sum by(hostname) (count_over_time({hostname="my_computer"} [3h])) * 0 ) == 0
            for: 0m
            labels:
              severity: error
            annotations:
              summary: "The account sync script has been down for at least 3 hours {{ $labels.hostname}}"
          - alert: AccountSyncError
            expr: |
              count(rate({job="systemd-journal", syslog_identifier="account_syncer"} |= `` | logfmt | level_extracted=`error` [5m])) > 0
            for: 0m
            labels:
              severity: warning
            annotations:
              summary: "There are errors in the account sync log at {{ $labels.hostname}}"
    
          - alert: EmailAccountIsOutOfSyncLyz
            expr: |
              (sum by(hostname) (count_over_time({job="systemd-journal", syslog_identifier="account_syncer"} | logfmt | msg=`Syncing emails for lyz...`[1h])) or sum by(hostname) (count_over_time({hostname="my_computer"} [1h])) * 0 ) == 0
            for: 0m
            labels:
              severity: error
            annotations:
              summary: "The email account lyz has been out of sync for 1h {{ $labels.hostname}}"
    
          - alert: CalendarAccountIsOutOfSyncLyz
            expr: |
              (sum by(hostname) (count_over_time({job="systemd-journal", syslog_identifier="account_syncer"} | logfmt | msg=`Syncing calendar for lyz...`[3h])) or sum by(hostname) (count_over_time({hostname="my_computer"} [3h])) * 0 ) == 0
            for: 0m
            labels:
              severity: error
            annotations:
              summary: "The calendar account lyz has been out of sync for 3h {{ $labels.hostname}}"
    
    Where:

    • You need to change my_computer to the hostname of the device running the service.
    • Tweak the OutOfSync alerts to match your accounts (change the lyz part).

    These rules will raise:

    • A warning if the sync has not shown any activity in the last 15 minutes.
    • An error if the sync has not shown any activity in the last 3 hours.
    • An error if there is an error in the logs of the automation.

Rocketchat

Coding

Languages

aiocron

  • New: Introduce aiocron.

    aiocron is a Python library to run cron jobs asynchronously.

    Usage

    You can run it using a decorator

    >>> import aiocron
    >>> import asyncio
    >>>
    >>> @aiocron.crontab('*/30 * * * *')
    ... async def attime():
    ...     print('run')
    ...
    >>> asyncio.get_event_loop().run_forever()
    

    Or by calling the function yourself

    >>> cron = aiocron.crontab('0 * * * *', func=yourcoroutine, start=False)
    

    Here's a simple example of how to run it in a script:

    import asyncio
    from datetime import datetime
    import aiocron
    
    async def foo(param):
        print(datetime.now().time(), param)
    
    async def main():
        cron_min = aiocron.crontab('*/1 * * * *', func=foo, args=("At every minute",), start=True)
        cron_hour = aiocron.crontab('0 */1 * * *', func=foo, args=("At minute 0 past every hour.",), start=True)
        cron_day = aiocron.crontab('0 9 */1 * *', func=foo, args=("At 09:00 on every day-of-month",), start=True)
        cron_week = aiocron.crontab('0 9 * * Mon', func=foo, args=("At 09:00 on every Monday",), start=True)
    
        while True:
            await asyncio.sleep(1)
    
    asyncio.run(main())
    

    There are more complex examples in the repo.

    Installation

    pip install aiocron
    

    References - Source

Logging

  • New: Configure the logging module to log directly to systemd's journal.

    To use systemd.journal in Python, you need to install the systemd-python package. This package provides bindings for systemd functionality.

    Install it using pip:

    pip install systemd-python
    
    Below is an example Python script that configures logging to send messages to the systemd journal:

    import logging
    from systemd.journal import JournalHandler
    
    logger = logging.getLogger('my_app')
    logger.setLevel(logging.DEBUG)  # Set the logging level
    
    journal_handler = JournalHandler()
    journal_handler.setLevel(logging.DEBUG)  # Adjust logging level if needed
    journal_handler.addFilter(
        lambda record: setattr(record, "SYSLOG_IDENTIFIER", "mbsync_syncer") or True
    )
    
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    journal_handler.setFormatter(formatter)
    
    logger.addHandler(journal_handler)
    
    logger.info("This is an info message.")
    logger.error("This is an error message.")
    logger.debug("Debugging information.")
    

    When you run the script, the log messages will be sent to the systemd journal. You can view them using the journalctl command:

    sudo journalctl -f
    

    This command will show the latest log entries in real time. You can filter by your application name using:

    sudo journalctl -f -t mbsync_syncer
    

    Replace mbsync_syncer with the SYSLOG_IDENTIFIER you set in the filter (journalctl -t matches the syslog identifier, not the logger name).

    Additional tips:

    • Tagging: you can add a custom identifier for your logs by setting SYSLOG_IDENTIFIER (as done with the filter above). This allows you to filter logs using journalctl -t your_tag.
    • Log levels: you can control the verbosity of the logs by setting different levels (e.g., DEBUG, INFO, WARNING, ERROR, CRITICAL).
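
    As an alternative to the addFilter trick used above, JournalHandler also accepts journal fields as constructor arguments; a minimal sketch:

    from systemd.journal import JournalHandler

    # Every record sent through this handler carries the identifier,
    # so `journalctl -t my_app` finds them without a custom filter.
    journal_handler = JournalHandler(SYSLOG_IDENTIFIER="my_app")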

    Example Output in the Systemd Journal

    You should see entries similar to the following in the systemd journal:

    Nov 15 12:45:30 my_hostname my_app[12345]: 2024-11-15 12:45:30,123 - my_app - INFO - This is an info message.
    Nov 15 12:45:30 my_hostname my_app[12345]: 2024-11-15 12:45:30,124 - my_app - ERROR - This is an error message.
    Nov 15 12:45:30 my_hostname my_app[12345]: 2024-11-15 12:45:30,125 - my_app - DEBUG - Debugging information.
    

    This approach ensures that your logs are accessible through standard systemd tools and are consistent with other system logs.

Inotify

  • Correction: Deprecate inotify.

    DEPRECATED: As of 2024-11-15 it's been 4 years since the last commit. watchdog has 6.6k stars and its last commit was 2 days ago.

watchdog

  • New: Introduce watchdog.

    watchdog is a Python library and set of shell utilities to monitor filesystem events.

    Installation

    pip install watchdog
    

    Usage

    A simple program that uses watchdog to recursively monitor the current directory and log the events generated:

    import time
    
    from watchdog.events import FileSystemEvent, FileSystemEventHandler
    from watchdog.observers import Observer
    
    class MyEventHandler(FileSystemEventHandler):
        def on_any_event(self, event: FileSystemEvent) -> None:
            print(event)
    
    event_handler = MyEventHandler()
    observer = Observer()
    observer.schedule(event_handler, ".", recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)
    finally:
        observer.stop()
        observer.join()
    

    References - Source - Docs

DevSecOps

Storage

OpenZFS

  • New: Sync an already created cold backup.

    Mount the existent pool

    Imagine your pool is at /dev/sdf2:

    • Connect your device
    • Check for available ZFS pools: First, check if the system detects any ZFS pools that can be imported:
    sudo zpool import
    

    This command will list all pools that are available for import, including the one stored in /dev/sdf2. Look for the pool name you want to import.

    • Import the pool: If you see the pool listed and you know its name (let's say the pool name is mypool), you can import it with:
    sudo zpool import mypool
    
    • Import the pool from a specific device: If the pool isn't showing up or you want to specify the device directly, you can use:
    sudo zpool import -d /dev/sdf2
    

    This tells ZFS to look specifically at /dev/sdf2 for any pools. If you don't know the name of the pool this is also the command to run.

    This should list any pools found on the device. If it shows a pool, import it using:

    sudo zpool import -d /dev/sdf2 <pool_name>
    
    • Mount the pool: Once the pool is imported, ZFS should automatically mount any datasets associated with the pool. You can check the status of the pool with:
    sudo zpool status
    

    Additional options:

    • If the pool was exported cleanly, you can use zpool import without additional flags.
    • If the pool wasn’t properly exported or was interrupted, you might need to use -f (force) to import it:

    sudo zpool import -f mypool
    
  • New: Create a systemd service for a non-root user.

    To set up a systemd service as a non-root user, you can create a user-specific service file under your home directory. User services are defined in ~/.config/systemd/user/ and can be managed without root privileges.

    1. Create the service file:

    Open a terminal and create a new service file in ~/.config/systemd/user/. For example, if you want to create a service for a script named my_script.py, follow these steps:

    mkdir -p ~/.config/systemd/user
    nano ~/.config/systemd/user/my_script.service
    
    2. Edit the service file:

    In the my_script.service file, add the following configuration:

    [Unit]
    Description=My Python Script Service
    After=network.target
    
    [Service]
    Type=simple
    ExecStart=/usr/bin/python3 /path/to/your/script/my_script.py
    WorkingDirectory=/path/to/your/script/
    SyslogIdentifier=my_script
    Restart=on-failure
    StandardOutput=journal
    StandardError=journal
    
    [Install]
    WantedBy=default.target
    
    • Description: A short description of what the service does.
    • ExecStart: The command to run your script. Replace /path/to/your/script/my_script.py with the full path to your Python script. If you want to run the script within a virtualenv you can use /path/to/virtualenv/bin/python instead of /usr/bin/python3.

      You'll need to add the virtualenv's bin directory to PATH:

      # Add virtualenv's bin directory to PATH
      Environment="PATH=/path/to/virtualenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
      
    • WorkingDirectory: Set the working directory to where your script is located (optional).
    • Restart: Restart the service if it fails.
    • StandardOutput and StandardError: This ensures that the output is captured in the systemd journal.
    • WantedBy: Specifies the target to which this service belongs. default.target is commonly used for user services.

    3. Reload systemd to recognize the new service:

    Run the following command to reload systemd's user service files:

    systemctl --user daemon-reload
    
    4. Enable and start the service:

    To start the service immediately and enable it to run on boot (for your user session), use the following commands:

    systemctl --user start my_script.service
    systemctl --user enable my_script.service
    
    5. Check the status and logs:

    • To check if the service is running:

      systemctl --user status my_script.service
      
    • To view logs specific to your service:

      journalctl --user -u my_script.service -f
      

    If you need to use the graphical interface

    If your script requires user interaction (like entering a GPG passphrase), it’s crucial to ensure that the service is tied to your graphical user session, which ensures that prompts can be displayed and interacted with.

    To handle this situation, you should make a few adjustments to your systemd service:

    Ensure service is bound to graphical session

    Change the WantedBy target to graphical-session.target instead of default.target. This makes sure the service waits for the full graphical environment to be available.

    Use Type=forking instead of Type=simple (optional)

    If you need the service to wait until the user is logged in and has a desktop session ready, you might need to tweak the service type. Usually, Type=simple is fine, but you can also experiment with Type=forking if you notice any issues with user prompts.

    Here’s how you should modify your my_script.service file:

    [Unit]
    Description=My Python Script Service
    After=graphical-session.target
    
    [Service]
    Type=simple
    ExecStart=/usr/bin/python3 /path/to/your/script/my_script.py
    WorkingDirectory=/path/to/your/script/
    Restart=on-failure
    StandardOutput=journal
    StandardError=journal
    SyslogIdentifier=my_script
    Environment="DISPLAY=:0"
    Environment="DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus"
    
    [Install]
    WantedBy=graphical-session.target
    

    After modifying the service, reload and restart it:

    systemctl --user daemon-reload
    systemctl --user restart my_script.service
    

Operating Systems

Linux

Linux Snippets

  • New: Debugging high IOwait.

    High I/O wait (iowait) on the CPU, especially at 50%, typically indicates that your system is spending a large portion of its time waiting for I/O operations (such as disk access) to complete. This can be caused by a variety of factors, including disk bottlenecks, overloaded storage systems, or inefficient applications performing disk-intensive operations.

    Here’s a structured approach to debug and analyze high I/O wait on your server:

    Monitor disk I/O

    First, verify if disk I/O is indeed the cause. Tools like iostat, iotop, and dstat can give you an overview of disk activity:

    • iostat: This tool reports CPU and I/O statistics. You can install it with apt-get install sysstat. Run the following command to check disk I/O stats:

    iostat -x 1
    
    The -x flag provides extended statistics, and 1 means it will report every second. Look for high values in the %util and await columns, which represent:

    • %util: Percentage of time the disk is busy (ideally below 90% for most systems).
    • await: Average time for I/O requests to complete.

    If either of these values is unusually high, it indicates that the disk subsystem is likely overloaded.

    • iotop: If you want a more granular look at which processes are consuming disk I/O, use iotop:
    sudo iotop -o
    

    This will show you the processes that are actively performing I/O operations.

    • dstat: Another useful tool for monitoring disk I/O in real-time:
    dstat -cdl 1
    

    This shows CPU, disk, and load stats, refreshing every second. Pay attention to the dsk/await value.

    Check disk health

    Disk issues such as bad sectors or failing drives can also lead to high I/O wait times. To check the health of your disks:

    • Use smartctl: This tool can give you a health check of your disks if they support S.M.A.R.T.
    sudo smartctl -a /dev/sda
    

    Check for any errors or warnings in the output. Particularly look for things like reallocated sectors or increasing "pending sectors."

    • dmesg logs: Look at the system logs for disk errors or warnings:
    dmesg | grep -i "error"
    

    If there are frequent disk errors, it may be time to replace the disk or investigate hardware issues.

    Look for disk saturation

    If the disk is saturated, no matter how fast the CPU is, it will be stuck waiting for data to come back from the disk. To further investigate disk saturation:

    • df -h: Check if your disk partitions are full or close to full.
    df -h
    
    • lsblk: Check how your disks are partitioned and how much data is written to each partition:
    lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
    
    • blktrace: For advanced debugging, you can use blktrace, which traces block layer events on your system.
    sudo blktrace -d /dev/sda -o - | blkparse -i -
    

    This will give you very detailed insights into how the system is interacting with the block device.

    Check for heavy disk-intensive processes

    Identify processes that might be using excessive disk I/O. You can use tools like iotop (as mentioned earlier) or pidstat to look for processes with high disk usage:

    • pidstat: Track per-process disk activity:
    pidstat -d 1
    

    This command will give you I/O statistics per process every second. Look for processes with high I/O values (r/s and w/s).

    • top or htop: While top or htop can show CPU usage, they can also show process-level disk activity. Focus on processes consuming high CPU or memory, as they might also be performing heavy I/O operations.

    Check file system issues

    Sometimes the file system itself can be the source of I/O bottlenecks. Check for any file system issues that might be causing high I/O wait.

    • Check file system consistency: If you suspect the file system is causing issues (e.g., due to corruption), run a file system check. For ext4:
    sudo fsck /dev/sda1
    

    Ensure you unmount the disk first or do this in single-user mode.

    • Check disk scheduling: Some disk schedulers (like cfq or deadline) might perform poorly depending on your workload. You can check the scheduler used by your disk with:
    cat /sys/block/sda/queue/scheduler
    

    You can change the scheduler with:

    echo deadline > /sys/block/sda/queue/scheduler
    

    This might improve disk performance, especially for certain workloads.

    Examine system logs

    The system logs (/var/log/syslog or /var/log/messages) may contain additional information about hardware issues, I/O bottlenecks, or kernel-related warnings:

    sudo tail -f /var/log/syslog
    

    or

    sudo tail -f /var/log/messages
    

    Look for I/O or disk-related warnings or errors.

    Consider hardware upgrades or tuning

    • SSD vs HDD: If you're using HDDs, consider upgrading to SSDs. HDDs can be much slower in terms of I/O, especially if you have a high number of random read/write operations.
    • RAID Configuration: If you are using RAID, check the RAID configuration and ensure it's properly tuned for performance (e.g., using RAID-10 for a good balance of speed and redundancy).
    • Memory and CPU Tuning: If the server is swapping due to insufficient RAM, it can result in increased I/O wait. You might need to add more RAM or optimize the system to avoid excessive swapping.

    Check for swapping issues

    Excessive swapping can contribute to high I/O wait times. If your system is swapping (which happens when physical RAM is exhausted), I/O wait spikes as the system reads from and writes to swap space on disk.

    • Check swap usage:
    free -h
    

    If swap usage is high, you may need to add more physical RAM or optimize applications to reduce memory pressure.
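
    To confirm whether the system is actively swapping right now, you can also watch the si (swap in) and so (swap out) columns; sustained non-zero values mean swap traffic is hitting the disk:

    vmstat 1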

  • New: Create a file with random data.

    For example, to create a 3.5 GB file:

    dd if=/dev/urandom of=random_file.bin bs=1M count=3584
    

Arts

Parkour

  • New: Warming up.

    Never do static stretches if you're cold; it's better to do dynamic stretches.

    Take the joints through rotations

    • Head:
    • Nod 10 times
    • Say no 10 times
    • Ear to shoulder 10 times
    • Circles 10 times each direction

    • Shoulders

    • Circles back 10 times
    • Circles forward 10 times

    • Elbows

    • Circles 10 each direction

    • Wrists:

    • Circle 10 each direction

    • Chest:

    • Chest out/in 10 times
    • Chest one side to the other 10 times
    • Chest in circles

    • Hips:

    • Circles 10 each direction
    • Figure eight 10 times each direction

    • Knees:

    • Circular rotations 10 each direction feet and knees together
    • 10 ups and downs with knees together
    • Circular rotations 10 each direction feet waist width

    • Ankles:

    • Circular 10 rotations each direction

    Light exercises

    • 10 steps forward of walking on your toes, 10 back
    • 10 steps forward of walking on your toes feet rotated outwards, 10 back
    • 10 steps forward of walking on your toes feet rotated inwards, 10 back
    • 10 steps forward of walking on your heels feet rotated outwards, 10 back
    • 10 steps forward of walking on your heels feet rotated inwards, 10 back

    • 2 x 10 x Side step, carry the leg up (from out to in) while you turn 180, keep on moving on that direction

    • 2 x 10 x Front step carrying the leg up (from in to out) while you turn 45, then side step, keep on moving in that direction
    • 10 light skips on one leg: while walking forward lift your knee and arms and do a slight jump
    • 10 steps with high knees
    • 10 steps with heel to butt
    • 10 side shuffles (like basketball defense)

    • 5 lunges forward, 5 backwards

    • 10 rollups and downs from standing position

    • 5 push-ups
    • 10 rotations from the pushup position in each direction with straight arms
    • 5 push-ups
    • 10 rotations from the pushup position on each direction with shoulders at ankle level
    • 3 downward monkeys: from pyramid do a low pushup and go to cobra, then a pushup

    • 10 steps forward walking on all fours

    Strengthen your knees

    Follow these steps

    Transit to the parkour place

    Go by bike, skate, or jogging to the parkour place.