
October of 2022

Activism

Antifascism

Antifascist Actions

Life Management

Calendar Management

  • New: Introduce Calendar Management.

    Since my taskwarrior instance broke I've used a physical calendar to manage the tasks that have a specific date. I can't wait for the first version of pydo to be finished.

    The following factors made me search for a temporary solution:

    • It's taking longer than expected.
    • I've started using a nextcloud calendar with some friends.
    • I frequently use Google calendar at work.
    • I'm sick of having to log in to Nextcloud and Google to get the day's appointments.

    To fulfill my needs, the solution has to:

    • Import calendar events from different sources, basically through the CalDAV protocol.
    • Have a usable terminal user interface.
    • Optionally have a command line interface or Python library so it's easy to write scripts.
    • Optionally be written in Python so it's easy to contribute.
    • Support having a personal calendar mixed with the shared ones.
    • Show all calendars in the same interface.

    Looking at the available programs I found khal, which looks like it may be up to the task.

    Go through the installation steps and configure the instance to have a local calendar.

    If you want to sync your calendar events through CalDAV, you need to set up vdirsyncer.
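    For reference, a minimal vdirsyncer pair that syncs a CalDAV calendar to a local directory might look like the snippet below (the storage names, paths, URL and credentials are placeholders of my own, not values from any real setup):

    ```ini
    [general]
    status_path = "~/.vdirsyncer/status/"

    [pair my_calendar]
    a = "my_calendar_local"
    b = "my_calendar_remote"
    collections = ["from a", "from b"]

    [storage my_calendar_local]
    type = "filesystem"
    path = "~/.calendars/"
    fileext = ".ics"

    [storage my_calendar_remote]
    type = "caldav"
    url = "https://nextcloud.example.org/remote.php/dav/"
    username = "user"
    password = "password"
    ```

    After running vdirsyncer discover and vdirsyncer sync, khal can read the events from the local directory.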

Coding

Languages

Python

Configure Docker to host the application

  • New: Update dockers with Renovate.

    Renovate is a program that does automated dependency updates. Multi-platform and multi-language.

  • New: Connect multiple docker compose files.

    You can connect services defined across multiple docker-compose.yml files.

    In order to do this you’ll need to:

    • Create an external network with docker network create <network name>
    • In each of your docker-compose.yml configure the default network to use your externally created network with the networks top-level key.
    • You can use either the service name or container name to connect between containers.
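    A minimal sketch of the second step, assuming an external network created with docker network create shared (the service and network names are my own, not from any real project):

    ```yaml
    # docker-compose.yml of one of the projects
    services:
      app:
        image: nginx

    networks:
      # Make this project's default network the externally created one,
      # so containers from other compose files on `shared` can reach `app`.
      default:
        name: shared
        external: true
    ```

    Another compose file configured the same way can then reach this container at http://app/ using the service name.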

Python Snippets

  • New: Get an instance of an Enum by value.

    If you want to initialize a pydantic model with an Enum but all you have is the value of the Enum, then you need to create a method to get the correct Enum member. Otherwise mypy will complain that the type of the assignment is str and not Enum.

    So if the model is the following:

    from pydantic import BaseModel

    # Environment is the Enum defined below.
    class ServiceStatus(BaseModel):
        """Model the docker status of a service."""

        name: str
        environment: Environment


    You can't do ServiceStatus(name='test', environment='production'). You need to add the get_by_value method to the Enum class:

    from enum import Enum

    class Environment(str, Enum):
        """Set the possible environments."""

        STAGING = "staging"
        PRODUCTION = "production"

        @classmethod
        def get_by_value(cls, value: str) -> "Environment":
            """Return the Enum member that matches a value."""
            return [member for member in cls if member.value == value][0]
    

    Now you can do:

    ServiceStatus(
        name='test',
        environment=Environment.get_by_value('production')
    )
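    For reference, Python's Enum already supports lookup by value by calling the class itself, which can serve the same purpose (a minimal sketch, independent of pydantic):

    ```python
    from enum import Enum


    class Environment(str, Enum):
        """Set the possible environments."""

        STAGING = "staging"
        PRODUCTION = "production"


    # Calling the Enum class with a value returns the matching member;
    # it raises ValueError if no member has that value.
    production = Environment("production")
    ```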
    
  • New: Print datetime with a defined format.

    from datetime import datetime

    now = datetime.now()
    now.strftime('We are the %d, %b %Y')
    

    Where the datetime format is a string built from these directives.

  • New: Print string with asciiart.

    pip install pyfiglet
    
    from pyfiglet import figlet_format
    print(figlet_format('09 : 30'))
    

    If you want to change the default width of 80 characters use:

    from pyfiglet import Figlet
    
    f = Figlet(font="standard", width=100)
    print(f.renderText("aaaaaaaaaaaaaaaaa"))
    
  • New: Print specific time format.

    import datetime

    datetime.datetime.now().strftime('%Y-%m-%dT%H:%M:%S')
    

    | Code | Meaning | Example |
    | --- | --- | --- |
    | %a | Weekday as locale's abbreviated name. | Mon |
    | %A | Weekday as locale's full name. | Monday |
    | %w | Weekday as a decimal number, where 0 is Sunday and 6 is Saturday. | 1 |
    | %d | Day of the month as a zero-padded decimal number. | 30 |
    | %-d | Day of the month as a decimal number. (Platform specific) | 30 |
    | %b | Month as locale's abbreviated name. | Sep |
    | %B | Month as locale's full name. | September |
    | %m | Month as a zero-padded decimal number. | 09 |
    | %-m | Month as a decimal number. (Platform specific) | 9 |
    | %y | Year without century as a zero-padded decimal number. | 13 |
    | %Y | Year with century as a decimal number. | 2013 |
    | %H | Hour (24-hour clock) as a zero-padded decimal number. | 07 |
    | %-H | Hour (24-hour clock) as a decimal number. (Platform specific) | 7 |
    | %I | Hour (12-hour clock) as a zero-padded decimal number. | 07 |
    | %-I | Hour (12-hour clock) as a decimal number. (Platform specific) | 7 |
    | %p | Locale's equivalent of either AM or PM. | AM |
    | %M | Minute as a zero-padded decimal number. | 06 |
    | %-M | Minute as a decimal number. (Platform specific) | 6 |
    | %S | Second as a zero-padded decimal number. | 05 |
    | %-S | Second as a decimal number. (Platform specific) | 5 |
    | %f | Microsecond as a decimal number, zero-padded on the left. | 000000 |
    | %z | UTC offset in the form +HHMM or -HHMM (empty string if the object is naive). | |
    | %Z | Time zone name (empty string if the object is naive). | |
    | %j | Day of the year as a zero-padded decimal number. | 273 |
    | %-j | Day of the year as a decimal number. (Platform specific) | 273 |
    | %U | Week number of the year (Sunday as the first day of the week) as a zero-padded decimal number. All days in a new year preceding the first Sunday are considered to be in week 0. | 39 |
    | %W | Week number of the year (Monday as the first day of the week) as a decimal number. All days in a new year preceding the first Monday are considered to be in week 0. | |
    | %c | Locale's appropriate date and time representation. | Mon Sep 30 07:06:05 2013 |
    | %x | Locale's appropriate date representation. | 09/30/13 |
    | %X | Locale's appropriate time representation. | 07:06:05 |
    | %% | A literal '%' character. | % |

ICS

  • New: Introduce ICS.

    ics is a pythonic iCalendar library. Its goals are to read and write ics data in a developer-friendly way.

rich

  • New: Live display text.

    import time
    
    from rich.live import Live
    
    with Live("Test") as live:
        for row in range(12):
            live.update(f"Test {row}")
            time.sleep(0.4)
    

    If you don't want the text to have the default colors, you can embed it all in a Text object.
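    A sketch of that, wrapping the string in rich's Text renderable (the style value is an assumption of mine):

    ```python
    import time

    from rich.live import Live
    from rich.text import Text

    # Text renders the string verbatim with an explicit style, instead of
    # letting rich apply its default markup and highlighting.
    with Live(Text("Test")) as live:
        for row in range(3):
            live.update(Text(f"Test {row}", style="white"))
            time.sleep(0.1)
    ```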

Selenium

  • New: Click on element.

    Once you've opened the page you want to interact with using driver.get(), you need to get the XPath of the element to click on. You can do that with your browser inspector: select the element and, once on the code, right click and choose "Copy XPath".

    Once that is done, you should have something like this when you paste it:

    //*[@id=react-root]/section/main/article/div[2]/div[2]/p/a
    

    The process is the same for the username and password input fields and the login button.

    We can go ahead and do that on the current page, storing these XPaths as strings in our code to make it readable.

    We should have three XPaths from this page and one from the initial login.

    first_login = '//*[@id="react-root"]/section/main/article/div[2]/div[2]/p/a'
    username_input = '//*[@id="react-root"]/section/main/div/article/div/div[1]/div/form/div[2]/div/label/input'
    password_input = '//*[@id="react-root"]/section/main/div/article/div/div[1]/div/form/div[3]/div/label/input'
    login_submit = '//*[@id="react-root"]/section/main/div/article/div/div[1]/div/form/div[4]/button/div'
    

    Now that we have the XPaths defined, we can tell the Selenium webdriver to click and send keys to the input fields.

    from selenium.webdriver.common.by import By
    
    driver.find_element(By.XPATH, first_login).click()
    driver.find_element(By.XPATH, username_input).send_keys("username")
    driver.find_element(By.XPATH, password_input).send_keys("password")
    driver.find_element(By.XPATH, login_submit).click()
    
  • New: Bypass Selenium detectors.

    Sometimes web servers react differently if they notice that you're using Selenium. Browsers can be detected in different ways, and some commonly used mechanisms are:

    • Implementing captcha / recaptcha to detect the automatic bots.
    • Non-human behaviour (browsing too fast, not scrolling to the visible elements, ...)
    • Using an IP that's flagged as suspicious (VPN, VPS, Tor...)
    • Detecting the term HeadlessChrome within headless Chrome UserAgent
    • Using Bot Management service from Distil Networks, Akamai, Datadome.


    If you've already been detected, you might get blocked for a plethora of other reasons even after using these methods. So you may have to try accessing the site that was detecting you using a VPN, different user-agent, etc.

  • New: Basic Selenium commands.

    Open a URL:

    driver.get("https://duckduckgo.com/")
    

    Get page source:

    driver.page_source
    

    Get current url:

    driver.current_url
    

Typer

  • New: Get the command line application directory.

    You can get the application directory where you can, for example, save configuration files with typer.get_app_dir():

    from pathlib import Path
    
    import typer
    
    APP_NAME = "my-super-cli-app"
    
    def main() -> None:
        """Define the main command line interface."""
        app_dir = typer.get_app_dir(APP_NAME)
        config_path: Path = Path(app_dir) / "config.json"
        if not config_path.is_file():
            print("Config file doesn't exist yet")
    
    if __name__ == "__main__":
        typer.run(main)
    

    It will give you a directory for storing configurations appropriate for your CLI program for the current user in each operating system.

  • New: Exiting with an error code.

    typer.Exit() takes an optional code parameter. By default, code is 0, meaning there was no error.

    You can pass a code with a number other than 0 to tell the terminal that there was an error in the execution of the program:

    import typer
    
    def main(username: str):
        if username == "root":
            print("The root user is reserved")
            raise typer.Exit(code=1)
        print(f"New user created: {username}")
    
    if __name__ == "__main__":
        typer.run(main)
    

DevOps

Infrastructure as Code

Gitea

  • New: Introduce gitea.

    Gitea is a community-managed lightweight code hosting solution written in Go. It's the best self-hosted GitHub alternative in my opinion.

Ansible Snippets

Continuous Integration

Drone

  • New: Introduce Drone.

    Drone is a modern Continuous Integration platform that empowers busy teams to automate their build, test and release workflows using a powerful, cloud native pipeline engine.

    Check how to install it here

Automating Processes

renovate

  • New: Introduce Renovate.

    Renovate is a program that does automated dependency updates. Multi-platform and multi-language.

    Why use Renovate?

    • Get pull requests to update your dependencies and lock files.
    • Reduce noise by scheduling when Renovate creates PRs.
    • Renovate finds relevant package files automatically, including in monorepos.
    • You can customize the bot's behavior with configuration files.
    • Share your configuration with ESLint-like config presets.
    • Get replacement PRs to migrate from a deprecated dependency to the community suggested replacement (npm packages only).
    • Open source.
    • Popular (more than 9.7k stars and 1.3k forks).
    • Integrates well with the main Git web applications (Gitea, GitLab, GitHub).
    • It supports the most important languages: Python, Docker, Kubernetes, Terraform, Ansible, Node, ...
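    For example, enabling Renovate on a repository can be as simple as committing a minimal renovate.json with the base preset (a sketch; check Renovate's docs for presets suited to your repository):

    ```json
    {
      "$schema": "https://docs.renovatebot.com/renovate-schema.json",
      "extends": ["config:base"]
    }
    ```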

Storage

  • New: Introduce storage.

    I have a server at home to host some services for my closest ones. The server is an Intel NUC, which is super awesome in terms of electric consumption, CPU and RAM versus cost. The downside is that it has no hard drive to store the services' data, though it does have some USB ports to connect external hard drives. As the data kept growing I started buying bigger drives. While it was affordable I purchased two of each, so as to have one to store the backup of the data. The problem came when that became unaffordable for me. Then I had the brilliant idea of assuming that I could get by with a single 16TB drive holding my data. Obviously the inevitable happened: the hard drive died, and the 10TB of data that were not stored in any backup were lost.

    Luckily, it was not unique data like personal photos. The data can be regenerated by manual processes at the cost of precious time (I'm still suffering this :(). But every cloud has a silver lining: this failure gave me the energy and motivation to improve my home architecture. To prevent this from happening again, the solution needs to be:

    • Robust: If disks die I will have time to replace them before data is lost.
    • Flexible: It needs to expand as the data grows.
    • Not very expensive.
    • Easy to maintain.

    There are two types of solutions to store data:

    • On one host: all the disks are attached to a single server, and the storage capacity is shared with other devices over the local network.
    • Distributed: the disks are attached to many servers that work together to provide the storage over the local network.

    A NAS server represents the first solution, while systems like Ceph or GlusterFS over Odroid HC4 fall into the second.

    Both are robust and flexible but I'm more inclined towards building a NAS because it can hold the amount of data that I need, it's easier to maintain and the underlying technology has been more battle proven throughout the years.

NAS

  • New: Introduce NAS.

    Network-attached storage, or NAS, is a computer data storage server connected to a computer network, providing data access to many other devices; basically, a computer to which you can attach many hard drives.

    I've done an analysis to choose what solution I'm going to build in terms of:

    More will come in the next days.

  • New: Analyze RAM to buy.

    Most ZFS resources suggest using ECC RAM. The provider gives me two options:

    • Kingston Server Premier DDR4 3200MHz 16GB CL22
    • Kingston Server Premier DDR4 2666MHz 16GB CL19

    I'll go with two modules of the 3200MHz CL22 because, despite the higher CAS latency, its absolute RAM latency is lower.
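    That comparison can be checked numerically: DDR4 performs two transfers per clock, so the absolute latency in nanoseconds is CL * 2000 divided by the advertised transfer rate:

    ```python
    def ram_latency_ns(cas_latency: int, transfer_rate: int) -> float:
        """Absolute CAS latency in ns for a DDR module's advertised rate."""
        # Two transfers per clock, so one cycle lasts 2000 / transfer_rate ns.
        return cas_latency * 2000 / transfer_rate

    # 3200MHz CL22 -> 13.75 ns, 2666MHz CL19 -> ~14.25 ns
    faster = ram_latency_ns(22, 3200)
    slower = ram_latency_ns(19, 2666)
    ```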

  • New: Analyze motherboard to buy.

    After reading these reviews (1, 2) I've decided to purchase the ASRock X570M Pro4 because it supports:

    • 8 x SATA3 disks
    • 2 x M.2 disks
    • 4 x DDR4 RAM slots with speeds up to 4200+ and ECC support
    • 1 x AMD AM4 Socket Ryzen™ 2000, 3000, 4000 G-Series, 5000 and 5000 G-Series Desktop Processors
    • Supports NVMe SSD as boot disks
    • Micro ATX Form Factor.

    And it gives me room enough to grow:

    • It supports PCI 4.0 for the M.2 slots, which is said to be capable of twice the speed of the previous 3rd generation. The chosen M.2 drives are 3rd generation, so if I need more speed I can upgrade them.
    • I'm only going to use 2 of the RAM slots, giving me 32GB, but I could easily add 32GB more.
  • New: Analyze CPU to buy.

    After doing some basic research I'm between:

    | Property | Ryzen 7 5800x | Ryzen 5 5600x | Ryzen 7 5700x | Ryzen 5 5600G |
    | --- | --- | --- | --- | --- |
    | Cores | 8 | 6 | 8 | 6 |
    | Threads | 16 | 12 | 16 | 12 |
    | Clock | 3.8 | 3.7 | 3.4 | 3.9 |
    | Socket | AM4 | AM4 | AM4 | AM4 |
    | PCI | 4.0 | 4.0 | 4.0 | 3.0 |
    | Thermal | Not included | Wraith Stealth | Not included | Wraith Stealth |
    | Default TDP | 105W | 65W | 65W | 65W |
    | System Mem spec | >= 3200 MHz | >= 3200 MHz | >= 3200 MHz | >= 3200 MHz |
    | Mem type | DDR4 | DDR4 | DDR4 | DDR4 |
    | Price | 315 | 232 | 279 | 179 |

    The data was extracted from AMD's official page.

    They all support the chosen RAM and the motherboard.

    I'm ruling out Ryzen 7 5800x because it's too expensive both on monetary and power consumption terms. Also ruling out Ryzen 5 5600G because it has comparatively bad properties.

    Between Ryzen 5 5600x and Ryzen 7 5700x, after checking these comparisons (1, 2) it looks like:

    • Single core performance is similar.
    • 7 wins when all cores are involved.
    • 7 is more power efficient.
    • 7 is better rated.
    • 7 is newer (1.5 years).
    • 7 has around 3.52 GB/s (7%) higher theoretical RAM memory bandwidth
    • They have the same cache
    • 7 has 5 degrees less of max temperature
    • They both support ECC
    • 5 has a greater market share
    • 5 is 47$ cheaper

    I think that for 47$ it's worth the increase in cores and theoretical RAM memory bandwidth.

  • New: Analyze CPU coolers to buy.

    It looks like Ryzen CPUs don't require a cooler to work well, and usually a cooler adds another 250W to the consumption. I don't plan to overclock it and I've heard that ZFS doesn't use too much CPU, so I'll start without one and monitor the temperature.

    If I were to take one, I'd go with air cooling with something like the Dark Rock 4 but I've also read that Noctua are a good provider.

  • New: Analyze server cases to buy.

    I'm ruling out the next ones:

    • Fractal Design R6: More expensive than the Node 804 and it doesn't have hot swappable disks.
    • Silverstone Technology SST-CS381: Even though it's gorgeous it's too expensive.
    • Silverstone DS380: It only supports Mini-ITX which I don't have.

    The remaining are:

    | Model | Fractal Node 804 | Silverstone CS380 |
    | --- | --- | --- |
    | Form factor | Micro-ATX | Mid tower |
    | Motherboard | Micro ATX | Micro ATX |
    | Drive bays | 8 x 3.5", 2 x 2.5" | 8 x 3.5", 2 x 5.25" |
    | Hot-swap | No | Yes |
    | Expansion slots | 5 | 7 |
    | CPU cooler height | 160 mm | 146 mm |
    | PSU compatibility | ATX | ATX |
    | Fans | Front: 4, Top: 4, Rear: 3 | Side: 2, Rear: 1 |
    | Price | 115 | 184 |
    | Size | 34 x 31 x 39 cm | 35 x 28 x 21 cm |

    I like the Fractal Node 804 better and it's cheaper.

  • New: Choose the Power Supply Unit and CPU cooler.

    After doing some basic research I've chosen the Dark Rock 4 but just because the Enermax ETS-T50 AXE Silent Edition doesn't fit my case :(.

    Using PCPartPicker I've seen that with 4 disks it consumes approximately 264W; when I have the 8 disks it will consume up to 344W, and if I increase the RAM it will reach 373W. So in theory I could go with a 400W power supply unit.

    You need to make sure that it has enough connectors for all the disks, although that usually is not a problem as there are adapters.

    After an analysis of the different power supply units, I've decided to go with the Be Quiet! Straight Power 11 450W Gold.

OpenZFS

  • New: Learning.

    I've found that learning about ZFS is an interesting, intense and time-consuming task. If you want a quick overview, check this video. If you prefer to read, head to the awesome Aaron Toponce articles and read them sequentially; each one is a jewel. The docs, on the other hand, are not that pleasant to read. For further information check the JRS articles.

  • New: Storage planning.

    There are many variables that affect the number and type of disks; you first need to have an idea of what kind of data you want to store and what use you're going to give to that data.

  • New: Choosing the disks to hold data.

    Analysis on how to choose the disks taking into account:

    The conclusion is that I'm more interested in 5400 RPM drives, but of all the NAS disks available for purchase only the 8TB WD Red uses that speed, and it relies on SMR technology, so it isn't a choice.

    The disk prices offered by my cheapest provider are:

    | Disk | Size | Price |
    | --- | --- | --- |
    | Seagate IronWolf | 8TB | 225$ |
    | Seagate IronWolf Pro | 8TB | 254$ |
    | WD Red Plus | 8TB | 265$ |
    | Seagate Exos 7E8 | 8TB | 277$ |
    | WD Red Pro | 8TB | 278$ |

    The WD Red Plus runs at 5,640 RPM, which is different from the rest, so it's ruled out. Between the IronWolf and the IronWolf Pro, they offer 180MB/s and 214MB/s respectively. The Seagate Exos 7E8 provides much better performance than the WD Red Pro, so I'm afraid WD is out of the question.

    There are three possibilities in order to have two different brands. Imagining we want 4 disks:

    | Combination | Total price |
    | --- | --- |
    | IronWolf + IronWolf Pro | 958$ |
    | IronWolf + Exos 7E8 | 1004$ (+46$ +4.5%) |
    | IronWolf Pro + Exos 7E8 | 1062$ (+54$ +5.4%) |
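    The totals can be reproduced from the price table, assuming two disks of each brand for the four-disk pool:

    ```python
    # Prices in dollars, taken from the table above.
    prices = {
        "IronWolf": 225,
        "IronWolf Pro": 254,
        "Exos 7E8": 277,
    }


    def combo_price(disk_a: str, disk_b: str, disks_per_brand: int = 2) -> int:
        """Total price of a pool mixing two brands, same disk count per brand."""
        return disks_per_brand * (prices[disk_a] + prices[disk_b])
    ```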

    In terms of:

    • Consumption: both IronWolfs are equal; the Exos uses 2.7W more in normal use and 0.2W less at rest.
    • Warranty: the IronWolf has only 3 years, the others 5.
    • Speed: the IronWolf has 210MB/s, much less than the Pro (255MB/s) and the Exos (249MB/s), which are more similar.
    • Sustainability: the Exos disks are much more robust (more workload, MTBF and warranty).

    I'd say that for 104$ it makes sense to go with the IronWolf Pro + Exos 7E8 combination.

  • New: Choosing the disks for the cache.

    Using a SLOG greatly improves the write speed; equally, using an SSD for the L2ARC cache improves read speeds and the health of the rotational disks.

    The best M.2 NVMe SSD for NAS caching are the ones that have enough capacity to actually make a difference to overall system performance. It also requires a good endurance rating for better reliability and longer lifespan, and you should look for a drive with a specific NAND technology if possible.

    I've made an analysis based on:

    As conclusion, I’d recommend the Western Digital Red SN700, which has a good 1 DWPD endurance rating, is available in sizes up to 4TB, and is using SLC NAND technology, which is great for enhancing reliability through heavy caching workloads. A close second place goes to the Seagate IronWolf 525, which has similar specifications to the SN700 but utilizes TLC.

    | Disk | Size | Speed | Endurance | Warranty | Tech | Price |
    | --- | --- | --- | --- | --- | --- | --- |
    | WD Red SN700 | 500 GB | 3430MB/s | 1 DWPD | 5 years | SLC | 73$ |
    | SG IronWolf 525 | 500 GB | 5000MB/s | 0.8 DWPD | 5 years | TLC | ? |
    | WD Red SN700 | 1 TB | 3430MB/s | 1 DWPD | 5 years | SLC | 127$ |
    | SG IronWolf 525 | 1 TB | 5000MB/s | 0.8 DWPD | 5 years | TLC | ? |
  • New: Choosing the cold spare disks.

    It's good to think about how long you're willing to leave your RAIDs degraded once a drive has failed.

    In my case, for the data I want to restore the RAID as soon as I can, therefore I'll buy another rotational disk. For the SSDs I'm more confident that they won't break, so I don't feel the need for a spare one.

Hardware

CPU

  • New: Introduce CPU, attributes and how to buy it.

    A central processing unit or CPU, also known as the brain of the server, is the electronic circuitry that executes instructions comprising a computer program. The CPU performs basic arithmetic, logic, controlling, and input/output (I/O) operations specified by the instructions in the program.

  • New: Add the market analysis.

  • New: Analyze the cpu coolers.
  • New: Analyze the use of cpu thermal paste.

    Thermal paste is designed to minimize microscopic air gaps and irregularities between the surface of the cooler and the CPU's IHS (integrated heat spreader), the piece of metal which is built into the top of the processor.

    Good thermal paste can have a profound impact on your performance, because it will allow your processor to transfer more of its waste heat to your cooler, keeping your processor running cool.

    Most pastes are composed of ceramic or metallic materials suspended within a proprietary binder, which allows for easy application and spreading as well as simple cleanup.

    These thermal pastes can be electrically conductive or non-conductive, depending on their specific formula. Electrically conductive thermal pastes can carry current between two points, meaning that if the paste squeezes out onto other components, it can cause damage to motherboards and CPUs when you switch on the power. A single drop out of place can lead to a dead PC, so extra care is imperative.

    Liquid metal compounds are almost always electrically conductive, so while these compounds provide better performance than their paste counterparts, they require more focus and attention during application. They are very hard to remove if you get some in the wrong place, which would fry your system.

    In contrast, traditional thermal paste compounds are relatively simple for every experience level. Most, but not all, traditional pastes are electrically non-conductive.

    Most cpu coolers come with their own thermal paste, so check yours before buying another one.

RAM

  • New: Introduce RAM, its properties and how to buy it.

    RAM is a form of computer memory that can be read and changed in any order, typically used to store working data and machine code.

Power Supply Unit

  • New: Introduce Power Supply Unit.

    Power supply unit is the component of the computer that sources power from the primary source (the power coming from your wall outlet) and delivers it to its motherboard and all its components. Contrary to the common understanding, the PSU does not supply power to the computer; it instead converts the AC (Alternating Current) power from the source to the DC (Direct Current) power that the computer needs.

    There are two types of PSU: Linear and Switch-mode. Linear power supplies have a built-in transformer that steps down the voltage from the main to a usable one for the individual parts of the computer. The transformer makes the Linear PSU bulky, heavy, and expensive. Modern computers have switched to the switch-mode power supply, using switches instead of a transformer for voltage regulation. They’re also more practical and economical to use because they’re smaller, lighter, and cheaper than linear power supplies.

    The PSU needs to deliver at least the amount of power that the components require; if they need more than it can deliver, the system simply won't work.

    Another puzzling question for most consumers is, “Does a PSU supply constant wattage to the computer?” The answer is a flat No. The wattage you see on the PSUs casing or labels only indicates the maximum power it can supply to the system, theoretically. For example, by theory, a 500W PSU can supply a maximum of 500W to the computer. In reality, the PSU will draw a small portion of the power for itself and distributes power to each of the PC components according to its need. The amount of power the components need varies from 3.3V to 12V. If the total power of the components needs to add up to 250W, it would only use 250W of the 500W, giving you an overhead for additional components or future upgrades.

    Additionally, the amount of power the PSU supplies varies during peak periods and idle times. When the components are pushed to their limits, say when a video editor maximizes the GPU for graphics-intensive tasks, it would require more power than when the computer is used for simple tasks like web-browsing. The amount of power drawn from the PSU would depend on two things; the amount of power each component requires and the tasks that each component performs.
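    The sizing logic above can be sketched as follows (all the component wattages here are hypothetical numbers for illustration, not measurements):

    ```python
    # Hypothetical peak draws per component, in watts.
    components = {"cpu": 105, "motherboard": 50, "disks": 60, "ram": 10, "fans": 10}
    psu_rating = 500  # the maximum the PSU can theoretically supply

    # The PSU only delivers what the components ask for; the rating is a ceiling.
    peak_draw = sum(components.values())  # watts actually drawn at peak
    headroom = psu_rating - peak_draw     # watts left for future upgrades
    ```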

    I've also added the next sections:

Operating Systems

Linux

i3wm

  • New: Introduce i3wm.

    i3 is a tiling window manager.

  • New: Layout saving.

    Layout saving/restoring allows you to load a JSON layout file so that you can have a base layout to start working with after powering on your computer.

    First of all arrange the windows in the workspace, then you can save the layout of either a single workspace or an entire output:

    i3-save-tree --workspace "1: terminal" > ~/.i3/workspace-1.json
    

    You need to open the created file and remove the comment markers on the lines that match the desired windows under the swallows keys, transforming the next snippet:

        ...
        "swallows": [
            {
            //  "class": "^URxvt$",
            //  "instance": "^irssi$"
            }
        ]
        ...
    

    Into:

        ...
        "swallows": [
            {
                "class": "^URxvt$",
                "instance": "^irssi$"
            }
        ]
        ...
    

    Once it's ready, close all the windows of the workspace you want to restore (moving them away is not enough!).

    Then on a terminal you can restore the layout with:

    i3-msg 'workspace "1: terminal"; append_layout ~/.i3/workspace-1.json'
    

    It's important that you don't use a relative path: even if you're in ~/.i3/, you have to use i3-msg append_layout ~/.i3/workspace-1.json.

    This command will create some fake windows (called placeholders) with the layout you had before; i3 will then wait for you to create the windows that match the selection criteria. Once they are created, it will put them in their respective placeholders.

    If you wish to create the layouts at startup you can add the next snippet to your i3 config.

    exec --no-startup-id "i3-msg 'workspace \"1: terminal\"; append_layout ~/.i3/workspace-1.json'"
    

Khal

  • New: Introduce khal.

    khal is a standards based Python CLI (console) calendar program, able to synchronize with CalDAV servers through vdirsyncer.

    Features:

    • Can read and write events/icalendars to vdir, so vdirsyncer can be used to synchronize calendars with a variety of other programs, for example CalDAV servers.
    • Fast and easy way to add new events
    • ikhal (interactive khal) lets you browse and edit calendars and events.

    Limitations:

    • Only rudimentary support for creating and editing recursion rules
    • You cannot edit the timezones of events
  • New: Edit the events in a more pleasant way.

    The ikhal event editor is not comfortable for me. I usually only change the title or the start date and in the default interface you need to press many keystrokes to make it happen.

    A stopgap solution is to pass a custom script in the EDITOR environment variable. Assuming you have questionary and ics installed, you can save the next snippet as an executable edit_event file in your PATH:

    #!/usr/bin/env python3
    """Edit an ics calendar event."""

    import sys
    
    import questionary
    from ics import Calendar
    
    file = sys.argv[1]
    with open(file, "r") as fd:
        calendar = Calendar(fd.read())
    event = list(calendar.timeline)[0]
    
    event.name = questionary.text("Title: ", default=event.name).ask()
    start = questionary.text(
        "Start: ",
        default=f"{str(event.begin.hour).zfill(2)}:{str(event.begin.minute).zfill(2)}",
    ).ask()
    event.begin = event.begin.replace(
        hour=int(start.split(":")[0]), minute=int(start.split(":")[1])
    )
    
    with open(file, "w") as fd:
        fd.writelines(calendar.serialize_iter())
    

    Now if you open ikhal as EDITOR=edit_event ikhal, whenever you edit an event you'll get a better interface. Add this to your .zshrc or .bashrc:

    alias ikhal='EDITOR=edit_event ikhal'
    

    The default keybinding for editing is not very comfortable either; add the next snippet to your config:

    [keybindings]
    external_edit = e
    export = meta e
    

LUKS

vdirsyncer

  • New: Introduce vdirsyncer.

    vdirsyncer is a Python command-line tool for synchronizing calendars and addressbooks between a variety of servers and the local filesystem. The most popular use case is to synchronize a server with a local folder and use a set of other programs, such as khal, to change the local events and contacts. Vdirsyncer can then synchronize those changes back to the server.

    However, vdirsyncer is not limited to synchronizing between clients and servers. It can also be used to synchronize calendars and/or addressbooks between two servers directly.

    It aims to be for calendars and contacts what OfflineIMAP is for emails.

Arts

Video Gaming

King Arthur Gold