# 4th Week of 2024

## Life Management

### Task Management

#### Org Mode
- New: Introduce org-rw.

  `org-rw` is a Python library to process your orgmode files.

  Installation:

  ```bash
  pip install org-rw
  ```

  Load an orgmode file:

  ```python
  from org_rw import load

  with open('your_file.org', 'r') as f:
      doc = load(f)
  ```
#### Habit management
- New: Introduce habit management.
A habit is a routine of behavior that is repeated regularly and tends to occur subconsciously.
A 2002 daily experience study found that approximately 43% of daily behaviors are performed out of habit. New behaviours can become automatic through the process of habit formation. Old habits are hard to break and new habits are hard to form because the behavioural patterns that humans repeat become imprinted in neural pathways, but it is possible to form new habits through repetition.
When behaviors are repeated in a consistent context, there is an incremental increase in the link between the context and the action. This increases the automaticity of the behavior in that context. Features of an automatic behavior are all or some of: efficiency, lack of awareness, unintentionality, and uncontrollability.
Mastering habit formation can be a powerful tool to change yourself. Small changes usually yield massive outcomes in the long run. The downside is that it's not for impatient people, as it often appears to make no difference until you cross a critical threshold that unlocks a new level of performance.
- New: Why habits are interesting.
Whenever you face a problem repeatedly, your brain begins to automate the process of solving it. Habits are a series of automatic resolutions that solve the problems and stresses you face regularly.
As habits are created, the level of activity in the brain decreases. You learn to lock in on the cues that predict success and tune out everything else. When a similar situation arises in the future, you know exactly what to look for. There is no longer a need to analyze every angle of a situation. Your brain skips the process of trial and error and creates a mental rule: if this, then that.
Habit formation is incredibly useful because the conscious mind is the bottleneck of the brain. It can only pay attention to one problem at a time. Habits reduce the cognitive load and free up mental capacity, as they can be carried out by your nonconscious mind while you allocate your attention to other tasks.
- New: Identity-focused changes.
Changing our habits is challenging because we try to change the wrong thing in the wrong way.
There are three levels at which change can occur:
- Outcomes: Changing your results. Goals fall under this category: publishing a book, running daily.
- Process: Changing your habits and systems: decluttering your desk for a better workflow, developing a meditation practice.
- Identity: Changing your beliefs, assumptions and biases: your world view, your self-image, your judgments.
Many people begin the process of changing their habits by focusing on what they want to achieve. This leads to outcome-based habits. The alternative is to build identity-based habits. With this approach, we start by focusing on who we wish to become.
The first path of change is doomed because maintaining behaviours that are incongruent with the self is expensive and will not last, even if they make rational sense. Thus it's hard to change your habits if you never change the underlying beliefs that led to your past behaviour. On the other hand, it's easy to find motivation once a habit has changed your identity, as you may be proud of it and will be willing to maintain all the habits and systems associated with it. For example: the goal is not to read a book, but to become a reader.
Focusing on outcomes may also bring the following problems:
- Focusing on the results may lead you to temporary solutions. If you focus instead on the source of the issue, you may solve it with less effort and reach a more stable solution.
- Goals create an "either-or" conflict: either you achieve your goal and are successful or you fail and you are disappointed. Thus you only get a positive reward if you fulfill a goal. If you instead focus on the process rather than the result, you will be satisfied anytime your system is running.
- When your hard work is focused on a goal, you may feel depleted once you meet it, and that could make you lose the condition that made you meet the goal in the first place.
Research has shown that once a person believes in a particular aspect of their identity, they are more likely to act in alignment with that belief. This of course is a double-edged sword. Identity change can be a powerful force for self-improvement. When working against you, identity change can be a curse.
- Whatever your identity is right now, you only believe it because you have proof of it. The more evidence you have for a belief, the more strongly you will believe it.
Your habits and systems are how you embody your identity. When you make your bed each day, you embody the identity of an organized person. The more you repeat a behaviour, the more you reinforce the identity associated with that behaviour. To the point that your self-image begins to change. The effect of one-off experiences tends to fade away while the effect of habits gets reinforced with time, which means your habits contribute most of the evidence that shapes your identity.
Every action you take is a vote for the type of person you wish to become. This is one reason why meaningful change does not require radical change. Small habits can make a meaningful difference by providing evidence of a new identity.
Once you start the ball rolling things become easier as building habits is a feedback loop. Your habits shape your identity, and your identity shapes your habits.
The most practical way to change your identity is to:
- Decide the type of person you want to be
- Prove it to yourself with small wins
Another advantage of focusing on the type of person you want to be is that maybe the outcome you wanted to focus on is not the wisest smallest step towards your identity change. Thinking about the identity you want to embrace can make you think outside the box.
- New: Decide the type of person you want to be.
One way to decide the person you want to be is to answer big questions like: what do you want to stand for? What are your principles and values? Who do you wish to become?
As we're more result-oriented, another way is to work backwards from the results you want to the person you want to be. Ask yourself: who is the type of person that could get the outcome I want?
- The process of building a habit from a behaviour can be divided into four stages:
- Reward is the end goal.
- Cue is the trigger in your brain that initiates a behaviour. It contains the information that predicts a reward.
- Craving is the motivational force, fueled by the desire for the reward. Without motivation we have no reason to act.
- Response is the thought or action you perform to obtain the reward. The response depends on the amount of motivation you have, how much friction is associated with the behaviour, and your ability to actually do it.
If a behaviour is insufficient in any of the four stages, it will not become a habit. Eliminate the cue and your habit will never start. Reduce the craving and you won't have enough motivation to act. Make the behaviour difficult and you won't be able to do it. And if the reward fails to satisfy your desire, then you'll have no reason to do it again in the future.
We chase rewards because they:

- Deliver contentment.
- Satisfy our cravings.
- Teach us which actions are worth remembering in the future.
If a reward is met then it becomes associated with the cue, thus closing the habit feedback loop.
If we keep these stages in mind then:
- To build good habits we need to:
- Cue: Make it obvious
- Craving: Make it attractive
- Response: Make it easy
- Reward: Make it satisfying
- To break bad habits we need to:
- Cue: Make it invisible
- Craving: Make it unattractive
- Response: Make it difficult
- Reward: Make it unsatisfying
## Coding

### Languages

#### Configure Docker to host the application
- The `journald` logging driver sends container logs to the systemd journal. Log entries can be retrieved using the `journalctl` command, through use of the journal API, or using the `docker logs` command.

  In addition to the text of the log message itself, the `journald` log driver stores the following metadata in the journal with each message:

  | Field | Description |
  | --- | --- |
  | CONTAINER_ID | The container ID truncated to 12 characters. |
  | CONTAINER_ID_FULL | The full 64-character container ID. |
  | CONTAINER_NAME | The container name at the time it was started. If you use `docker rename` to rename a container, the new name isn't reflected in the journal entries. |
  | CONTAINER_TAG, SYSLOG_IDENTIFIER | The container tag (see the log tag option documentation). |
  | CONTAINER_PARTIAL_MESSAGE | A field that flags log integrity. Improves logging of long log lines. |

  To use the journald driver as the default logging driver, set the `log-driver` and `log-opts` keys to appropriate values in the `daemon.json` file, which is located in `/etc/docker/`:

  ```json
  { "log-driver": "journald" }
  ```

  Restart Docker for the changes to take effect.
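Once containers log to the journal, their entries can be queried back with `journalctl` filtered by the metadata fields above. A minimal Python sketch (the container name `myapp` is a made-up example):

```python
import json
import subprocess


def parse_journal_json(output: str) -> list[dict]:
    """Parse `journalctl -o json` output: one JSON object per line."""
    return [json.loads(line) for line in output.splitlines() if line.strip()]


def container_logs(name: str) -> list[dict]:
    """Fetch the journal entries of a container using the journald driver.

    CONTAINER_NAME is one of the metadata fields listed in the table above.
    """
    result = subprocess.run(
        ["journalctl", f"CONTAINER_NAME={name}", "-o", "json", "--no-pager"],
        capture_output=True,
        text=True,
        check=True,
    )
    return parse_journal_json(result.stdout)
```

`container_logs("myapp")` then returns a list of dicts with keys like `MESSAGE` and `CONTAINER_ID`.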
- There are many ways to send logs to Loki:

  - Using the `json-file` driver and sending them to Loki with Promtail using the Docker driver.
  - Using the Docker plugin: Grafana Loki officially supports a Docker plugin that will read logs from Docker containers and ship them to Loki.

    I would not recommend using this path because there is a known issue that deadlocks the Docker daemon :S. The driver keeps all logs in memory and will drop log entries if Loki is not reachable and if the quantity of `max_retries` has been exceeded. To avoid the dropping of log entries, setting `max_retries` to zero allows unlimited retries; the driver will continue trying forever until Loki is again reachable. Trying forever may have undesired consequences, because the Docker daemon will wait for the Loki driver to process all logs of a container until the container is removed. Thus, the Docker daemon might wait forever if the container is stuck.

    The wait time can be lowered by setting `loki-retries=2`, `loki-max-backoff=800ms`, `loki-timeout=1s` and `keep-file=true`. This way the daemon will be locked only for a short time and the logs will be persisted locally when the Loki client is unable to re-connect.

    To avoid this issue, use the Promtail Docker service discovery.
  - Using the `journald` driver and sending them to Loki with Promtail using the journald driver. This has worked for me but the labels extracted are not that great.
- New: Solve syslog getting filled up with docker network recreation.

  If you find yourself with your syslog getting filled up by lines similar to:

  ```log
  Jan 15 13:19:19 home kernel: [174716.097109] eth2: renamed from veth0adb07e
  Jan 15 13:19:20 home kernel: [174716.145281] IPv6: ADDRCONF(NETDEV_CHANGE): vethcd477bc: link becomes ready
  Jan 15 13:19:20 home kernel: [174716.145337] br-1ccd0f48be7c: port 5(vethcd477bc) entered blocking state
  Jan 15 13:19:20 home kernel: [174716.145338] br-1ccd0f48be7c: port 5(vethcd477bc) entered forwarding state
  Jan 15 13:19:20 home kernel: [174717.081132] br-fbe765bc7d0a: port 2(veth31cdd6f) entered disabled state
  Jan 15 13:19:20 home kernel: [174717.081176] vethc4da041: renamed from eth0
  Jan 15 13:19:21 home kernel: [174717.214911] br-fbe765bc7d0a: port 2(veth31cdd6f) entered disabled state
  Jan 15 13:19:21 home kernel: [174717.215917] device veth31cdd6f left promiscuous mode
  Jan 15 13:19:21 home kernel: [174717.215919] br-fbe765bc7d0a: port 2(veth31cdd6f) entered disabled state
  ```

  It probably means that some docker container is getting recreated continuously. Those traces are normal logs of Docker creating its networks, but as they are emitted each time the container starts, if it's restarting continuously then you have a problem.
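To spot which bridge (and therefore which Docker network) is churning, you can count the bridge state-change messages per bridge id. A small sketch over the log format shown above:

```python
import re
from collections import Counter


def bridge_churn(log_lines: list[str]) -> Counter:
    """Count veth port state-change messages per docker bridge.

    A bridge with a disproportionate count points at a container that is
    being recreated continuously.
    """
    pattern = re.compile(r"(br-[0-9a-f]+): port \d+\(veth")
    counts: Counter = Counter()
    for line in log_lines:
        match = pattern.search(line)
        if match:
            counts[match.group(1)] += 1
    return counts
```

You can then match the bridge id against `docker network ls` to find the guilty compose project.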
#### Python Snippets
- New: Parse a datetime from an epoch.

  ```python
  >>> import datetime
  >>> datetime.datetime.fromtimestamp(1347517370).strftime('%Y-%m-%d %H:%M:%S')
  '2012-09-13 02:22:50'
  ```
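Note that `fromtimestamp` without a `tz` argument interprets the epoch in the machine's local timezone, so the result varies between hosts. Passing an explicit timezone makes it deterministic:

```python
import datetime

# Interpreting the epoch in UTC avoids depending on the machine's timezone
dt = datetime.datetime.fromtimestamp(1347517370, tz=datetime.timezone.utc)
print(dt.isoformat())  # 2012-09-13T06:22:50+00:00
```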
#### Inotify
- New: Introduce python_inotify.

  `inotify` is a Python library that acts as a bridge to the `inotify` Linux kernel subsystem, which allows you to register one or more directories for watching, and to simply block and wait for notification events. This is obviously far more efficient than polling one or more directories to determine if anything has changed.

  Installation:

  ```bash
  pip install inotify
  ```

  Basic example using a loop:

  ```python
  import inotify.adapters

  def _main():
      i = inotify.adapters.Inotify()
      i.add_watch('/tmp')

      with open('/tmp/test_file', 'w'):
          pass

      for event in i.event_gen(yield_nones=False):
          (_, type_names, path, filename) = event

          print("PATH=[{}] FILENAME=[{}] EVENT_TYPES={}".format(
              path, filename, type_names))

  if __name__ == '__main__':
      _main()
  ```

  Output:

  ```
  PATH=[/tmp] FILENAME=[test_file] EVENT_TYPES=['IN_MODIFY']
  PATH=[/tmp] FILENAME=[test_file] EVENT_TYPES=['IN_OPEN']
  PATH=[/tmp] FILENAME=[test_file] EVENT_TYPES=['IN_CLOSE_WRITE']
  ```

  Basic example without a loop:

  ```python
  import inotify.adapters

  def _main():
      i = inotify.adapters.Inotify()
      i.add_watch('/tmp')

      with open('/tmp/test_file', 'w'):
          pass

      events = i.event_gen(yield_nones=False, timeout_s=1)
      events = list(events)

      print(events)

  if __name__ == '__main__':
      _main()
  ```

  The wait will be done in the `list(events)` line.
#### Pydantic
- New: Create part of the attributes in the initialization stage.

  ```python
  import sqlite3
  from pathlib import Path

  from pydantic import BaseModel, ConfigDict


  class Sqlite(BaseModel):
      model_config = ConfigDict(arbitrary_types_allowed=True)

      path: Path
      db: sqlite3.Cursor

      def __init__(self, **kwargs):
          conn = sqlite3.connect(kwargs['path'])
          kwargs['db'] = conn.cursor()
          super().__init__(**kwargs)
  ```
## DevOps

### Infrastructure as Code

#### Ansible Snippets
- New: Fix the `ERROR! 'become' is not a valid attribute for a IncludeRole` error.

  If you're trying to do something like:

  ```yaml
  tasks:
    - name: "Install nfs"
      become: true
      ansible.builtin.include_role:
        name: nfs
  ```

  You need to use this other syntax:

  ```yaml
  tasks:
    - name: "Install nfs"
      ansible.builtin.include_role:
        name: nfs
        apply:
          become: true
  ```
### Infrastructure Solutions

#### AWS Savings plan
- New: EC2 Instance savings plan versus reserved instances.
I've been comparing EC2 Reserved Instances with the EC2 instance family savings plans and decided to go with the second because:

- They both have almost the same rates. Reserved instances round the price at the 3rd decimal and the savings plan at the 4th, but this difference is negligible.
- Savings plans are easier to calculate, as you just need to multiply the number of instances you want by the current rate and add them all up.
- Easier to understand: to reserve instances you need to take into account the instance flexibility and the normalization factors, which makes it difficult both to make the plans and to audit how well you're using them.
- Easier to audit: In addition to the above point, you have nice dashboards to see the coverage and utilization over time of your ec2 instance savings plans, which are at the same place as the other savings plans.
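The "easier to calculate" point can be illustrated with a quick sketch (the instance types and hourly rates below are made-up numbers, not real AWS prices):

```python
# Hypothetical reserved hourly rates per instance type
instances = {
    "t3.medium": {"count": 4, "hourly_rate": 0.0264},
    "m5.large": {"count": 2, "hourly_rate": 0.0550},
}

# The savings plan commitment is just the sum of count * rate
hourly_commitment = sum(
    spec["count"] * spec["hourly_rate"] for spec in instances.values()
)
monthly_commitment = hourly_commitment * 730  # ~730 hours per month
print(f"{hourly_commitment:.4f} $/h, {monthly_commitment:.2f} $/month")
```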
- New: Important notes when doing a savings plan.
- Always use the reservation rates instead of the on-demand rates!
- Analyze your coverage reports. You don't want to have many points of 100% coverage as it means that you're using less resources than you've reserved. On the other hand it's fine to sometimes use less resources than the reserved if that will mean a greater overall savings. It's a tight balance.
- The savings plan reservation is taken into account at the hour level, not at the month or year level. That means that if you reserve 1/hour of an instance type and you use, for example, 2/hour half the day and 0/hour the other half, during the first half of the day you'll have 100% coverage of your plan plus another 1/hour of on-demand infrastructure cost, and during the second half you'll have 0% coverage. This means that you should only reserve the amount of resources you plan to be using 100% of the time throughout your savings plan. Again, you may want to overcommit a little bit, reducing the utilization percentage of the plan but getting better savings in the end.
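The hour-level accounting described above can be sketched as follows (a 1/hour commitment against a day that uses 2/hour half the time and nothing the rest; the currency unit is arbitrary):

```python
commitment = 1.0  # cost/hour reserved through the savings plan
usage = [2.0] * 12 + [0.0] * 12  # on-demand-equivalent usage per hour

# Each hour is settled independently: the commitment covers usage up to
# its size, the overflow is billed on-demand, the shortfall is wasted.
covered = sum(min(hour, commitment) for hour in usage)
on_demand = sum(max(hour - commitment, 0.0) for hour in usage)
unused = sum(max(commitment - hour, 0.0) for hour in usage)

print(covered, on_demand, unused)  # 12.0 12.0 12.0
```

Half the committed money is wasted and half the real usage is still billed on-demand, even though the daily totals match.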
### Monitoring

#### Promtail
- On systems with `systemd`, Promtail also supports reading from the journal. Unlike file scraping, which is defined in the `static_configs` stanza, journal scraping is defined in a `journal` stanza:

  ```yaml
  scrape_configs:
    - job_name: journal
      journal:
        json: false
        max_age: 12h
        path: /var/log/journal
        labels:
          job: systemd-journal
      relabel_configs:
        - source_labels: ['__journal__systemd_unit']
          target_label: unit
        - source_labels: ['__journal__hostname']
          target_label: hostname
        - source_labels: ['__journal_syslog_identifier']
          target_label: syslog_identifier
        - source_labels: ['__journal_transport']
          target_label: transport
        - source_labels: ['__journal_priority_keyword']
          target_label: keyword
  ```

  All fields defined in the `journal` section are optional, and are just provided here for reference.

  - `max_age` ensures that no entry older than the time specified will be sent to Loki; this circumvents `entry too old` errors.
  - `path` tells Promtail where to read journal entries from.
  - The `labels` map defines a constant list of labels to add to every journal entry that Promtail reads.
  - The `matches` field adds journal filters. If multiple filters are specified matching different fields, the log entries are filtered by both; if two filters apply to the same field, then they are automatically matched as alternatives.
  - When the `json` field is set to true, messages from the journal will be passed through the pipeline as JSON, keeping all of the original fields from the journal entry. This is useful when you don't want to index some fields but you still want to know what values they contained.
  - When Promtail reads from the journal, it brings in all fields prefixed with `__journal_` as internal labels. Like in the example above, the `_SYSTEMD_UNIT` field from the journal was transformed into a label called `unit` through `relabel_configs`. Keep in mind that labels prefixed with `__` will be dropped, so relabeling is required to keep these labels. Look at the systemd man pages for a list of fields exposed by the journal.

  By default, Promtail reads from the journal by looking in the `/var/log/journal` and `/run/log/journal` paths. If running Promtail inside of a Docker container, the path appropriate to your distribution should be bind mounted inside of Promtail along with binding `/etc/machine-id`. Bind mounting `/etc/machine-id` to the path of the same name is required for the journal reader to know which specific journal to read from.

  ```bash
  docker run \
    -v /var/log/journal/:/var/log/journal/ \
    -v /run/log/journal/:/run/log/journal/ \
    -v /etc/machine-id:/etc/machine-id \
    grafana/promtail:latest \
    -config.file=/path/to/config/file.yaml
  ```
- New: Scrape docker logs.

  Docker service discovery allows retrieving targets from a Docker daemon. It will only watch containers of the Docker daemon referenced with the `host` parameter. Docker service discovery should run on each node in a distributed setup. The containers must run with either the `json-file` or `journald` logging driver.

  Note that the discovery will not pick up finished containers. That means Promtail will not scrape the remaining logs from finished containers after a restart.

  ```yaml
  scrape_configs:
    - job_name: docker
      docker_sd_configs:
        - host: unix:///var/run/docker.sock
          refresh_interval: 5s
      relabel_configs:
        - source_labels: ['__meta_docker_container_id']
          target_label: docker_id
        - source_labels: ['__meta_docker_container_name']
          target_label: docker_name
  ```

  The available meta labels are:

  - `__meta_docker_container_id`: the ID of the container
  - `__meta_docker_container_name`: the name of the container
  - `__meta_docker_container_network_mode`: the network mode of the container
  - `__meta_docker_container_label_<labelname>`: each label of the container
  - `__meta_docker_container_log_stream`: the log stream type stdout or stderr
  - `__meta_docker_network_id`: the ID of the network
  - `__meta_docker_network_name`: the name of the network
  - `__meta_docker_network_ingress`: whether the network is ingress
  - `__meta_docker_network_internal`: whether the network is internal
  - `__meta_docker_network_label_<labelname>`: each label of the network
  - `__meta_docker_network_scope`: the scope of the network
  - `__meta_docker_network_ip`: the IP of the container in this network
  - `__meta_docker_port_private`: the port on the container
  - `__meta_docker_port_public`: the external port if a port-mapping exists
  - `__meta_docker_port_public_ip`: the public IP if a port-mapping exists

  These labels can be used during relabeling. For instance, the following configuration scrapes the container named `flog` and removes the leading slash (`/`) from the container name:

  ```yaml
  scrape_configs:
    - job_name: flog_scrape
      docker_sd_configs:
        - host: unix:///var/run/docker.sock
          refresh_interval: 5s
          filters:
            - name: name
              values: [flog]
      relabel_configs:
        - source_labels: ['__meta_docker_container_name']
          regex: '/(.*)'
          target_label: 'container'
  ```
## Operating Systems

### Linux

#### Tabs vs Buffers
- New: Switch to the previously opened buffer.

  Often the buffer that you want to edit is the buffer that you have just left. Vim provides a couple of convenient commands to switch back to the previous buffer. These are `<C-^>` (or `<C-6>`) and `:b#`. All of them are inconvenient, so I use the next mapping:

  ```vim
  nnoremap <Leader><Tab> :b#<CR>
  ```
#### Jellyfin
- New: Python library.

  This is the API client from Jellyfin Kodi, extracted as a Python package so that other users may use the API without maintaining a fork of the API client. Please note that this API client is not complete. You may have to add API calls to perform certain tasks.

  It doesn't (yet) support async.
#### journald
- New: Introduce journald.

  journald is a system service that collects and stores logging data. It creates and maintains structured, indexed journals based on logging information that is received from a variety of sources:

  - Kernel log messages, via kmsg
  - Simple system log messages, via the libc `syslog` call
  - Structured system log messages via the native Journal API
  - Standard output and standard error of service units
  - Audit records, originating from the kernel audit subsystem
The daemon will implicitly collect numerous metadata fields for each log message in a secure and unfakeable way.
Journald provides a good out-of-the-box logging experience for systemd. The trade-off is that journald is a bit of a monolith, having everything from log storage and rotation to log transport and search. Some would argue that syslog is more UNIX-y: more lenient, easier to integrate with other tools; which was journald's main criticism to begin with. When the change was made, not everyone agreed with the migration from syslog or the general approach systemd took with journald. But by now systemd is adopted by most Linux distributions, and it includes journald as well. journald happily coexists with syslog daemons, as:
- Some syslog daemons can both read from and write to the journal
- journald exposes the syslog API
It provides lots of features, most importantly:
- Indexing. journald uses a binary storage for logs, where data is indexed. Lookups are much faster than with plain text files.
- Structured logging. Though it’s possible with syslog, too, it’s enforced here. Combined with indexing, it means you can easily filter specific logs (e.g. with a set priority, in a set timeframe).
- Access control. By default, storage files are split by user, with different permissions to each. As a regular user, you won’t see everything root sees, but you’ll see your own logs.
- Automatic log rotation. You can configure journald to keep logs only up to a space limit, or based on free space.
#### Kodi
- New: Extract kodi data from the database.

  At `~/.kodi/userdata/Database/MyVideos116.db` you can extract the data from the next tables:

  - In the `movie_view` table there is:
    - `idMovie`: kodi id for the movie
    - `c00`: Movie title
    - `userrating`
    - `uniqueid_value`: The id of the external web service
    - `uniqueid_type`: The web it extracts the id from
    - `lastPlayed`: The reproduction date
  - In the `tvshow_view` table there is:
    - `idShow`: kodi id of a show
    - `c00`: title
    - `userrating`
    - `lastPlayed`: The reproduction date
    - `uniqueid_value`: The id of the external web service
    - `uniqueid_type`: The web it extracts the id from
  - In the `season_view` there is no interesting data, as the userrating is null on all rows.
  - In the `episode_view` table there is:
    - `idEpisode`: kodi id for the episode
    - `idShow`: kodi id of a show
    - `idSeason`: kodi id of a season
    - `c00`: title
    - `userrating`
    - `lastPlayed`: The reproduction date
    - `uniqueid_value`: The id of the external web service
    - `uniqueid_type`: The web it extracts the id from. I've seen mainly tvdb and sonarr
  - Don't use the `rating` table, as it only stores the ratings from external webs such as themoviedb.
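A sketch of how you could read the movie data with Python's standard `sqlite3` module (the query assumes the `movie_view` columns described above; adjust the database file name to your Kodi version):

```python
import sqlite3


def kodi_movies(db_path: str) -> list[tuple]:
    """Return (title, userrating, lastPlayed) rows from Kodi's movie_view."""
    connection = sqlite3.connect(db_path)
    try:
        return connection.execute(
            "SELECT c00, userrating, lastPlayed FROM movie_view"
        ).fetchall()
    finally:
        connection.close()


# For example:
# kodi_movies("/home/user/.kodi/userdata/Database/MyVideos116.db")
```

Copy the database file before querying it if Kodi is running, as it may hold locks on it.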
#### Matrix Highlight

- New: Introduce matrix_highlight.
Matrix Highlight is a decentralized and federated way of annotating the web based on Matrix.
Think of it as an open source alternative to hypothesis.
It's similar to Populus but for the web.
I want to try it and investigate further, especially whether you can:
- Easily extract the annotations
- Activate it by default everywhere
#### Mediatracker

- New: Introduce python library.

  There is a Python library, although it doesn't (yet) have any documentation and the functionality so far is only to get information, not to push changes.
- With `/api/items?mediaType=tv` you can get a list of all tv shows with the next interesting fields:

  - `id`: mediatracker id
  - `tmdbId`
  - `tvdbId`
  - `imdbId`
  - `title`
  - `lastTimeUpdated`: epoch time
  - `lastSeenAt`: epoch time
  - `seen`: bool
  - `onWatchlist`: bool
  - `firstUnwatchedEpisode`:
    - `id`: mediatracker episode id
    - `episodeNumber`
    - `seasonNumber`
    - `tvShowId`
    - `seasonId`
  - `lastAiredEpisode`: same schema as before

  Then you can use the `api/details/{mediaItemId}` endpoint to get all the information of all the episodes of each tv show.
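As an illustration of what you can do with those fields, here's a hedged sketch that fetches the list and keeps the shows with pending episodes. The base URL and the bearer-token header are assumptions; check how your MediaTracker instance handles authentication:

```python
import json
import urllib.parse
import urllib.request


def unwatched_shows(items: list[dict]) -> list[str]:
    """Titles of the tv shows that still have a first unwatched episode."""
    return [item["title"] for item in items if item.get("firstUnwatchedEpisode")]


def fetch_tv_items(base_url: str, token: str) -> list[dict]:
    """Call the /api/items endpoint described above (auth is an assumption)."""
    query = urllib.parse.urlencode({"mediaType": "tv"})
    request = urllib.request.Request(
        f"{base_url}/api/items?{query}",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)
```

For example, `unwatched_shows(fetch_tv_items("https://mediatracker.example.org", token))` would list what you have left to watch.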
## Arts

### Music

#### Sister Rosetta Tharpe
- New: Introduce Sister Rosetta Tharpe.

  Sister Rosetta Tharpe was a visionary. Born in 1915, she started shredding the guitar in ways that did not exist at that time. Yes, she founded Rock and Roll. It's lovely to see a gospel singer with an electric guitar.

  In this video you'll be able to understand how awesome she was.
## Science

### Data Analysis

#### Parsers
- Parsers are a whole world. I kind of feel a bit lost right now and I'm searching for good books on the topic. So far I've found:

  - Crafting Interpreters by Robert Nystrom

    Pros:

    - Pleasant to read
    - Doesn't use external tools, you implement it from scratch
    - Multiple formats: EPUB, PDF, web
    - You can read it for free
    - Cute drawings <3

    Cons:

    - Code snippets are in Java and C
    - Doesn't use external tools, you implement it from scratch
    - It's long

  - Compilers: Principles, Techniques, and Tools by Alfred V. Aho, Monica S. Lam, Ravi Sethi and Jeffrey D. Ullman

    Pros:

    - EPUB

    Cons:

    - Code snippets are in C++

  - Parsing Techniques: A Practical Guide by Dick Grune and Ceriel J.H. Jacobs

    Pros:

    - Gives an overview of many grammars and parsers

    Cons:

    - Only in PDF
    - It's long
    - Too focused on the theory, despite the name xD
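To ground the vocabulary these books share, here's a tiny recursive-descent parser for integer arithmetic. The grammar and code are my own minimal illustration, not taken from any of the books:

```python
def parse(src: str) -> int:
    """Parse and evaluate expressions like '1+2*3' with a recursive descent
    over the grammar: expr := term (('+'|'-') term)*
                      term := number (('*'|'/') number)*"""
    tokens = list(src.replace(" ", ""))
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def expr():
        nonlocal pos
        value = term()
        while peek() in ('+', '-'):
            op = tokens[pos]
            pos += 1
            value = value + term() if op == '+' else value - term()
        return value

    def term():
        nonlocal pos
        value = number()
        while peek() in ('*', '/'):
            op = tokens[pos]
            pos += 1
            value = value * number() if op == '*' else value // number()
        return value

    def number():
        nonlocal pos
        start = pos
        while peek() is not None and tokens[pos].isdigit():
            pos += 1
        return int("".join(tokens[start:pos]))

    return expr()


print(parse("1+2*3"))  # 7
```

Each grammar rule becomes one function, and precedence falls out of the call structure: `term` binds tighter than `expr`, so multiplication happens before addition.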