# 9th Week of 2023

## Life Management

### Task Management

#### Getting Things Done

- The focus of this process is to capture everything that has your attention; otherwise some part of you will still not totally trust that you're working with the whole picture. While you're doing it, create a list of all the sources of input in your world.

    What you're going to do is methodically go through each piece of your life and search for anything that doesn't permanently belong where it is, the way it is, and put it into your in-tray. You'll be gathering things that are incomplete or that have some decision about potential action tied to them. They all go into your "inbox", so they'll be available for later processing.

    Be patient: this process may take between one and six hours, and at the end you'll have a huge pile of stuff in your inbox. You might get scared and think "what am I doing with my life?", but don't worry, you'll get everything in order soon :).

- New: Digital general reference.

    It is very helpful to have a visual map sorted in ways that make sense, either by indexes or by data groups organized effectively, usually in alphabetical order.

    The biggest issue for digitally oriented people is that the ease of capturing and storing has generated a write-only syndrome: all they're doing is capturing information, not actually accessing and using it intelligently. Some consciousness needs to be applied to keep one's potentially huge digital library functional, rather than letting it become a black hole of data easily dumped in there with a couple of keystrokes.

    You need to consistently assess how much room to give yourself so that the content remains meaningfully and easily accessible, without creating a black hole of an inordinate amount of amorphously organized information.

- New: Physical general reference.

    One idea is to have a single system/place where you order the content alphabetically, not multiple ones. People have a tendency to want to use their files as a personal management system, and therefore they attempt to organize them in groupings by projects or areas of focus. This geometrically magnifies the number of places something isn't when you forget where you filed it.

- New: Use the telescope plugin for refiling.

    Refiling lets you easily move around elements of your org file, such as headings or TODOs. You can refile with `<leader>r` after adding the next snippet to your config:

    ```lua
    org = {
      org_refile = '<leader>r',
    }
    ```

    When you press the refile key binding you are supposed to press `<tab>` to see the available options; once you select the correct file, you will be shown an autocomplete with the possible items to refile it to. Luckily there is a Telescope plugin. Install it by adding this to your plugin config:

    ```lua
    use 'joaomsa/telescope-orgmode.nvim'
    ```

    Then install it with `:PackerInstall`. You can set up the extension by doing:

    ```lua
    require('telescope').load_extension('orgmode')
    ```

    To replace the default refile prompt:

    ```lua
    vim.api.nvim_create_autocmd('FileType', {
      pattern = 'org',
      group = vim.api.nvim_create_augroup('orgmode_telescope_nvim', { clear = true }),
      callback = function()
        vim.keymap.set('n', '<leader>r', require('telescope').extensions.orgmode.refile_heading)
        vim.keymap.set('n', '<leader>g', require('telescope').extensions.orgmode.search_headings)
      end,
    })
    ```

    If the autocommand doesn't override the default `orgmode` binding, bind the default one to other keys and never use it. The plugin also allows you to use `telescope` to search through the headings of the different files with `search_headings`; with the configuration above you'd use `<leader>g`.

## Coding

### Languages

#### SQLite

- New: Import a table from another database.

    If you have an SQLite database named `database1` with a table `t1`, and `database2` with a table `t2`, and you want to import table `t2` from `database2` into `database1`, you need to open `database1` with `litecli`:

    ```bash
    litecli database1
    ```

    Attach the other database with the command:

    ```sql
    ATTACH 'database2file' AS db2;
    ```

    Then create the table `t2`, and copy the data over with:

    ```sql
    INSERT INTO t2 SELECT * FROM db2.t2;
    ```

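    Putting the steps together, a minimal sketch of the whole operation; it uses `sqlite3` instead of the `litecli` prompt (both accept the same statements), and the file names are the example ones from above:

    ```bash
    sqlite3 database1 <<'EOF'
    ATTACH 'database2file' AS db2;
    -- CREATE TABLE ... AS SELECT creates and fills t2 in one step; note it
    -- doesn't copy indexes or constraints, so create the table explicitly
    -- first if you need those.
    CREATE TABLE t2 AS SELECT * FROM db2.t2;
    DETACH db2;
    EOF
    ```
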
## DevOps

### Infrastructure Solutions

#### Kubectl Commands

- New: Show the remaining space of a persistent volume claim.

    Either look it up in Prometheus or run `df` in the pod that has the PVC mounted:

    ```bash
    kubectl -n <namespace> exec <pod-name> -- df -ah
    ```

    You may need to use `kubectl get pod <pod-name> -o yaml` to know which volume is mounted where.

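    A couple of shortcuts for the same inspection; the `media` namespace and `jellyfin-0` pod are hypothetical names to substitute with your own:

    ```bash
    # Dump just the volumes section instead of the whole manifest
    kubectl -n media get pod jellyfin-0 -o jsonpath='{.spec.volumes}'

    # Once you know the mount path, check only that filesystem
    kubectl -n media exec jellyfin-0 -- df -h /config
    ```
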
## Operating Systems

### Linux

#### Linux Snippets

- New: Force umount an NFS mounted directory.

    ```bash
    umount -l path/to/mounted/dir
    ```

    The `-l` flag performs a lazy unmount: the mount point is detached immediately, and the filesystem is cleaned up once it stops being busy.

- New: Configure fstab to mount NFS.

    NFS stands for 'Network File System'. This mechanism allows Unix machines to share files and directories over the network. Using this feature, a Linux machine can mount a remote directory (residing on an NFS server machine) just like a local directory and can access files from it.

    An NFS share can be mounted on a machine by adding a line to the `/etc/fstab` file. The default syntax for an `fstab` entry of an NFS mount is as follows:

    ```
    Server:/path/to/export /local_mountpoint nfs <options> 0 0
    ```

    Where:

    - `Server`: The hostname or IP address of the NFS server where the exported directory resides.
    - `/path/to/export`: The shared directory (exported folder) path.
    - `/local_mountpoint`: An existing directory on the host where you want to mount the NFS share.

    You can specify a number of options that you want to set on the NFS mount (a complete example entry follows the list):

    - `soft`/`hard`: When the mount option `hard` is set, if the NFS server crashes or becomes unresponsive, the NFS requests will be retried indefinitely. You can set the mount option `intr` so that the process can be interrupted. When the NFS server comes back online, the process can be continued from where it was while the server was unresponsive.

        When the option `soft` is set, the process is reported an error when the NFS server is unresponsive after waiting for a period of time (defined by the `timeo` option). In certain cases the `soft` option can cause data corruption and loss of data, so it is recommended to use the `hard` and `intr` options.

    - `noexec`: Prevents the execution of binaries on the mounted file system. This is useful if the system is mounting a non-Linux file system via NFS containing incompatible binaries.
    - `nosuid`: Disables the set-user-identifier and set-group-identifier bits. This prevents remote users from gaining higher privileges by running a setuid program.
    - `tcp`: Specifies that the NFS mount use the TCP protocol.
    - `udp`: Specifies that the NFS mount use the UDP protocol.
    - `nofail`: Prevents issues when rebooting the host. The downside is that services that depend on the volume being mounted won't behave as expected if it fails to mount.

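    As a worked example, a hypothetical entry following the recommendations above (adjust the server name, export path, and mount point to your setup):

    ```
    # /etc/fstab
    nfs-server.example.com:/export/data  /mnt/data  nfs  hard,intr,nofail  0 0
    ```
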
- New: Fix the limit on the number of inotify watches.

    Programs that sync files, such as Dropbox, git, etc., use inotify to notice changes to the file system. The current limit can be seen with:

    ```bash
    cat /proc/sys/fs/inotify/max_user_watches
    ```

    For me, it shows `65536`. When this limit is not enough to monitor all files inside a directory, an error is thrown. If you want to increase the number of inotify watchers, run the following in a terminal:

    ```bash
    echo fs.inotify.max_user_watches=100000 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
    ```

    Where `100000` is the desired number of inotify watches.

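    If you'd rather keep the setting in a drop-in file than append to `/etc/sysctl.conf`, the following sketch should be equivalent on distributions that read `/etc/sysctl.d/` (the `40-inotify.conf` file name is an arbitrary choice):

    ```bash
    echo fs.inotify.max_user_watches=100000 | sudo tee /etc/sysctl.d/40-inotify.conf
    sudo sysctl --system
    ```
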
#### Anki

- New: How long to do study sessions.

    I have two study modes:

    - When I'm up to date with my cards, I study them until I finish, which usually takes less than 15 minutes.
    - If I have been lazy and haven't checked them in a while (like now), I assume I'm not going to see them all, and I define a limited amount of time to review them, say 10 to 20 minutes depending on the time and energy I have at the moment.

    The relieving thought is that as long as you keep a steady pace of 10/20 minutes each day, you'll inevitably finish your pending cards, as you're more effective at reviewing cards than at entering new ones.

- New: What to do with "hard" cards.

    If you're afraid of getting stuck in a loop of reviewing "hard" cards, don't be. In reality, after you've seen that "hard" card three times in a row you won't mark it as hard again, because you will remember it. If you don't, there may be two reasons:

    - The card has too much information and should be subdivided into smaller cards.
    - You're not doing a good job of memorizing the contents when they show up.

#### Jellyfin

- New: Fix Corrupt: SQLitePCL.pretty.SQLiteException: database disk image is malformed.

    If your server log file shows SQLite errors like the following example, your `jellyfin.db` file needs attention:

    ```
    'SQLitePCL.pretty.SQLiteException'
    ```

    Typical causes of this are sudden and abrupt terminations of the server process, such as a power loss, an operating system crash, or force-killing the server process.

    To solve it, it helps to know that Jellyfin stores the watched information in one of the `.db` files, and that there are two ways to restore it:

    - Using scripts that interact with the API, like `jelly-jar` or `jellyfin-backup-watched`.
    - Running sqlite queries on the database itself.

    The user data is stored in the `UserDatas` table of the `library.db` database file. The media data is stored in the `TypedBaseItems` table of the same database.

    Comparing the contents of the tables of the broken database (lost watched content) and a backup database, I've seen that the media content is the same after a full library rescan, so the issue was fixed by injecting the missing user data from the backup into the working database through the "import a table from another database" sqlite operation described above.

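    A minimal sketch of that injection, assuming the backup lives at `/backup/library.db`, the working database at `/var/lib/jellyfin/data/library.db`, and that the `key` and `userId` columns identify a row in `UserDatas` (all three are assumptions, so verify the paths and schema of your version first):

    ```bash
    # Stop Jellyfin before touching the database
    sqlite3 /var/lib/jellyfin/data/library.db <<'EOF'
    ATTACH '/backup/library.db' AS backup;
    -- Copy only the user data rows missing from the working database
    INSERT INTO UserDatas
    SELECT * FROM backup.UserDatas AS b
    WHERE NOT EXISTS (
      SELECT 1 FROM UserDatas AS u
      WHERE u.key = b.key AND u.userId = b.userId
    );
    DETACH backup;
    EOF
    ```
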
- New: Fix ReadOnly: SQLitePCL.pretty.SQLiteException: attempt to write a readonly database.

    Some of the database files of Jellyfin are not writable by the jellyfin user. Check whether you changed the ownership of the files, for example in the process of restoring a database file from a backup.

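    A sketch of the fix, assuming the default data directory `/var/lib/jellyfin/data` and a `jellyfin` user and group (adjust both to your installation); the glob also catches the `-wal` and `-shm` journal files:

    ```bash
    sudo chown jellyfin:jellyfin /var/lib/jellyfin/data/*.db*
    ```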