September of 2023
Life Management⚑
Task Management⚑
Org Mode⚑
- New: Refile from the capture window.

If you refile from the capture window, your task will be refiled but, until this issue is solved, the capture window won't be closed.

Be careful: it only refiles the first task in the window, so you need to close the capture before refiling the next one.
Coding⚑
Languages⚑
Bash snippets⚑
- New: How to deal with HostContextSwitching alertmanager alert.
A context switch is described as the kernel suspending execution of one process on the CPU and resuming execution of some other process that had previously been suspended. A context switch is required for every interrupt and every task that the scheduler picks.

Context switching can be due to multitasking, interrupt handling, and user/kernel mode switching. The interrupt rate will naturally go high if there is higher network or disk traffic, and it also depends on how often the application invokes system calls.

If the cores/CPUs are not sufficient to handle the load of the threads created by the application, that will also result in context switching.

It is not a cause for concern until performance breaks down: some context switching is expected from the CPU. Don't jump into the kernel activity data at first, since there are many other statistics that should be analyzed before looking into kernel activities. Verify the CPU, memory and network usage during this time.
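For a quick first look you can use `vmstat` (a standard procps tool, shown here as an illustrative sketch); the `cs` column is the number of context switches per second:

```bash
# Print stats every second, five times; check the "cs" column under "system"
vmstat 1 5
```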
You can see which process is causing the issue with the next command:
```bash
pidstat -w 3 10

10:15:24 AM   UID       PID    cswch/s  nvcswch/s  Command
10:15:27 AM     0         1   162656.7    16656.7  systemd
10:15:27 AM     0         9  165451.04   15451.04  ksoftirqd/0
10:15:27 AM     0        10  158628.87   15828.87  rcu_sched
10:15:27 AM     0        11  156147.47   15647.47  migration/0
10:15:27 AM     0        17  150135.71   15035.71  ksoftirqd/1
10:15:27 AM     0        23  129769.61   12979.61  ksoftirqd/2
10:15:27 AM     0        29    2238.38     238.38  ksoftirqd/3
10:15:27 AM     0        43       1753        753  khugepaged
10:15:27 AM     0       443       1659        165  usb-storage
10:15:27 AM     0       456    1956.12     156.12  i915/signal:0
10:15:27 AM     0       465      29550      29550  kworker/3:1H-xfs-log/dm-3
10:15:27 AM     0       490     164700      14700  kworker/0:1H-kblockd
10:15:27 AM     0       506  163741.24   16741.24  kworker/1:1H-xfs-log/dm-3
10:15:27 AM     0       594     154742     154742  dmcrypt_write/2
10:15:27 AM     0       629  162021.65   16021.65  kworker/2:1H-kblockd
10:15:27 AM     0       715  147852.48   14852.48  xfsaild/dm-1
10:15:27 AM     0       886  150706.86   15706.86  irq/131-iwlwifi
10:15:27 AM     0       966  135597.92   13597.92  xfsaild/dm-3
10:15:27 AM    81      1037    2325.25     225.25  dbus-daemon
10:15:27 AM   998      1052   118755.1    11755.1  polkitd
10:15:27 AM    70      1056  158248.51   15848.51  avahi-daemon
10:15:27 AM     0      1061  133512.12     455.12  rngd
10:15:27 AM     0      1110     156230      16230  cupsd
10:15:27 AM     0      1192  152298.02    1598.02  sssd_nss
10:15:27 AM     0      1247  166132.99   16632.99  systemd-logind
10:15:27 AM     0      1265  165311.34   16511.34  cups-browsed
10:15:27 AM     0      1408   10556.57    1556.57  wpa_supplicant
10:15:27 AM     0      1687       3835       3835  splunkd
10:15:27 AM    42      1773       3728       3728  Xorg
10:15:27 AM    42      1996    3266.67     266.67  gsd-color
10:15:27 AM     0      3166   32036.36    3036.36  sssd_kcm
10:15:27 AM 119349     3194  151763.64   11763.64  dbus-daemon
10:15:27 AM 119349     3199     158306      18306  Xorg
10:15:27 AM 119349     3242      15.28        5.8  gnome-shell

pidstat -wt 3 10 > /tmp/pidstat-t.out

Linux 4.18.0-80.11.2.el8_0.x86_64 (hostname)  09/08/2020  _x86_64_  (4 CPU)

10:15:15 AM   UID      TGID       TID    cswch/s  nvcswch/s  Command
10:15:19 AM     0         1         -   152656.7    16656.7  systemd
10:15:19 AM     0         -         1   152656.7    16656.7  |__systemd
10:15:19 AM     0         9         -  165451.04   15451.04  ksoftirqd/0
10:15:19 AM     0         -         9  165451.04   15451.04  |__ksoftirqd/0
10:15:19 AM     0        10         -  158628.87   15828.87  rcu_sched
10:15:19 AM     0         -        10  158628.87   15828.87  |__rcu_sched
10:15:19 AM     0        23         -  129769.61   12979.61  ksoftirqd/2
10:15:19 AM     0         -        23  129769.61   12979.33  |__ksoftirqd/2
10:15:19 AM     0        29         -    32424.5       2445  ksoftirqd/3
10:15:19 AM     0         -        29    32424.5       2445  |__ksoftirqd/3
10:15:19 AM     0        43         -        334         34  khugepaged
10:15:19 AM     0         -        43        334         34  |__khugepaged
10:15:19 AM     0       443         -      11465        566  usb-storage
10:15:19 AM     0         -       443       6433         93  |__usb-storage
10:15:19 AM     0       456         -      15.41       0.00  i915/signal:0
10:15:19 AM     0         -       456      15.41       0.00  |__i915/signal:0
10:15:19 AM     0       715         -      19.34       0.00  xfsaild/dm-1
10:15:19 AM     0         -       715      19.34       0.00  |__xfsaild/dm-1
10:15:19 AM     0       886         -      23.28       0.00  irq/131-iwlwifi
10:15:19 AM     0         -       886      23.28       0.00  |__irq/131-iwlwifi
10:15:19 AM     0       966         -      19.67       0.00  xfsaild/dm-3
10:15:19 AM     0         -       966      19.67       0.00  |__xfsaild/dm-3
10:15:19 AM    81      1037         -       6.89       0.33  dbus-daemon
10:15:19 AM    81         -      1037       6.89       0.33  |__dbus-daemon
10:15:19 AM     0      1038         -   11567.31       4436  NetworkManager
10:15:19 AM     0         -      1038       1.31       0.00  |__NetworkManager
10:15:19 AM     0         -      1088       0.33       0.00  |__gmain
10:15:19 AM     0         -      1094    1340.66       0.00  |__gdbus
10:15:19 AM   998      1052         -   118755.1    11755.1  polkitd
10:15:19 AM   998         -      1052   32420.66      25545  |__polkitd
10:15:19 AM   998         -      1132       0.66       0.00  |__gdbus
```
Then, with the PID that is causing the issue, you can get the details of all the system calls it makes:
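The exact invocation didn't survive the formatting, so take this as a sketch using standard `strace` flags:

```bash
# Attach to the offending PID, follow its children (-f) and
# write a summary of syscall counts and times (-c) to a file
strace -c -f -p <pid> -o /tmp/strace.out
```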
Let this command run for a few minutes while the load/context switch rates are high. It is safe to run on a production system, and you can also run it on a healthy system to get a comparative baseline. Through strace you can debug and troubleshoot the issue by looking at the system calls the process has made.
- New: Redirect stderr of all subsequent commands of a script to a file.

```bash
{
    somecommand
    somecommand2
    somecommand3
} 2>&1 | tee -a "$DEBUGLOG"
```
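If you want the file to receive only stderr while stdout stays on the terminal, a bash-only variant using process substitution would be:

```bash
{
    somecommand
    somecommand2
} 2> >(tee -a "$DEBUGLOG" >&2)  # stderr goes through tee, stdout is untouched
```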
Libraries⚑
- New: Receive keys from a keyserver.

```python
import_result = gpg.recv_keys('server-name', 'keyid1', 'keyid2', ...)
```
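A more complete sketch with the `python-gnupg` library (the keyserver and key id are illustrative):

```python
import gnupg

gpg = gnupg.GPG()
import_result = gpg.recv_keys('keyserver.ubuntu.com', '0x12345678')
print(import_result.count)         # number of keys processed
print(import_result.fingerprints)  # fingerprints of the received keys
```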
BeautifulSoup⚑
- New: Searching by attribute and value.

```python
soup = BeautifulSoup(html)
results = soup.findAll("td", {"valign": "top"})
```
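A self-contained sketch of the same search:

```python
from bs4 import BeautifulSoup

html = "<table><tr><td valign='top'>first</td><td>second</td></tr></table>"
soup = BeautifulSoup(html, "html.parser")
results = soup.findAll("td", {"valign": "top"})
print([td.text for td in results])  # ['first']
```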
Docker⚑
- New: Install a specific version of Docker.

Follow these instructions. If that doesn't install the version of `docker-compose` that you want, use the next snippet:

```bash
VERSION=$(curl --silent https://api.github.com/repos/docker/compose/releases/latest | grep -Po '"tag_name": "\K.*\d')
DESTINATION=/usr/local/bin/docker-compose
sudo curl -L https://github.com/docker/compose/releases/download/${VERSION}/docker-compose-$(uname -s)-$(uname -m) -o $DESTINATION
sudo chmod 755 $DESTINATION
```
If you don't want the latest version, set the `VERSION` variable manually.
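For example, to pin a concrete release instead of querying the GitHub API (the version number here is illustrative):

```bash
VERSION=v2.20.2
```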
Python Snippets⚑
- New: Read the contents of a file.

```python
from pathlib import Path

file_ = Path('/to/some/file')
file_.read_text()
```
- New: Get changed time of a file.

```python
import os

os.path.getmtime(path)
```
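`getmtime` returns a Unix timestamp; if you need a datetime you can convert it, for example (the path is illustrative):

```python
import os
from datetime import datetime

mtime = datetime.fromtimestamp(os.path.getmtime('/etc/hostname'))
print(mtime.isoformat())
```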
Pydantic⚑
- Correction: Initialize attributes at object creation.

`pydantic` recommends using root validators, but it's difficult to understand how to do it and to debug the errors. You also don't have easy access to the default values of the model. I'd rather overwrite the `__init__` method:

```python
class fish(BaseModel):
    name: str
    color: str

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        print("Fish initialization successful!")
        self.color = complex_function()
```
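A quick usage sketch (assuming `BaseModel` is imported from pydantic and `complex_function` is defined elsewhere):

```python
nemo = fish(name='Nemo', color='red')  # prints "Fish initialization successful!"
print(nemo.color)                      # whatever complex_function() returned
```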
questionary⚑
- If you want autocomplete with fuzzy finding use:

```python
import questionary
from prompt_toolkit.completion import FuzzyWordCompleter

# destination_directories is a list of strings defined elsewhere
questionary.autocomplete(
    "Save to (q to cancel): ",
    choices=destination_directories,
    completer=FuzzyWordCompleter(destination_directories),
).ask()
```
DevOps⚑
Infrastructure as Code⚑
Ansible Snippets⚑
- New: Ansible condition that uses a regexp.

```yaml
- name: Check if an instance name or hostname matches a regex pattern
  when: inventory_hostname is not match('molecule-.*')
  fail:
    msg: "not a molecule instance"
```
- New: Ansible-lint doesn't find requirements.

It may be because you're using `requirements.yaml` instead of `requirements.yml`. Create a temporary link from one file to the other, run the command and then remove the link. It will keep working from then on even if you remove the link.

¯\(°_o)/¯
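A minimal sketch of the workaround:

```bash
ln -s requirements.yaml requirements.yml  # give ansible-lint the name it expects
ansible-lint
rm requirements.yml                       # drop the link; it keeps working afterwards
```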
- New: Run task only once.

Add `run_once: true` on the task definition:

```yaml
- name: Do a thing on the first host in a group.
  debug:
    msg: "Yay only prints once"
  run_once: true
```
- New: Ansible add a sleep.

```yaml
- name: Pause for 5 minutes to build app cache
  ansible.builtin.pause:
    minutes: 5
```
Gitea⚑
- Correction: Using `paths-filter` custom action to skip job actions.

```yaml
jobs:
  test:
    if: "!startsWith(github.event.head_commit.message, 'bump:')"
    name: Test
    runs-on: ubuntu-latest
    steps:
      - name: Checkout the codebase
        uses: https://github.com/actions/checkout@v3

      - name: Check if we need to run the molecule tests
        uses: https://github.com/dorny/paths-filter@v2
        id: filter
        with:
          filters: |
            molecule:
              - 'defaults/**'
              - 'tasks/**'
              - 'handlers/**'
              - 'templates/**'
              - 'molecule/**'
              - 'requirements.yaml'
              - '.github/workflows/tests.yaml'

      - name: Run Molecule tests
        if: steps.filter.outputs.molecule == 'true'
        run: make molecule
```
You can find more examples on how to use `paths-filter` here.

- New: Get variables from the environment.
You can configure your `molecule.yaml` file to read variables from the environment with:

```yaml
provisioner:
  name: ansible
  inventory:
    group_vars:
      all:
        my_secret: ${MY_SECRET}
```
It's useful to have a task that checks if this secret exists:
```yaml
- name: Verify that the secret is set
  fail:
    msg: 'Please export my_secret: export MY_SECRET=$(pass show my_secret)'
  run_once: true
  when: my_secret == None
```
In the CI you can set it as a secret in the repository.
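Locally, a minimal sketch would be (assuming the secret lives in `pass`):

```bash
export MY_SECRET=$(pass show my_secret)
molecule test
```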
Infrastructure Solutions⚑
AWS Snippets⚑
- New: Invalidate a CloudFront distribution.

```bash
aws cloudfront create-invalidation --paths "/pages/about" --distribution-id my-distribution-id
```

- New: Remove the lock screen in ubuntu.
Create the `/usr/share/glib-2.0/schemas/90_ubuntu-settings.gschema.override` file with the next content:
```ini
[org.gnome.desktop.screensaver]
lock-enabled = false
[org.gnome.settings-daemon.plugins.power]
idle-dim = false
```
Then reload the schemas with:
```bash
sudo glib-compile-schemas /usr/share/glib-2.0/schemas/
```
Storage⚑
OpenZFS⚑
- New: Replace a drive in the pool.

First let's offline the device we are going to replace:

```bash
zpool offline tank0 ata-WDC_WD2003FZEX-00SRLA0_WD-xxxxxxxxxxxx
```
Now let us have a look at the pool status.
```bash
zpool status

NAME                                            STATE     READ WRITE CKSUM
tank0                                           DEGRADED     0     0     0
  raidz2-1                                      DEGRADED     0     0     0
    ata-TOSHIBA_HDWN180_xxxxxxxxxxxx            ONLINE       0     0     0
    ata-TOSHIBA_HDWN180_xxxxxxxxxxxx            ONLINE       0     0     0
    ata-TOSHIBA_HDWN180_xxxxxxxxxxxx            ONLINE       0     0     0
    ata-WDC_WD80EFZX-68UW8N0_xxxxxxxx           ONLINE       0     0     0
    ata-TOSHIBA_HDWG180_xxxxxxxxxxxx            ONLINE       0     0     0
    ata-TOSHIBA_HDWG180_xxxxxxxxxxxx            ONLINE       0     0     0
    ata-WDC_WD2003FZEX-00SRLA0_WD-xxxxxxxxxxxx  OFFLINE      0     0     0
    ata-ST4000VX007-2DT166_xxxxxxxx             ONLINE       0     0     0
```
Sweet, the device is offline (last time it didn't show as offline for me, but the offline command returned a status code of 0).
Time to shut the server down and physically replace the disk.
```bash
shutdown -h now
```
When you start the server again, it's time to instruct ZFS to replace the removed device with the disk we just installed.

```bash
zpool replace tank0 \
    ata-WDC_WD2003FZEX-00SRLA0_WD-xxxxxxxxxxxx \
    /dev/disk/by-id/ata-TOSHIBA_HDWG180_xxxxxxxxxxxx
```
```bash
zpool status tank0

  pool: main
 state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Fri Sep 22 12:40:28 2023
        4.00T scanned at 6.85G/s, 222G issued at 380M/s, 24.3T total
        54.7G resilvered, 0.89% done, 18:28:03 to go

NAME                                              STATE     READ WRITE CKSUM
tank0                                             DEGRADED     0     0     0
  raidz2-1                                        DEGRADED     0     0     0
    ata-TOSHIBA_HDWN180_xxxxxxxxxxxx              ONLINE       0     0     0
    ata-TOSHIBA_HDWN180_xxxxxxxxxxxx              ONLINE       0     0     0
    ata-TOSHIBA_HDWN180_xxxxxxxxxxxx              ONLINE       0     0     0
    ata-WDC_WD80EFZX-68UW8N0_xxxxxxxx             ONLINE       0     0     0
    ata-TOSHIBA_HDWG180_xxxxxxxxxxxx              ONLINE       0     0     0
    ata-TOSHIBA_HDWG180_xxxxxxxxxxxx              ONLINE       0     0     0
    replacing-6                                   DEGRADED     0     0     0
      ata-WDC_WD2003FZEX-00SRLA0_WD-xxxxxxxxxxxx  OFFLINE      0     0     0
      ata-TOSHIBA_HDWG180_xxxxxxxxxxxx            ONLINE       0     0     0  (resilvering)
    ata-ST4000VX007-2DT166_xxxxxxxx               ONLINE       0     0     0
```
The disk is replaced and getting resilvered, which may take a long time to run (18 hours for an 8TB disk in my case).

Once the resilvering is done, this is what the pool looks like:
```bash
zpool list

NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
tank0  43.5T  33.0T  10.5T     14.5T     7%    75%  1.00x  ONLINE  -
```
If you want to read other blogs that have covered the same topic, check out 1.
- New: Stop a ZFS scrub.

```bash
zpool scrub -s my_pool
```
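You can verify it's no longer running with:

```bash
zpool status my_pool   # the "scan" line shows whether a scrub is in progress
```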
Operating Systems⚑
Linux⚑
Linux Snippets⚑
- New: Limit the resources a docker container is using.

You can either use limits in the `docker` service itself, see 1 and 2.
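As a sketch of the compose route (the service name and values are illustrative; see the linked docs for the full set of options):

```yaml
services:
  app:
    image: nginx
    deploy:
      resources:
        limits:
          cpus: '0.50'   # at most half a CPU
          memory: 512M   # at most 512 MiB of RAM
```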
aleph⚑
- New: Ingest gets stuck.

It looks like Aleph doesn't yet give an easy way to debug it. It can be seen in the next webs:
- Improve the UX for bulk uploading and processing of large number of files
- Document ingestion gets stuck effectively at 100%
- Display detailed ingestion status to see if everything is alright and when the collection is ready
Some interesting ideas I've extracted while diving into these issues are:

- You can also upload files using the `alephclient` python command line tool (see the sketch after this list)
- Some of the files might fail to be processed without leaving any hint to the uploader or the viewer.
- This results in an incomplete dataset and the users don't get to know that the dataset is incomplete. This is problematic if the completeness of the dataset is crucial for an investigation.
- There is no way to upload only the files that failed to be processed without re-uploading the entire set of documents or manually making a list of the failed documents and re-uploading them
- There is no way for uploaders or Aleph admins to see an overview of processing errors to figure out why some files are failing to be processed without going through docker logs (which is not very user-friendly)
- There was an attempt to improve the way ingest-files manages the pending tasks, it's merged into the release/4.0.0 branch, but it has not yet arrived to `main`.
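A sketch of the `alephclient` upload flow (the host, API key and foreign id are illustrative; check `alephclient --help` for the exact flags of your version):

```bash
pip install alephclient
# Crawl a local directory and upload it as a collection
alephclient --host https://aleph.example.org --api-key "$ALEPH_API_KEY" \
    crawldir --foreign-id my-dataset /path/to/documents
```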
There are some tickets that attempt to address these issues on the command line.
I think it's interesting either to contribute to `alephclient` to solve those issues or, if that's complicated, to create a small python script that detects which files were not uploaded and tries to reindex them, and/or open issues that will prevent future ingests from failing.
gitsigns⚑
- New: Introduce gitsigns.

Gitsigns is a neovim plugin to create git decorations similar to the vim plugin gitgutter but written purely in Lua.

Installation:

Add to your `plugins.lua` file:

```lua
use {'lewis6991/gitsigns.nvim'}
```
Install it with `:PackerInstall`.

Configure it in your `init.lua` with:

```lua
-- Configure gitsigns
require('gitsigns').setup({
  on_attach = function(bufnr)
    local gs = package.loaded.gitsigns

    local function map(mode, l, r, opts)
      opts = opts or {}
      opts.buffer = bufnr
      vim.keymap.set(mode, l, r, opts)
    end

    -- Navigation
    map('n', ']c', function()
      if vim.wo.diff then return ']c' end
      vim.schedule(function() gs.next_hunk() end)
      return '<Ignore>'
    end, {expr=true})

    map('n', '[c', function()
      if vim.wo.diff then return '[c' end
      vim.schedule(function() gs.prev_hunk() end)
      return '<Ignore>'
    end, {expr=true})

    -- Actions
    map('n', '<leader>gs', gs.stage_hunk)
    map('n', '<leader>gr', gs.reset_hunk)
    map('v', '<leader>gs', function() gs.stage_hunk {vim.fn.line('.'), vim.fn.line('v')} end)
    map('v', '<leader>gr', function() gs.reset_hunk {vim.fn.line('.'), vim.fn.line('v')} end)
    map('n', '<leader>gS', gs.stage_buffer)
    map('n', '<leader>gu', gs.undo_stage_hunk)
    map('n', '<leader>gR', gs.reset_buffer)
    map('n', '<leader>gp', gs.preview_hunk)
    map('n', '<leader>gb', function() gs.blame_line{full=true} end)
    -- Note: this next mapping overrides the <leader>gb one above, keep only one
    map('n', '<leader>gb', gs.toggle_current_line_blame)
    map('n', '<leader>gd', gs.diffthis)
    map('n', '<leader>gD', function() gs.diffthis('~') end)
    map('n', '<leader>ge', gs.toggle_deleted)

    -- Text object
    map({'o', 'x'}, 'ih', ':<C-U>Gitsigns select_hunk<CR>')
  end
})
```
Usage:
Some interesting bindings:
- `]c`: Go to next diff chunk
- `[c`: Go to previous diff chunk
- `<leader>gs`: Stage chunk, it works both in normal and visual mode
- `<leader>gr`: Restore chunk from index, it works both in normal and visual mode
- `<leader>gp`: Preview diff, you can use it with `]c` and `[c` to see all the chunk diffs
- `<leader>gb`: Show the git blame of the line as a shadowed comment
Diffview⚑
- New: Use the same binding to open and close the diffview windows.

```lua
vim.keymap.set('n', 'dv', function()
  if next(require('diffview.lib').views) == nil then
    vim.cmd('DiffviewOpen')
  else
    vim.cmd('DiffviewClose')
  end
end)
```
Grafana⚑
- Correction: Install grafana.

```yaml
---
version: "3.8"
services:
  grafana:
    image: grafana/grafana-oss:${GRAFANA_VERSION:-latest}
    container_name: grafana
    restart: unless-stopped
    volumes:
      - data:/var/lib/grafana
    networks:
      - grafana
      - monitorization
      - swag
    env_file:
      - .env
    depends_on:
      - db
  db:
    image: postgres:${DATABASE_VERSION:-15}
    restart: unless-stopped
    container_name: grafana-db
    environment:
      - POSTGRES_DB=${GF_DATABASE_NAME:-grafana}
      - POSTGRES_USER=${GF_DATABASE_USER:-grafana}
      - POSTGRES_PASSWORD=${GF_DATABASE_PASSWORD:?database password required}
    networks:
      - grafana
    volumes:
      - db-data:/var/lib/postgresql/data
    env_file:
      - .env

networks:
  grafana:
    external:
      name: grafana
  monitorization:
    external:
      name: monitorization
  swag:
    external:
      name: swag

volumes:
  data:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /data/grafana/app
  db-data:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /data/grafana/database
```
Where the `monitorization` network is where prometheus and the rest of the stack listens, and `swag` the network to the gateway proxy.

It uses the `.env` file to store the required configuration. To connect grafana with authentik you need to add the next variables:

```bash
GF_AUTH_GENERIC_OAUTH_ENABLED="true"
GF_AUTH_GENERIC_OAUTH_NAME="authentik"
GF_AUTH_GENERIC_OAUTH_CLIENT_ID="<Client ID from above>"
GF_AUTH_GENERIC_OAUTH_CLIENT_SECRET="<Client Secret from above>"
GF_AUTH_GENERIC_OAUTH_SCOPES="openid profile email"
GF_AUTH_GENERIC_OAUTH_AUTH_URL="https://authentik.company/application/o/authorize/"
GF_AUTH_GENERIC_OAUTH_TOKEN_URL="https://authentik.company/application/o/token/"
GF_AUTH_GENERIC_OAUTH_API_URL="https://authentik.company/application/o/userinfo/"
GF_AUTH_SIGNOUT_REDIRECT_URL="https://authentik.company/application/o/<Slug of the application from above>/end-session/"
GF_AUTH_OAUTH_AUTO_LOGIN="true"
GF_AUTH_GENERIC_OAUTH_ROLE_ATTRIBUTE_PATH="contains(groups[*], 'Grafana Admins') && 'Admin' || contains(groups[*], 'Grafana Editors') && 'Editor' || 'Viewer'"
```
In the configuration above you can see an example of a role mapping. Upon login, this configuration looks at the groups of which the current user is a member. If any of the specified group names are found, the user will be granted the resulting role in Grafana.
In the example shown above, one of the specified group names is "Grafana Admins". If the user is a member of this group, they will be granted the "Admin" role in Grafana. If the user is not a member of the "Grafana Admins" group, it moves on to see if the user is a member of the "Grafana Editors" group. If they are, they are granted the "Editor" role. Finally, if the user is not found to be a member of either of these groups, it fails back to granting the "Viewer" role.
Also make sure in your configuration that `root_url` is set correctly, otherwise your redirect url might get processed incorrectly. For example, if your grafana instance is running on the default configuration and is accessible behind a reverse proxy at https://grafana.company, your redirect url will end up looking like this: https://grafana.company/.

If you get a `user does not belong to org` error when trying to log into grafana for the first time via OAuth, check if you have an organization with the ID of 1; if not, then you have to add the following to your grafana config:

```ini
[users]
auto_assign_org = true
auto_assign_org_id = <id-of-your-default-organization>
```
Once you've made sure that the oauth works, go to `/admin/users` and remove the `admin` user.

- New: Configure grafana.
Grafana has default and custom configuration files. You can customize your Grafana instance by modifying the custom configuration file or by using environment variables. To see the list of settings for a Grafana instance, refer to View server settings.
To override an option use `GF_<SectionName>_<KeyName>`, where the section name is the text within the brackets. Everything should be uppercase, and `.` and `-` should be replaced by `_`. For example, if you have these configuration settings:

```ini
# default section
instance_name = ${HOSTNAME}

[security]
admin_user = admin

[auth.google]
client_secret = 0ldS3cretKey

[plugin.grafana-image-renderer]
rendering_ignore_https_errors = true

[feature_toggles]
enable = newNavigation
```
You can override variables on Linux machines with:
```bash
export GF_DEFAULT_INSTANCE_NAME=my-instance
export GF_SECURITY_ADMIN_USER=owner
export GF_AUTH_GOOGLE_CLIENT_SECRET=newS3cretKey
export GF_PLUGIN_GRAFANA_IMAGE_RENDERER_RENDERING_IGNORE_HTTPS_ERRORS=true
export GF_FEATURE_TOGGLES_ENABLE=newNavigation
```
And in the docker compose you can edit the `.env` file. Mine looks similar to:

```bash
GRAFANA_VERSION=latest
GF_DEFAULT_INSTANCE_NAME="production"
GF_SERVER_ROOT_URL="https://your.domain.org"
GF_DATABASE_TYPE=postgres
DATABASE_VERSION=15
GF_DATABASE_HOST=grafana-db:5432
GF_DATABASE_NAME=grafana
GF_DATABASE_USER=grafana
GF_DATABASE_PASSWORD="change-for-a-long-password"
GF_DATABASE_SSL_MODE=disable
GF_AUTH_GENERIC_OAUTH_ENABLED="true"
GF_AUTH_GENERIC_OAUTH_NAME="authentik"
GF_AUTH_GENERIC_OAUTH_CLIENT_ID="<Client ID from above>"
GF_AUTH_GENERIC_OAUTH_CLIENT_SECRET="<Client Secret from above>"
GF_AUTH_GENERIC_OAUTH_SCOPES="openid profile email"
GF_AUTH_GENERIC_OAUTH_AUTH_URL="https://authentik.company/application/o/authorize/"
GF_AUTH_GENERIC_OAUTH_TOKEN_URL="https://authentik.company/application/o/token/"
GF_AUTH_GENERIC_OAUTH_API_URL="https://authentik.company/application/o/userinfo/"
GF_AUTH_SIGNOUT_REDIRECT_URL="https://authentik.company/application/o/<Slug of the application from above>/end-session/"
GF_AUTH_OAUTH_AUTO_LOGIN="true"
GF_AUTH_GENERIC_OAUTH_ROLE_ATTRIBUTE_PATH="contains(groups[*], 'Grafana Admins') && 'Admin' || contains(groups[*], 'Grafana Editors') && 'Editor' || 'Viewer'"
```
- New: Configure datasources.

You can manage data sources in Grafana by adding YAML configuration files in the `provisioning/datasources` directory. Each config file can contain a list of datasources to add or update during startup. If the data source already exists, Grafana reconfigures it to match the provisioned configuration file.

The configuration file can also list data sources to automatically delete, called `deleteDatasources`. Grafana deletes the data sources listed in `deleteDatasources` before adding or updating those in the datasources list.

For example, to configure a Prometheus datasource use:

```yaml
apiVersion: 1

datasources:
  - name: Prometheus
    type: prometheus
    access: proxy  # Access mode - proxy (server in the UI) or direct (browser in the UI).
    url: http://prometheus:9090
    jsonData:
      httpMethod: POST
      manageAlerts: true
      prometheusType: Prometheus
      prometheusVersion: 2.44.0
      cacheLevel: 'High'
      disableRecordingRules: false
      incrementalQueryOverlapWindow: 10m
      exemplarTraceIdDestinations: []
```
- New: Configure dashboards.

You can manage dashboards in Grafana by adding one or more YAML config files in the `provisioning/dashboards` directory. Each config file can contain a list of dashboards providers that load dashboards into Grafana from the local filesystem.

Create one file called `dashboards.yaml` with the next contents:

```yaml
---
apiVersion: 1

providers:
  - name: default  # A uniquely identifiable name for the provider
    type: file
    options:
      path: /etc/grafana/provisioning/dashboards/definitions
```

Then inside the config directory of your docker compose create the directory `provisioning/dashboards/definitions` and add the json of the dashboards themselves. You can download them from the dashboard pages.

- New: Install plugins.
To install plugins in the Docker container, complete the following steps:
- Pass the plugins you want to be installed to Docker with the `GF_INSTALL_PLUGINS` environment variable as a comma-separated list.
- This sends each plugin name to `grafana-cli plugins install ${plugin}` and installs them when Grafana starts.
For example:
```bash
docker run -d -p 3000:3000 --name=grafana \
  -e "GF_INSTALL_PLUGINS=grafana-clock-panel, grafana-simple-json-datasource" \
  grafana/grafana-oss
```
To specify the version of a plugin, add the version number to the `GF_INSTALL_PLUGINS` environment variable. For example: `GF_INSTALL_PLUGINS=grafana-clock-panel 1.0.1`.

To install a plugin from a custom URL, use the following convention to specify the URL: `<url to plugin zip>;<plugin install folder name>`. For example: `GF_INSTALL_PLUGINS=https://github.com/VolkovLabs/custom-plugin.zip;custom-plugin`.
- Correction: Improve installation method.

Add more configuration values such as:

```bash
GF_SERVER_ENABLE_GZIP="true"
GF_AUTH_GENERIC_OAUTH_ALLOW_ASSIGN_GRAFANA_ADMIN="true"
GF_LOG_MODE="console file"
GF_LOG_LEVEL="info"
```
- Correction: Warning when configuring datasources.

Be careful to set the `timeInterval` variable to the value of how often you scrape the data from the node exporter to avoid this issue.
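For example, if prometheus scrapes the node exporter every 60 seconds, the provisioned datasource would look something like this (a sketch following the provisioning format above):

```yaml
apiVersion: 1

datasources:
  - name: Prometheus
    type: prometheus
    url: http://prometheus:9090
    jsonData:
      timeInterval: 60s  # match your prometheus scrape_interval
```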
Jellyfin⚑
- New: Forgot Password. Please try again within your home network to initiate the password reset process.
If you're an external jellyfin user you can't reset your password unless you are part of the LAN. This is done because the reset password process is simple and insecure.
If you don't care about that and still think that the internet is a happy and safe place, here and here are some instructions on how to bypass the security measure.
Matrix⚑
- New: How to install matrix.

```bash
sudo apt install -y wget apt-transport-https
sudo wget -O /usr/share/keyrings/element-io-archive-keyring.gpg https://packages.element.io/debian/element-io-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/element-io-archive-keyring.gpg] https://packages.element.io/debian/ default main" | sudo tee /etc/apt/sources.list.d/element-io.list
sudo apt update
sudo apt install element-desktop
```
Mediatracker⚑
- Correction: Update ryot comparison with mediatracker.

Ryot has a better web design, and it also has a jellyfin scrobbler, although it's not yet stable. There are other UI tweaks that are preventing me from migrating to ryot, such as the easier media rating and the percentage-over-five-stars rating system.
retroarch⚑
- New: Install retroarch instructions.

To add the stable branch to your system type:

```bash
sudo add-apt-repository ppa:libretro/stable
sudo apt-get update
sudo apt-get install retroarch
```
Go to Main Menu/Online Updater and then update everything you can:
- Update Core Info Files
- Update Assets
- Update controller Profiles
- Update Databases
- Update Overlays
- Update GLSL Shaders
Vim⚑
- New: Update treesitter language definitions.

To do so you need to run:

```vim
:TSInstall <language>
```

To update the parsers run:

```vim
:TSUpdate
```
- New: Telescope changes working directory when opening a file.

In my case it was due to a snippet I have to remember the folds:

```lua
vim.cmd[[
  augroup remember_folds
    autocmd!
    autocmd BufWinLeave * silent! mkview
    autocmd BufWinEnter * silent! loadview
  augroup END
]]
```

It looks like it had saved a view with the other working directory, so when a file was loaded the `cwd` changed. To solve it I created a new `mkview` in the correct directory.
Arts⚑
Dancing⚑
Lindy Hop⚑
- New: New Charleston, lindy and solo jazz videos.
Charleston:
- The DecaVita Sisters:
- Freestyle Lindy Hop & Charleston
- Moby "Honey"
Solo Jazz:
Lindy Hop:
- The DecaVita Sisters:
- Compromise - agreement in the moment
- Lindy hop improv
Maker⚑
Vial⚑
- New: Introduce Vial.

Vial is an open-source cross-platform (Windows, Linux and Mac) GUI and a QMK fork for configuring your keyboard in real time.

Even though you can use a web version, you can install it locally through an AppImage:
- Download the latest version
- Give it execution permissions
- Add the file somewhere in your `$PATH`
On linux you need to configure a `udev` rule.

For a universal access rule for any device with Vial firmware, run this in your shell while logged in as your user (this will only work with sudo installed):
```bash
export USER_GID=`id -g`; sudo --preserve-env=USER_GID sh -c 'echo "KERNEL==\"hidraw*\", SUBSYSTEM==\"hidraw\", ATTRS{serial}==\"*vial:f64c2b3c*\", MODE=\"0660\", GROUP=\"$USER_GID\", TAG+=\"uaccess\", TAG+=\"udev-acl\"" > /etc/udev/rules.d/99-vial.rules && udevadm control --reload && udevadm trigger'
```
This command will automatically create a `udev` rule and reload the `udev` system.
Video Gaming⚑
Age of Empires⚑
- New: How to fight Vietnamese with Mongols.

Gain early map control with scouts, then switch into steppe lancers and front siege, and finally drop a castle in their face when you click up to imperial.
Gardening⚑
- Manure is one of the best organic fertilizers for plants. It's made by the accumulation of excrements of bats, sea birds and seals, and it usually doesn't contain additives or synthetic chemical components.
This fertilizer is rich in nitrogen, phosphorus and potassium, which are key minerals for the growth of plants. These components help the regeneration of the soil and its enrichment in nutrients, and also act as a fungicide preventing plagues.
Manure is a slow-absorption fertilizer, which means that it's released to the plants at an efficient, controlled and slow pace. That way the plants take the nutrients when they need them.

The best moment to use it is in spring and, depending on the type of plant, you should apply it every month and a half to three months. Its use in winter is not recommended, as it may burn the plant's roots.

Manure can be obtained in dust or liquid state. The first is perfect to scatter directly over the earth, while the second is better used on plant pots. You don't need to use much; in fact, a couple of spoonfuls per pot is enough. Apply it around the base of the plant, avoiding contact with leaves, stems or exposed roots, as it may burn them. After you apply it remember to water often; keep in mind that it's like a heavy greasy sandwich for the plants, and they need water to digest it.
For my indoor plants I'm going to apply a small dose (one spoon per plant) at the start of Autumn (first days of September), and two spoons at the start of spring (first days of March).