February of 2023
Life Management⚑
Task Management⚑
Org Mode⚑
- New: Introduce Nvim Org Mode.

  nvim-orgmode is an Org-mode clone written in Lua for Neovim. Org-mode is a flexible note-taking system that was originally created for Emacs. It has gained widespread acclaim and was eventually ported to Neovim.

  The article includes:
Computer configuration management⚑
- New: Introduce configuration management.

  Configuring your devices is boring, disgusting and complex, especially when your device dies and you need to reinstall. You usually don't have the time or energy to deal with it; you just want it to work.

  Having a system that allows you to recover from a disaster is expensive in both time and knowledge, and many people have different solutions.

  This article shows the latest step of how I'm doing it.
Coding⚑
Languages⚑
Python⚑
- New: Move a file.

  Use one of the following:

  ```python
  import os
  import shutil

  os.rename("path/to/current/file.foo", "path/to/new/destination/for/file.foo")
  os.replace("path/to/current/file.foo", "path/to/new/destination/for/file.foo")
  shutil.move("path/to/current/file.foo", "path/to/new/destination/for/file.foo")
  ```
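  If you're already working with `pathlib`, here's a small sketch (not from the article; the paths are placeholders) of the equivalent with `Path` objects:

  ```python
  from pathlib import Path

  src = Path("path/to/current/file.foo")
  dst = Path("path/to/new/destination/for/file.foo")

  # Path.rename behaves like os.rename, so it may fail across filesystems;
  # fall back to shutil.move in that case.
  dst.parent.mkdir(parents=True, exist_ok=True)
  src.rename(dst)
  ```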
PDM⚑
- During the build, you may want to generate other files or download resources from the internet. You can achieve this with the setup-script build configuration:

  ```toml
  [tool.pdm.build]
  setup-script = "build.py"
  ```

  In the `build.py` script, pdm-pep517 looks for a build function and calls it with two arguments:

  * `src`: the path to the source directory
  * `dst`: the path to the distribution directory

  Example:

  ```python
  import os


  def build(src, dst):
      target_file = os.path.join(dst, "mypackage/myfile.txt")
      os.makedirs(os.path.dirname(target_file), exist_ok=True)
      download_file_to(dst)
  ```

  The generated file will be copied to the resulting wheel with the same hierarchy; you need to create the parent directories if necessary.
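  As a slightly fuller sketch (not from the article; the URL and file names are made up), a `build.py` that actually downloads a resource into the distribution directory could look like this:

  ```python
  import os
  import urllib.request


  def build(src, dst):
      # Hypothetical example: fetch a data file into the wheel's package directory.
      target_file = os.path.join(dst, "mypackage/myfile.txt")
      os.makedirs(os.path.dirname(target_file), exist_ok=True)
      urllib.request.urlretrieve("https://example.com/myfile.txt", target_file)
  ```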
Libraries⚑
- New: How to encrypt a file.

  ```python
  gpg.encrypt_file('path/to/file', recipients)
  ```

  Where `recipients` is a `List[str]` of gpg Key IDs.
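  For context, a minimal sketch (my own, not from the article) of how that call fits together with python-gnupg; the key ID and paths are placeholders:

  ```python
  import gnupg

  gpg = gnupg.GPG()
  recipients = ["0123456789ABCDEF"]  # placeholder key ID

  # Depending on your python-gnupg version you may need to pass a file object
  # instead of a path.
  result = gpg.encrypt_file('path/to/file', recipients, output='path/to/file.gpg')
  if not result.ok:
      print(result.status)
  ```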
- New: List the recipients that can decrypt a file.

  ```python
  def list_recipients(self, path: Path) -> List['GPGKey']:
      """List the keys that can decrypt a file.

      Args:
          path: Path to the file to check.
      """
      keys = []
      for short_key in self.gpg.get_recipients_file(str(path)):
          keys.append(self.gpg.list_keys(keys=[short_key])[0]['fingerprint'])

      return keys
  ```

- New: Use a proxy with requests.
  ```python
  http_proxy = "http://10.10.1.10:3128"
  https_proxy = "https://10.10.1.11:1080"
  ftp_proxy = "ftp://10.10.1.10:3128"

  proxies = {
      "http": http_proxy,
      "https": https_proxy,
      "ftp": ftp_proxy,
  }

  r = requests.get(url, headers=headers, proxies=proxies)
  ```
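  As a related sketch (not from the article): requests also honors the standard proxy environment variables, so you can configure the proxy once for the whole process:

  ```python
  import os

  import requests

  # Equivalent to passing proxies= on every call; requests picks these up
  # (HTTP_PROXY, HTTPS_PROXY, NO_PROXY) through urllib's getproxies().
  os.environ["HTTP_PROXY"] = "http://10.10.1.10:3128"
  os.environ["HTTPS_PROXY"] = "https://10.10.1.11:1080"

  r = requests.get("https://example.com")
  ```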
Configure Docker to host the application⚑
- New: Troubleshoot Docker python not showing prints.

  Use `CMD ["python","-u","main.py"]` instead of `CMD ["python","main.py"]`.
- New: Prevent `pip install -r requirements.txt` from running on each docker build.

  I'm assuming that at some point in your build process, you're copying your entire application into the Docker image with COPY or ADD:

  ```dockerfile
  COPY . /opt/app
  WORKDIR /opt/app
  RUN pip install -r requirements.txt
  ```

  The problem is that you're invalidating the Docker build cache every time you're copying the entire application into the image. This will also invalidate the cache for all subsequent build steps.

  To prevent this, I'd suggest copying only the requirements.txt file in a separate build step before adding the entire application into the image:

  ```dockerfile
  COPY requirements.txt /opt/app/requirements.txt
  WORKDIR /opt/app
  RUN pip install -r requirements.txt
  COPY . /opt/app
  ```

- New: Get the difference of two lists.

  If we want to subtract the elements of one list from the other you can use:

  ```python
  for x in b:
      if x in a:
          a.remove(x)
  ```
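  Note that this mutates `a` in place. If you'd rather keep `a` intact, here's a small sketch of a non-mutating alternative:

  ```python
  a = [1, 2, 3, 4]
  b = [2, 4]

  difference = [x for x in a if x not in b]  # [1, 3]
  # Or, if order and duplicates don't matter:
  difference_set = set(a) - set(b)  # {1, 3}
  ```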
- New: Override entrypoint.

  ```bash
  sudo docker run -it --entrypoint /bin/bash [docker_image]
  ```
Click⚑
- New: Split stdout from stderr in tests.

  By default the `runner` is configured to mix `stdout` and `stderr`. If you wish to tell apart both sources use:

  ```python
  def test(runner: CliRunner):
      ...
      runner.mix_stderr = False
  ```
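  A minimal sketch of a full test (the `cli` command and package name are made up); you can also pass `mix_stderr=False` when building the `CliRunner`:

  ```python
  from click.testing import CliRunner

  from mypackage.cli import cli  # hypothetical Click command


  def test_error_goes_to_stderr():
      runner = CliRunner(mix_stderr=False)

      result = runner.invoke(cli, ["--unknown-flag"])

      assert result.exit_code != 0
      # result.stderr is only available when mix_stderr is False.
      assert "unknown-flag" in result.stderr
  ```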
Promql⚑
- Selecting series:

    - Select latest sample for series with a given metric name: `node_cpu_seconds_total`
    - Select 5-minute range of samples for series with a given metric name: `node_cpu_seconds_total[5m]`
    - Only series with given label values: `node_cpu_seconds_total{cpu="0",mode="idle"}`
    - Complex label matchers (`=`: equality, `!=`: non-equality, `=~`: regex match, `!~`: negative regex match): `node_cpu_seconds_total{cpu!="0",mode=~"user|system"}`
    - Select data from one day ago and shift it to the current time: `process_resident_memory_bytes offset 1d`

- Rates of increase for counters:

    - Per-second rate of increase, averaged over last 5 minutes: `rate(demo_api_request_duration_seconds_count[5m])`
    - Per-second rate of increase, calculated over last two samples in a 1-minute time window: `irate(demo_api_request_duration_seconds_count[1m])`
    - Absolute increase over last hour: `increase(demo_api_request_duration_seconds_count[1h])`

- Aggregating over multiple series:

    - Sum over all series: `sum(node_filesystem_size_bytes)`
    - Preserve the instance and job label dimensions: `sum by(job, instance) (node_filesystem_size_bytes)`
    - Aggregate away the instance and job label dimensions: `sum without(instance, job) (node_filesystem_size_bytes)`

  Available aggregation operators: `sum()`, `min()`, `max()`, `avg()`, `stddev()`, `stdvar()`, `count()`, `count_values()`, `group()`, `bottomk()`, `topk()`, `quantile()`.

- Time:

    - Get the Unix time in seconds at each resolution step: `time()`
    - Get the age of the last successful batch job run: `time() - demo_batch_last_success_timestamp_seconds`
    - Find batch jobs which haven't succeeded in an hour: `time() - demo_batch_last_success_timestamp_seconds > 3600`
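If you want to run these queries from a script, here's a small sketch (not from the article) against the Prometheus HTTP API; the server URL is an assumption:

```python
import requests

PROMETHEUS_URL = "http://localhost:9090"  # assumed local Prometheus

response = requests.get(
    f"{PROMETHEUS_URL}/api/v1/query",
    params={"query": "sum by(job, instance) (node_filesystem_size_bytes)"},
)
for result in response.json()["data"]["result"]:
    print(result["metric"], result["value"])
```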
rich⚑
- New: Tree console view.

  Rich has a `Tree` class which can generate a tree view in the terminal. A tree view is a great way of presenting the contents of a filesystem or any other hierarchical data. Each branch of the tree can have a label which may be text or any other Rich renderable.

  The following code creates and prints a tree with a simple text label:

  ```python
  from rich.tree import Tree
  from rich import print

  tree = Tree("Rich Tree")
  print(tree)
  ```

  With only a single `Tree` instance this will output nothing more than the text "Rich Tree". Things get more interesting when we call `add()` to add more branches to the `Tree`. The following code adds two more branches:

  ```python
  tree.add("foo")
  tree.add("bar")
  print(tree)
  ```

  The tree will now have two branches connected to the original tree with guide lines.

  When you call `add()` a new `Tree` instance is returned. You can use this instance to add more branches to, and build up a more complex tree. Let's add a few more levels to the tree:

  ```python
  baz_tree = tree.add("baz")
  baz_tree.add("[red]Red").add("[green]Green").add("[blue]Blue")
  print(tree)
  ```
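  Building on that, a short sketch (not from the article) that renders the current directory as a tree:

  ```python
  from pathlib import Path

  from rich import print
  from rich.tree import Tree


  def add_directory(directory: Path, tree: Tree) -> None:
      """Recursively add the contents of a directory to a Rich tree."""
      for path in sorted(directory.iterdir()):
          branch = tree.add(f"[bold]{path.name}" if path.is_dir() else path.name)
          if path.is_dir():
              add_directory(path, branch)


  root = Path(".")
  tree = Tree(str(root.resolve()))
  add_directory(root, tree)
  print(tree)
  ```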
Selenium⚑
- New: Solve element isn't clickable in headless mode.

  There are many things you can try to fix this issue. The first is to configure the `driver` to use the full screen. Assuming you're using the undetected chromedriver:

  ```python
  import undetected_chromedriver.v2 as uc

  options = uc.ChromeOptions()
  options.add_argument("--disable-dev-shm-usage")
  options.add_argument("--no-sandbox")
  options.add_argument("--headless")
  options.add_argument("--start-maximized")
  options.add_argument("--window-size=1920,1080")
  driver = uc.Chrome(options=options)
  ```

  If that doesn't solve the issue use the next function:

  ```python
  from typing import Optional

  from selenium.webdriver.common.by import By
  from selenium.webdriver.support import expected_conditions as EC
  from selenium.webdriver.support.ui import WebDriverWait


  def click(driver: uc.Chrome, xpath: str, mode: Optional[str] = None) -> None:
      """Click the element marked by the XPATH.

      Args:
          driver: Object to interact with selenium.
          xpath: Identifier of the element to click.
          mode: Type of click. It needs to be one of [None, position, wait].

      The different ways to click are:

      * None: The normal click of the driver.
      * wait: Wait until the element is clickable and then click it.
      * position: Deduce the position of the element and then click it with a
        javascript script.
      """
      if mode is None:
          driver.find_element(By.XPATH, xpath).click()
      elif mode == 'wait':
          # https://stackoverflow.com/questions/59808158/element-isnt-clickable-in-headless-mode
          WebDriverWait(driver, 20).until(
              EC.element_to_be_clickable((By.XPATH, xpath))
          ).click()
      elif mode == 'position':
          # https://stackoverflow.com/questions/16807258/selenium-click-at-certain-position
          element = driver.find_element(By.XPATH, xpath)
          driver.execute_script("arguments[0].click();", element)
  ```
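  A usage sketch, assuming the setup above and a made-up XPath:

  ```python
  driver.get("https://example.com")
  click(driver, "//button[@id='submit']")                   # normal click
  click(driver, "//button[@id='submit']", mode="wait")      # wait until clickable
  click(driver, "//button[@id='submit']", mode="position")  # click via javascript
  ```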
sh⚑
- New: Passing environmental variables to commands.

  The `_env` special `kwarg` allows you to pass a dictionary of environment variables and their corresponding values:

  ```python
  import sh
  sh.google_chrome(_env={"SOCKS_SERVER": "localhost:1234"})
  ```

  `_env` replaces your process's environment completely. Only the key-value pairs in `_env` will be used for its environment. If you want to add new environment variables for a process in addition to your existing environment, try something like this:

  ```python
  import os
  import sh

  new_env = os.environ.copy()
  new_env["SOCKS_SERVER"] = "localhost:1234"
  sh.google_chrome(_env=new_env)
  ```

- New: Use commands that return a SyntaxError.

  `pass` is a reserved python word so `sh` fails when calling the password store command `pass`:

  ```python
  pass_command = sh.Command('pass')
  pass_command('show', 'new_file')
  ```
Typer⚑
- New: Print to stderr.

  You can print to "standard error" with a Rich `Console(stderr=True)`:

  ```python
  from rich.console import Console

  err_console = Console(stderr=True)
  err_console.print("error message")
  ```
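  A small sketch of how this fits in a Typer command (the app and option names are made up):

  ```python
  import typer
  from rich.console import Console

  app = typer.Typer()
  err_console = Console(stderr=True)


  @app.command()
  def process(path: str) -> None:
      if not path.endswith(".csv"):
          err_console.print("error: only csv files are supported")
          raise typer.Exit(code=1)
      print(f"processing {path}")


  if __name__ == "__main__":
      app()
  ```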
DevOps⚑
Infrastructure as Code⚑
Gitea⚑
- New: Disable the regular login, use only Oauth.

  You need to add a file inside your `custom` directory. The file is too big to add in this digest, please access the article to get it.

- New: Configure it with terraform.

  Gitea can be configured through terraform too. There is an official provider that doesn't work; there's a fork that does though. Sadly it doesn't yet support configuring Oauth authentication sources. Be careful: `gitea_oauth2_app` looks to be the right resource to do that, but instead it configures Gitea to be the Oauth provider, not a consumer.

  In the article you can find how to configure and use it to:

- New: Create an admin user through the command line.

  ```bash
  gitea --config /etc/gitea/app.ini admin user create --admin --username user_name --password password --email email
  ```

  Or you can change the admin's password:
  ```bash
  gitea --config /etc/gitea/app.ini admin user change-password -u username -p password
  ```

- New: Introduce Getting things done.

  First summary of David Allen's book Getting things done. It includes:
Chezmoi⚑
- New: Introduce chezmoi.

  Chezmoi stores the desired state of your dotfiles in the directory `~/.local/share/chezmoi`. When you run `chezmoi apply`, `chezmoi` calculates the desired contents for each of your dotfiles and then makes the minimum changes required to make your dotfiles match your desired state.

  What I like:

  - Supports `pass` to retrieve credentials.
  - Popular
  - Can remove directories on `apply`
  - It has a `diff`
  - It can include dotfiles from a URL
  - Encrypt files with gpg
  - There's a vim plugin
  - Actively maintained
  - Good documentation

  What I don't like:

  - Go templates, although it supports autotemplating and it's well explained
  - Written in Go

  In the article you can also find:
- Correction: Update the project url of helm-secrets.

  From https://github.com/futuresimple/helm-secrets to https://github.com/jkroepke/helm-secrets.
Helmfile⚑
- New: Troubleshoot Yaml templates in go templates.

  If you are using a `values.yaml.gotmpl` file you won't be able to use `{{ whatever }}`. The solution is to extract that part to a yaml file and include it in the go template. For example:

  `values.yaml.gotmpl`:

  ```yaml
  metrics:
    serviceMonitor:
      enabled: true
      annotations:
      additionalLabels:
        release: prometheus-operator
  {{ readFile "prometheus_rules.yaml" }}
  ```

  `prometheus_rules.yaml`:

  ```yaml
  prometheusRule:
    enabled: true
    additionalLabels:
      release: prometheus-operator
    spec:
      - alert: VeleroBackupPartialFailures
        annotations:
          message: Velero backup {{ $labels.schedule }} has {{ $value | humanizePercentage }} partially failed backups.
        expr: increase(velero_backup_partial_failure_total{schedule!=""}[1h]) > 0
        for: 15m
        labels:
          severity: warning
  ```
- New: Introduce dotdrop.

  The main idea of Dotdrop is to have the ability to store each dotfile only once and deploy them with a different content on different hosts/setups. To achieve this, it uses a templating engine that allows to specify, during the dotfile installation with dotdrop, based on a selected profile, how (with what content) each dotfile will be installed.

  What I like:

  - Popular
  - Actively maintained
  - Written in Python
  - Uses jinja2
  - Has a nice to read config file

  What I don't like:

  - Updating dotfiles doesn't look as smooth as with chezmoi
  - Uses `{{@@ @@}}` instead of `{{ }}` :S
  - Doesn't support `pass`.
  - No easy way to edit the files.
Terraform⚑
- New: How to store sensitive information in terraform.

  One of the most common questions we get about using Terraform to manage infrastructure as code is how to handle secrets such as passwords, API keys, and other sensitive data.

  In the article you'll find how to store your sensitive data in:

  - The Terraform state: using the state backend encryption
  - The Terraform source code: using `sops` and `gpg`.
Dotfiles⚑
- New: Introduce dotfiles.

  User-specific application configuration is traditionally stored in so-called dotfiles (files whose filename starts with a dot). It is common practice to track dotfiles with a version control system such as Git to keep track of changes and synchronize dotfiles across various hosts. There are various approaches to managing your dotfiles (e.g. directly tracking dotfiles in the home directory vs. storing them in a subdirectory and symlinking/copying/generating files with a shell script or a dedicated tool).

  Note: this is not meant to configure files that are outside your home directory; use Ansible for that use case.

  You can find different ways to track your dotfiles:
Infrastructure Solutions⚑
Velero⚑
- New: Introduce velero.

  Velero is an open source tool to safely backup and restore, perform disaster recovery, and migrate Kubernetes cluster resources and persistent volumes.

  In the article you'll also find how to:
Automating Processes⚑
cruft⚑
- Correction: Suggest to use copier instead.

  copier looks like a more maintained solution nowadays.
letsencrypt⚑
- New: Introduce letsencrypt.

  Let's Encrypt is a free, automated, and open certificate authority brought to you by the nonprofit Internet Security Research Group (ISRG). Basically it gives away SSL certificates, which are required to configure webservers to use HTTPS instead of HTTP, for example.

  In the article you can also find:
Storage⚑
OpenZFS storage planning⚑
- New: Introduce ZFS storage planning.
OpenZFS⚑
- New: How to create a pool and datasets.
- New: Configure NFS.

  With ZFS you can share a specific dataset via NFS. If for whatever reason the dataset does not mount, then the export will not be available to the application, and the NFS client will be blocked.

  You still must install the necessary daemon software to make the share available. For example, if you wish to share a dataset via NFS, then you need to install the NFS server software, and it must be running. Then, all you need to do is flip the sharing NFS switch on the dataset, and it will be immediately available.
- New: Backup.

  Please remember that RAID is not a backup; it guards against one kind of hardware failure. There are lots of failure modes that it doesn't guard against though:
- File corruption
- Human error (deleting files by mistake)
- Catastrophic damage (someone dumps water onto the server)
- Viruses and other malware
- Software bugs that wipe out data
- Hardware problems that wipe out data or cause hardware damage (controller malfunctions, firmware bugs, voltage spikes, ...)
That's why you still need to make backups.
ZFS has the builtin feature to make snapshots of the pool. A snapshot is a first class read-only filesystem. It is a mirrored copy of the state of the filesystem at the time you took the snapshot. They are persistent across reboots, and they don't require any additional backing store; they use the same storage pool as the rest of your data.
If you remember ZFS's awesome nature of copy-on-write filesystems, you will remember the discussion about Merkle trees. A ZFS snapshot is a copy of the Merkle tree in that state, except we make sure that the snapshot of that Merkle tree is never modified.
Creating snapshots is near instantaneous, and they are cheap. However, once the data begins to change, the snapshot will begin storing data. If you have multiple snapshots, then multiple deltas will be tracked across all the snapshots. However, depending on your needs, snapshots can still be exceptionally cheap.
The article also includes:
- New: Introduce Sanoid.

  Sanoid is the most popular tool right now. With it you can create, automatically thin, and monitor snapshots and pool health from a single eminently human-readable TOML config file.
The article includes:
- Installation
- Configuration
- Pros and cons
- Usage
- Troubleshooting
Authentication⚑
Authentik⚑
- New: Introduce Authentik.

  Authentik is an open-source Identity Provider focused on flexibility and versatility.

  What I like:

  - It is maintained and popular
  - It has a clean interface
  - They have their own terraform provider Oo!

  What I don't like:

  - It's heavily focused on GUI interaction, but you can export the configuration to YAML files to be applied without the GUI interaction.
  - The documentation is oriented to developers and not users. It's a little difficult to get a grasp on how to do things in the platform without following blog posts.

  In the article you can also find:
- Correction: Configure the invitation flow with terraform.
- New: Hide an application from a user.

  Application access can be configured using (Policy) Bindings. Click on an application in the applications list, and select the Policy / Group / User Bindings tab. There you can bind users/groups/policies to grant them access. When nothing is bound, everyone has access. You can use this to grant access to one or multiple users/groups, or dynamically give access using policies.

  With terraform you can use `authentik_policy_binding`, for example:

  ```hcl
  resource "authentik_policy_binding" "admin" {
    target = authentik_application.gitea.uuid
    group  = authentik_group.admins.id
    order  = 0
  }
  ```
- New: Configure password recovery.

  Password recovery is not set by default. In the article you can find the terraform resources needed for it to work.
Operating Systems⚑
Linux⚑
Linux Snippets⚑
- New: Use a `pass` password in a Makefile.

  ```makefile
  TOKEN ?= $(shell bash -c '/usr/bin/pass show path/to/token')

  diff:
  	@AUTHENTIK_TOKEN=$(TOKEN) terraform plan
  ```
- New: Install a new font.

  Install a font manually by downloading the appropriate `.ttf` or `.otf` files and placing them into `/usr/local/share/fonts` (system-wide), `~/.local/share/fonts` (user-specific) or `~/.fonts` (user-specific). These files should have the permission 644 (`-rw-r--r--`), otherwise they may not be usable.
- New: Get VPN password from `pass`.

  To be able to retrieve the user and password from pass you need to run the openvpn command with the next flags:

  ```bash
  sudo bash -c "openvpn --config config.ovpn --auth-user-pass <(echo -e 'user_name\n$(pass show vpn)')"
  ```

  Assuming that `vpn` is an entry of your `pass` password store.
- New: Measure the performance, IOPS of a disk.

  To measure disk IOPS performance in Linux, you can use the `fio` tool. Install it with `apt-get install fio`.

  Then you need to go to the directory where your disk is mounted. The test is done by performing read/write operations in this directory.

  To do a random read/write operation test, an 8 GB file will be created. Then `fio` will read/write a 4KB block (a standard block size) with a 75/25% split between read and write operations and measure the performance.

  ```bash
  fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=fiotest --filename=testfio --bs=4k --iodepth=64 --size=8G --readwrite=randrw --rwmixread=75
  ```
- New: What is `/var/log/tallylog`.

  `/var/log/tallylog` is the file where the `PAM` linux module (used for authentication of the machine) keeps track of the failed ssh logins in order to temporarily block users.
- New: Manage users.

  - Change main group of the user:

    ```bash
    usermod -g {{ group_name }} {{ user_name }}
    ```

  - Add user to group:

    ```bash
    usermod -a -G {{ group_name }} {{ user_name }}
    ```

  - Remove user from group:

    ```bash
    usermod -G {{ remaining_group_names }} {{ user_name }}
    ```

    You have to execute `groups {{ user }}` to get the list and pass the remaining groups to the above command.

  - Change uid and gid of the user:

    ```bash
    usermod -u {{ newuid }} {{ login }}
    groupmod -g {{ newgid }} {{ group }}
    find / -user {{ olduid }} -exec chown -h {{ newuid }} {} \;
    find / -group {{ oldgid }} -exec chgrp -h {{ newgid }} {} \;
    usermod -g {{ newgid }} {{ login }}
    ```
- New: Manage ssh keys.

  - Generate ed25519 key:

    ```bash
    ssh-keygen -t ed25519 -f {{ path_to_keyfile }}
    ```

  - Generate RSA key:

    ```bash
    ssh-keygen -t rsa -b 4096 -o -a 100 -f {{ path_to_keyfile }}
    ```

  - Generate key with a different comment:

    ```bash
    ssh-keygen -t ed25519 -f {{ path_to_keyfile }} -C {{ email }}
    ```

  - Generate key headless, in batch mode:

    ```bash
    ssh-keygen -t ed25519 -f {{ path_to_keyfile }} -q -N ""
    ```

  - Generate public key from private key:

    ```bash
    ssh-keygen -y -f {{ path_to_keyfile }} > {{ path_to_public_key_file }}
    ```

  - Get fingerprint of key:

    ```bash
    ssh-keygen -lf {{ path_to_key }}
    ```
- New: Measure the network performance between two machines.

  Install `iperf3` with `apt-get install iperf3` on both server and client.

  On the server system run:

  ```bash
  server#: iperf3 -i 10 -s
  ```

  Where:

  - `-i`: the interval to provide periodic bandwidth updates
  - `-s`: listen as a server

  On the client system:

  ```bash
  client#: iperf3 -i 10 -w 1M -t 60 -c [server hostname or ip address]
  ```

  Where:

  - `-i`: the interval to provide periodic bandwidth updates
  - `-w`: the socket buffer size (which affects the TCP Window). The buffer size is also set on the server by this client command.
  - `-t`: the time to run the test in seconds
  - `-c`: connect to a listening server at…

  Sometimes it is interesting to test both ways as they may return different outcomes.
Anki⚑
- New: How to install the latest version.

  Install the dependencies:

  ```bash
  sudo apt-get install zstd
  ```

  Download the latest release package.

  Open a terminal and run the following commands, replacing the filename as appropriate:

  ```bash
  tar xaf Downloads/anki-2.1.XX-linux-qt6.tar.zst
  cd anki-2.1.XX-linux-qt6
  sudo ./install.sh
  ```
Tridactyl⚑
- New: Introduce tridactyl.

  Tridactyl is a Vim-like interface for Firefox, inspired by Vimperator/Pentadactyl.

  In the article you'll also find:
google chrome⚑
- Correction: Update the installation steps.

  - Import the GPG key with the following command:

    ```bash
    sudo wget -O- https://dl.google.com/linux/linux_signing_key.pub | gpg --dearmor > /usr/share/keyrings/google-chrome.gpg
    ```

  - Once the GPG import is complete, import the Google Chrome repository:

    ```bash
    echo 'deb [arch=amd64 signed-by=/usr/share/keyrings/google-chrome.gpg] http://dl.google.com/linux/chrome/deb/ stable main' | sudo tee /etc/apt/sources.list.d/google-chrome.list
    ```

  - Install the program:

    ```bash
    apt-get update
    apt-get install google-chrome-stable
    ```
Kitty⚑
- New: How to add fonts to kitty.

  - Add your fonts to the `~/.local/share/fonts` directory
  - Check they are available when you run `kitty +list-fonts`
  - Add them to your config:

    ```
    font_family      Operator Mono Book
    bold_font        Operator Mono Medium
    italic_font      Operator Mono Book Italic
    bold_italic_font Operator Mono Medium Italic
    ```

- New: Troubleshoot the "Screen not working on server with sudo" issue.

  Make sure you're using the ssh alias below:

  ```bash
  alias ssh="kitty +kitten ssh"
  ```

  And then copy the `~/.terminfo` into `/root`:

  ```bash
  sudo cp -r ~/.terminfo /root
  ```
sed⚑
- New: Introduce sed snippets.
Vim⚑
- Nvim moved away from vimscript and now needs to be configured in lua. You can access the config file in `~/.config/nvim/init.lua`. It's not created by default so you need to do it yourself.

  In the article it explains how to do the basic configuration with lua:

  - Set variables
  - Set key bindings
  - Set spelling
  - Set test runners: with `neotest`
  - Set the buffer and file management: with `Telescope`
  - Use plugin managers: it evaluates the different solutions and then explains how to install and use `packer`
  - What is and how to use `Treesitter`
  - Set git integration: evaluate the different solutions and configure `neogit`
  - How to run lua snippets

  And some troubleshooting:
- Correction: Update the leader key section.

  There are different opinions on what key to use as the `<leader>` key. The `<space>` is the most comfortable as it's always close to your thumbs, and it works well with both hands. Nevertheless, you can only use it in normal mode, because in insert mode `<space><whatever>` will be triggered as you write. An alternative is to use `;`, which is also comfortable (if you use the english key distribution) and you can use it in insert mode.

  If you want to define more than one leader key you can either:

  - Change the `mapleader` many times in your file: as the value of `mapleader` is used at the moment the mapping is defined, you can indeed change that while plugins are loading. For that, you have to explicitly `:runtime` the plugins in your `~/.vimrc` (and count on the canonical include guard to prevent redefinition later):

    ```vim
    let mapleader = ','
    runtime! plugin/NERD_commenter.vim
    runtime! ...
    let mapleader = '\'
    runtime! plugin/mark.vim
    ...
    ```

  - Use the keys directly instead of using `<leader>`:

    ```vim
    " editing mappings
    nnoremap ,a <something>
    nnoremap ,k <something else>
    nnoremap ,d <and something else>

    " window management mappings
    nnoremap gw <something>
    nnoremap gb <something else>
    ```

  Defining `mapleader` and/or using `<leader>` may be useful if you change your mind often on what key to use as leader, but it won't be of any use if your mappings are stable.
Android⚑
Seedvault⚑
- New: Introduce seedvault.
Seedvault is an open-source encrypted backup app for inclusion in Android-based operating systems.
While every smartphone user wants to be prepared with comprehensive data backups in case their phone is lost or stolen, not every Android user wants to entrust their sensitive data to Google's cloud-based storage. By storing data outside Google's reach, and by using client-side encryption to protect all backed-up data, Seedvault offers users maximum data privacy with minimal hassle.
Seedvault allows Android users to store their phone data without relying on Google's proprietary cloud storage. Users can decide where their phone's backup will be stored, with options ranging from a USB flash drive to a remote self-hosted cloud storage alternative such as NextCloud. Seedvault also offers an Auto-Restore feature: instead of permanently losing all data for an app when it is uninstalled, Seedvault's Auto-Restore will restore all backed-up data for the app upon reinstallation.
Seedvault protects users' private data by encrypting it on the device with a key known only to the user. Each Seedvault account is protected by client-side encryption (AES/GCM/NoPadding). This encryption is unlockable only with a 12-word randomly-generated key.
With Seedvault, backups run automatically in the background of the phone's operating system, ensuring that no data will be left behind if the device is lost or stolen. The Seedvault application requires no technical knowledge to operate, and does not require a rooted device.
In the article you'll also find:
- How to install it
- How to store the backup remotely
- How to restore a backup
Signal⚑
- New: Add installation steps.

  These instructions only work for 64 bit Debian-based Linux distributions such as Ubuntu, Mint etc.

  - Install our official public software signing key:

    ```bash
    wget -O- https://updates.signal.org/desktop/apt/keys.asc | gpg --dearmor > signal-desktop-keyring.gpg
    cat signal-desktop-keyring.gpg | sudo tee -a /usr/share/keyrings/signal-desktop-keyring.gpg > /dev/null
    ```

  - Add our repository to your list of repositories:

    ```bash
    echo 'deb [arch=amd64 signed-by=/usr/share/keyrings/signal-desktop-keyring.gpg] https://updates.signal.org/desktop/apt xenial main' |\
      sudo tee -a /etc/apt/sources.list.d/signal-xenial.list
    ```

  - Update your package database and install signal:

    ```bash
    sudo apt update && sudo apt install signal-desktop
    ```