29th April 2025
Activism⚑
- New: Anticolonialism in technology.
Art

- Decolonizing technology

Articles

- Shanzhai: An Opportunity to Decolonize Technology? by Sherry Liao
- Técnicas autoritarias y técnicas democráticas by Lewis Mumford (pdf)
Books
- Race after technology by Ruha Benjamin
- The Palestine Laboratory: How Israel Exports the Technology of Occupation Around the World by Antony Loewenstein
- Frantz Fanon books
- Hacking del sé by Ippolita
- Tecnologie conviviali by Carlo Milani
- Descolonizar y despatriarcalizar las tecnologías by Paola Ricaurte Quijano
Research
Talks
Feminism⚑
- New: New References.
Palestine⚑
- New: Add Al Jazeera documentaries.
In light of the current events in Palestine, a large number of filmmakers have made their films about Palestine freely available online.
They are in Arabic and have no subtitles, so downloading them is of little use, but you can watch them directly on YouTube with the auto-generated subtitles.
- A collection of documentaries published by Al Jazeera Documentary: 1, 2, 3
- The documentary "Guardián de la memoria"
- The documentary "Un asiento vacío"
- The documentary "El piloto de la resistencia"
- The documentary "Jenin"
- The documentary "El olivo"
- The documentary "Escenas de la ocupación en Gaza 1973"
- The documentary "Gaza lucha por la libertad"
- The documentary "Los hijos de Arna"
- The short film "Strawberry"
- The short film "The Place"
- The documentary "El alcalde"
- The documentary "La creación y la Nakba 1948"
- The documentary "Ocupación 101"
- The documentary "La sombra de la ausencia"
- The documentary "Los que no existen"
- The documentary "Como dijo el poeta"
- The documentary "Cinco cámaras rotas"
- The feature film "Paradise Now"
- The short film "Abnadam"
- The feature film "Bodas de Galilea"
- The feature film "Kofia"
- The feature-length documentary "Slingshot Hip Hop"
- The feature-length documentary "Tel Al-Zaatar"
- The feature-length documentary "Tal al-Zaatar - Detrás de la batalla"
- The documentary "In the Grip of the Resistance"
- The documentary "Swings"
- The documentary "Naji al-Ali es un artista visionario"
- The documentary "La puerta superior"
- The feature-length documentary "En busca de Palestina"
- The feature film "La sal de este mar"
- The feature-length documentary "Hakki Ya Bird"
- The series "Palestina Al-Taghriba"
- The series "Yo soy Jerusalén"
Conflict⚑
- New: Add notes on conflict from an anti-punitivist point of view.
Loose thoughts on looking at conflict from an anti-punitivist point of view

- Stop seeing conflicts as a battle; they are an opportunity for transformation
- Conflicts should be resolved collectively whenever possible
- If you ban a guy over sexist behaviour you are only displacing the problem. He will keep drifting through different collectives until he takes root in a weaker one and torpedoes it
- It's hard to see the line between the therapeutic and the transformative
- What is the collective responsibility in a person's transformation?
- We lack tools for:
    - conflict management in general
    - managing physical conflicts in particular
    - accompanying both sides of a conflict
    - accompanying an aggressor
- Is everything that makes me feel uneasy violence?
- Every situation is so particular that protocols are of no use. It's much better to expose ourselves, collectively and often, to conflict situations and to build from that practice the tools that may serve us, so that when the moment of truth arrives they come out intuitively.
References
Films
Books
Series
Podcasts
- El marido (Ciberlocutorio)
- El cancelado (Ciberlocutorio)
- Antipunitivismo con Laura Macaya (Sabor a Queer)
- Procesos restaurativos, feministas y sistémicos (Fil a l'agulla, in the Nociones Comunes course "Me cuidan mis amigas")
Articles
- Con penas y sin glorias: reflexiones desde un feminismo antipunitivo y comunitario
- Expulsar a los agresores no reduce necesariamente la violencia
- Antipunitivismo remasterizado
- Reflexiones sobre antipunitivismo en tiempos de violencias
- Indispuestas. Cuando nadie quiere poner la vida en ello
- La deriva neoliberal de los cuidados
- Justicia transformativa: del dicho al hecho
- Las malas víctimas responden
Other tools
Conference organisation⚑
pretalx⚑
- New: Install.
NOTE: it's probably too much for a small event.
The default docker compose doesn't work, as it still uses MySQL, which was dropped. If you want to use SQLite, just remove the database configuration.
```yaml
---
services:
  pretalx:
    image: pretalx/standalone:v2024.3.0
    container_name: pretalx
    restart: unless-stopped
    depends_on:
      - redis
    environment:
      # Hint: Make sure you serve all requests for the `/static/` and `/media/` paths when debug is False.
      # See [installation](https://docs.pretalx.org/administrator/installation/#step-7-ssl) for more information
      PRETALX_FILESYSTEM_MEDIA: /public/media
      PRETALX_FILESYSTEM_STATIC: /public/static
    ports:
      - "127.0.0.1:80:80"
    volumes:
      - ./conf/pretalx.cfg:/etc/pretalx/pretalx.cfg:ro
      - pretalx-data:/data
      - pretalx-public:/public

  redis:
    image: redis:latest
    container_name: pretalx-redis
    restart: unless-stopped
    volumes:
      - pretalx-redis:/data

volumes:
  pretalx-data:
  pretalx-public:
  pretalx-redis:
```
I was not able to find the default admin user, so I had to create it manually. Get into the docker container:

```bash
docker exec -it pretalx bash
```
When you run the commands, by default they use another database file (`/pretalx/src/data/db.sqlite3`), so I removed it and created a symbolic link to the actual location of the database (`/data/db.sqlite3`):

```bash
pretalxuser@82f886a58c57:/$ rm /pretalx/src/data/db.sqlite3
pretalxuser@82f886a58c57:/$ ln -s /data/db.sqlite3 /pretalx/src/data/db.sqlite3
```
Then you can create the admin user:

```bash
python -m pretalx createsuperuser
```
Life navigation⚑
Content Management⚑
moonlight⚑
- New: Add note on apollo.
Also check out apollo, a sunshine fork.
Book Management⚑
- New: Convert an image-based pdf to epub.
NOTE: before proceeding, check the newer AI-based tools, as they will probably give a better output.
If the pdf is based on images, then you need to use OCR to extract the text.
First, convert the PDF to images:

```bash
pdftoppm -png input.pdf page
```
Apply OCR to your PDF. Use `tesseract` to extract text from each image:

```bash
for img in page-*.png; do
  tesseract "$img" "${img%.png}" -l eng
done
```
This produces `page-1.txt`, `page-2.txt`, etc.
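To finish the conversion, one option is to stitch the OCR output into an epub with pandoc (a minimal sketch; it assumes pandoc is installed, and `book.epub` and the title are placeholders):

```bash
# Concatenate the text files in glob order and build an epub from them
pandoc page-*.txt -o book.epub --metadata title="My scanned book"
```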
- New: Protect ombi behind authentik.
This option allows the user to select an HTTP header value that contains the desired login username.
Note that if the header value is present and matches an existing user, default authentication is bypassed. Use with caution.
This is most commonly used when Ombi is behind a reverse proxy which handles authentication. For example, if using Authentik, the X-authentik-username HTTP header, which contains the logged-in user's username, is set by Authentik's proxy outpost.
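To see why the caution warning matters, you can simulate the header yourself: if Ombi is reachable without going through the authenticating proxy, any client can impersonate a user this way (a sketch; the URL and username are placeholders):

```bash
# Sending the trusted header directly bypasses the login form for that user
curl -H "X-authentik-username: some-existing-user" https://ombi.example.com/
```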
Coding⚑
Languages⚑
pipx⚑
- New: Upgrading python version of all your pipx packages.
If you upgrade the main python version and remove the old one (a dist upgrade) then you won't be able to use the installed packages.
If you're lucky enough to have the old one you can use:
```bash
pipx reinstall-all --python <the Python executable file>
```
Otherwise you need to export all the packages with:

```bash
pipx list --json > ~/pipx.json
```
Then reinstall one by one:

```bash
set -ux
if [[ -e ~/pipx.json ]]; then
  for p in $(cat ~/pipx.json | jq -r '.venvs[].metadata.main_package.package_or_url'); do
    pipx install $p
  done
fi
```
The problem is that this method does not respect the version constraints nor the injected packages, so you may need to debug each package a bit.
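For the injected packages, you can re-add them manually once the main package is installed; a hypothetical example (the package names are not taken from the export above):

```bash
# Re-inject an extra package into the venv of an installed application
pipx inject mkdocs mkdocs-material
```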
Configure Docker to host the application⚑
- New: Limit access to a docker container on one server to a specific docker container on another server.
WARNING: I had issues with this path and I ended up not using docker swarm networks.
If you want to restrict access to a docker container (running on server 1) so that only another specific container (running on server 2) can access it, you need more than just IP-based filtering between hosts. The solution is to:
- Create a Docker network that spans both hosts using Docker Swarm or a custom overlay network.
- Use Docker's built-in DNS resolution to allow specific container-to-container communication.
Here's a step-by-step approach:
1. Set up Docker Swarm (if not already done)
On server 1:
```bash
docker swarm init --advertise-addr <ip of server 1>
```
This will output a command to join the swarm. Run that command on server 2.
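For reference, the join command printed by `docker swarm init` looks roughly like this (the token and address are placeholders):

```bash
# Run on server 2 to join it to the swarm as a worker
docker swarm join --token <token printed by swarm init> <ip of server 1>:2377
```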
2. Create an overlay network
```bash
docker network create --driver overlay --attachable <name of the network>
```
3. Update the docker compose on server 1
Imagine for example that we want to deploy wg-easy.
```yaml
services:
  wg-easy:
    image: ghcr.io/wg-easy/wg-easy:latest
    container_name: wg-easy
    networks:
      - wg
      - <name of the network> # Add the overlay network
    volumes:
      - wireguard:/etc/wireguard
      - /lib/modules:/lib/modules:ro
    ports:
      - "51820:51820/udp"
      # - "127.0.0.1:51821:51821/tcp" # Don't expose the http interface, it will be accessed from within the docker network
    restart: unless-stopped
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    sysctls:
      - net.ipv4.ip_forward=1
      - net.ipv4.conf.all.src_valid_mark=1
      - net.ipv6.conf.all.disable_ipv6=1

networks:
  wg:
    # Your existing network config
  <name of the network>:
    external: true # Reference the overlay network created above
```
4. On server 2, create a Docker Compose file for your client container
```yaml
services:
  wg-client:
    image: your-client-image
    container_name: wg-client
    networks:
      - <name of the network>
    # Other configuration for your client container

networks:
  <name of the network>:
    external: true # Reference the same overlay network
```
5. Access the WireGuard interface from the client container
Now, from within the client container on server 2, you can access the WireGuard interface using the container name:
```
http://wg-easy:51821
```
This approach ensures that:
- The WireGuard web interface is not exposed to the public (not even localhost on server 1)
- Only containers on the shared overlay network can access it
- The specific container on server 2 can access it using Docker's internal DNS
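A quick way to check the wiring is to issue a request from inside the client container (a sketch; it assumes curl is available in that image):

```bash
# Should return the wg-easy web interface headers if the overlay network works
docker exec -it wg-client curl -I http://wg-easy:51821
```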
Testing that the network is well set up
You may be confused if the new network is not shown on server 2 when running

```bash
docker network ls
```

but that's normal: server 2 is a swarm worker node, and worker nodes cannot list or manage networks directly, so not seeing the overlay network there is expected behaviour. However, even though you can't see it, containers on server 2 can still connect to the overlay network when properly configured.

To see that the swarm is well set up you can run

```bash
docker node ls
```

on server 1 (you'll see an error on server 2 as it's a worker node).

Weird network issues with swarm overlays
I've seen cases where after a server reboot you need to remove the overlay network from the docker compose and then add it again.
After many hours of debugging I came up with the patch of removing the overlay network from the docker-compose and attaching it with this systemd service:
```ini
[Unit]
Description=wg-easy
Requires=docker.service
After=docker.service

[Service]
Restart=always
User=root
Group=docker
WorkingDirectory=/data/apps/wg-easy
TimeoutStartSec=100
RestartSec=2s
ExecStart=/usr/bin/docker compose -f docker-compose.yaml up
ExecStartPost=/bin/bash -c '\
  sleep 30; \
  /usr/bin/docker network connect wg-easy wg-easy; \
'
ExecStop=/usr/bin/docker compose -f docker-compose.yaml down

[Install]
WantedBy=multi-user.target
```
Plugin System⚑
- Correction: Write python plugins with entrypoints.
Python Snippets⚑
- New: Download book previews from google books.
You will only get some of the pages, but it can help build the final pdf.
This first script gets the image data:
```python
import asyncio
import os
import json
import re
from urllib.parse import urlparse, parse_qs

from playwright.async_api import async_playwright
import aiohttp
import aiofiles


async def download_image(session, src, output_path):
    """Download image from URL and save to specified path"""
    try:
        headers = {
            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; rv:128.0) Gecko/20100101 Firefox/128.0",
            "Accept": "*/*",
            "Accept-Language": "en-US,en;q=0.5",
            "Referer": "https://books.google.es/",
            "DNT": "1",
            "Sec-GPC": "1",
            "Connection": "keep-alive",
        }
        async with session.get(src, headers=headers) as response:
            response.raise_for_status()
            async with aiofiles.open(output_path, "wb") as f:
                await f.write(await response.read())
        print(f"Downloaded: {output_path}")
        return True
    except Exception as e:
        print(f"Error downloading {src}: {e}")
        return False


def extract_page_number(pid):
    """Extract numeric page number from page ID"""
    match = re.search(r"PA(\d+)", pid)
    if match:
        return int(match.group(1))
    try:
        return int(pid.replace("PA", "").replace("PP", ""))
    except:
        return 9999


async def main():
    # Create output directory
    output_dir = "book_images"
    os.makedirs(output_dir, exist_ok=True)

    # Keep track of all pages found
    seen_pids = set()
    page_counter = 0
    download_tasks = []

    # Create HTTP session for downloads
    async with aiohttp.ClientSession() as session:
        async with async_playwright() as p:
            browser = await p.firefox.launch(headless=False)
            context = await browser.new_context(
                user_agent="Mozilla/5.0 (Windows NT 10.0; rv:128.0) Gecko/20100101 Firefox/128.0"
            )

            # Create a page and set up response handling
            page = await context.new_page()

            # Store seen URLs to avoid duplicates
            seen_urls = set()

            # Set up response handling for JSON data
            async def handle_response(response):
                nonlocal page_counter
                url = response.url

                # Only process URLs with jscmd=click3
                if "jscmd=click3" in url and url not in seen_urls:
                    try:
                        # Try to parse as JSON
                        json_data = await response.json()
                        seen_urls.add(url)

                        # Process and download page data immediately
                        if "page" in json_data and isinstance(json_data["page"], list):
                            for page_data in json_data["page"]:
                                if "src" in page_data and "pid" in page_data:
                                    pid = page_data["pid"]
                                    if pid not in seen_pids:
                                        seen_pids.add(pid)
                                        src = page_data["src"]

                                        # Create filename with sequential numbering
                                        formatted_index = (
                                            f"{int(pid.replace('PA', '')):03d}"
                                        )
                                        output_file = os.path.join(
                                            output_dir, f"page-{formatted_index}.png"
                                        )
                                        page_counter += 1
                                        print(
                                            f"Found new page: {pid}, scheduling download"
                                        )

                                        # Start download immediately
                                        task = asyncio.create_task(
                                            download_image(session, src, output_file)
                                        )
                                        download_tasks.append(task)

                        return len(seen_pids)
                    except Exception as e:
                        print(f"Error processing response from {url}: {e}")

            # Register response handler
            page.on("response", handle_response)

            # Navigate to the starting URL
            book_url = (
                "https://books.google.es/books?id=412loEMJA9sC&lpg=PP1&hl=es&pg=PA5"
            )
            await page.goto(book_url)

            # Wait for initial page load
            await page.wait_for_load_state("networkidle")

            # Scroll loop variables
            max_scroll_attempts = 500  # Safety limit
            scroll_count = 0
            pages_before_scroll = 0
            consecutive_no_new_pages = 0

            # Continue scrolling until we find no new pages for several consecutive attempts
            while scroll_count < max_scroll_attempts and consecutive_no_new_pages < 5:
                # Get current page count before scrolling
                pages_before_scroll = len(seen_pids)

                # Use PageDown key to scroll
                await page.keyboard.press("PageDown")
                scroll_count += 1

                # Wait for network activity
                await asyncio.sleep(2)

                # Check if we found new pages after scrolling
                if len(seen_pids) > pages_before_scroll:
                    consecutive_no_new_pages = 0
                    print(
                        f"Scroll {scroll_count}: Found {len(seen_pids) - pages_before_scroll} new pages"
                    )
                else:
                    consecutive_no_new_pages += 1
                    print(
                        f"Scroll {scroll_count}: No new pages found ({consecutive_no_new_pages}/5)"
                    )

            print(f"Scrolling complete. Found {len(seen_pids)} pages total.")
            await browser.close()

        # Wait for any remaining downloads to complete
        if download_tasks:
            print(f"Waiting for {len(download_tasks)} downloads to complete...")
            await asyncio.gather(*download_tasks)

    print(f"Download complete! Downloaded {page_counter} images.")


if __name__ == "__main__":
    asyncio.run(main())
```
- New: Send keystrokes to an active window.
```python
import subprocess

subprocess.run(['xdotool', 'type', 'Hello world!'])
subprocess.run(['xdotool', 'key', 'Return'])  # press enter
subprocess.run(['xdotool', 'key', 'ctrl+c'])

window_id = subprocess.check_output(['xdotool', 'getactivewindow']).decode().strip()
subprocess.run(['xdotool', 'windowactivate', window_id])
```
- New: Make a temporary file.
```python
import os
import subprocess
import tempfile

# Assumption: the editor was not defined in the original snippet, take it from the environment
editor = os.environ.get("EDITOR", "vim")

with tempfile.NamedTemporaryFile(
    suffix=".tmp", mode="w+", encoding="utf-8"
) as temp:
    temp.write(
        "# Enter commit message body. Lines starting with '#' will be ignored.\n"
    )
    temp.write("# Leave file empty to skip the body.\n")
    temp.flush()
    subprocess.call([editor, temp.name])
    temp.seek(0)
    lines = temp.readlines()
```
GitPython⚑
- New: Checking out an existing branch.
```python
from git import Repo

repo = Repo(".")  # Assumption: the repo object was not defined in the original snippet

heads = repo.heads
develop = heads.develop
repo.head.reference = develop
```
Elasticsearch⚑
- New: Expunge deleted documents from all indices in an elasticsearch cluster.
```bash
ES_HOST="${1:-http://localhost:9200}"
DEFAULT_SETTING="5" # Target default value (5%)

INDICES=$(curl -s -XGET "$ES_HOST/_cat/indices?h=index")

for INDEX in $INDICES; do
  echo "Processing index: $INDEX"

  # Close the index to modify static settings
  curl -s -XPOST "$ES_HOST/$INDEX/_close" > /dev/null

  # Update expunge_deletes_allowed to 0%
  curl -s -XPUT "$ES_HOST/$INDEX/_settings" -H 'Content-Type: application/json' -d'
  {
    "index.merge.policy.expunge_deletes_allowed": "0"
  }' > /dev/null

  # Reopen the index
  curl -s -XPOST "$ES_HOST/$INDEX/_open" > /dev/null

  # Trigger forcemerge (async)
  # curl -s -XPOST "$ES_HOST/$INDEX/_forcemerge?only_expunge_deletes=true&wait_for_completion=false" > /dev/null
  echo "Forcemerge triggered for $INDEX"
  curl -s -XPOST "$ES_HOST/$INDEX/_forcemerge?only_expunge_deletes=true" > /dev/null &

  echo "Waiting until all forcemerge tasks are done"
  while curl -s $ES_HOST/_cat/tasks\?v | grep forcemerge > /dev/null; do
    curl -s $ES_HOST/_cat/indices | grep $INDEX
    sleep 10
  done

  # Close the index again
  curl -s -XPOST "$ES_HOST/$INDEX/_close" > /dev/null

  # Update to the new default (5%)
  curl -s -XPUT "$ES_HOST/$INDEX/_settings" -H 'Content-Type: application/json' -d'
  {
    "index.merge.policy.expunge_deletes_allowed": "'"$DEFAULT_SETTING"'"
  }' > /dev/null

  # Reopen the index
  curl -s -XPOST "$ES_HOST/$INDEX/_open" > /dev/null
done

echo "Done! All indices updated."
```
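To check how many deleted documents each index still carries before and after the purge, you can use the `_cat/indices` columns (same `ES_HOST` as above):

```bash
# Show the live and deleted document counts per index
curl -s "$ES_HOST/_cat/indices?v&h=index,docs.count,docs.deleted"
```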
Coding tools⚑
Gitea⚑
- New: Configure triggers to ignore pushes to a branch.
There is now a branches-ignore option:
```yaml
on:
  push:
    branches-ignore:
      - main
```
- New: Not there yet.
- Being able to run two jobs on the same branch: it will be implemented with concurrency in this pr. This behaviour didn't happen before 2023-07-25.
DevSecOps⚑
Storage⚑
OpenZFS⚑
- New: List all datasets that have zfs native encryption.
```bash
ROOT_FS="main"

is_encryption_enabled() {
  zfs get -H -o value encryption $1 | grep -q 'aes'
}

list_datasets_with_encryption() {
  # Initialize an array to hold dataset names
  datasets=()

  # List and iterate over all datasets starting from the root filesystem
  for dataset in $(zfs list -H -o name | grep -E '^'$ROOT_FS'/'); do
    if is_encryption_enabled "$dataset"; then
      datasets+=("$dataset")
    fi
  done

  # Output the results
  echo "ZFS datasets with encryption enabled:"
  printf '%s\n' "${datasets[@]}"
}

list_datasets_with_encryption
```
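If you just want a quick look, a shorter sketch relying only on `zfs get` (same `ROOT_FS` assumption as above) could be:

```bash
# Print every filesystem under ROOT_FS whose encryption property is an aes* value
zfs get -r -H -t filesystem -o name,value encryption "$ROOT_FS" | awk '$2 ~ /^aes/ {print $1}'
```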
- New: Troubleshoot cannot destroy dataset: dataset is busy.
If you're experiencing this error and can reproduce traces like these:

```
cannot destroy 'zroot/2013-10-15T065955229209': dataset is busy
cannot unmount 'zroot/2013-10-15T065955229209': not currently mounted
zroot/2013-10-15T065955229209  2.86G  25.0G  11.0G  /var/lib/heaver/instances/2013-10-15T065955229209
umount: /var/lib/heaver/instances/2013-10-15T065955229209: not mounted
```
You can run

```bash
grep zroot/2013-10-15T065955229209 /proc/*/mounts
```

to see which process is still using the dataset.

Another possible culprit are snapshot holds. You can run

```bash
zfs holds $snapshotname
```

to see if the snapshot has any holds and, if so, `zfs release` to remove them.
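A sketch to check the holds of every snapshot of the busy dataset (the dataset name comes from the traces above):

```bash
# List each snapshot of the dataset and print its holds, if any
for snap in $(zfs list -H -t snapshot -o name -r zroot/2013-10-15T065955229209); do
  echo "== $snap"
  zfs holds "$snap"
done
```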
- New: Upgrading ZFS Storage Pools.
If you have ZFS storage pools from a previous zfs release you can upgrade your pools with the `zpool upgrade` command to take advantage of the pool features in the current release. In addition, the `zpool status` command has been modified to notify you when your pools are running older versions. For example:

```bash
zpool status
  pool: tank
 state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
        pool will no longer be accessible on older software versions.
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c1t0d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0

errors: No known data errors
```
You can use the following syntax to identify additional information about a particular version and supported releases:

```bash
zpool upgrade -v
This system is currently running ZFS pool version 22.

The following versions are supported:

VER  DESCRIPTION
---  --------------------------------------------------------
 1   Initial ZFS version
 2   Ditto blocks (replicated metadata)
 3   Hot spares and double parity RAID-Z
 4   zpool history
 5   Compression using the gzip algorithm
 6   bootfs pool property
 7   Separate intent log devices
 8   Delegated administration
 9   refquota and refreservation properties
 10  Cache devices
 11  Improved scrub performance
 12  Snapshot properties
 13  snapused property
 14  passthrough-x aclinherit
 15  user/group space accounting
 16  stmf property support
 17  Triple-parity RAID-Z
 18  Snapshot user holds
 19  Log device removal
 20  Compression using zle (zero-length encoding)
 21  Reserved
 22  Received properties

For more information on a particular version, including supported releases,
see the ZFS Administration Guide.
```
Then, you can run the `zpool upgrade` command to upgrade all of your pools. For example:

```bash
zpool upgrade -a
```
Operating Systems⚑
Linux⚑
dunst⚑
- Correction: Tweak installation steps.
```bash
sudo apt install libdbus-1-dev libx11-dev libxinerama-dev libxrandr-dev libxss-dev libglib2.0-dev \
  libpango1.0-dev libgtk-3-dev libxdg-basedir-dev libgdk-pixbuf-2.0-dev
make WAYLAND=0
sudo make WAYLAND=0 install
```

If it didn't create the systemd service you can create it yourself with this service file:

```ini
[Unit]
Description=Dunst notification daemon
Documentation=man:dunst(1)
PartOf=graphical-session.target

[Service]
Type=dbus
BusName=org.freedesktop.Notifications
ExecStart=/usr/local/bin/dunst
Slice=session.slice
Environment=PATH=%h/.local/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games

[Install]
WantedBy=default.target
```
You may need to add more paths to PATH.
To see the logs of the service use:

```bash
journalctl --user -u dunst.service -f --since "15 minutes ago"
```
- New: Configuration.
Read and tweak the `~/.dunst/dunstrc` file to your liking. You have the default one here.

You'll also need to configure the actions in your window manager. In my case i3wm:
```
bindsym $mod+b exec dunstctl close-all
bindsym $mod+v exec dunstctl context
```
- New: Configure each application notification.
You can look at rosoau's config for inspiration.
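Per-application behaviour is set with rules in `dunstrc`; a minimal sketch (the section name, application name and values are examples, not taken from that config):

```
# Match notifications coming from Signal, lower their urgency and close them after 5 seconds
[signal]
    appname = Signal
    urgency = low
    timeout = 5
```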
References

- Some dunst configs
- Smarttech101 tutorials (1, 2)
- Archwiki page on dunst
Linux Snippets⚑
- New: Use lftp.
Connect with:

```bash
lftp -p <port> user@host
```

Navigate with `ls` and `cd`. Use `mget` to get multiple files at once.
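A non-interactive session sketch using `-e` (the port, path and glob are placeholders):

```bash
# Connect, list the directory, fetch every pdf and exit
lftp -p 2222 -e "ls; cd /remote/dir; mget *.pdf; bye" user@host
```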
- New: Difference between apt-get upgrade and apt-get full-upgrade.
The difference between `upgrade` and `full-upgrade` is that the latter will remove installed packages if that is needed to upgrade the whole system. Be extra careful when using this command.

I will more frequently use `autoremove` to remove old packages and then just use `upgrade`.
- New: Upgrade debian.

```bash
sudo apt-get update
sudo apt-get upgrade
sudo apt-get full-upgrade
# Point the sources to the new release
sudo vi /etc/apt/sources.list /etc/apt/sources.list.d/*
sudo apt-get clean
sudo apt-get update
sudo apt-get upgrade
sudo apt-get full-upgrade
sudo apt-get autoremove
sudo shutdown -r now
```
- New: Get a list of extensions by file type.
There are community-made lists such as dyne's file extension list.
- Correction: Upgrade ubuntu.
Upgrade your system:
```bash
sudo apt update
sudo apt upgrade
reboot
```
You must install the ubuntu-release-upgrader-core package:

```bash
sudo apt install ubuntu-release-upgrader-core
```
Ensure the `Prompt` line in `/etc/update-manager/release-upgrades` is set to `lts`, using `grep` or `cat`:

```bash
grep 'lts' /etc/update-manager/release-upgrades
cat /etc/update-manager/release-upgrades
```
Opening up TCP port 1022

For those using ssh-based sessions, open an additional SSH port, starting at port 1022. This is the default fallback port set by the upgrade procedure in case the default SSH port dies during the upgrade:

```bash
sudo /sbin/iptables -I INPUT -p tcp --dport 1022 -j ACCEPT
```
Finally, start the upgrade from Ubuntu 22.04 to 24.04 LTS. Type:

```bash
sudo do-release-upgrade -d
```
i3wm⚑
- New: Add i3wm python actions.
You can also use it with async.
Create the connection object
```python
from i3ipc import Connection, Event

i3 = Connection()
```
Focus on a window by its class:

```python
tree = i3.get_tree()
ff = tree.find_classed('Firefox')[0]
ff.command('focus')
```
Wireguard⚑
- New: Troubleshoot Failed to resolve interface "tun": No such device.
```bash
sudo apt purge resolvconf
```
Arts⚑
Calisthenics⚑
- New: Introduce calisthenics.
Basic technique
Pull-ups
References
Videos
Science⚑
Artificial Intelligence⚑
OCR⚑
- New: Add ocr references.
Birding⚑
- New: Introduce android apps for birding.
- whoBIRD
- Merlin Bird ID: I've seen it working and it's amazing. However, I'm trying whoBIRD first as it's in F-Droid.